Intel’s Granite Rapids listed with huge L3 cache upgrade to tackle AMD EPYC – software development emulator spills the details

As observed by InstLatX64, the latest update to Intel’s Software Development Emulator has given us an intriguing look at the L3 cache specification of Intel’s upcoming Granite Rapids Xeon CPUs. Specifically, Intel SDE shows that Granite Rapids will now have 480 MB of L3 cache, compared to Emerald Rapids’ 320 MB.

When we reviewed the Emerald Rapids-based Intel Xeon Platinum 8592+ CPU late last year, we determined that its tripled L3 cache compared to past generations contributed significantly to its gains in AI inference, data center, video encoding, and general compute workloads. While AMD EPYC generally remains the player to beat in the enterprise CPU space, Emerald Rapids marks a significant improvement from Intel’s side of that battlefield, especially as it pertains to AI workloads and multi-core performance in general.

So, what does Granite Rapids providing a further boost to the L3 cache mean in practical terms? Until we get to test the CPUs for ourselves, we can’t be completely sure. Keep in mind that a shared L3 cache is the level of cache that’s shared by and accessible to all CPU cores on the die, whereas lower levels of cache (L1 and L2 in particular) are usually private to individual cores or small groups of cores.
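If you want to see this split between shared and private cache levels on your own machine, the Linux kernel exposes per-CPU cache topology through sysfs. The sketch below (assuming a Linux system with the standard `/sys/devices/system/cpu/.../cache` layout; it simply returns an empty list elsewhere) reads each cache level for one core and reports which other cores share it — on a typical server CPU, L1 and L2 list only sibling threads of one core, while L3 lists every core attached to that cache slice.

```python
import glob
import os


def cache_topology(cpu="cpu0"):
    """Read the cache hierarchy for one CPU from Linux sysfs.

    Returns a list of dicts, one per cache level, including the
    'shared_cpu_list' field the kernel publishes -- the set of CPUs
    that share that cache. Returns [] if the sysfs path is absent.
    """
    base = f"/sys/devices/system/cpu/{cpu}/cache"
    entries = []
    for index_dir in sorted(glob.glob(os.path.join(base, "index*"))):
        def read(name):
            with open(os.path.join(index_dir, name)) as f:
                return f.read().strip()

        entries.append({
            "level": read("level"),              # 1, 2, or 3
            "type": read("type"),                # Data / Instruction / Unified
            "size": read("size"),                # e.g. "48K", "2048K"
            "shared_with": read("shared_cpu_list"),  # CPUs sharing this cache
        })
    return entries


if __name__ == "__main__":
    for entry in cache_topology():
        print(entry)
```

On a multi-core Xeon, the L3 entry's `shared_with` range will span far more CPUs than the L1/L2 entries, which is exactly the distinction described above.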

It’s reasonable to expect that Intel wants to remain competitive in the ever-essential server CPU space (the only market where Intel and AMD can get away with selling a few dozen CPU cores for thousands of dollars), and leveraging L3 cache seemed to improve its competitive performance last time around. It seems Intel is hoping that trend continues with this boost to Granite Rapids’ L3 cache ahead of its projected 2024 release.

As things currently stand, it’s hard to tell whether Intel will be able to truly push against AMD’s lead in multi-core throughput in the server space. You might think AMD is good at throwing a bunch of cores at a problem on desktops, but it has truly mastered this methodology in the data center.

However, Emerald Rapids’ L3 cache improvement still managed to push Intel’s Xeon CPUs into competition with EPYC Bergamo chips while technically being a refresh cycle, with solid wins in most AI workloads we tested, and at least comparable performance in most other benchmarks. Since Granite Rapids will leverage an even greater L3 cache improvement alongside the new Intel 3 process node, the performance gain for Xeon’s next generation could be among Intel’s strongest leaps in server CPUs in quite a while.