Ryzen Versus Core i7 In 11 Popular Games

Pre-release Cinebench and Blender benchmarks showing Ryzen ahead of Core i7-6900K gave enthusiasts hope they'd have a cheaper alternative to Intel's brawny Broadwell-E-based CPUs. And while it's fair to say the Ryzen launch went well for AMD in comparisons of pricing and professional application performance, gaming didn't paint the processor in a very good light at all.

We are always willing to make some concessions in the name of value, so Ryzen doesn't have to beat Intel's offerings across the board. It just needs to be competitive. Where that line exists for you is completely subjective. But for many, Ryzen’s frame rates are too low, even in light of its attractive pricing. And if gaming is the primary purpose for your PC, it's hard to ignore faster and cheaper Kaby Lake-based Core i7s and i5s that serve up better results in many popular games.

Theories abound as to why Ryzen processors are struggling in gaming metrics, but some of the disparity no doubt comes from an IPC and clock rate deficit compared to Intel's Kaby Lake design. The issue also appears to stem from AMD’s Zen architecture and how applications navigate its cache hierarchy.

The Zen architecture employs a four-core CCX (CPU Complex) building block. AMD outfits each CCX with a 16-way associative 8MB L3 cache split into four slices; each core in the CCX accesses this L3 with the same average latency. Two CCXes come together to create an eight-core Ryzen 7 processor (image below), and they communicate via AMD’s Infinity Fabric interconnect. Data that traverses the void between CCXes incurs increased latency, so it's ideal to avoid the trip altogether if possible.

Unfortunately, threads migrate between the CPU Complexes; a thread that moves leaves its cached data behind in the other CCX's L3 and takes misses in its new, local L3 until that data is refetched. Threads might also communicate with other threads (and their data) running on the CCX next door, again adding latency and chipping away at overall performance.
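As an illustration of why this matters, here is a minimal sketch of how a Linux application could keep all of its threads on a single CCX: it reads sysfs to find the logical CPUs that share one L3 slice, then restricts its own affinity to that set. The sysfs layout (and the assumption that cache index3 is the L3) is a Linux convention, not anything AMD specifies, so confirm the mapping with lscpu -e on your own system. Most games, of course, simply leave placement to the OS scheduler, which is exactly why the cross-CCX penalty shows up.

    import os

    def cpus_sharing_l3_with(cpu=0):
        # Logical CPUs that share an L3 slice with `cpu`; on Ryzen this set is one CCX.
        # Assumes cache index3 is the L3, which is typical on Linux but worth checking.
        path = "/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list" % cpu
        with open(path) as f:
            text = f.read().strip()              # e.g. "0-7" or "0-3,8-11"
        cpus = set()
        for part in text.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
        return cpus

    if __name__ == "__main__":
        ccx0 = cpus_sharing_l3_with(0)
        os.sched_setaffinity(0, ccx0)            # keep this process on one CCX
        print("Pinned to logical CPUs:", sorted(ccx0))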

AMD noted in a recent blog post that most games aren’t optimized for its implementation of simultaneous multi-threading, which is particularly painful because thread count is one of Ryzen’s biggest advantages. In fact, we’ve found that disabling SMT actually improves the chip's performance in games like Ashes of the Singularity, Arma 3, Battlefield 1, and The Division.
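For context on what an SMT-aware workaround looks like short of flipping the BIOS switch, the hypothetical sketch below restricts a process to one logical CPU per physical core, roughly approximating SMT-off behavior for that process alone. It assumes a single-socket Linux system and the usual sysfs topology files; it is not AMD's recommendation or any game engine's actual code.

    import os

    def one_thread_per_core():
        # Keep the first logical CPU we encounter for each physical core ID and
        # drop its SMT sibling. Single-socket assumption: core_id alone is unique.
        chosen, seen_cores = set(), set()
        for cpu in sorted(os.sched_getaffinity(0)):
            with open("/sys/devices/system/cpu/cpu%d/topology/core_id" % cpu) as f:
                core_id = int(f.read())
            if core_id not in seen_cores:
                seen_cores.add(core_id)
                chosen.add(cpu)
        return chosen

    if __name__ == "__main__":
        os.sched_setaffinity(0, one_thread_per_core())
        print("Running one thread per core on CPUs:", sorted(os.sched_getaffinity(0)))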

Ryzen represents AMD's first attempt at an SMT technology, so teething pains on the application side are understandable. Two game developers have come forward and voiced their intention to support AMD’s implementation in future updates, and AMD says it seeded the industry with 300 developer kits to jump-start the optimization effort. There are thousands of games, though. While many existing titles probably won't receive patches written with AMD in mind, we do hope that newer titles incorporate the code needed to run more smoothly.

According to AMD, this problem doesn't relate to the Windows scheduler. Normally we'd say that's a good thing, since it doesn't depend on Microsoft to fix. But if the issue were tied to the operating system, a single update could optimize for AMD's processors, similar to what we saw with Bulldozer in the Windows 8 days. Instead, we have to look out for improvements one application at a time.

AMD also points out that Ryzen is more competitive at 3840x2160 than at lower resolutions, such as 1920x1080. Obviously, gaming at higher resolutions shifts the bottleneck over to your GPU. So while AMD's observation is true, it isn't indicative of better processor performance, but rather the architecture's weakness hidden behind a slammed GPU. Many of us use our CPUs for several years, and as we swap out for faster graphics cards, the bottleneck will start swinging back to host processing. In many ways, today’s 4K is tomorrow's QHD.

The Socket AM4 ecosystem holds great promise, but our experience with the top motherboard manufacturers (and indeed their experience with AMD's chipsets) has been less than ideal. We've received a flurry of updates in the days leading up to and following Ryzen 7's launch. In some cases, new firmware improves performance. In others, the fix shines light on the underlying issues. General platform instability aside, we get the sense there wasn't enough preparation pre-launch, and AMD's partners are scrambling now as a result. But there's hope that things will get better. AMD recently announced it's working on an updated power profile to better accommodate normal desktop usage patterns (more on that in a bit).

In the meantime, we want to better understand the state of gaming with a Ryzen CPU. Today's feature includes a number of popular titles and all three Ryzen 7 CPUs. Though we're still working on our reviews of the Ryzen 7 1700X and 1700, digging deeper on gaming, specifically, was our top priority for follow-up after publishing the AMD Ryzen 7 1800X CPU Review.

MORE: Best CPUs

MORE: Intel & AMD Processor Hierarchy

6 comments
  • kyzarvs
    Still missing flippin' graphs on the UK site.

    This has been over a year without a reliable fix? Or does it just feel that way to me and it's only been 'several months'?

    I have given Toms a *really* long benefit of the doubt time to show they at least read the UK comments, but sadly no. I bet they are happy to take the UK advertising revenue though...
  • carmichaelits
    One cannot extrapolate with a new CPU architecture.
  • Baksi
    Is it the same renderer in Deus Ex?
    It seems that Ryzen was tested in DX11 and Intel was in DX12.
  • Nine Layer Nige
    Anonymous said:
    Still missing flippin' graphs on the UK site.

    This has been over a year without a reliable fix? Or does it just feel that way to me and it's only been 'several months'?

    I have given Toms a *really* long benefit of the doubt time to show they at least read the UK comments, but sadly no. I bet they are happy to take the UK advertising revenue though...


    I am in the UK (Swindon) and (as of 15th March 2017 10:41) can see the graphs without problem.
  • reactive
    I'm not sure why anyone would have been interested in AMD's flawed Blender benchmarks. Anyone with any sense is using a GPU to render in Blender; even my Zotac GTX 1060 renders 4x as fast as my four-year-old i7 @ 4GHz. Of course, AMD can't keep up with nVidia on rendering GPUs either!
  • Shneiky
    @REACTIVE,

    1 - "AMD's flawed Blender benchmarks"

    Wrong - the benchmark is not skewed in any way.

    2 - "Anyone with any sense is using a GPU to render"

    Wrong - GPU render engines have their limitations, especially VRAM capacity and the complexity of instructions the GPU can handle.

    I work as a VFX artist, and 99% of large-budget films rely on CPU software rendering. Some scenes take more than 100GB of RAM just to load. Video cards - professional or not - are still struggling to reach 32GB of VRAM. CPU rendering systems scale well with RAM, cores, and storage.

    3 - "AMD can't keep up with nVidia on rendering GPUs either!"

    Again - wrong. AMD has, for the past 10+ years, done a lot better with OpenCL-based ray tracers than nVidia (if you take the prices of both companies' professional and consumer offerings into consideration). The point is that nVidia is much closer to developers, so a lot of them use CUDA, which is nVidia-exclusive, and not much work goes into OpenCL.

    Software CPU rendering in the ray-tracing world is here to stay for the next decade - like it or not. I certainly don't like it, but it's not like I have a choice.