Star Swarm Stress Test
Given AMD’s use of the Star Swarm demo to show how Mantle alleviates CPU dependency, we hoped to use the DirectX-based build for the opposite purpose. But our frame rate over time graph is downright frenetic. It’s hard to know whether a 300-second sample accurately pits these platforms against each other.
To be fair, Oxide Games acknowledges the non-deterministic nature of its stress test. It’s the same issue we face trying to benchmark Arma 3 and Battlefield 4’s multiplayer components: as soon as you involve the AI calculations needed to tax a processor, variability starts affecting the results. Removing that AI load would only shift the bottleneck back to graphics.
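When a workload is non-deterministic, the usual remedy is to repeat the benchmark and report the spread between runs alongside the average. Here's a minimal sketch of that approach; the per-run FPS samples and the three-pass setup are hypothetical, not Oxide's or our actual tooling:

```python
import statistics

def summarize_runs(runs):
    """Aggregate several non-deterministic benchmark passes.

    runs: a list of per-run FPS samples (one list per pass).
    Returns the mean of the per-run averages plus the spread between
    runs, which indicates whether a single pass is representative.
    """
    per_run_avg = [statistics.fmean(samples) for samples in runs]
    return {
        "mean_fps": statistics.fmean(per_run_avg),
        "stdev_between_runs": statistics.stdev(per_run_avg) if len(per_run_avg) > 1 else 0.0,
        "min_run": min(per_run_avg),
        "max_run": max(per_run_avg),
    }

# Hypothetical data: three 300-second Star Swarm passes on one platform.
runs = [
    [42.1, 38.7, 51.2, 44.9],   # pass 1 (truncated for illustration)
    [47.3, 35.2, 49.8, 41.0],   # pass 2
    [39.5, 44.1, 46.7, 43.3],   # pass 3
]
print(summarize_runs(runs))
```

If the between-run spread turns out to be on the same order as the between-platform differences, a single 300-second sample simply cannot separate the CPUs.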
Thief
The Core i7-5820K shows up at the top of another gaming chart, again followed by the Core i7-4790K. Not that the results in Thief are particularly telling: all of these CPUs are fast enough to keep up with a single GeForce GTX Titan.
Tomb Raider
Tomb Raider has the -4790K on top of the -5820K, though both CPUs trail Intel’s Core i7-3970X. In reality, there’s just no way you’d be able to distinguish between any of these platforms, particularly considering their low frame time variance numbers.
World of Warcraft
WoW is another game known for exaggerating platform characteristics. And you can add it to the list of titles particularly fond of Intel’s Core i7-5820K, with the -4790K not far behind. Flip through to the frame rate over time chart, and you’ll see a tight grouping through our benchmark run.
If anything, the Core i7-5960X’s lower clock rate negatively affects its frame time variance result. The same holds true in almost every other game benchmark, too.
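For readers wondering what "frame time variance" captures: one common formulation (an assumption on our part, not necessarily the exact metric behind these charts) is the average difference between consecutive frame times, which penalizes stutter even when the average frame rate looks healthy. A minimal sketch with hypothetical traces:

```python
def frame_time_variance(frame_times_ms):
    """Mean absolute difference between consecutive frame times (ms).

    A low value means frames arrive at a steady cadence; a high value
    indicates stutter, even if the average frame rate is high.
    """
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(deltas) / len(deltas)

# Hypothetical traces: comparable average frame rates, very different smoothness.
steady  = [16.7, 16.6, 16.8, 16.7, 16.6]
stutter = [10.0, 30.0, 10.0, 30.0, 10.0]
print(frame_time_variance(steady))   # ≈ 0.13 ms
print(frame_time_variance(stutter))  # 20.0 ms
```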
Personally, the 3DS and After Effects benchmarks were of most interest, since they're what I spend most of my CPU time on (3DS in particular; right now I'm logging dozens of CPU hours a day on 3DS alone). It's pretty clear that unless the platform costs of Haswell-E are much higher than IB-E's, going with the old platform won't make sense. The 5930K beats the 4960X, which is at least 50% more expensive.
I've been waiting forever for an upgrade to my i7-930-based workstation, and I didn't feel like jumping on IB-E a couple of months before a brand-new HEDT platform launched.
I had hoped Haswell-E would be a bit more impressive, but OTOH, investing in a DDR4 platform now might be a good idea, given that my workstations typically see 3-4 years of service. At the very least, a drop-in upgrade to Broadwell-E would be nice to have as an option.
Now to see how big a pounding I'll take in Denmark for X99/DDR4/Haswell-E...
Therefore, anybody who's going to load up on enough GPUs to worry about PCI-E lanes will have sufficient money to drop in a 5960X on principle. Anybody who's adopting X99 for productivity purposes won't skimp on core count and will also go 5960X, especially considering they're likely to run at least 32GB of RAM and are therefore already shelling out a lot of money. Those producing on CUDA cards may not even go X99 at all, because LGA 1150 Haswell has more than enough power to run the software. Folders and CUDA miners similarly want all GPUs running at full tilt, so they'll likely invest in the 5960X to get all the PCI-E lanes.
So really, the only "smart choice" is 5960X or don't go X99 at all.
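For context on the lane math behind this comment (our numbers, not the commenter's): the Haswell-E Core i7-5820K exposes 28 PCIe 3.0 lanes from the CPU, while the -5930K and -5960X expose 40. A hedged sketch of checking whether a lane budget covers a multi-GPU layout:

```python
# PCIe 3.0 lanes exposed by the Haswell-E CPUs (CPU lanes only;
# chipset-provided lanes are ignored here for simplicity).
CPU_LANES = {"i7-5820K": 28, "i7-5930K": 40, "i7-5960X": 40}

def fits(cpu, gpu_links):
    """Check whether the requested GPU link widths fit the CPU's lane budget."""
    return sum(gpu_links) <= CPU_LANES[cpu]

# Two GPUs at x16/x16 need 32 lanes: fine on the 40-lane parts,
# not on the 28-lane -5820K (which would drop to x16/x8).
print(fits("i7-5960X", [16, 16]))  # True
print(fits("i7-5820K", [16, 16]))  # False
print(fits("i7-5820K", [16, 8]))   # True
```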