Gaming And Streaming: Which CPU Is Best For Both?

Test Setup

How We Tested

Repeatability is one of the most important components of any useful benchmark methodology. All tests have some degree of uncertainty, but we're looking for a minimal and consistent amount of variability. Results plagued by wild swings in performance from one run to the next aren't usable as accurate benchmarks.

As an example, we've yet to develop any reliable multi-tasking benchmarks. In response to reader requests, we have worked diligently to create a series of tests that measure gaming performance with background applications like Web browsers, email clients, media players, Discord, and Skype open. Windows' prioritization appears to be based on fickle and unexplained factors. The operating system suspends various background processes unpredictably during one scripted sequence, then leaves them fully active during the next (even when the test environment hasn't changed). This unpredictability becomes more, well, unpredictable, as the number of open applications increases. Switching Windows into Game Mode only complicates matters further. So far, we have no solution. Our multi-tasking experiments yield deltas from 5 to 15 FPS between successive runs, which means they land nowhere near our expectations for a reliable benchmark.
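To make that concrete, here's a minimal sketch of the kind of repeatability check we can apply to a series of runs, assuming we've already captured an average FPS figure per run. The sample numbers and the 2 FPS tolerance are illustrative, not our actual pass/fail criteria.

```python
# Minimal repeatability check: flag a benchmark when run-to-run average
# FPS deltas exceed a tolerance. Sample data and tolerance are illustrative.
from statistics import mean, stdev

def is_repeatable(avg_fps_per_run, max_delta=2.0):
    """True when every pair of successive runs stays within max_delta FPS."""
    deltas = [abs(b - a) for a, b in zip(avg_fps_per_run, avg_fps_per_run[1:])]
    return max(deltas) <= max_delta

streaming_runs = [88.4, 87.9, 88.6, 88.1]   # well-behaved
multitask_runs = [90.2, 78.5, 85.1, 92.3]   # 5-15 FPS swings, like our attempts

print(is_repeatable(streaming_runs))   # True
print(is_repeatable(multitask_runs))   # False
print(f"multi-tasking spread: {stdev(multitask_runs):.1f} FPS around a {mean(multitask_runs):.1f} FPS mean")
```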

Luckily, game streaming is much easier to control. Encoding is a CPU-intensive task that chews up plenty of cycles, so Windows doesn't suspend or otherwise interfere with it. This allows us to create repeatable benchmarks without extreme outliers.

What We're Measuring

Evaluating game streaming performance spans two axes: game quality for the player and stream quality for the viewer. Of course, we'll measure average, minimum, and 99th percentile frame rates with and without streaming in the background. We'll also include our usual frame time and variance results, which become more important once we start streaming.
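For readers who want to reproduce these figures from their own captures, the math is straightforward. The sketch below derives the three frame-rate numbers, plus variance, from a list of per-frame render times in milliseconds; the nearest-rank percentile method is our illustrative choice, and the sample data is invented.

```python
# Derive average/minimum/99th-percentile frame rates and frame time
# variance from per-frame render times (ms). Sample input is invented.
import statistics

def frame_metrics(frame_times_ms):
    fps = [1000.0 / t for t in frame_times_ms]
    # Nearest-rank 99th percentile of frame times: 99% of frames
    # render at least this fast, so it captures sustained hitches.
    rank = max(0, int(len(frame_times_ms) * 0.99) - 1)
    p99_time = sorted(frame_times_ms)[rank]
    return {
        "avg_fps": statistics.mean(fps),
        "min_fps": min(fps),
        "p99_fps": 1000.0 / p99_time,
        "frame_time_variance_ms2": statistics.variance(frame_times_ms),
    }

print(frame_metrics([16.2, 16.8, 15.9, 33.4, 16.5, 17.1]))
```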

We also need to account for stream quality. That means recording the percentage of frames encoded. Each processor pushes different frame rates, so each run correspondingly generates a different number of frames. As such, we measure the percentage of frames successfully encoded as "% of Frames Delivered." In the test below, a Threadripper 1950X CPU encoded 98.9% of the frames generated by our gaming session, meaning it skipped 1.1% of the frames due to encoding lag.
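The metric itself is simple division. Here it is with hypothetical frame counts chosen to reproduce the 1950X's 98.9% figure:

```python
# "% of Frames Delivered" = frames encoded / frames generated. The counts
# below are hypothetical, picked to match the 98.9% example in the text.
def frames_delivered_pct(encoded, skipped_to_lag):
    return 100.0 * encoded / (encoded + skipped_to_lag)

print(f"{frames_delivered_pct(encoded=19780, skipped_to_lag=220):.1f}%")  # 98.9%
```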

We're streaming at 60 FPS, so we also measure stream quality by listing the percentage of frames encoded within the desirable 16.667ms (60 FPS) threshold. We also include the percentage of frames that land above and below the 60 FPS threshold, which helps quantify the hitching and stuttering a viewer would see on the stream. Subjective visual measurements are still important, so we'll call out tests that generate a bad-looking stream.
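In code, that bucketing looks something like the sketch below; the encoder frame times are invented, and treating the budget as a hard cutoff is our simplification of how the charts are binned.

```python
# Bucket encoder frame times against the 16.667 ms (60 FPS) budget.
# Frames over budget translate to hitching on the viewer's end.
TARGET_MS = 1000.0 / 60  # 16.667 ms

def stream_quality(encode_times_ms):
    n = len(encode_times_ms)
    within = sum(1 for t in encode_times_ms if t <= TARGET_MS)
    return {
        "within_budget_pct": 100.0 * within / n,      # smooth playback
        "over_budget_pct": 100.0 * (n - within) / n,  # visible stutter
    }

print(stream_quality([15.1, 16.4, 18.9, 16.0, 25.2, 16.6]))
```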

Open Broadcaster Software

There are several software encoding applications, but we chose Open Broadcaster Software (OBS) due to its flexible tuning options, detailed output logs, and broad compatibility with streaming services. We're using the x264 software encoder, along with YouTube Gaming as our streaming service. Any run that reports frames dropped due to networking interference is discarded.
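Discarding those runs can be automated by scanning OBS's output log for its dropped-frame counters. The exact log wording varies between OBS versions, so the pattern below is an assumption to verify against your own logs, not OBS's documented format.

```python
# Screen out runs where OBS reported network-dropped frames. The regex
# is an assumed approximation of OBS's log line; check your own logs.
import re

DROPPED_RE = re.compile(r"Number of dropped frames[^:]*:\s*(\d+)")

def run_is_clean(obs_log_text):
    """False if the log reports any frames dropped to the network."""
    return all(int(m.group(1)) == 0 for m in DROPPED_RE.finditer(obs_log_text))
```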

Our ultimate goal is to develop a test that measures CPU performance, so we select parameters that remove the most obvious bottlenecks. Gaming at 1920x1080 with an EVGA GeForce GTX 1080 FE side-steps a GPU limitation (as much as possible). Encoding overhead isn't as high with lesser video cards that generate fewer frames per second. We also test with a 10 Mb/s upload rate, though you can stream at 6 Mb/s or less. Our Internet connection would accommodate up to 35 Mb/s uploads. To vary game selection, we chose Grand Theft Auto V, Middle-earth: Shadow of War, and Battlefield 1 for our tests.
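For reference, here are the fixed parameters behind every run, expressed as data. The values come straight from the text above; only the dictionary layout is ours.

```python
# Stream parameters held constant across all runs, per the text above.
STREAM_SETTINGS = {
    "game_resolution": "1920x1080",
    "stream_fps": 60,
    "bitrate_mbps": 10,   # 6 Mb/s or less also works for most services
    "uplink_mbps": 35,    # our connection's measured upload ceiling
}

print(f"headroom: {STREAM_SETTINGS['uplink_mbps'] - STREAM_SETTINGS['bitrate_mbps']} Mb/s")
```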

There are several other scenarios we could have added to increase the complexity of our testing, such as a simultaneous video stream from a webcam, recording the game to the host system, or streaming to multiple services at once. We went with just one service to reduce the number of variables...at least for now.

Finding the best streaming options requires some tuning for every game and hardware configuration. There is a delicate balance between game performance on the host system and stream quality for the remote viewer, so fine-tuning is needed to yield the best mix. We picked somewhat general settings that offered a good range of performance by our subjective measure. We also stuck with options that'd establish a level playing field for a wide range of test systems. Just be aware that there are plenty of knobs to turn, some of which could offer better performance than the ones we use (lowering the stream to 30 FPS, for instance, cuts encoding overhead significantly).

Tuning the encoding presets is one of the most direct ways to adjust streaming performance and quality for your system's capabilities. Slower encoding increases compression efficiency, which provides better output quality and reduces compression artifacts. OBS has 10 presets ranging from "ultrafast" (the lowest-quality setting with the least computational overhead) to "placebo" (offering the best streaming quality and consuming the most host processing resources). The placebo setting is aptly named; there is certainly a rapidly diminishing rate of return on stream quality after passing the "slower" preset (two ticks before placebo). More strenuous settings can quickly cripple even powerful processors, particularly if you are streaming from a single host system. Placebo with care. 
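To see that trade-off on your own hardware, one quick experiment is to re-encode the same clip at each x264 preset and time it. The sketch below assumes ffmpeg with libx264 is on your PATH and that sample.mp4 is a gameplay recording you supply; both are placeholders, and the null output keeps the test write-free.

```python
# Time an x264 encode of the same clip at every preset to gauge the
# quality/overhead trade-off. Assumes ffmpeg+libx264; sample.mp4 is
# a placeholder for your own gameplay recording.
import subprocess
import time

PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
           "medium", "slow", "slower", "veryslow", "placebo"]

for preset in PRESETS:
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "sample.mp4", "-c:v", "libx264",
         "-preset", preset, "-b:v", "10M", "-an", "-f", "null", "-"],
        check=True, capture_output=True)
    print(f"{preset:>10}: {time.perf_counter() - start:.1f} s")
```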

We split our test groups into three different classes. After evaluating a few Core i3- and Ryzen 3-class processors and determining that they can't stream effectively at our settings, we chose Ryzen 5 and Core i5 models for our entry-level systems. We used the "veryfast" encoding setting for this class of CPU. Naturally, higher-end processors, such as our Ryzen 7/Core i7 and Threadripper/Core i9 chips, offer more performance, so we use the "faster" and "fast" settings, respectively, for brawnier CPUs.
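Summed up as data, the test matrix looks like this (the class labels are ours; the preset assignments come from the text):

```python
# Which x264 preset each CPU class was tested with, per the text.
PRESET_BY_CLASS = {
    "entry (Ryzen 5 / Core i5)":         "veryfast",
    "mid-range (Ryzen 7 / Core i7)":     "faster",
    "high-end (Threadripper / Core i9)": "fast",
}
```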

Because we're testing with different encoding presets, you cannot compare test results for the different classes directly.

Test System & Configuration
Hardware
Intel LGA 1151 (Z370)
Intel Core i5-8600K, Core i7-8700K
MSI Z370 Gaming Pro Carbon AC
2x 8GB G.Skill Ripjaws V DDR4-3200 @ 2666 and 3200 MT/s

AMD Socket AM4
AMD Ryzen 5 1600X, Ryzen 7 1800X
MSI X370 XPower Gaming Titanium
2x 8GB G.Skill Ripjaws V DDR4-3200 @ 2666 and 3200 MT/s

Intel LGA 1151 (Z270)
Intel Core i5-7600K, Core i7-7700K
MSI Z270 Gaming M7
2x 8GB G.Skill Ripjaws V DDR4-3200 @ 2666 and 3200 MT/s

AMD Socket TR4
AMD Ryzen Threadripper 1950X
Asus X399 ROG Zenith Extreme
4x 8GB G.Skill Ripjaws V DDR4-3200 @ 2666 and 3200 MT/s

Intel LGA 2066
Intel Core i9-7900X, Core i9-7980XE
MSI X299 Gaming Pro Carbon AC
4x 8GB G.Skill Ripjaws V DDR4-3200 @ 2666 and 3200 MT/s

All
EVGA GeForce GTX 1080 FE
1TB Samsung PM863
SilverStone ST1500-TI, 1500W
Windows 10 Creators Update Version 1703
Corsair H115i


Comments
  • SonnyNXiong
    Core i7 7700k is the best, can't get any better than that except for a high OC speed and high core clock speed on that core i7 7700k.
  • InvalidError
    This benchmark will need a do-over once the patch for Intel's critical ring-0 exploit comes out and slows all of Intel's CPUs from the past ~10 years by 5-30%.
  • mcconkeymike
    Sonny, I disagree. I personally run the 7700K at 4.9GHz, but the 8700K, with 6 cores and 12 threads, is a badass and does what the 7700K does and better. Please do a little research on this topic, otherwise you'll end up looking foolish.
  • AndrewJacksonZA
    InvalidError said:
    This benchmark will need a do-over once the patch for Intel's critical ring-0 exploit comes out and slows all of Intel's CPUs from the past ~10 years by 5-30%.

    Yeah. It'll be interesting to see the impact on Windows. Phoronix ( https://www.phoronix.com/scan.php?page=article&item=linux-415-x86pti&num=2 ) shows a heavy hit in Linux for some tasks, but apparently almost no hit in gaming ( https://www.phoronix.com/scan.php?page=news_item&px=x86-PTI-Initial-Gaming-Tests )

    Although gaming in Linux, seriously?! C'mon! ;-)
  • ArchitSahu
    SonnyNXiong said:
    Core i7 7700k is the best, can't get any better than that except for a high OC speed and high core clock speed on that core i7 7700k.


    What about the 8700k? i9? 7820X?
  • AgentLozen
    What a great article. It really highlights the advantage of having more cores. If you're strictly into gaming without streaming, then Kaby Lake (and Skylake by extension) is still an awesome choice. It was really interesting to see it fall apart when streaming was added to the formula. I wasn't expecting it to do so poorly even in an overclocked setting.
  • Soda-88
    A couple of complaints:
    1) the article is 10 months late
    2) the game choice is poor, you should've picked titles that are popular on twitch

    Viewer count at present moment:
    BF1: ~1,300
    GTA V: ~17,000
    ME - SoW: ~600

    Personally, I'd like to see Overwatch over BF1 and PUBG over ME: SoW.
    GTA V is fine since it's rather popular, despite being notoriously Intel favoured.

    Other than that, a great article with solid methodology.
  • guadalajara296
    I do a lot of video encoding/rendering in Adobe CC Premiere Pro.
    It takes 2 hours to render a video on my Skylake CPU. Would a multi-core Ryzen improve that by 50%?
  • salgado18
    ArchitSahu said:
    SonnyNXiong said:
    Core i7 7700k is the best, can't get any better than that except for a high OC speed and high core clock speed on that core i7 7700k.
    What about the 8700k? i9? 7820X?


    Please, don't feed the trolls. Thanks.
  • lsatenstein
    To respond to Paul's opening comment about repeatability testing: the rule of thumb is to make at least 19 test runs. That yields 95 percent confidence, meaning 19 out of 20 results should fall within the interval around the mean, which is roughly two standard deviations.
  • lsatenstein
    The newly discovered flaw in older existing CPUs does not affect your testing above, because your CPUs are post flaw detection. But for the average person on the street (me), who does have older Intel CPUs, I just lost between 10% and 20% of CPU performance, be it Windows or Linux.


    Security
    'Kernel memory leaking' Intel processor design flaw forces Linux, Windows redesign
    Google the title for more info.
  • InvalidError
    lsatenstein said:
    The newly discovered flaw in older existing CPUs does not affect your testing above, because your CPUs are post flaw detection.

    None of the sources I have found exempt any Intel CPUs from the flaw which appears to affect all of Intel's x86-64 CPUs up to current. It takes several months of R&D past detection to design and manufacture a new chip with the fixes baked-in, which means that unless Intel knew about the flaw 4+ months ahead of Coffee Lake's launch, then all current Intel chips have the flaw.
  • razamatraz
    Wasn't the Ring-0 bug described found in 2015? At the time there was no demonstration that it actually could be abused if I recall correctly but Intel should have taken it seriously when designing Coffee Lake CPUs...not sure if they did; the list of affected CPUs is still under NDA. In any case it seems to have much more effect on machines running VMs, multiple simultaneous users etc than on typical single user gaming machines as you'd have to already be compromised for this bug to have any impact on a single user machine...IE they'd have to get malware on your machine in the first place. This bug doesn't help anyone do that.

    Waiting on actual answers, expected in the next week or two.
  • InvalidError
    razamatraz said:
    Wasn't the Ring-0 bug described found in 2015?

    None of the stories I've read about it so far make any mention of the actual date where Intel was made aware of the bug. If Microsoft had a patch ready for such a severe bug (unrestricted access to kernel space from user land) less than two months ago, I'd be surprised if Intel has been sitting on it for over a year: that would be a huge liability for Intel if a company got hacked and found out Intel knew about the vulnerability that enabled the hack long beforehand.
  • Honis
    I like how this shows how great multi-threaded performance can be with multiple processes running in the background.

    I know when I game, I may not stream, but I usually have a browser open, and a few sites I frequent have junky autoplay videos. My Ryzen 7 (and, I would think, an i7 8-series) handles these intermittent hiccups more gracefully than the 4-core processors.
  • Nintendork
    @guadalajara296
    Consider changing to Sony Vegas; the Adobe suite is pretty shi*tty regarding multicore optimization.
  • Yuka
    Thanks for this review. It really does bring nice information to the table.

    Are you going to test with more forgiving settings? I can stream decently with 720p and 30FPS using my i7 2700K (no hard figures though), pretty much every single game out there. And will you guys update the numbers once the security patches are in place? I'd love to see how that affects Streaming.

    Cheers!
  • cryoburner
    Soda-88 said:
    the game choice is poor, you should've picked titles that are popular on twitch... ...Personally, I'd like to see Overwatch over BF1 and PUBG over ME: SoW.

    The problem with games like Overwatch and Battlegrounds is that it would be difficult to get meaningful performance numbers out of them. Those games can only be played online, and there's no real way to control what other players are doing during a match. That's why they pick games where their tests can be performed in single player mode, where there can be relatively little variance between one benchmark run and the next.
  • DerekA_C
    @Cryoburner, the fact is most streamers don't stream single-player games; they stream multiplayer, something with actual competition. I do get why they tested it this way, but streaming only single-player isn't very realistic; the tests should also demonstrate those variances.
  • James Mason
    I wish this had a "dynamic" price-to-performance metric at the end that compared the prices pulled from Amazon/Newegg against how good the CPUs (or GPUs, or whatever) are, to provide an always-relevant way for whoever sees the article, whenever they see it, to decide what's the best choice.
  • elbert
    On the test setup page, why does the Z370 have (4x8GB) 32GB while the AM4 has only (2x8GB) 16GB? I've seen compression tests where RAM capacity causes performance differences.
  • Plumboby
    Um, streaming and recording on my trusty old i5-2400 with 8GB of RAM and a Sapphire Pulse RX 560 4G is pretty impressive for the age of the build, and the video card gives any big-daddy build a run for its money. AMD's ReLive is silky smooth to record with; you don't notice it while live broadcasting, and recording gameplay is silky smooth on the Adrenalin drivers, which are supposedly bugged because of DX9 not working (not true).

    The worst thing is no one actually tests the mainstream builds of yesteryear, which is BS and no true, real test of years of streaming and recording. I've opened a channel for those builds regarded as outdated crap, uploading daily to prove a point: you don't need wads of cash and a big-budget build to record gameplay and stream without it affecting performance or causing lag. I've been impressed with my Sandy Bridge; even in online multiplayer games, server lag isn't noticeable. You don't need a beast of a build to keep up with the elite group just to record gameplay or stream, and I've proven it with a true review of my AMD card, since there have been no real honest tests. Half the bench tests Tom's and others carry out are heavily flawed.

    The best way to test is real-world open game testing for hours on end across ten titles, even on older builds, to give a fairer representation of results. That's why I started a channel for the not-so-mainstream old builds, and I'm always happy to let others share their low-end build gameplay there to represent more options. To the Tom's moderators and testers: for a fairer test, stop doing those BS benchmarks; you need to test everything from older mainstream builds to new ones for the fairest guides, because any online benchmark is flawed when you're only testing what a lot of people can't afford.
  • beavermml
    Can we use the Intel iGPU to offload the streaming task, or can that not be done?
  • PaulAlcorn
    elbert said:
    On the test setup page, why does the Z370 have (4x8GB) 32GB while the AM4 has only (2x8GB) 16GB? I've seen compression tests where RAM capacity causes performance differences.


    Good eye, thanks for the heads-up, Elbert! That is a typo, which I have now corrected. We tested with 16GB for both machines.