
Challenging FPS: Testing SLI And CrossFire Using Video Capture


"You take the red pill - you stay in Wonderland, and I show you how deep the rabbit hole goes."
- Morpheus, The Matrix

Over the years, we've accumulated mountains of data from benchmarking tools like Fraps and metrics built into top titles to help us evaluate performance. Historically, that information shaped our impression of how much faster one graphics card is than another, or what speed-up to expect from a second GPU in CrossFire or SLI.

As a rule, human beings don't respond well when their beliefs are challenged. But how would you feel if I told you that the frames-per-second method for conveying performance, as it's often presented, is fundamentally flawed? It's tough to accept, right? To be honest, that was my reaction the first time I heard that Scott Wasson at The Tech Report was checking into frame times using Fraps. His initial look and continued persistence were largely responsible for drawing attention to performance "inside the second," which is often discussed in terms of uneven or stuttery playback, even in the face of high average frame rates.

I still remember talking about this with Scott roughly two years ago, and we're still left with more questions than answers, despite his impressive volume of work over that time. There are a couple of reasons the escalation of this issue has taken so long.

First, as mentioned, even open-minded enthusiasts are uncomfortable with fundamental changes to what they previously took for granted (after all, that means we, you, and much of the industry were often wrong in our analysis). Nobody wants to believe that the information we were gleaning previously wasn't necessarily precise. So, many folks shied away from it for as long as possible.

Second, and perhaps more important from a technical standpoint, there is no complete replacement for reporting average frame rate. Frame times and latency are not perfect answers to the problem; other variables are in play, including where Fraps pulls its information from the graphics pipeline. At the end of the day, there is no metric we can use to definitively compare the smoothness of video performance based exclusively on objective observation.

That's what we're looking for; that's the Holy Grail. We'd need something to replace FPS. The bad news is that we're not there yet.


But frames per second is far from a useless yardstick. It reliably tells us when a piece of hardware delivers poor performance. When you see a card averaging less than 15 FPS, for instance, you know that combination of settings isn't running fluidly enough for a perceived sense of realism. There is no ambiguity in that. Unfortunately, averaging frames per second does not capture the consistency of rendered frames, particularly when two solutions serve up high frame rates and would appear to perform comparably.
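To see why identical averages can hide different experiences, here's a minimal Python sketch. The frame times are hypothetical, not measurements from this article: two cards render the same number of frames in the same total time, so their average FPS matches, yet one paces its frames far less evenly.

```python
# Hypothetical frame time logs (ms per frame), not figures from the article:
# both runs render 10 frames in 167 ms total, so average FPS is identical.
smooth = [16.7] * 10            # evenly paced frames
stutter = [8.0, 25.4] * 5       # same total time, alternating fast/slow frames

def avg_fps(frame_times_ms):
    """Frames rendered divided by total elapsed seconds."""
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

print(round(avg_fps(smooth), 1))    # 59.9
print(round(avg_fps(stutter), 1))   # 59.9 -- the average can't tell them apart...
print(max(smooth), max(stutter))    # 16.7 25.4 -- ...but the worst frame can
```

Both runs report essentially 60 FPS, yet the second spends a quarter of its frames at 25 ms or worse, which is exactly the unevenness a plain average buries.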

It's not all doom and gloom, though. This is an exciting time to be involved in PC hardware, and graphics performance gives us a new frontier to explore. There are a lot of smart people working on this problem, and it's something that'll invariably be conquered. For our part, we've put our own research into the question of smoothness, which you've recently seen reflected as charts that include average frame rates, minimum frame rates, frame rates over time, and frame time variance. None of those address the challenge completely, but they help paint a more complete picture when it comes to choosing the right graphics card for your games.
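As a rough illustration of the per-frame summaries those charts draw on, here's a sketch that reduces a Fraps-style frametimes log (one millisecond value per frame) to average FPS, minimum FPS, and a simple frame time variance figure. The exact definitions here are assumptions for illustration, not a published methodology.

```python
import statistics

def summarize(frame_times_ms):
    """Reduce a list of per-frame render times (ms) to three summary metrics.
    Definitions are illustrative assumptions, not an official methodology."""
    avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
    min_fps = 1000.0 / max(frame_times_ms)   # slowest single frame, expressed as FPS
    # "Frame time variance": average change in render time between consecutive frames
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    variance_ms = statistics.mean(deltas)
    return avg_fps, min_fps, variance_ms

# Hypothetical five-frame log containing one long 33 ms frame
avg, minimum, variance = summarize([16.0, 18.0, 15.0, 33.0, 14.0])
print(round(avg, 1), round(minimum, 1), variance)   # 52.1 30.3 10.5
```

The single 33 ms frame barely dents the average, but it drags the minimum down to 30 FPS and dominates the variance figure, which is why the extra metrics paint a fuller picture than the average alone.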

Today, we're exploring another tool that's going to help us dig into the performance of graphics cards (particularly multi-GPU configurations): Nvidia's Frame Capture Analysis Tool, or FCAT.

  • sam_p_lay, 27 March 2013 14:44
    Thanks Don.

    That's 2.3fps on Crysis 3 though and you've got 'ofrm' on Far Cry 3.

    It'll be interesting to see the second part on this. And to see how long it takes before the tin hat conspiracy theory comments appear about how convenient it is that the nVidia tool measures the best results on nVidia hardware...
  • mi1ez, 28 March 2013 10:20
    Genuinely interesting read. Good work Tom's, back at the forefront!
  • jaguarcd32x, 28 March 2013 16:00
    Before I read the article I knew that the Nvidia cards would be better, because I have read quite a few articles showing that a single 7970 GHz Edition has more frame latency issues and micro stutter than my dual-card 690.
    I don't think AMD are behind in this area at all because even their single GPU solutions have worse micro stutter than Nvidia's dual. I think the real reason is to artificially inflate max frame rates so it looks like the AMD cards perform better.
  • dr_mouse, 2 April 2013 17:38
    Anyone find it surprising that Nvidia cards do better on a benchmark Nvidia developed?

    I am not saying the results are wrong, but it was a fairly obvious result. They wouldn't make it available to the public if it didn't show their cards in the best possible light.
  • AsciiSmoke, 3 April 2013 10:16
    So we need FFROPS (Full Frames Rendered Out Per Second) instead of FPS?

    I also don't understand why, in this age of super-efficiency, we still render games at brute-force, flat-out speed.

    Although I make a living as a software developer, I don't pretend to know anything about making games or graphics hardware. However, surely there's no point whatsoever in rendering more frames than the monitor can display. I know that we always have the option of turning on V-Sync but that often makes games appear to stutter or lag. Nvidia have attempted to address this by introducing adaptive V-Sync which is a good step forward but far from polished enough to be the solution.

    Wouldn't it make more sense for the graphics pipeline to start with a synced frame for each refresh and then work backwards to the attainable quality / fidelity possible under the system's hardware? Start with the bare minimum to render the scene and then use remaining power to add fidelity. Much like a painter starting with a new canvas 60 times per second and trying to get as much info into the scene as possible before the next frame is due. There are bound to be efficiency savings like only calculating reflections on every other frame, etc. But knowing that your GPU and display are working in harmony surely means that you will always be getting the best out of your hardware?

    Like a lot of end-user consumers, I find the issues all seem simpler from this side of the fence, and problems are just solutions waiting for someone to get around to fixing them. But I'd like to think that my dual GTX 670s could be used to make my games prettier, not just to render throw-away 'runt' frames.
  • Daboydman, 3 April 2013 10:56
    Page 12 it says cjhart. Lol wut? :p  I presume it means chart