Page 1: An Eye For Power
Page 2: Performance Per Watt
Page 3: The Tests
Page 4: Test Setup And A Side Note
Page 5: Test System
Page 6: Benchmark Results: Crysis, The Classic Approach
Page 7: Benchmark Results: Desktop Usage, Less-Than-Ideal Conditions
Page 8: Benchmark Results: Cinebench R11
Page 9: Benchmark Results: Cyberlink PowerDVD 9
Page 10: Benchmark Results: Cyberlink PowerDirector
Page 11: GPU Vs. CPU
Page 12: Measuring Power Consumption: Let's Recap
Page 13: Don't Forget Idle Power Consumption
Most of our graphics card reviews include power measurements at idle and load. But how do applications tax your GPU in between those two extremes? We line up a handful of different programs and monitor power use with a handful of AMD's latest cards.
Next to the CPU, graphics cards receive the lion’s share of attention from the analytical eye of hardware reviewers. For the uninitiated, modern graphics processors help determine the performance your PC delivers in applications dominated by 3D and video. They’ve very quickly become some of the most complex pieces of hardware inside your system, evolving from simple display adapters into fully parallel processors able to handle general-purpose computing workloads.
Flagship GPUs sport higher transistor budgets than most CPUs, so it’s hardly a surprise that we have to be more diligent than ever about the power these components consume. While we generally make an effort to measure idle and load consumption in our graphics card reviews, we wanted to take a more granular and focused look at power use in specific applications.
The Runaway Power Consumption of GPUs
Most of us really can’t complain about the increased muscle of modern graphics products—after all, they’re driving the push toward realism in games, parallel processing, and technologies like Blu-ray 3D. But if there were one quibble we’d cite, it’d be the escalating power draw of architectures like Nvidia’s Fermi (especially right after AMD’s flagship Cypress GPU demonstrated impressive power use). Even with power-saving strategies like clock throttling and power gating (shutting off unused pieces of the GPU), graphics cards seem to consume more and more power with each successive generation. These days, it’s common to see even mainstream graphics cards with auxiliary six-pin power connectors, and many require a pair of extra inputs. Heat is also becoming a problem. Just look at the TDP numbers for high-end graphics cards like Nvidia’s GeForce GTX 480/470 (250/215 W) and the Radeon HD 5870/5970 (188/294 W). In comparison, the most power-hungry processors from AMD and Intel are rated at 140 and 130 W, respectively.
Of course, those figures represent the absolute highest board power each product can draw, measured and cited by each vendor in a different way (we’ve already demonstrated the GeForce GTX 480 using more power than a Radeon HD 5970).
And what about idle power—the card’s draw when you’re working on the Windows desktop? Idle board power on a Radeon HD 5970 is rated at 42 W. Believe it or not, you can idle an entire PC with integrated graphics using 40 W (Patrick even built a Core i5-based machine that idles under 25 W). Thankfully, newer cards, such as the Radeon HD 5870, HD 5770, and HD 5670, consume less power at idle (around 18-20 W).
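To put those idle numbers in perspective, a quick back-of-the-envelope calculation shows what a roughly 20 W gap at idle adds up to over a year. The electricity rate and the always-on assumption below are illustrative, not figures from our testing:

```python
# Rough yearly cost of a constant idle power draw.
# The $0.11/kWh rate and 24/7 uptime are illustrative assumptions.

def yearly_cost(watts, rate_per_kwh=0.11, hours=24 * 365):
    """Dollar cost of a constant draw of `watts` over `hours`."""
    return watts / 1000 * hours * rate_per_kwh

hd5970_idle = yearly_cost(42)  # Radeon HD 5970's rated idle board power
hd5870_idle = yearly_cost(19)  # roughly the HD 5870/5770/5670 range

print(f"HD 5970: ${hd5970_idle:.2f}/yr, "
      f"HD 5870: ${hd5870_idle:.2f}/yr, "
      f"difference: ${hd5970_idle - hd5870_idle:.2f}/yr")
```

Even at idle, the older dual-GPU board costs roughly twice as much to leave running as the newer single-GPU cards.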
What Most Power Consumption Tests Don't Tell You
A majority of graphics card reviews measure power consumption at full load and absolute idle. For load measurements, FurMark is typically used to push the graphics card to use all of its available processing power. The reason is simple: you want to know the maximum power draw, making it easy to compare one card against another and evaluate noise in a worst-case scenario. Maximum power draw is also used to determine whether a given power supply is ample for driving a certain graphics card.
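That last use case boils down to simple arithmetic: add up worst-case component draws and leave a safety margin. A minimal sketch, with placeholder wattages and a hypothetical 20% headroom figure rather than anything we measured:

```python
# Minimal sketch: checking whether a PSU covers worst-case system draw.
# All wattages and the 20% headroom figure are illustrative placeholders.

def psu_is_ample(psu_watts, component_watts, headroom=0.20):
    """True if the PSU rating covers total draw plus a safety margin."""
    total = sum(component_watts)
    return psu_watts >= total * (1 + headroom)

system = [250, 130, 50, 30]  # GPU peak, CPU, board + RAM, drives (hypothetical)

print(psu_is_ample(550, system))  # 460 W * 1.2 = 552 W -> False
print(psu_is_ample(650, system))  # -> True
```

In practice, rail distribution matters as much as the headline wattage, but the headroom check above is the first-order test.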
Unfortunately (or perhaps not), we already know for a fact that AMD employs hardware and software optimizations that detect an unrealistic workload like FurMark and throttle back clock rates and power to help protect the GPU. Thus, FurMark’s most taxing modes are effectively defeated.
Under normal usage, power is rarely pushed to such an extreme anyway. Even in graphically demanding games, you will see consumption numbers below what FurMark generates. This doesn't mean testing with FurMark isn't useful (we use some of its less-demanding settings to generate comparable numbers); it just means you will typically never encounter such an extreme usage scenario.
These two screenshots from the Radeon BIOS Editor (RBE) indicate clocks and voltages used by the graphics cards under several different modes: default performance mode, idle, and video playback (employing UVD).
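The clock and voltage drops RBE reveals translate directly into power savings, since dynamic power scales roughly with frequency and the square of core voltage (P ∝ f·V²). A hedged sketch of that scaling, using made-up clock/voltage pairs rather than the values RBE actually reports:

```python
# Estimate relative dynamic power from clock and core voltage, using the
# rough CMOS scaling P ~ f * V^2. The operating points below are
# hypothetical illustrations, not values taken from RBE.

def relative_power(f, v, f_ref, v_ref):
    """Dynamic power relative to a reference operating point."""
    return (f / f_ref) * (v / v_ref) ** 2

# Hypothetical 3D mode (reference): 850 MHz @ 1.15 V
# Hypothetical idle mode:           157 MHz @ 0.95 V
idle_fraction = relative_power(157, 0.95, 850, 1.15)
print(f"Idle dynamic power: ~{idle_fraction:.0%} of 3D mode")
```

With numbers in that ballpark, dropping both clock and voltage cuts dynamic power to roughly an eighth of the full-performance figure, which is why idle and UVD modes exist as separate entries in the BIOS at all.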
What about idle conditions? As with a typical CPU, a GPU is, in most cases, never completely idle. If you use Windows Vista or 7, Aero takes advantage of GPU acceleration when it’s enabled. Video decoding tasks for MPEG-1/2, VC-1, and H.264 are also offloaded to the GPU. Then there's GPU acceleration for certain general-purpose applications, like transcoding. These kinds of scenarios are rarely considered when it comes to measuring a graphics card’s power consumption.