AMD Radeon HD 7950 Review: Up Against GeForce GTX 580

When the Radeon HD 7970 launched at £450, it looked like a reasonable alternative to the GeForce GTX 590 and Radeon HD 6990. Both dual-GPU boards are measurably faster, but they’re also roughly £600, power-hungry, and, in the case of the 6990, embarrassingly loud. Even so, the 7970’s asking price is pretty steep.

And that’s why a card like the Radeon HD 7950 is such a welcome addition to AMD’s portfolio. The company is, as of this writing, unwilling to comment on the 7950’s anticipated price tag. However, we’ve already run the benchmarks. We know how it stacks up against the Radeon HD 7970 and GeForce GTX 580. So, we know what we’d pay for this new board. If our estimate is close, we’d be looking for something under £400 (under $500 in the U.S., where the Radeon HD 7970 currently retails for $550).

What makes the Radeon HD 7950 worth a few bucks more than Nvidia's GeForce GTX 580? Well, let’s have a closer look at the card itself…

Update: Before publication, but after our launch coverage was finalized for international translation, AMD let us know that the Radeon HD 7950 should sell for $450 in the U.S. That's well below where I thought the company would target, given its competition. Clearly, AMD is pricing the 7950 to out-value Nvidia's GeForce GTX 580 (or force its competitor to adjust downward) rather than exist in a price structure defined by the company's single-GPU flagship. Advantage: AMD. Sadly, no word on UK pricing just yet.

That's a Radeon HD 7970 up top and a Radeon HD 7950 down below. In the right light, they'd pass as twins.

Meet AMD’s Radeon HD 7950

Physically, the Radeon HD 7950 is identical to AMD’s already-available Radeon HD 7970—save one distinguishing feature: a second six-pin auxiliary power connector. That’s a telltale indication of a sub-225 W maximum board power (75 W from the slot, plus up to 75 W from each plug). In fact, AMD rates the 7950 right at 200 W. In comparison, the Radeon HD 7970’s power ceiling is 250 W, necessitating its eight- and six-pin power connectors.
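If you'd like to sanity-check that connector math yourself, it's simple enough to express. Here's a quick back-of-envelope sketch (our own arithmetic and a hypothetical helper name, not anything from AMD), using the standard PCIe power-delivery limits: 75 W from the x16 slot, 75 W per six-pin plug, and 150 W per eight-pin plug.

```python
# Back-of-envelope PCIe power budget, using the standard delivery limits:
# 75 W from the x16 slot, 75 W per six-pin plug, 150 W per eight-pin plug.
def max_board_power(six_pin=0, eight_pin=0, slot_watts=75):
    return slot_watts + six_pin * 75 + eight_pin * 150

print(max_board_power(six_pin=2))               # Radeon HD 7950: 225 W ceiling (rated 200 W)
print(max_board_power(six_pin=1, eight_pin=1))  # Radeon HD 7970: 300 W ceiling (rated 250 W)
```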

The 10.5” PCB is extended an additional half-inch by a metal base plate and plastic shroud, so plan accordingly when you pick a chassis; this is a fairly long card.

As with the Radeon HD 7970, AMD employs a centrifugal fan mounted at one end of the Radeon HD 7950, which blows across the length of the card and exhausts heated air out the back of your chassis. This is the design we prefer. It wasn’t possible to cool the Radeon HD 6990 or GeForce GTX 590 the same way; in both cases, a centre-mounted fan pushed some air out through the rear I/O bracket, while the rest was recirculated inside the case.

Because it relies on effective exhaust, one of the card’s two slots is grated for unrestricted air flow. The other slot is populated with four display outputs: one dual-link DVI connector, one full-sized HDMI port, and a pair of mini-DisplayPort outputs.

Board partners will almost certainly bundle a variety of adapters, so be sure you’re getting the components you need before making a purchase. The two Sapphire Radeon HD 7970s we bought came with DVI-to-VGA, mini-DisplayPort-to-DisplayPort, mini-DisplayPort-to-single-link DVI, and HDMI-to-DVI adapters. Meanwhile, the XFX R7950 Black Edition card we received only included an HDMI-to-DVI adapter.

More notable, though, is that all four outputs can be active at the same time, supporting extensive display configurations that you simply cannot achieve on a single Nvidia-based board.

Tahiti Pro: Same GPU, But On A Diet

The Radeon HD 7950 centres on the same 4.31 billion-transistor Tahiti GPU as AMD’s faster, more expensive flagship, manufactured on TSMC’s 28 nm node.

However, instead of sporting 32 Compute Units, this scaled-back model comes equipped with 28 Compute Units. As you know, each CU plays host to four Vector Units, each with 16 shaders, ALUs, Stream Processors, or whatever else you’d like to call them. That’s a total of 64 SPs per CU. A quick little multiplication (64*28) gives you a grand total of 1792 SPs on this chip.
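For anyone following along, here's that multiplication spelled out as a throwaway sketch (just our arithmetic, mirroring the CU breakdown described above):

```python
# Tahiti shader math: each Compute Unit holds four Vector Units of 16 lanes each.
VECTOR_UNITS_PER_CU = 4
LANES_PER_VECTOR_UNIT = 16
SPS_PER_CU = VECTOR_UNITS_PER_CU * LANES_PER_VECTOR_UNIT  # 64 SPs per CU

print(28 * SPS_PER_CU)  # Radeon HD 7950: 1792 stream processors
print(32 * SPS_PER_CU)  # Radeon HD 7970: 2048 stream processors
```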

And because each of those four missing CUs also included four texture units, that specification drops from 128 to 112.

To help differentiate the Radeon HD 7950 even further, AMD dials back its core clock rate to 800 MHz (down from 925 MHz on the reference Radeon HD 7970). Peak compute performance correspondingly drops to 2.87 TFLOPS from 3.79 TFLOPS.
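Those peak figures follow directly from the shader counts and clocks, on the standard assumption that each stream processor issues one fused multiply-add (two floating-point operations) per cycle. A quick sketch of the arithmetic:

```python
# Peak single-precision compute: SPs x 2 FLOPs (one fused multiply-add) x clock.
def peak_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1e6

print(round(peak_tflops(1792, 800), 2))  # Radeon HD 7950: 2.87 TFLOPS
print(round(peak_tflops(2048, 925), 2))  # Radeon HD 7970: 3.79 TFLOPS
```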

The render back-ends are independent of the CUs, and AMD leaves all eight ROP partitions enabled, yielding up to 32 raster operations per clock cycle. Six 64-bit memory controllers feed the partitions through a crossbar. An aggregate 384-bit data path populated with 3 GB of GDDR5 memory operating at 1250 MHz adds up to 240 GB/s of bandwidth. That’s a slight drop from the Radeon HD 7970’s 264 GB/s, but still a very substantial increase over the Radeon HD 6970.
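The bandwidth numbers work the same way: GDDR5 moves four data transfers per memory clock cycle, so the figures above can be reproduced with one more quick sketch (our arithmetic, not AMD's spec sheet):

```python
# GDDR5 bandwidth: bus width (in bytes) x memory clock x 4 transfers per clock.
def bandwidth_gb_s(bus_width_bits, mem_clock_mhz, transfers_per_clock=4):
    return bus_width_bits / 8 * mem_clock_mhz * transfers_per_clock / 1000

print(bandwidth_gb_s(384, 1250))  # Radeon HD 7950: 240.0 GB/s
print(bandwidth_gb_s(384, 1375))  # Radeon HD 7970: 264.0 GB/s
```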


| | Radeon HD 7950 | Radeon HD 7970 | Radeon HD 6970 | GeForce GTX 580 |
| --- | --- | --- | --- | --- |
| Stream processors | 1792 | 2048 | 1536 | 512 |
| Texture Units | 112 | 128 | 96 | 64 |
| Full Colour ROPs | 32 | 32 | 32 | 48 |
| Graphics Clock | 800 MHz | 925 MHz | 880 MHz | 772 MHz |
| Texture Fillrate | 89.6 Gtex/s | 118.4 Gtex/s | 84.5 Gtex/s | 49.4 Gtex/s |
| Memory Clock | 1250 MHz | 1375 MHz | 1375 MHz | 1002 MHz |
| Memory Bus | 384-bit | 384-bit | 256-bit | 384-bit |
| Memory Bandwidth | 240 GB/s | 264 GB/s | 176 GB/s | 192.4 GB/s |
| Graphics RAM | 3 GB GDDR5 | 3 GB GDDR5 | 2 GB GDDR5 | 1.5 GB GDDR5 |
| Die Size | 365 mm² | 365 mm² | 389 mm² | 520 mm² |
| Transistors (Billion) | 4.31 | 4.31 | 2.64 | 3.0 |
| Process Technology | 28 nm | 28 nm | 40 nm | 40 nm |
| Power Connectors | 2 x 6-pin | 1 x 8-pin, 1 x 6-pin | 1 x 8-pin, 1 x 6-pin | 1 x 8-pin, 1 x 6-pin |
| Maximum Power | 200 W | 250 W | 250 W | 244 W |
| Price (Street) | TBC | £450 | ~£270 | ~£370 |
Reader Comments
  • Anonymous, 31 January 2012 14:26
    Everything's great, but what about PhysX? When you compare the 7950 to the 580 in some titles it's winning, but when you come across a game using PhysX, you're left with stuttering, or no physics effects if you want a playable frame rate.
    It will never be a fair game if PhysX is still a factor in this equation. A pity, as I'm very much for AMD GPUs (I own a 5870 Lightning with joy and pride), but my next card will be 'green' just because of that simple fact :(
  • Dandalf, 31 January 2012 18:30
    All that means, ChildOfBodom_, is that you prefer the company that spends more on marketing and courting business partners in the games industry. I prefer the company that ignores that and spends its money on creating a superior-quality product, thus competing in the 'traditional' sense. I also prefer open standards to proprietary ones, so you can have your Nvidia/PhysX and I'll stick with AMD/DirectCU.

    On topic, though, these new cards are really exciting! It's great to see AMD really making up for mistakes in the CPU sector. My next card will definitely be a 7000 series :)
  • nytmode, 1 February 2012 00:54
    The second-to-last paragraph has the prices in £ while the one before it has them in $. I think all the prices are meant to be in $ only?
  • dizzy_davidh, 1 February 2012 07:14
    Another pro-AMD review. I always find your benchmark results for Nvidia models way below what I get out of them (stock), so you're either in AMD's pocket or you're doing something wrong with your analysis (or you haven't got your 'kit' together).
  • Anonymous, 2 February 2012 00:27
    Dandalf - you couldn't be more wrong. I was always supporting the "red" team and was always blown away by their approach of taking on Nvidia with quality and prices. The only problem I have is that PhysX is something Nvidia has and AMD doesn't - and you have to admit that some games DO use it. So when you come across a game that uses PhysX, you can either enjoy it with an Nvidia card, or simply do without it on an AMD one. And seeing AMD's current pricing policy, you can't really deny that loads of people will prefer to pay the same amount of money for a product that can do more rather than less... It's not like we are left with the choice "I prefer this 3D solution to the other 3D solution". It's a choice between a very pricey and ultra-fast AMD card that does NOT do PhysX, or a very pricey and ultra-fast Nvidia card that DOES.
    And again - as I said - I'm a big fan of AMD, and spent loads of bucks on an MSI 5870 Lightning when it was released, but having come across some games that simply demanded I switch off PhysX just because my card doesn't do it left me with rather mixed feelings... It has nothing to do with big/small companies or advertising/research etc... I just don't like being limited, and unfortunately PhysX is in use and game devs will keep on using it whether you like it or not...
    I wouldn't declare myself yet as far as my next card goes, as you should really wait for a reply from the green team. So far you are comparing two products with a year's difference in concept and manufacturing, and that's a light-year in this industry.
    Cheers.
  • Dandalf, 2 February 2012 05:25
    Yes, childofbodom_, but all cards have features that the others don't; it isn't just PhysX. For example, Nvidia has 3D Vision Surround and AMD has Eyefinity. AMD cards are also much better at brute-forcing passworded RAR files and WPA handshakes, thanks to their approach of scaling performance through sheer quantity of stream processors. So for you PhysX is the decider; for me, it's one of those other things.

    Another reason I prefer AMD is that I would like to discourage the adoption of PhysX - as I said, I prefer open standards to closed, proprietary ones, and the only reason PhysX gains so much ground is because people buy into it. Therefore I prefer to "vote against" it by buying AMD cards. That way, hopefully, more game developers will be encouraged to program their games to use Havok (or another similar open physics engine), which is supported on both platforms.
  • Anonymous, 2 February 2012 14:03
    Dandalf - what you say is true; I would also prefer not to be dependent on one company's solution and to have the same fair chances on any GPU. Unfortunately, as we see, that won't happen in the near future. That's exactly what I was thinking when buying AMD two years ago - that PhysX wouldn't last and was only a "one generation" feature. That didn't happen, and it doesn't look as if devs will stop using it, as it's obviously available and (probably) easier to implement with a proper physics processor.
    Your "voting against" seems to me like shooting yourself in the foot, as you gain pretty much nothing and PhysX will still be in use...
    I hear what you say - AMD still has a lot of good features superior to Nvidia in some fields, but they are irrelevant to me as I simply don't need/use them. The lack of physics processing, however, is not.

    My general point is: when I was buying my AMD card, I knew PhysX might be an issue in the future, but the price of the card (€100 less than top-of-the-line Nvidia, which back then was similarly powerful but much more power/heat hungry) and its capabilities justified choosing it over Nvidia. But now, with the current pricing scheme from AMD and the soon-to-be-released Nvidia parts, it doesn't seem justified to buy AMD, at least at this very moment.
  • Dandalf, 3 February 2012 05:44
    Well, what you call "shooting yourself in the foot" is, for me, making a responsible consumer decision. I have many times decided against purchasing a "superior" product because I did not wish to fund a particular company or idea, and it's something I believe is important for all consumers to do, in order to prevent the evil companies of the world from becoming too powerful. Their power then tends to spill out into politics, as they lobby governments to do stuff like crack down on piracy or some such, but this is just me being grandiose about the whole thing!

    It is true that we all make judgements before making our purchase one way or the other, and this discussion between you and I just highlights the fact that everyone can have very different values for those judgements ;)
  • mi1ez, 3 February 2012 10:06
    Quote:
    "A second Radeon HD 7950 only adds 86% to the performance of just one GPU."

    I remember when, only a couple of years ago, this would have been an amazing result. Great work from both teams with multi-GPU drivers!