Although AMD took the hardware community by surprise with the performance of RV770 and the derivative Radeon HD 4850/4870, things didn’t go entirely according to plan. The company launched its boards immediately after Nvidia, which turned around and slashed the prices on its own models, determined to win this round of the ongoing graphics card war.
And yet, a month and a half after the launch of AMD’s newest round of products, the verdict hasn’t changed. Neither the GeForce GTX 260 nor the GeForce 9800 GTX+ (only recently made available) can take on the Radeon HD 4870 on either price or performance, even in light of Nvidia’s heavy price cuts.
But AMD’s not out just to make waves with the gamers looking for value. It also wants to reclaim a crown it lost a long time ago to Nvidia’s last two generations of large, monolithic programmable graphics architectures. As a means to that end, the company is putting a pair of its most impressive GPUs on a single PCB and calling it the Radeon HD 4870 X2. Now the question remains: does the new board have the muscle to take on Nvidia’s GeForce GTX 280, the single fastest card?
The Radeon HD 4870 X2
Contrary to what its code name might otherwise suggest, the R700 actually centers on a design sporting two RV770 GPUs. Thus, the Radeon HD 4870 X2 finds itself in a very high-end segment of the discrete graphics market. However, in the next few weeks you will also be able to find a Radeon HD 4850 X2, based on the same two chips but with lower frequencies and likely less memory as well.
In appearance, the Radeon HD 4870 X2 remains similar to the Radeon HD 3870 X2. However, looks can be deceiving, as we will soon see. Not surprisingly, the card itself is quite long (26.7 cm). It sports a large blower that exhausts through the back of the board and is neighbored by two dual-link DVI outputs (neither HDMI nor DisplayPort connectivity is native to the back panel). The board requires two auxiliary power connectors—one with six pins and another with eight (PCI Express 2.0-compliant). The two GPUs are positioned on the same PCB, though you won’t see them since a heatsink/fan combination covers the entire board.
This card, like its bi-GPU flagship predecessor, must deal with sharing its frame buffer. Memory management has evolved far more slowly in the graphics world than it has on the CPU side, however. All of the bi-GPU cards up until now have resembled the Pentium D 900 (Presler): an assembly of two cores functioning independently, each integrating its own local memory (L2 cache for the CPU, frame buffer for the GPU). All of the graphics data is thus duplicated between the two GPUs. Communication between them passes through an external bus: the FSB for Intel’s Pentium D, PCI Express for these graphics cards.
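The duplication described above can be sketched with a toy model (the numbers and function name here are purely illustrative, not AMD specifications): because each GPU mirrors the full set of graphics data in its own local memory, the usable capacity of a bi-GPU board is the per-GPU amount, not the sum printed on the box.

```python
# Toy model of frame-buffer duplication on a bi-GPU card.
# Hypothetical numbers; nothing here is an AMD specification.

def usable_memory_mb(per_gpu_mb: int, num_gpus: int) -> int:
    """Each GPU mirrors all graphics data in its own local memory,
    so usable capacity is per-GPU, independent of the GPU count."""
    return per_gpu_mb

board_total = 2 * 1024                  # 2 GB printed on the box...
print(usable_memory_mb(1024, 2))        # ...but only 1024 MB usable
```

This is why doubling the memory on each GPU matters so much more on a bi-GPU board than the raw total suggests.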
As with the Radeon HD 3870 X2, a PCI Express bridge manages the communications between the two GPUs and the chipset. Once the final display output is calculated, each GPU sends it to another chip that assembles the result according to whichever multi-card rendering technology is in use (AFR, most often) and then sends it all to the monitor. Back when we tested the Radeon HD 3870 X2, we found that the biggest impact on performance was attributable to the board’s memory capacity, since memory on a bi-GPU card has to be divided in two. And in order to assure adequate performance, you often have to multiply the quantity of memory by four, which is what AMD does with the 4870 X2.
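The AFR scheme mentioned above can be illustrated with a minimal sketch (illustrative code only, not AMD’s actual driver logic): under alternate frame rendering, the GPUs simply take turns on whole frames, and the compositing chip forwards completed frames to the display in order.

```python
# Minimal sketch of alternate frame rendering (AFR) dispatch.
# Illustrative only; real drivers must also handle inter-frame
# dependencies, which is why AFR scaling is rarely a perfect 2x.

def afr_gpu_for_frame(frame_index: int, num_gpus: int = 2) -> int:
    """Under AFR, frames are dealt out round-robin: GPU 0 renders
    frame 0, GPU 1 renders frame 1, and so on."""
    return frame_index % num_gpus

# Six consecutive frames alternate between the two GPUs:
print([afr_gpu_for_frame(f) for f in range(6)])  # [0, 1, 0, 1, 0, 1]
```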



I find it is the scenery add-ons that really bring FSX to its knees, say flying around London Heathrow airport. Previous versions all saw significant frame-rate improvements from better, more powerful graphics cards, even though Flightsim can also be CPU limited.
Flightsim is a little more difficult to test than normal games because of the complex scenery add-ons available, which many users install. It would be more realistic and interesting to test graphics cards and FSX, if possible, with a demanding scenery add-on installed as well (if this is not already done).
It seems surprising in this current test that FSX does not show much difference between manufacturers’ card versions (just a difference between ATI and NV), which would seem to indicate a CPU-limited test, whereas in practice, with a typical scenery add-on installed, I have found a more powerful graphics card did seem to make a significant difference. It is also very interesting that ATI drivers now seem poor compared to Nvidia’s in FSX, limiting ATI card performance; some explanation or comment from the manufacturer would be very interesting to the FS community.
Keep up the great work in testing; it is almost impossible for an individual now to make a rational choice without this sort of information. Thanks for including Flightsim in your tests, most places ignore it!
Also, it looks like in this testing FSX is CPU limited rather than graphics-card limited. I wonder if any typical intensive scenery add-on is being used, like the UK2000 airports, or is the test with a bare copy of FSX?
Previously I have found that although Flightsim itself is CPU intensive, a more powerful graphics card also made a big difference when using Flightsim with complex add-on scenery, say flying around London Heathrow. A comment on the FSX configuration used for testing would be interesting.
Keep up the great work testing; it is almost impossible for an individual to make a sensible, informed choice without this type of information.
Thanks for including Flightsim in your tests, most test reviews ignore us!!
As a side note, I hope people will move on from fanboyism toward these billion-dollar companies and buy what’s best for them, not a brand. Cheers.
BTW, I hope Nvidia comes up with something good as well. Competition is always good.
Ahhh isn't competition wonderful...!!
Bob
Yah right,
You are a big noobie yourself... Nvidia doesn’t have any secret weapon like this up its sleeve. The 4870 is matched with the 280 already. The 4870 X2 has been designed from the ground up and is not 2x 4870s cobbled together like the 3870 X2.
Uhhhmmm, the GTX 280 is rated at 170 watts vs. 140 watts for the 4870. That is going to be a massive power draw for a 280 X2 card!! LOL
Look at the interconnect bandwidth on this 4870 X2 monster (internal PCIe 2.0 plus an extra CrossFire channel)!! I mean, 1 GB of GDDR5 RAM on each GPU...
Ouch, Nvidia are finally hurting for the first time since the 8800 GTX launched...
Don't forget the big issues with SLI scaling as well.
Bob
The 4870 is not matched with the 280, can’t you read benchmarks? ATI HAVE basically just slapped two GPUs together; otherwise it wouldn’t be called the 4870 X2. I’m sure Nvidia can do the same with the 280 and it will be more powerful. Don’t delude yourself. I am no Nvidia fanboy; in fact, up until my current card (a GTX 280) I had ATI cards for years, in case you were wondering. I really don’t think Nvidia are hurting either, just sitting on the enormous pile of money they have made.
OK, the 4870 is not quite a match for the 280 (except in certain games, and that may improve given their driver-dependent architecture)... But you have got to admit that doubling the frame buffer on each 4870 means the 4870 X2 is not just 2x 4870s glued together!! Saying a 4870 X2 custom PCB holding two 4870 cores and 2 GB of GDDR5 RAM is just "glued" together is like the BS about the Q6600 not being a "true" quad core!! It’s purely down to price/performance now: the 280 vs. the 4870 X2 pricing!!
Actually, according to recent THG articles, Nvidia are hurting a bit, especially since they have had to decimate the pricing for the 260 and 280.
Like I said before... Competition is good for the market!!
Bob
I agree that competition is good, but it would be better if ATI came up with a single-GPU card that beats Nvidia’s flagship single-GPU card.
I think you are missing the point here. ATI's 4870 is like a quarter of the size of the GTX 280. It actually makes sense (cost+performance) to join them together with the massive memory bandwidth they have.
I mean, just look at the die-size comparison in this thread; it says it all...
I am not a fanboiii, but Nvidia are really hurting from an engineering perspective. That whopper of a die is at the limit of the process they are using to produce it (the die cutter wouldn’t stretch any bigger!!). That means a much higher rate of defects (with only the GTX 260 to mop some of them up) and thus much lower yields...
Bob
Fanboiii, it doesn't really matter how big the card is, the point is the Nvidia single GPU card is faster than the ATI single GPU card. As long as it fits in ur case fella it's all good. If what you're saying is nVidia can't produce a dual 280 GPU card then ATI really do have a winner but it's too early to say Fanboiii.
+1, thanks for saying what I was trying to say!! BTW, for the record, I currently run two EVGA 8800 GTXs in SLI, so I cannot be accused of being an ATI fanboiii!! I just recognize superior engineering when I see it!!
Bob
At this moment, ATI Radeon has the best video card in the world!
Right now I have an ATI Radeon 2600 XT 512 MB and it’s OK, but this will be my next step.
I have a friend who has 2x 8800 GTS cards, and now he wants to sell them and buy this MONSTER from AMD-ATI Radeon.
AMD-ATI RADEON: THE BEST CHOICE IN GAMING & FULL HD MOVIES!