
Radeon R9 290X Review: AMD's Back In Ultra-High-End Gaming


After eight months of watching Nvidia go uncontested in the ultra-high-end graphics market, AMD has a new GPU based on existing technology that promises to challenge the top position. It gets mighty loud at times, but you can't ignore the R9 290X's price.

Today, the fastest single-GPU graphics card is Nvidia’s GeForce GTX Titan (Benchmarking GeForce GTX Titan 6 GB: Fast, Quiet, Consistent). It sells for no less than £800 and comes equipped with 6 GB of fast GDDR5 memory. By all accounts, it’s well-suited to gaming at 2560x1440, and it serves up playable performance at 5760x1080 in some games, but it doesn’t quite move fast enough for 3840x2160. In fact, in Gaming At 3840x2160: Is Your PC Ready For A 4K Display?, I came to the conclusion that it’d take a couple of GeForce GTX 780s to serve up satisfactory frame rates on an Ultra HD screen.

And now AMD is billing its new Radeon R9 290X as a ready-for-4K solution. Them’s fighting words, particularly with Ultra HD targeted as the next frontier in PC gaming. The technology is still very expensive, and it’s far from refined. But I challenge you to enjoy your favorite title on a 32”, 8.3-million-pixel screen, and then hand it back willingly. Expect 4K to be the battleground on which AMD and Nvidia drop their high-end GPUs moving forward.

Last week, while Nvidia put on an event in Montreal to announce a handful of technologies and initiatives, including an upcoming GeForce GTX 780 Ti, AMD was taking the wraps off of a few benchmark results that indeed showed the 290X faster than GeForce GTX 780 in BioShock Infinite and Tomb Raider at 3840x2160.

What is at the heart of this new board, which seemed to effortlessly speed past Nvidia’s £500 solution? The Hawaii GPU—a much more complex piece of silicon than Tahiti, based on the same Graphics Core Next architecture. Think of it as a little bit of old and a little bit of new.

Is AMD Back To The "Big GPU" Approach?

All the way back in 2007, AMD altered its GPU strategy, shifting away from large monolithic processors in favor of more scalable designs. It’d build for a fairly mainstream price point and power target, and then either scale the design down to create less expensive parts or put two GPUs on one board in an ultra-high-end configuration.

Over time, AMD’s engineers trended toward more complex chips, and the ~100 W RV670 gave way to the 150 W RV770, which was succeeded by the Radeon HD 5870’s roughly 200 W Cypress GPU, the 6970’s 250 W Cayman, and the similarly power-hungry Tahiti. Each step of the way, though, AMD managed to get two of its flagship processors onto one PCB, yielding that crazy-fast halo board. Of course, the most recent example is AMD’s Radeon HD 7990, rated for a scorching 375 W.

With Hawaii, AMD appears to eschew its sweet-spot philosophy with a 6.2-billion transistor GPU that’s 44% more complex than Tahiti, and yet manufactured using the same 28 nm process. A die size of 438 mm² is still quite a bit smaller than Nvidia’s GK110. However, it’s still larger than any graphics processor we’ve seen from the company (including R600 at 420 mm²; Tahiti only occupies 352 mm²).


| | Radeon R9 290X | Radeon R9 280X | GeForce GTX Titan | GeForce GTX 780 |
|---|---|---|---|---|
| Process | 28 nm | 28 nm | 28 nm | 28 nm |
| Transistors | 6.2 billion | 4.3 billion | 7.1 billion | 7.1 billion |
| GPU Clock | 1 GHz | 1 GHz | 836 MHz | 863 MHz |
| Shaders | 2816 | 2048 | 2688 | 2304 |
| FP32 Performance | 5.6 TFLOPS | 4.1 TFLOPS | 4.5 TFLOPS | 4.0 TFLOPS |
| Texture Units | 176 | 128 | 224 | 192 |
| Texture Fillrate | 176 GT/s | 128 GT/s | 188 GT/s | 166 GT/s |
| ROPs | 64 | 32 | 48 | 48 |
| Pixel Fillrate | 64 GP/s | 32 GP/s | 40 GP/s | 41 GP/s |
| Memory Bus | 512-bit | 384-bit | 384-bit | 384-bit |
| Memory | 4 GB GDDR5 | 3 GB GDDR5 | 6 GB GDDR5 | 3 GB GDDR5 |
| Memory Data Rate | 5 Gb/s | 6 Gb/s | 6 Gb/s | 6 Gb/s |
| Memory Bandwidth | 320 GB/s | 288 GB/s | 288 GB/s | 288 GB/s |
| Board Power | 250 W (claimed) | 250 W | 250 W | 250 W |

Again, the underlying GCN architecture on which Hawaii is based remains similar. The Compute Unit building block looks exactly the same, with 64 IEEE 754-2008-compliant shaders split between four vector units and 16 texture fetch load/store units.

There are a few tweaks to the design though, including device flat addressing to support standard calling conventions, precision improvements to the native LOG and EXP operations, and optimizations to the Masked Quad Sum of Absolute Difference (MQSAD) function, which speeds up algorithms for motion estimation. Incidentally, all of those features debuted alongside the Bonaire GPU we reviewed back in March (AMD Radeon HD 7790 Review: Graphics Core Next At £115); AMD just wasn’t discussing them yet. And with the introduction of DirectX 11.2, both Bonaire and Hawaii add programmable LOD clamping and the ability to tell a shader if a surface is resident—both of which are tier-two features associated with tiled resources.

But the arrangement of AMD’s CUs is different. Whereas Tahiti boasted up to 32 Compute Units, totaling 2048 shaders and 128 texture units, Hawaii wields 44 CUs organized into four of what AMD is calling Shader Engines. The math adds up to 2816 aggregate shaders and 176 texture units. Operating at up to 1 GHz (this becomes an important distinction later), that’s 5.63 TFLOPS of single-precision or, given the same ¼ rate, 1.41 TFLOPS of double-precision compute performance.
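Those peak-throughput figures follow from simple arithmetic; here is a minimal sketch of the math (the helper name is mine, not AMD's):

```python
def peak_fp32_tflops(shaders, clock_ghz):
    """Peak single-precision throughput: each shader retires one FMA
    (counted as two floating-point ops) per clock."""
    return shaders * 2 * clock_ghz / 1000.0

hawaii_fp32 = peak_fp32_tflops(2816, 1.0)  # 5.632 TFLOPS, the ~5.63 quoted
tahiti_fp32 = peak_fp32_tflops(2048, 1.0)  # 4.096 TFLOPS for Radeon R9 280X
hawaii_fp64 = hawaii_fp32 / 4              # 1.408 TFLOPS at the quoted 1/4 DP rate
```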

Hawaii also employs eight revamped Asynchronous Compute Engines, responsible for scheduling real-time and background tasks to the CUs. Each ACE manages up to eight queues, totaling 64, and has access to L2 cache and shared memory. In contrast, Tahiti had two ACEs. The Kabini and Temash APUs we wrote about earlier this year come armed with four. Why is Hawaii so dramatically different? Some evidence exists to suggest that Hawaii’s asynchronous compute approach is heavily influenced by the PlayStation 4’s design, though AMD won't confirm this itself. Apparently, Sony’s engineers are looking forward to lots of compute-heavy effects in next-gen games, and dedicating more resources to arbitrating between compute and graphics allows for efficiencies that weren’t possible before.

Tahiti’s front-end fed vertex data to the shaders through a pair of geometry processors. Through its quad Shader Engine layout, Hawaii doubles that number, facilitating four primitives per clock cycle instead of two. There’s also more interstage storage between the front- and back-end to hide latencies and realize as much of that peak primitive throughput as possible.

In addition to a dedicated geometry engine (and 11 CUs), Shader Engines also have their own rasterizer and four render back-ends capable of 16 pixels per clock. That’s 64 pixels per clock across the GPU—twice what Tahiti could do. Hawaii enables up to 256 depth and stencil operations per cycle, again doubling Tahiti’s 128. On a graphics card designed for high resolutions, a big pixel fill rate comes in handy, and in many cases, AMD claims, this shifts the chip’s performance bottleneck from fill to memory bandwidth.
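The fill-rate doubling works out as follows; a quick sketch of the arithmetic, with a helper name of my own choosing:

```python
def pixel_fill_gps(shader_engines, rbes_per_engine, pixels_per_rbe, clock_ghz):
    """Peak pixel fill rate in gigapixels per second (GP/s)."""
    return shader_engines * rbes_per_engine * pixels_per_rbe * clock_ghz

# Hawaii: four Shader Engines, each with four render back-ends
# pushing four pixels per clock apiece (16 per engine), at up to 1 GHz.
hawaii = pixel_fill_gps(4, 4, 4, 1.0)  # 64.0 GP/s
# Tahiti managed half that: 32 pixels per clock at ~1 GHz, or ~32 GP/s.
```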

The shared L2 read/write cache grows from 768 KB in Tahiti to 1 MB, divided into sixteen 64 KB partitions. The bandwidth between the L1 and L2 structures grows by a corresponding 33%, topping out at 1 TB/s.

It makes sense, then, that increasing geometry throughput, adding 768 shaders, and doubling the back-end’s peak pixel fill would put additional demands on Hawaii’s memory subsystem. AMD addresses this with a redesigned controller. The new GPU features a 512-bit aggregate interface that the company says occupies about 20% less area than Tahiti’s 384-bit design and enables 50% more bandwidth per mm². How is this possible? It actually costs die space to support very fast data rates. So, hitting 6 Gb/s at higher voltage made Tahiti less efficient than Hawaii’s bus, which targets lower frequencies at lower voltage, and can consequently be smaller. Operating at 5 Gb/s in the case of R9 290X, the 512-bit bus pushes up to 320 GB/s using 4 GB of GDDR5. In comparison, Tahiti maxed out at 288 GB/s.
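The bandwidth figures at the end of that paragraph come straight from bus width times per-pin data rate; a minimal sanity check (the helper name is mine):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bits times per-pin
    data rate in Gb/s, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

r9_290x = mem_bandwidth_gbs(512, 5)  # 320.0 GB/s
tahiti = mem_bandwidth_gbs(384, 6)   # 288.0 GB/s, Tahiti's maximum
```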

  • markem, 24 October 2013 07:17
    The R9 290X just pwned the 780 and Titan in performance and price. Waiting for world records to get smashed soon.

    I will be buying 2x 290X with 2x water blocks.
  • markem, 24 October 2013 07:24
    The only article that's not biased but fair. Most articles were either delayed to see what others were putting out, or used really low resolutions and settings.

    I can't believe the AnandTech article still isn't up... They can't seem to make their minds up. Dodgy as hell.
  • mauller07, 24 October 2013 08:47
    You forgot that Aureal provided first-order audio reflections (reverb being the sum of them from all sources) and audio occlusion, which still surpasses current audio engines; current 3D surround only downsamples 7.1 to stereo. They also calculated all this taking into account the materials the reflections and occlusions were cast on.

    When they said the reverberation calculations for the environment would take 15% of a CPU core, they meant for one sound, whereas the audio DSP can do it for hundreds with lower latency, which is why even a current high-end CPU cannot do what Aureal did and what AMD is now picking back up on.
  • technogiant, 24 October 2013 09:30
    So just how does this PowerTune work, then? If I set a +20% power limit, would that just increase vcore by 20%? If so, is that a static setting or a maximum setting? I guess it's variable, as PowerTune varies clocks and voltage depending on temperature.

    But if it's altering vcore, how does that impact your overclock stability? I'm interested, as I've got an extreme cooling solution (complete submersion in -30°C phase-change liquid).
  • coozie7, 24 October 2013 11:37
    "And our next request comes from Bob in his computer room. Hi Bob, what can we play for you?"
    "I DON'T CARE WHAT YOU PLAY, JUST PLAY IT LOUD, OK?"
    "Didn't you used to be in artillery, Bob?"
    "NO, JUST GOT AN R9 290!"
    "OK, for you Bob, here's Paint It Black by the awesome Rolling Stones!"
  • bumnut53, 24 October 2013 12:33
    It's an impressive card, no doubt about it, but almost as loud as CrossFire 7970s is too loud for me.
  • Kalzakov, 24 October 2013 13:14
    WTS Titan
  • LePhuronn, 24 October 2013 13:23
    Performance for the price is impressive, but as I feared, AMD just haven't got the elegance right: still too power hungry and loud. It also seems the Hawaii chip is pretty close to its limits to hit this performance level, whereas the GTX 780 and Titan have room to manoeuvre.

    Put them both under water and crank them to the maximum, and then see what we have. Yes, the 290X is arguably the best card now at stock, but I'm inclined to believe the Titan is still the greatest card overall.
  • RobCrezz, 24 October 2013 17:23
    Great performance for the money. Hopefully Asus/MSI etc. will do a better job of the cooling and keep the noise down.
  • jkay6969, 24 October 2013 18:40
    @All Titan lovers: yes, the Titan SHOULD be the daddy, the one card to rule them all, but it's not. It seems to be very badly balanced. I believe it was released as the paper-spec dream: like a top-end Corvette, on paper it ticks all the boxes to be a supercar, but first corner you're face first into a wall.

    When I saw the specs of the Titan, I honestly thought it would change everything, but it has just shown how arrogant Nvidia is right now to think it's worth £850+ when even their own GTX 780 at £500-ish beats it on occasion. Nvidia, you're not Intel, not even close, so stop acting like it or you WILL get burned!

    Kudos to AMD for fighting back with a high-performance, low-cost option. Who cares if it's loud? Third parties will take care of that; the important thing is Nvidia drops their prices to match.

    Now AMD just needs to release a PCIe 3.0 motherboard to use this card with their CPU range. Why haven't you already, AMD? WHY?
  • LePhuronn, 24 October 2013 21:05
    @jkay6969:

    It's not arrogance from Nvidia to produce the Titan, far from it. The Titan was always intended as a technical exercise in the very best they could produce, much like the GTX 690 was. As a result, the Titan's price was intentionally inflated, again just like the 690's.

    But Nvidia didn't see the Titan becoming so popular, hence it became a full production card.

    The kick comes from the GTX 780 being a cut-down Titan and hence nearly matching its performance. I should imagine the GTX 780 Ti will be the same as the Titan, just without 6 GB of RAM.

    The Titan raised the bar and gave AMD something to aim at. And with 6 GB of RAM on board, I still don't see the Titan's price coming down: you STILL need that much RAM to game at 4K properly.
  • brianthesnail, 24 October 2013 21:41
    Let's keep things simple: the R9 290X is an exceptional card in these tough economic times. OK, the odd city banker can afford a GTX Titan, but for the majority of PC gamers £800 is what you will spend on a gaming PC, not a graphics card.
    But AMD have saved the day again: the R7 and R9 series now give the average gamer a chance to experience high-resolution gaming without having to mortgage the house or sell the car!
    When the board partners start to release their own versions of the new AMD cards, you will see better cooling solutions and more competitive prices, and that's good news for us everyday "cash-strapped" gamers.
    However, the most interesting aspect of the R7 and R9 cards is the Mantle support. This may turn out to be an evolution in the way your graphics card operates, and Battlefield 4 will be the first game to support Mantle (around Dec 2013).
    Excellent review!
  • CyberAngel, 24 October 2013 22:00
    Just wait and see: better cooling solutions will arrive together with new, improved drivers; there's great potential. But take note: Nvidia is not sitting still... let the Battle of the Titans begin!
  • AMKANMBA, 25 October 2013 11:46
    GTX Titan 3 GB GDDR5?
  • LePhuronn, 25 October 2013 20:43
    Quote:
    GTX Titan 3 GB GDDR5?

    Typo.

    Although sometimes my Titan only reports 4 GB :??:
  • 9a3iqa, 29 October 2013 15:54
    Best GPU in the world right now. I will get one next month when the custom coolers from Sapphire, MSI and Gigabyte are released, as they will keep it cool and allow high overclockability.

    Can't wait!
  • cyrusbe, 11 November 2013 07:19
    I ordered the 290X and an Arctic Accelero Hybrid. I think even a better air-cooler design will hold back performance when the cards are installed in a case without superior airflow. I'm curious how it's going to behave with the Hybrid. As for the cost, the reference card with the Hybrid cooler will be in line with the prices of cards with aftermarket coolers...