AMD Radeon Vega RX 64 8GB Review

AMD’s last high-end graphics card launch happened almost 26 months ago. Back then, the Radeon R9 Fury X went toe-to-toe with GeForce GTX 980 Ti and Titan X—the best Nvidia had to offer. And it kept getting better. Subsequent drivers optimized performance, while DirectX 12 helped game developers get more out of the Graphics Core Next architecture.

Today’s official introduction of the Radeon RX Vega represents the company’s return to high-end gaming, or so AMD says. But by its own admission, this isn’t AMD battling for Nvidia’s performance crown. Rather, Radeon RX Vega 64 sets its sights on the performance and pricing of GeForce GTX 1080.

We already know most of what there is to know about Radeon RX Vega 64. AMD made sure of that with a carefully timed sequence of disclosures intended to keep enthusiasts buzzing about its next-gen graphics hardware. In case you missed any of that, check out AMD Teases Vega Architecture: More Than 200+ New Features, Ready First Half Of 2017 and AMD Radeon RX Vega 64: Bundles, Specs, And Aug. 14 Availability.

Today is when we see if the cliffhanger approach to marketing ends with gamers enjoying blissful satisfaction or the pangs of disappointment.

Specifications

Radeon RX Vega 64: At a Glance (Again)

Speeds and feeds are always good to recap, so borrowing from our earlier coverage:

Like the Radeon R9 Fury X's Fiji processor, Radeon RX Vega 64 employs four shader engines, each with its own geometry processor and rasterizer.

Also similar to Fiji, there are 16 Compute Units per Shader Engine, each CU sporting 64 Stream processors and four texture units. Multiply all of that out and you get 4096 Stream processors and 256 texture units.

Clock rates are way up, though. Whereas Fiji topped out at 1050 MHz, GlobalFoundries' 14nm FinFET LPP process and targeted optimizations for higher frequencies allow the Vega 10 GPU on Radeon RX Vega 64 to operate at a base clock rate of 1247 MHz with a rated boost clock of 1546 MHz. Obviously, AMD's peak FP32 specification of 12.66 TFLOPS is based on that best-case frequency. We typically use the guaranteed base in our calculations, though. Even then, 10.2 TFLOPS still represents an almost 20% increase over Radeon R9 Fury X.
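As a sanity check, those FP32 figures fall straight out of the shader count and clocks. A quick sketch (the factor of two assumes one fused multiply-add, i.e. two floating-point operations, per stream processor per clock):

```python
# Vega 10 shader configuration, per AMD's published specs
SHADER_ENGINES = 4
CUS_PER_ENGINE = 16
SPS_PER_CU = 64

stream_processors = SHADER_ENGINES * CUS_PER_ENGINE * SPS_PER_CU  # 4096

def peak_tflops(clock_mhz, sps=stream_processors):
    """Peak FP32 throughput: SPs x 2 ops (FMA) x clock."""
    return sps * 2 * clock_mhz * 1e6 / 1e12

print(peak_tflops(1546))  # RX Vega 64 boost: ~12.66 TFLOPS
print(peak_tflops(1247))  # RX Vega 64 base:  ~10.2 TFLOPS
print(peak_tflops(1050))  # R9 Fury X:        ~8.6 TFLOPS
```

Dividing the base-clock result by Fury X's 8.6 TFLOPS gives the roughly 19% gain cited above; plugging in the Liquid Cooled Edition's 1677 MHz boost yields its 13.7 TFLOPS rating.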

The liquid-cooled model steps those numbers up to a 1406 MHz base with boost clock rates as high as 1677 MHz. That’s an almost 13% higher base and ~8%-higher boost frequency, pushing AMD’s specified peak FP32 rate to 13.7 TFLOPS. You’ll pay more than just a $200 premium for the closed-loop liquid cooler, though. Board power rises from 295W on Radeon RX Vega 64 to the Liquid Cooled Edition’s 345W—a disproportionate 17% increase. Both figures exceed Nvidia’s 250W rating on GeForce GTX 1080 Ti, which isn’t even in Vega’s crosshairs.

Model | Cooling Type | BIOS Mode | Power Saver | Balanced | Turbo
RX Vega | Air | Primary | 165W | 220W | 253W
RX Vega | Air | Secondary | 150W | 200W | 230W
RX Vega | Liquid | Primary | 198W | 264W | 303W
RX Vega | Liquid | Secondary | 165W | 220W | 253W

Speaking of power, our air-cooled sample comes with two BIOS files, and each of those BIOSes supports three power profiles. The primary BIOS at its Balanced power setting is accompanied by a 220W GPU power limit. Dropping to Power Saver cuts GPU power to 165W, while increasing it to Turbo raises the ceiling to 253W. Switching over to the secondary BIOS drops Power Saver to 150W, Balanced to 200W, and Turbo to 230W. We certainly appreciate the granular control AMD enables here, but recognize that most enthusiasts aren't looking for a way to de-tune their $500 graphics card. Regardless, we're planning a follow-up story to explore the effects of each setting on board power, performance, and acoustics.

Each of Vega 10's Shader Engines sports four render back-ends capable of 16 pixels per clock cycle, yielding 64 ROPs. These render back-ends become clients of the L2, as we already know. That L2 is now 4MB in size, whereas Fiji included 2MB of L2 capacity (already a doubling of Hawaii’s 1MB L2). Ideally, this means the GPU goes out to HBM2 less often, reducing Vega 10’s reliance on external bandwidth. Since Vega 10’s clock rates can get up to ~60% higher than Fiji’s, while memory bandwidth actually drops by 28 GB/s, a larger cache should help prevent bottlenecks.
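Multiplying that back-end configuration out is straightforward. A back-of-the-envelope sketch of the ROP count and the resulting theoretical pixel fill rate at the air-cooled card's boost clock:

```python
# Four Shader Engines, each pushing 16 pixels per clock
SHADER_ENGINES = 4
PIXELS_PER_ENGINE = 16

rops = SHADER_ENGINES * PIXELS_PER_ENGINE  # 64 ROPs

def pixel_fillrate_gpix(clock_mhz, pixels_per_clock=rops):
    """Theoretical fill rate in Gpixels/s: pixels per clock x clock."""
    return pixels_per_clock * clock_mhz / 1000

print(pixel_fillrate_gpix(1546))  # ~98.9 Gpix/s at 1546 MHz boost
```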

Incidentally, AMD's graphics architect and corporate fellow Mike Mantor says all of the SRAM on Vega 10 adds up to more than 45MB. Wow. No wonder this is a 12.5-billion-transistor chip measuring 486 square millimeters. That's more transistors than Nvidia's GP102 in an even larger die.

Adoption of HBM2 allows AMD to halve the number of memory stacks on its interposer compared to Fiji, cutting an aggregate 4096-bit bus to 2048 bits. And yet, rather than the 4GB ceiling that dogged Radeon R9 Fury X, RX Vega 64 comfortably offers 8GB using 4-hi stacks (AMD's Frontier Edition card boasts 16GB). An odd 1.89 Gb/s data rate facilitates a 484 GB/s bandwidth figure, matching what GeForce GTX 1080 Ti achieves using 11 Gb/s GDDR5X.
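The bandwidth math is easy to verify: per-pin data rate times bus width, divided by eight bits per byte. A quick sketch comparing Vega 64's HBM2 against GTX 1080 Ti's GDDR5X (the 352-bit bus width is from Nvidia's published specs):

```python
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Aggregate memory bandwidth in GB/s from per-pin rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(1.89, 2048))  # RX Vega 64, HBM2:       ~484 GB/s
print(bandwidth_gbs(11, 352))     # GTX 1080 Ti, GDDR5X:     484 GB/s
```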

As an aside, AMD plans to make its Radeon RX Vega 56 derivative available on August 28th. That 210W card utilizes the same GPU and 8GB of HBM2, but has eight of its Compute Units disabled, eliminating 512 Stream processors and 32 texture units. It'll also run at lower core and memory clock rates. Yet, AMD claims it should outperform GeForce GTX 1070 handily at a $400 price point. Our U.S. lab is in the process of testing Radeon RX Vega 56, and our coverage should follow in the days to come.

Look, Feel & Connectors

AMD’s RX Vega 64 weighs in at 1066g, which makes it 16g heavier than the Frontier Edition. Its length is 26.8cm (from bracket to end of cover), its height is 10.5cm (from top of motherboard slot to top of cover), and its depth is 3.8cm. This makes it a true dual-slot graphics card, even though the backplate does add another 0.4cm to the back.

Both the cover and the backplate are made of black anodized aluminum, giving the card a cool, high-quality feel. The surface texture comes from simple cold forming performed before the aluminum's anodization. All of the screws are painted matte black. The red Radeon logo on the front is printed, and provides the only splash of color.

The top of the card is dominated by two eight-pin PCIe power connectors and the red Radeon logo, which lights up. There’s also a two-position switch that allows access to the aforementioned secondary BIOS optimized for lower power consumption and its corresponding driver-based power profiles. These make the card quieter, cooler, and, of course, a bit slower.

The end of the card is closed and includes mounting holes that are a common sight on workstation graphics cards. The powder-coated matte black slot bracket is home to three DisplayPort connectors and one HDMI 2.0 output. There is no DVI interface, which was a smart choice since it allows for much better airflow. The slot bracket doubles as the exhaust vent, after all.

MORE: Best Graphics Cards

MORE: Desktop GPU Performance Hierarchy Table

MORE: All Graphics Content

Comments from the forums
  • kyzarvs
    "the High Anti-Aliasing option (which supplements FXAA by rendering to a higher-res off-screen buffer) can reduce frame rates by 50 to 100%"

    So you can reduce the framerate to 0 (a 100% reduction)? Not sure that is the message intended!
  • genz
    "but recognize that most enthusiasts aren't looking for a way to de-tune their $500 graphics card"

    I absolutely disagree. A power profile that's built all the way down to the BIOS to use that extra metal more efficiently at the cost of less power is exactly what I want for my day to day. If you have a GPU capable of CS:GO at 500fps you still want it only at your monitor refresh rate as the rest is wasted power, and if you are watching DVDs or blu-ray with a system like this 2D mode is probably horrendous overkill but that doesn't stop most GPUs clocking up past 50% power draw just doing that. Just make an app that can switch on a daily basis, or even during and out of working hours and there's great value in the idea.

    Why even talk about Nvidia's eff. lead if nobody cares?