Nvidia GeForce GTX 1080 Pascal Review

The Display Pipeline, SLI And GPU Boost 3.0

Pascal's Display Pipeline: HDR-Ready

When we met with AMD in Sonoma, CA late last year, the company teased some details of its Polaris architecture, one of which was a display pipeline ready to support high dynamic range content and displays.

It comes as no surprise that Nvidia’s Pascal architecture features similar functionality—some of which was even available in Maxwell. For instance, GP104’s display controller carries over 12-bit color, BT.2020 wide color gamut support, the SMPTE 2084 electro-optical transfer function and HDMI 2.0b with HDCP 2.2.

To that list, Pascal adds accelerated HEVC decoding at 4K60p and 10/12-bit color through fixed-function hardware, which would seem to indicate support for Version 2 of the HEVC standard. Nvidia previously used a hybrid approach that leveraged software. Moreover, it was limited to eight bits of color information per pixel. But we’re guessing the company's push to support Microsoft’s controversial PlayReady 3.0 specification necessitated a faster and more efficient solution.
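
To give a sense of how that decoder gets used in practice, it is exposed to ordinary software through Nvidia’s NVDEC block, which FFmpeg can drive. The following is a minimal sketch, assuming an FFmpeg build with CUDA support on your PATH; the clip name is a placeholder:

# Minimal sketch: exercise the GPU's fixed-function HEVC decoder through
# FFmpeg's NVDEC-backed hevc_cuvid decoder. Assumes ffmpeg is on the PATH and
# was built with CUDA support; the input file name is a placeholder.
import subprocess

subprocess.run([
    "ffmpeg",
    "-c:v", "hevc_cuvid",            # decode on the GPU instead of the CPU
    "-i", "sample_hevc_main10.mp4",  # a 10-bit (Main10) HEVC clip
    "-f", "null", "-",               # discard output; we only care that decode runs in hardware
], check=True)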

The architecture also supports HEVC encoding in 10-bit color at 4K60p for recording or streaming in HDR, and Nvidia already has what it considers a killer app for this. Using GP104’s encoding hardware and upcoming GameStream HDR software, you can stream high dynamic range-enabled games to a Shield appliance attached to an HDR-capable television in your house. The Shield is equipped with its own HEVC decoder with support for 10-bit-per-pixel color, preserving that pipeline from end to end.
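
You do not need GameStream to put the encoder to work, either. Here is a minimal sketch of a 10-bit HEVC recording, assuming an FFmpeg build that includes the hevc_nvenc encoder; the file names and bitrate are placeholders rather than anything from Nvidia’s GameStream pipeline:

# Minimal sketch: record a 10-bit HEVC (Main10) file with the GPU's NVENC block
# via FFmpeg. Assumes an FFmpeg build with hevc_nvenc; file names and bitrate
# are placeholders, not part of Nvidia's GameStream tooling.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "hdr_capture.mov",     # source material with 10-bit color
    "-c:v", "hevc_nvenc",        # hardware HEVC encoder
    "-profile:v", "main10",      # 10-bit Main10 profile
    "-pix_fmt", "p010le",        # 10-bit pixel format NVENC accepts
    "-b:v", "40M",               # target bitrate, chosen arbitrarily here
    "hdr_out.mp4",
], check=True)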


                          GeForce GTX 1080                    GeForce GTX 980
H.264 Encode              Yes (2x 4K60p)                      Yes
HEVC Encode               Yes (2x 4K60p)                      Yes
10-bit HEVC Encode        Yes                                 No
H.264 Decode              Yes (4K120p up to 240 Mb/s)         Yes
HEVC Decode               Yes (4K120p/8K30p up to 320 Mb/s)   No
VP9 Decode                Yes (4K120p up to 320 Mb/s)         No
10/12-bit HEVC Decode     Yes                                 No

Complementing its HDMI 2.0b support, GeForce GTX 1080 is DisplayPort 1.2-certified and DP 1.3/1.4-ready, trumping Polaris’ DP 1.3-capable display controller before it sees the light of day. Fortunately for AMD, version 1.4 of the spec doesn’t define a faster transmission mode—HBR3’s 32.4 Gb/s is still tops.

As mentioned previously, the GeForce GTX 1080 Founders Edition card sports three DP connectors, an HDMI 2.0b port and one dual-link DVI output. As with the GTX 980, it’ll drive up to four independent monitors at a time. Instead of topping out at 5120x3200 using two DP 1.2 cables, though, the 1080 supports a maximum resolution of 7680x4320 at 60Hz.

SLI: Now Officially Supporting Two GPUs

Traditionally, Nvidia’s highest-end graphics cards came armed with a pair of connectors up top for roping together two, three or four boards in SLI. Enthusiasts came to expect the best scaling from dual-GPU configurations. Beyond that, gains started tapering off as potential pitfalls multiplied. But some enthusiasts still pursued three- and four-way setups for ever-higher frame rates and the associated bragging rights.

Times are changing, though. According to Nvidia, as a result of challenges achieving meaningful scaling in the latest games, no doubt related to DirectX 12, GeForce GTX 1080 only officially supports two-way SLI. So why does the card still have two connectors? Using new SLI bridges, both connectors can be used simultaneously to enable a dual-link mode. Not only do you get the benefit of a second interface, but Pascal also accelerates the I/O to 650MHz, up from the previous generation’s 400MHz. As a result, bandwidth between processors more than doubles.
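
The arithmetic behind that claim is simple enough, assuming bandwidth scales linearly with both the interface clock and the number of links in use:

# Back-of-the-envelope check of the SLI bandwidth claim, assuming throughput
# scales linearly with interface clock and with the number of links in use.
old_bandwidth = 1 * 400   # one link at the previous generation's 400MHz
new_bandwidth = 2 * 650   # dual-link mode at Pascal's 650MHz
print(f"relative bandwidth: {new_bandwidth / old_bandwidth:.2f}x")  # -> 3.25x, i.e. more than double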

Many gamers won’t see the benefit of a faster link. Its impact is felt primarily at high resolutions and refresh rates. However, Nvidia did share an FCAT capture showing two GeForce GTX 1080s playing Middle-earth: Shadow of Mordor across three 4K displays. Linking both cards with the old bridge resulted in consistent frame time spikes, suggesting a recurring timing problem that manifests as stuttering. The spikes weren’t as common or as severe using the new bridge.
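
If you want to run a similar sanity check on your own captures, the sketch below flags outliers in a per-frame log; the single-column CSV layout and the 2x-median threshold are our own assumptions for illustration, not FCAT’s output format or Nvidia’s methodology:

# Rough sketch: flag frame-time spikes in a per-frame log. The single-column
# CSV layout (one frame time in milliseconds per row) and the 2x-median
# threshold are assumptions for illustration only.
import csv
import statistics

with open("frametimes_ms.csv", newline="") as f:
    times = [float(row[0]) for row in csv.reader(f)]

median = statistics.median(times)
spikes = [(i, t) for i, t in enumerate(times) if t > 2 * median]
print(f"{len(spikes)} of {len(times)} frames exceed 2x the median ({median:.1f} ms)")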

Nvidia adds that its SLI HB bridges aren’t the only ones able to support dual-link mode. Existing LED-lit bridges may also run at up to 650MHz if you use them on Pascal-based cards. Really, the flexible/bare PCB bridges are the ones you’ll want to transition away from if you’re running at 4K or higher. Consult the chart below for additional guidance from Nvidia:


                          1920x1080   2560x1440 @ 60Hz   2560x1440 @ 120Hz+   4K   5K   Surround
Standard Bridge           x           x
LED Bridge                x           x                  x                    x
High-Bandwidth Bridge     x           x                  x                    x    x    x

So what brought about this uncharacteristic about-face on three- and four-way configurations from a company all about selling hardware and driving higher performance? Cynically, you could say Nvidia doesn’t want to be held liable for a lack of benefit associated with three- and four-way SLI in a gaming market increasingly hostile to its rendering approach. But the company insists it’s looking out for the best interests of customers as Microsoft hands more control over multi-GPU configurations to game developers, who are in turn exploring technologies like split-frame rendering instead of AFR.

Enthusiasts who don’t care about any of that and are hell-bent on breaking speed records using older software can still run GTX 1080 in three- and four-way setups. They’ll need to generate a unique hardware signature using software from Nvidia, which can then be used to request an “unlock” key. Of course, the new SLI HB bridges won’t work with more than two GPUs, so you’ll need to track down one of the older LED bridges built for three/four-way setups at GP104’s 650MHz link rate.

A Quick Intro To GPU Boost 3.0

Eager to extract even more performance from its GPUs, Nvidia is fine-tuning its GPU Boost functionality yet again.

In the previous generation (GPU Boost 2.0), a fixed frequency offset could be set, shifting the voltage/frequency curve by a defined amount. Potential headroom above that line basically amounted to lost opportunity.

Now, GPU Boost 3.0 lets you define frequency offsets for individual voltage points, increasing clock rate up to a certain temperature target. And rather than forcing you to experiment with stability up and down the curve, Nvidia accommodates overclocking scanners that can automate the process, creating a custom V/F curve unique to your specific GPU.
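
To make the difference concrete, here is an illustrative sketch; the voltage and frequency values are hypothetical numbers rather than measurements, and applying offsets for real goes through Nvidia’s driver via tools such as EVGA Precision XOC:

# Illustrative sketch of GPU Boost 3.0-style overclocking: instead of one
# global offset (GPU Boost 2.0), each voltage point on the V/F curve gets its
# own frequency offset. All values below are hypothetical.
stock_curve = {            # voltage (mV) -> boost clock (MHz)
    800: 1500,
    900: 1650,
    1000: 1780,
    1062: 1850,
}

global_offset = 100        # GPU Boost 2.0: one offset shifts the whole curve
per_point_offsets = {      # GPU Boost 3.0: a scanner finds headroom per point
    800: 180,
    900: 140,
    1000: 90,
    1062: 40,
}

boost2 = {v: f + global_offset for v, f in stock_curve.items()}
boost3 = {v: f + per_point_offsets[v] for v, f in stock_curve.items()}

for v in stock_curve:
    print(f"{v} mV: stock {stock_curve[v]} MHz, "
          f"Boost 2.0 {boost2[v]} MHz, Boost 3.0 {boost3[v]} MHz")

The reason per-point offsets matter is that the lower end of the curve often has more headroom than the peak, which a single global offset cannot capture.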

Comments From The Forums

  • Gnuffi
    "Hitman and Tomb Raider are presented using DirectX 11. However, we have results from DirectX 12 using those games as well, which we’ll mention in the analysis"
    Am I blind or something? I can't seem to find your DX12 comments regarding those games.
    Mighty curious to see your findings in DX12 on just how much of a hit the 1080 takes in DX12 vs DX11, especially when compared to AMD's DX12 gains.
  • Tesetilaro
    Give it some time - normally Chris and Igor are standing k.o. when this goes online - Nvidia's window between receiving samples and the NDA lifting is quite tight, especially when you see the amount of work they put into it...

    So they have to publish on the day the NDA lifts, but receive the sample "quite late" - we will for sure see tons of material about Pascal + benchmarks in the next days ;)
  • cdrkf
    Looks like a powerful card and a nice jump from the previous generation, although perhaps not quite the card Nvidia made it out to be.

    I'm looking forward to the 1070 reviews, along with some concrete info on Polaris from AMD.
  • nebun
    well....nice card, but is it really worth the price?....let's see what AMD will have to offer, it's nice that it has 8gb ram, now let's make a dual gpu card and sli those mofos :)
  • fluffa
    Based on this article, I'm quite happy to stay with my 980 Ti. Not enough of a jump if you ask me, considering the higher clock speeds and new architecture. Looks like the new architecture isn't adding much, other than higher clock speeds to get the performance upgrade.
  • Tesetilaro
    Hey fluffa - 3 points to consider - even though I'm with you that your 980 Ti will be fine for another generation, in the long term...

    1. Big latency reduction
    2. Way more efficient, not least thanks to the 16nm process
    3. New compression algorithm + improved async compute

    It is the right direction, but to be straight, they have tons of work ahead if they want to improve on Pascal for the next generation at more or less the current technology level...
  • fluffa
    Hi Tesetilaro,

    I agree completely about the other improvements. I was trying to say the raw performance is not the great leap Nvidia was leading everyone to believe, which is a shame. But on the other hand it does open the door quite widely for team red :)
  • jstaff62
    Does anyone know if the fans can turn off when not under load, like Maxwell's? I mean, the fact that no one mentions it may already answer that question, but seeing as it's more efficient I don't see why they shouldn't.
  • ntsarb
    What's in the Quadro M6000 that makes such a huge performance difference compared to the GTX 980 and 1080?

    Or... is it something in the software that's different, e.g. using a Quadro-specific API by Nvidia, which doesn't make use of the desktop graphics cards' capabilities?
  • AtotehZ
    No graphs? I only see text.