Sticky

AMD Vega MegaThread! FAQ and Resources - page 2

695 answers
  1. jaymc said:
    So the 500 series will basically be the low and mid range while Vega is the high end? Or is Vega an entirely new architecture?
    Reply to Gon Freecss
  2. Vega is both: a new GPU (new arch, but still GCN-based AFAIK) and high end.

    Or so the rumor mill goes, supported by AMD itself at times.

    Cheers!

    EDIT: Clarified point.
    Reply to Yuka
  3. They announced that the GCN chip code-named Vega will also be called Vega! So I can only presume that the 500 series is the Polaris refresh, and that Vega will use its codename as its brand name.

    @gonfreecss Polaris is low to mid, Vega is high end, yes.
    Reply to jaymc
  4. Oh, I see. Thanks guys!
    Reply to Gon Freecss
  5. How do you guys expect Vega to perform compared to the current Pascal cards? I'm doing a build next month and can afford a 1080 Ti. Would Vega be worth waiting a month more for?
    Reply to vvacenovski
  6. Vega, from all the information going around, won't trump the 1080 Ti and will target the 1080 instead.

    But until reviews are out, nothing is set in stone.

    Cheers!
    Reply to Yuka
  7. vvacenovski said:
    How do you guys expect Vega to perform compared to the current Pascal cards? I'm doing a build next month and can afford a 1080 Ti. Would Vega be worth waiting a month more for?


    Vega might beat the 1080 Ti in Doom and other id Tech 6-based titles (there are no others at the moment) and play in the regular 1080's ballpark otherwise. It's all speculation though, and AMD's driver optimizations often lag behind a new chip's release.
    Reply to mitch074
  8. Yuka said:
    Same here, but I still want to know if I should give my RX480 to my GF and get myself a Vega or just get her an RX480 and keep mine.

    Still, I can understand where you're coming from. If you come up with any ideas, please let us know anyway. I'd say it's still interesting to speculate and see how the technology unfolds.

    Cheers!


    As of now, more is known about AMD's next range: the 580 will be a slightly higher-clocked 480 (it's still Polaris), while Vega will be a wholly different beast. You could get a factory-overclocked 480 now if its performance satisfies you (mind the design, though: Tom's France recently tested several 480 cards and found that some don't actually cool the chip well, and that the reference design isn't that lousy after all).
    Reply to mitch074
  9. AMD Vega powered LiquidSky streaming servers go live...
    This looks animal and it's now in open beta... check it out. A nice design win for Vega as well...

    https://youtu.be/KAn_oVBiBYY

    http://hexus.net/gaming/news/industry/103957-amd-vega-powered-liquidsky-streaming-servers-go-live/
    Reply to jaymc
  10. jaymc said:
    What about this... I know it's WCCFTech... but they're AMD slides and they are claiming up to 2x over the competition... What??

    A very bold claim indeed... are all these titles DX12/Vulkan, I wonder?

    Twice as fast as the competition, 2x perf per clock, 1600MHz base frequency, up to 16GB of HBM2... are these really AMD slides?

    Holy jumping catfish, Batman...

    Come on, it can't be that good, can it... twice as good as the competition???

    https://wccftech.com/vega-teaser-slides-leak-nda/


    It's an April Fools' joke, just early.

    I think Vega will be a solid product but I wish the hype train would lay off it. They always build things up so much and when the product comes out and it's just a good solid product, everyone acts disappointed. Vega will be competitive, but we have to stick to reality.

    Besides, looking at those "slides", there are some things that don't make any sense, like HBM2 having a 2048-bit bus when HBM1 had 4096-bit. Plus it says "up to 16GB", then shows fictitious cards where even the dual-Vega card only has 8GB.

    Even though the Fury was roughly equal in performance to the 980 Ti despite having less memory (the bandwidth made up for the limited VRAM), people saw the 4GB it was rocking as a gimp factor despite the facts to the contrary. I doubt AMD would release a 4GB Vega card, since most people don't understand much beyond the big "X GB" of memory on the shiny box. Hence the multiple people who come here on the forums asking why they aren't getting 200fps on their GT 740 4GB. Something along the lines of:

    "Why can't I get 200FPS on XXXXX game with my 740 4gb?"
    Answer: "The GPU is too weak."
    "But it has 4gb DDR5?!?"
    Answer: "Yeah, it's about as useful as lipstick on a pig..."

    I am patiently waiting for Vega, but I doubt I will be upgrading my GPU anytime soon. My CPU is in line for an upgrade first; I'm waiting to see what Cannon Lake and Ryzen+ bring to the table. I'll probably end up with a fully new system at that point. It's been a while since I had an AMD CPU.
    Reply to Martell1977
  11. Just realized that, and they deleted it... completely forgot about April Fools... they had me going... way too good to be true, I guess...

    After seeing Crossfire running on two RX 480s in AdoredTV's new video... I would actually love to see a dual-GPU card with Vega... it seems they have Crossfire running perfectly. Was it always this smooth? I don't know. Is it because the PCI Express lanes go directly to the CPU on Ryzen? Same as the mouse response he speaks of in the game, with USB 3.1 going straight to the CPU as well...

    Other reviewers are talking about this "silky smooth" gaming experience with Ryzen as well... just in case you haven't seen the video, here it is. Very different results in DX12 with AMD GPUs; everyone else tested with Nvidia GPUs: https://www.youtube.com/watch?v=0tfTZjugDeg

    But judging by that AdoredTV video, Vega is going to be very competitive in DX12, especially with more CPU cores, e.g. Ryzen... or I suspect the 6900K... unless Nvidia does something about their DX12 drivers, and fast...

    I can see myself buying Vega and Ryzen... don't know when, but I will eventually, I reckon. Ryzen's getting better all the time... and Vega looks like it's gonna be a demon in DX12 games.
    Reply to jaymc
  12. DX12 has helped a lot and AMD has been really good with their drivers lately. Crossfire is a great feature...when it's supported and has a proper game profile. I'm sure that for every game that runs great, there is one that has massive stuttering and crap performance. Just the nature of the beast.

    From what I've been reading, Crossfire has been fairly stable, while SLI has been getting worse and worse. The fact that AMD allows Crossfire on all their GPUs is pretty cool. They seem to understand that people want a decent card but don't always have much money, then later on want more performance without breaking the bank and might want to try Crossfire out. That is what happened with me. I bought a 6870, then a few years later wanted more, so I bought a used one from eBay. That Crossfire setup ran nicely for me until I had the money for an upgrade to my current R9 390.

    A dual-Vega card is not out of the question, but I have to wonder: would it be bottlenecked by the PCIe 3.0 slot? A beast of a card that powerful (assuming it's somewhere near the 1080 Ti/Titan XP) would need a lot of bandwidth but won't have the convenience of using two PCIe slots.
    Reply to Martell1977
  13. Martell1977 said:
    DX12 has helped a lot and AMD has been really good with their drivers lately. Crossfire is a great feature...when it's supported and has a proper game profile. I'm sure that for every game that runs great, there is one that has massive stuttering and crap performance. Just the nature of the beast.

    From what I've been reading, Crossfire has been fairly stable, while SLI has been getting worse and worse. The fact that AMD allows Crossfire on all their GPUs is pretty cool. They seem to understand that people want a decent card but don't always have much money, then later on want more performance without breaking the bank and might want to try Crossfire out. That is what happened with me. I bought a 6870, then a few years later wanted more, so I bought a used one from eBay. That Crossfire setup ran nicely for me until I had the money for an upgrade to my current R9 390.

    A dual-Vega card is not out of the question, but I have to wonder: would it be bottlenecked by the PCIe 3.0 slot? A beast of a card that powerful (assuming it's somewhere near the 1080 Ti/Titan XP) would need a lot of bandwidth but won't have the convenience of using two PCIe slots.


    They did release an awful lot of updates to Crossfire in the recent past... I couldn't help thinking they were considering releasing a new dual-GPU card with Vega... It's a very good point you make, though: would one PCIe x16 slot handle the traffic? Good question. I have no idea... could one slot handle the traffic of two 1080s, even?
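    To put rough numbers on that (just back-of-the-envelope PCIe math, nothing AMD or Nvidia have stated): PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so a full x16 slot moves about 15.75 GB/s in each direction, and a dual-GPU card would have to share that single link between both chips.

    ```cpp
    #include <cstdio>

    int main() {
        // PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~0.985 GB/s per lane per direction.
        const double gbPerLane = 8.0 * (128.0 / 130.0) / 8.0;
        const int lanes = 16;
        const double x16 = gbPerLane * lanes;   // ~15.75 GB/s each way

        printf("PCIe 3.0 x16: ~%.2f GB/s per direction\n", x16);
        // A hypothetical dual-GPU card splits that one link between two GPUs.
        printf("Per GPU on a dual-GPU card: ~%.2f GB/s\n", x16 / 2.0);
        return 0;
    }
    ```

    Whether that is a real bottleneck depends on how much data actually has to cross the bus per frame; textures and framebuffers already sitting in VRAM never touch PCIe, which is why even x8 links rarely cost more than a few percent in games.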
    Reply to jaymc
  14. Martell1977 said:
    jaymc said:
    What about this... I know it's WCCFTech... but they're AMD slides and they are claiming up to 2x over the competition... What??

    A very bold claim indeed... are all these titles DX12/Vulkan, I wonder?

    Twice as fast as the competition, 2x perf per clock, 1600MHz base frequency, up to 16GB of HBM2... are these really AMD slides?

    Holy jumping catfish, Batman...

    Come on, it can't be that good, can it... twice as good as the competition???

    https://wccftech.com/vega-teaser-slides-leak-nda/


    It's an April Fools' joke, just early.

    I think Vega will be a solid product but I wish the hype train would lay off it. They always build things up so much and when the product comes out and it's just a good solid product, everyone acts disappointed. Vega will be competitive, but we have to stick to reality.

    Besides, looking at those "slides", there are some things that don't make any sense, like HBM2 having a 2048-bit bus when HBM1 had 4096-bit. Plus it says "up to 16GB", then shows fictitious cards where even the dual-Vega card only has 8GB.

    Even though the Fury was roughly equal in performance to the 980 Ti despite having less memory (the bandwidth made up for the limited VRAM), people saw the 4GB it was rocking as a gimp factor despite the facts to the contrary. I doubt AMD would release a 4GB Vega card, since most people don't understand much beyond the big "X GB" of memory on the shiny box. Hence the multiple people who come here on the forums asking why they aren't getting 200fps on their GT 740 4GB. Something along the lines of:


    "Why can't I get 200FPS on XXXXX game with my 740 4gb?"
    Answer: "The GPU is too weak."
    "But it has 4gb DDR5?!?"
    Answer: "Yeah, it's about as useful as lipstick on a pig..."

    I am patiently waiting for Vega, but I doubt I will be upgrading my GPU anytime soon. My CPU is in line for an upgrade first; I'm waiting to see what Cannon Lake and Ryzen+ bring to the table. I'll probably end up with a fully new system at that point. It's been a while since I had an AMD CPU.


    No, it is not. If the game needs more than 4GB, then the Fury will not be able to cope with it. If that were possible, then AMD would not have needed to create the HBCC for Vega. In the case of the Fury, AMD had to play around with their drivers to lower the VRAM usage in games. They probably did it the same way Nvidia did VRAM management on their cards with odd memory configs.
    Reply to renz496
  15. The pricing of Vega will make or break the card. If it's anywhere near the price of a 1080 Ti I doubt I'll be buying it. If it's a couple hundred dollars less, then I'll buy 1 or 2 of them :) My machines need new cards pretty badly.
    Reply to Th3pwn3r
  16. Th3pwn3r said:
    The pricing of Vega will make or break the card. If it's anywhere near the price of a 1080 Ti I doubt I'll be buying it. If it's a couple hundred dollars less, then I'll buy 1 or 2 of them :) My machines need new cards pretty badly.


    So you wouldn't buy Vega if it was the same price as the 1080 Ti but had 10% more performance? I don't know how it will perform, but it seems you are assuming it will offer less for the same price.
    Reply to Martell1977
  17. renz496 said:
    No, it is not. If the game needs more than 4GB, then the Fury will not be able to cope with it. If that were possible, then AMD would not have needed to create the HBCC for Vega. In the case of the Fury, AMD had to play around with their drivers to lower the VRAM usage in games. They probably did it the same way Nvidia did VRAM management on their cards with odd memory configs.


    What I was referring to is that when the VRAM gets full and the GPU needs to swap data in and out, the bandwidth somewhat compensates and lowers the latency. It's not perfect, but the Fury X seems to manage. I'm sure there is more to it than just that, but I believe that is part of the equation.
    Reply to Martell1977
  18. Inspired by AdoredTV's latest work: The Division with a Ryzen 7 1700, AMD vs. Nvidia (DX11 vs. DX12). +30% gains with the AMD DX12 driver!

    https://www.reddit.com/r/Amd/comments/62n813/inspired_by_adoredtv_latest_work_the_division/?st=j0zlpi2u&sh=98f09e68

    "Results ( avg fps/cpu load/gpu load )
    AMD DX11: 128.4 fps CPU: 33% GPU: 71%
    NVIDIA DX12: 143.9 fps CPU: 42% GPU: 67%
    NVIDIA DX11: 161.7 fps CPU: 40% GPU: 80%
    AMD DX12: 189.8 fps CPU: 49% GPU: 86%
    it is really time for vega. Kudos to jim."
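    Doing the arithmetic on those quoted averages (nothing beyond the numbers above), it looks like the "+30%" is AMD's DX12 average versus Nvidia's DX12 average:

    ```cpp
    #include <cstdio>

    // Percentage change between two of the averages quoted above.
    static double pct(double from, double to) { return (to - from) / from * 100.0; }

    int main() {
        const double amdDx11 = 128.4, amdDx12 = 189.8;
        const double nvDx11  = 161.7, nvDx12  = 143.9;

        printf("AMD    DX11 -> DX12: %+.1f%%\n", pct(amdDx11, amdDx12));    // +47.8%
        printf("Nvidia DX11 -> DX12: %+.1f%%\n", pct(nvDx11, nvDx12));      // -11.0%
        printf("AMD DX12 vs Nvidia DX12: %+.1f%%\n", pct(nvDx12, amdDx12)); // +31.9%
        return 0;
    }
    ```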
    Reply to jaymc
  19. Martell1977 said:
    Th3pwn3r said:
    The pricing of Vega will make or break the card. If it's anywhere near the price of a 1080 Ti I doubt I'll be buying it. If it's a couple hundred dollars less, then I'll buy 1 or 2 of them :) My machines need new cards pretty badly.


    So you wouldn't buy Vega if it was the same price as the 1080 Ti but had 10% more performance? I don't know how it will perform, but it seems you are assuming it will offer less for the same price.


    Well, I'll have to factor in power consumption as well, but the 1080/1080 Ti are tried and true. We have real-world data from real people telling us how they perform, people who aren't working at Nvidia. What do we have for Vega? Just what AMD tells us. I got sick of AMD playing the waiting/delay game over a decade ago, and they still continue to stall things far too much in this day and age.
    Reply to Th3pwn3r
  20. Th3pwn3r said:
    Well, I'll have to factor in power consumption as well, but the 1080/1080 Ti are tried and true. We have real-world data from real people telling us how they perform, people who aren't working at Nvidia. What do we have for Vega? Just what AMD tells us. I got sick of AMD playing the waiting/delay game over a decade ago, and they still continue to stall things far too much in this day and age.


    That's because the 1080/1080 Ti have been released; Vega is an unknown to everyone but AMD. Every vendor picks benchmarks that make their product look as good as possible; that's just marketing.

    I seriously doubt AMD is delaying for no reason. They had a delay waiting for HBM2 to be ready and available in mass quantities for Vega, which means millions of chips. nVidia wasn't willing to wait and went with GDDR5(X) for their consumer products. I am not aware of any other delays, but I would think that at this point in time they are in production and testing, as making millions of GPUs is not a short process.

    AMD might be able to get fairly close to nVidia on power consumption this time, as they have the 14nm process + HBM2 and architecture refinements. However, this is all speculation as there is nothing concrete yet, but I really doubt AMD is going to release a 400W monster of a card that can't even keep up with the 1080; they can't afford that. This generation of GPUs and CPUs is kind of make-or-break for AMD. Ryzen has been a fair success and performs well, but is really just getting off the ground. Vega needs to be competitive in price, performance and power consumption.

    The nVidia praise for power consumption is interesting, since nVidia fanboys kept screaming how it didn't matter back in the Fermi days but now tout it as proof of their superiority. I think for most people price/performance is the main concern, and AMD tends to win that battle... like they are with the 470, 480 4GB and 480 8GB at the moment.
    Reply to Martell1977
  21. The real problem is that we're all spoiled. Most people want the best for the least, the most for the least. Some people don't care, though, and will pay any price for the absolute best they can get, like in the case of the Extreme Edition processors years ago that were $1,300 versus the processor right under them at $400.

    Anyhow, for me the power consumption is a big deal, because the less power consumed, the less heat created as a byproduct. I don't even have hard drives in my machines anymore due to power consumption. Plus, I learned that storing things is kind of a waste of space considering how well I can stream everything these days :)

    As of right now I have two machines running older AMD cards (R9 290s, I believe) and they still serve me pretty well for what I play most of the time, but my third machine is running a Gigabyte 1080 Xtreme Edition, which can handle the 4K gaming I use it for. So... depending on what happens with Vega, I may be investing the $1,400-1,600 in 1080 Ti cards, or whatever the price for two Vega cards will be. Some people have posted numbers ranging from $300-$600 for Vega depending on which variant you choose. Not sure where they came up with those numbers, but... they were there.
    Reply to Th3pwn3r
  22. Martell1977 said:
    renz496 said:
    No, it is not. If the game needs more than 4GB, then the Fury will not be able to cope with it. If that were possible, then AMD would not have needed to create the HBCC for Vega. In the case of the Fury, AMD had to play around with their drivers to lower the VRAM usage in games. They probably did it the same way Nvidia did VRAM management on their cards with odd memory configs.


    What I was referring to is that when the VRAM gets full and the GPU needs to swap data in and out, the bandwidth somewhat compensates and lowers the latency. It's not perfect, but the Fury X seems to manage. I'm sure there is more to it than just that, but I believe that is part of the equation.


    Martell1977 said:
    Th3pwn3r said:
    Well, I'll have to factor in power consumption as well, but the 1080/1080 Ti are tried and true. We have real-world data from real people telling us how they perform, people who aren't working at Nvidia. What do we have for Vega? Just what AMD tells us. I got sick of AMD playing the waiting/delay game over a decade ago, and they still continue to stall things far too much in this day and age.


    That's because the 1080/1080 Ti have been released; Vega is an unknown to everyone but AMD. Every vendor picks benchmarks that make their product look as good as possible; that's just marketing.

    I seriously doubt AMD is delaying for no reason. They had a delay waiting for HBM2 to be ready and available in mass quantities for Vega, which means millions of chips. nVidia wasn't willing to wait and went with GDDR5(X) for their consumer products. I am not aware of any other delays, but I would think that at this point in time they are in production and testing, as making millions of GPUs is not a short process.

    AMD might be able to get fairly close to nVidia on power consumption this time, as they have the 14nm process + HBM2 and architecture refinements. However, this is all speculation as there is nothing concrete yet, but I really doubt AMD is going to release a 400W monster of a card that can't even keep up with the 1080; they can't afford that. This generation of GPUs and CPUs is kind of make-or-break for AMD. Ryzen has been a fair success and performs well, but is really just getting off the ground. Vega needs to be competitive in price, performance and power consumption.

    The nVidia praise for power consumption is interesting, since nVidia fanboys kept screaming how it didn't matter back in the Fermi days but now tout it as proof of their superiority. I think for most people price/performance is the main concern, and AMD tends to win that battle... like they are with the 470, 480 4GB and 480 8GB at the moment.


    Nah, that goes both ways. Back in 2010 many on this forum were saying that using an Nvidia GTX 480 would make your power bill skyrocket by the end of the month, but when AMD started losing their power-efficiency competitiveness, no one mentioned those "power bill issues" anymore.
    Reply to renz496
  23. Th3pwn3r said:
    The real problem is that we're all spoiled. Most people want the best for the least, the most for the least. Some people don't care, though, and will pay any price for the absolute best they can get, like in the case of the Extreme Edition processors years ago that were $1,300 versus the processor right under them at $400.

    Anyhow, for me the power consumption is a big deal, because the less power consumed, the less heat created as a byproduct. I don't even have hard drives in my machines anymore due to power consumption. Plus, I learned that storing things is kind of a waste of space considering how well I can stream everything these days :)

    As of right now I have two machines running older AMD cards (R9 290s, I believe) and they still serve me pretty well for what I play most of the time, but my third machine is running a Gigabyte 1080 Xtreme Edition, which can handle the 4K gaming I use it for. So... depending on what happens with Vega, I may be investing the $1,400-1,600 in 1080 Ti cards, or whatever the price for two Vega cards will be. Some people have posted numbers ranging from $300-$600 for Vega depending on which variant you choose. Not sure where they came up with those numbers, but... they were there.


    And that's the thing that's killing AMD...
    Reply to renz496
  24. Found this on the Nvidia forums; it's in relation to AMD's GPUs working far better with DX12 than Nvidia's...

    The original poster keeps pushing for someone from Nvidia to answer, but to no avail... he does get some interesting info along the way, though...

    User supermanjoe states "In the architecture side, the main difference of the IHV implementations is the concurrency and interleaving support. In the practice GCN can work very well with any codepath. Pascal is better than Maxwell, but still not work well with mixed graphics-compute warp interleaving. But it won’t get negative scaling like Maxwell. Probably the secret of GCN is the ability to execute compute pipelines along with any graphics pipeline on the same CU with any state. But there are not much data on how they are doing this. GCN1 is not as good as GCN2/3/4 at this, but it is still the second best desing."

    This looks like a hardware problem for sure, if true that is, I guess... anyhow, the whole thread can be read here: https://forums.geforce.com/default/topic/964990/-iquest-the-future-of-directx-12-nvidia-/?offset=20

    So according to this poster, Nvidia doesn't know how AMD is doing it so well... and Maxwell got negative scaling with DX12... It appears they have a real problem here...

    Here's another quote from the same thread "In theory the best scenario for the game engines is to offload the long running compute jobs to a compute queue, so these can run asynchronously with the graphics pipelines, and this is the best case scenario for AMD too. But this is also the worst case scenario for Nvidia, even for Pascal. Now most games are designed for this, and Nvidia can’t handle it well."
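    To make the "compute queue" part of that quote concrete: in D3D12 it is literally a second command queue of type COMPUTE created next to the graphics (DIRECT) queue, and the whole async-compute debate above is about how well the GPU overlaps work submitted to the two. A minimal sketch of the setup (my own illustration, not code from that thread; error handling omitted):

    ```cpp
    #include <d3d12.h>
    #include <wrl/client.h>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        // Graphics work is submitted to a DIRECT queue...
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> gfxQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // ...while long-running compute jobs get their own COMPUTE queue.
        // Whether the hardware truly runs the two concurrently is exactly
        // what the GCN vs. Maxwell/Pascal discussion above is about.
        D3D12_COMMAND_QUEUE_DESC compDesc = {};
        compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> compQueue;
        device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&compQueue));

        // The queues only synchronize where the app asks them to, e.g. via a fence.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        compQueue->Signal(fence.Get(), 1); // compute marks progress point 1
        gfxQueue->Wait(fence.Get(), 1);    // graphics waits only on that point
        return 0;
    }
    ```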

    Well there ya have it..
    Jay
    Reply to jaymc
  25. jaymc said:
    Found this on the Nvidia forums; it's in relation to AMD's GPUs working far better with DX12 than Nvidia's...

    The original poster keeps pushing for someone from Nvidia to answer, but to no avail... he does get some interesting info along the way, though...

    User supermanjoe states "In the architecture side, the main difference of the IHV implementations is the concurrency and interleaving support. In the practice GCN can work very well with any codepath. Pascal is better than Maxwell, but still not work well with mixed graphics-compute warp interleaving. But it won’t get negative scaling like Maxwell. Probably the secret of GCN is the ability to execute compute pipelines along with any graphics pipeline on the same CU with any state. But there are not much data on how they are doing this. GCN1 is not as good as GCN2/3/4 at this, but it is still the second best desing."

    This looks like a hardware problem for sure, if true that is, I guess... anyhow, the whole thread can be read here: https://forums.geforce.com/default/topic/964990/-iquest-the-future-of-directx-12-nvidia-/?offset=20

    So according to this poster, Nvidia doesn't know how AMD is doing it so well... and Maxwell got negative scaling with DX12... It appears they have a real problem here...

    Here's another quote from the same thread "In theory the best scenario for the game engines is to offload the long running compute jobs to a compute queue, so these can run asynchronously with the graphics pipelines, and this is the best case scenario for AMD too. But this is also the worst case scenario for Nvidia, even for Pascal. Now most games are designed for this, and Nvidia can’t handle it well."

    Well there ya have it..
    Jay


    Both companies have their own ways of solving problems, but for certain things neither company is going to tell you how they handle them. AMD, for example, is not going to give the exact details of how FreeSync actually works with their GPUs; when pressed on it they will mention "secret sauce". Same with Nvidia: to explain their DX12 performance, Nvidia would most likely need to explain how they handle their DX11 optimization as well, and that's where Nvidia's "secret sauce" is at the moment.
    Reply to renz496
  26. Gforlife said:
    AndrewJacksonZA said:

    "What's new, though, is a Vega timetable: Su revealed that the Vega GPUs will ship during the second quarter as well."



    This could be because of the HBM2 memory used: if you look at the databook under graphics memory on skhynix.com, the HBM2 is only 204.8GB/s, not the 256GB/s that we expected. The 204.8GB/s would put it under the GTX 1080; perhaps they will roll out Vega with the 204.8GB/s HBM2, then start shipping Vega with the 256GB/s once it's fully ready. Makes me think about getting an RX 470/480 to replace my 7970 and holding off until 256GB/s HBM2 is guaranteed. The only other option is if they are going to use Samsung HBM2, but I don't know if they have reached 256GB/s either.


    The SK Hynix 4GiB 4-Hi HBM2 stack is rated for 256GB/s. If you use two of these stacks, you will have 8GiB and 512GB/s of bandwidth.
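    Both per-stack figures being thrown around fall out of the same formula, since every HBM2 stack has a 1024-bit interface: bandwidth per stack = 1024 bits x pin speed / 8. A quick check with the two speed grades mentioned above (1.6 Gbps and 2.0 Gbps):

    ```cpp
    #include <cstdio>

    int main() {
        const double busBits = 1024.0;        // interface width of one HBM2 stack
        const double pinGbps[] = {1.6, 2.0};  // the two speed grades discussed above

        for (double rate : pinGbps) {
            double perStack = busBits * rate / 8.0;  // GB/s per stack
            printf("%.1f Gbps pins: %.1f GB/s per stack, %.1f GB/s with two stacks\n",
                   rate, perStack, 2.0 * perStack);
        }
        return 0;
    }
    ```

    So the 204.8GB/s figure in the databook is just the 1.6 Gbps bin; the 2.0 Gbps bin gives 256GB/s per stack and 512GB/s for a two-stack card.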
    Reply to Kewlx25
  27. Yuka said:
    The problem for AMD here is how to tweak GCN to squeeze extra MHz out of it without throwing power out of the window. nVidia did an excellent job of tweaking Pascal to get a lot of extra hertz out of the GPU.

    This is just a broad sentiment I have about GCN, but AMD is being very stubborn about not dropping some parts of GCN that just waste space in a card that would be aimed at the consumer market first and the Pro market second. Having full HSA compliance when I haven't read anywhere that it's being used is just stubbornness in my eyes.

    In any case, Vega... I'm still thinking they'll have a GDDR5X variant and an HBM2 variant.

    Cheers!


    According to AMD, both effective throughput and efficiency are negatively affected by their large processing units. The 480 has overly large execution units that consume the same amount of power whether they're processing a smaller batch or a larger one. Not only does this mean lots of power is being used processing non-existent data, but those units are also being under-utilized.

    With Vega, they're making some of the execution units smaller and able to merge together. This lets smaller batches avoid consuming resources they don't need while still allowing larger batches high throughput. Of course, dynamically sized units make certain things more complex, but it's a large overall win.

    I think they said that this change alone will allow for a 20% reduction in power or a 20% increase in performance, depending on whether the executing code makes better use of the new setup or simply lets the unused execution units actually stay idle.

    Nvidia will still have a lead in power efficiency. AMD will need to make many other tweaks to become fully competitive, but they're in the same ballpark, so that's good. With my 1070 undervolted, I'm seeing about 150fps in Overwatch at 1080p Ultra at 30% TDP (50 watts). Pascal is crazy.
    Reply to Kewlx25
  28. renz496 said:

    Both companies have their own ways of solving problems, but for certain things neither company is going to tell you how they handle them. AMD, for example, is not going to give the exact details of how FreeSync actually works with their GPUs; when pressed on it they will mention "secret sauce". Same with Nvidia: to explain their DX12 performance, Nvidia would most likely need to explain how they handle their DX11 optimization as well, and that's where Nvidia's "secret sauce" is at the moment.


    You're wrong, AMD FreeSync is not a black box. FreeSync is a hardware implementation of the VESA DisplayPort 1.2a Adaptive-Sync standard; it's not secret technology. Nvidia could make their own version, but they won't, as they are too busy counting your money :)
    Reply to Tomasz_5
  29. Tomasz_5 said:
    renz496 said:

    Both companies have their own ways of solving problems, but for certain things neither company is going to tell you how they handle them. AMD, for example, is not going to give the exact details of how FreeSync actually works with their GPUs; when pressed on it they will mention "secret sauce". Same with Nvidia: to explain their DX12 performance, Nvidia would most likely need to explain how they handle their DX11 optimization as well, and that's where Nvidia's "secret sauce" is at the moment.


    You're wrong, AMD FreeSync is not a black box. FreeSync is a hardware implementation of the VESA DisplayPort 1.2a Adaptive-Sync standard; it's not secret technology. Nvidia could make their own version, but they won't, as they are too busy counting your money :)


    VESA Adaptive-Sync is indeed an open standard, but I'm not talking about that. I was talking about how FreeSync is technically handled inside the AMD GPU. If you ask them, they are not going to answer that. For example, we know how Nvidia G-Sync behaves when the frame rate drops below the 30FPS window limit (PCPer has an article explaining this), but when AMD is asked how exactly they handle this, they will not give you the details. PCPer, for their part, just assume that AMD is doing something similar to what Nvidia did with G-Sync, but they never confirmed that is really the way they do it.

    Also, despite it being an open standard, AMD was the only company present when the Adaptive-Sync spec was proposed. When companies propose a spec, they always do it the way their own hardware works, and in the Adaptive-Sync case there was no other company objecting to how Adaptive-Sync must be handled. So saying that Nvidia can simply adopt Adaptive-Sync in their GPUs is false: if the required hardware does not exist inside the Nvidia GPU, then no matter how open the spec is, Nvidia will not be able to implement it. Can Nvidia develop that required hardware inside their GPUs? Maybe they can, but is it as easy as that? What if AMD has a patent on that kind of hardware? If Nvidia tries to create something similar inside theirs, what if AMD sues Nvidia for violating an AMD patent? If I remember correctly, Intel mentioned that they were interested in supporting Adaptive-Sync, but it has been two or three years since then, and to this day we still have not seen any Intel GPUs capable of using Adaptive-Sync monitors. There are probably some hurdles preventing Intel from really adopting the tech in their GPUs.
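    For what it's worth, the below-the-window behaviour that has been described publicly (PCPer's G-Sync write-up, and what AMD later branded Low Framerate Compensation) boils down to repeating frames so the effective refresh rate stays inside the panel's supported range. A rough sketch of that idea only, not either vendor's actual implementation:

    ```cpp
    #include <cstdio>

    // Pick how many times to repeat each frame so the effective refresh rate
    // lands inside the panel's variable-refresh window. This only illustrates
    // the publicly described idea, not AMD's or Nvidia's real logic.
    int repeats(double contentFps, double panelMinHz, double panelMaxHz) {
        int n = 1;
        while (contentFps * n < panelMinHz && contentFps * (n + 1) <= panelMaxHz)
            ++n;
        return n;
    }

    int main() {
        // Example: a 40-144Hz panel with a game running at 25fps.
        double fps = 25.0;
        int n = repeats(fps, 40.0, 144.0);
        printf("Repeat each frame %dx -> panel refreshes at %.0f Hz\n", n, fps * n);
        return 0;
    }
    ```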
    Reply to renz496
  30. If I'm pretty sure I'm going to buy a 1080 Ti sometime in May or June, how long should I expect to hold out to see what Vega brings, just to be sure? I'm also going to be getting a 4K monitor, so I have to decide on FreeSync or G-Sync.
    Reply to axlrose
  31. axlrose said:
    If I'm pretty sure I'm going to buy a 1080 Ti sometime in May or June, how long should I expect to hold out to see what Vega brings, just to be sure? I'm also going to be getting a 4K monitor, so I have to decide on FreeSync or G-Sync.


    The obvious answer is "until it's officially benchmarked".

    Not even with a full list of official specifications would you be able to accurately gauge where it will place, so it's a moot point to even tell you to "hold out for it".

    What I can say is that if you get the 1080 Ti, you will still have top-notch performance when Vega comes out. As usual, the only factor you have to take into account is "am I willing to wait?". Everything else is noise.

    Cheers!
    Reply to Yuka
  32. Is there a Vega date yet?
    Reply to axlrose
  33. axlrose said:
    Is there a Vega date yet?


    There is none at the moment, but they said it might launch next month, together with Arkane's Prey.
    Reply to renz496
  34. renz496 said:

    VESA Adaptive-Sync is indeed an open standard, but I'm not talking about that. I was talking about how FreeSync is technically handled inside the AMD GPU. If you ask them, they are not going to answer that. For example, we know how Nvidia G-Sync behaves when the frame rate drops below the 30FPS window limit (PCPer has an article explaining this), but when AMD is asked how exactly they handle this, they will not give you the details. PCPer, for their part, just assume that AMD is doing something similar to what Nvidia did with G-Sync, but they never confirmed that is really the way they do it.

    Also, despite it being an open standard, AMD was the only company present when the Adaptive-Sync spec was proposed. When companies propose a spec, they always do it the way their own hardware works. [...]


    How AMD handles adaptive sync: we'll know exactly how they do that once DC lands in the Linux kernel. Heck, I guess if you look at the out-of-tree patches now, you'll see exactly how they do it. bridgman seems to be saying so (he's a well-known AMD employee prowling forums such as Phoronix and reddit).

    Why Nvidia pushes G-sync and not Adaptive Sync: vendor lock-in. Actually, it seems that they're using Adaptive Sync in their mobile chips as those panels don't make use of their G-sync clock generator - of course they don't advertise it as such. Tested here.

    Why Intel isn't implementing Adaptive Sync: with the rumours of Intel going for AMD for their future GPU, they actually will.

    Why is no one else using Adaptive Sync: there are no other GPU makers needing such a wide range of refresh rates, as other use cases can either use a proprietary screen clock controller (hand-held, mobile) or they don't need variable frame rate (media centres).
    Reply to mitch074
  35. Quote:
    Why Nvidia pushes G-sync and not Adaptive Sync: vendor lock-in. Actually, it seems that they're using Adaptive Sync in their mobile chips as those panels don't make use of their G-sync clock generator - of course they don't advertise it as such. Tested here.


    This stuff has been discussed a lot, but from what I can remember of what was being discussed back in 2014, the standard to make it work on regular monitors did not exist at the time. Some people familiar with eDP and how monitors work said the protocol for adaptive sync could not work over cables longer than 10cm; at least that was the case with eDP. Also, the G-Sync module on regular monitors is not just there for G-Sync functionality but also for uniform integration. Remember some of the early Adaptive-Sync monitors having issues working with FreeSync and having to be sent back to the manufacturer for firmware updates? With Nvidia G-Sync there is no such problem, because all issues are fixed by the Nvidia driver, unless you have a panel defect and need the panel to be physically replaced.
    Reply to renz496
  36. renz496 said:

    This stuff has been discussed a lot, but from what I can remember of what was being discussed back in 2014, the standard to make it work on regular monitors did not exist at the time. Some people familiar with eDP and how monitors work said the protocol for adaptive sync could not work over cables longer than 10cm; at least that was the case with eDP. Also, the G-Sync module on regular monitors is not just there for G-Sync functionality but also for uniform integration. Remember some of the early Adaptive-Sync monitors having issues working with FreeSync and having to be sent back to the manufacturer for firmware updates? With Nvidia G-Sync there is no such problem, because all issues are fixed by the Nvidia driver, unless you have a panel defect and need the panel to be physically replaced.

    Well, there is nothing that actually prevents updating a monitor's firmware over DP, except that no screen maker is ready to open up the API to do so - that's one main point for G-sync, but is it worth a $100 premium on EVERY screen? And, considering that Nvidia decided not to make use of that possibility when they finally managed to make ULMB work with Gsync on those panels that could have handled it, I'm not too convinced it is.

    G-sync was very useful at a time when this technology was still in its infancy - and truly, screens using G-sync and Adaptive Sync were very different at the beginning, with G-sync being much better. Nowadays, I'm not so sure - advantages like frame doubling being managed in the screen's module are compensated at no extra cost on the card's side with a driver option on AMD hardware, G-sync doesn't allow colour management by panel makers, and non-G-sync screens are now able to hit 144Hz+ too.

    As for cable length, I think I remember reading that it was the main reason why Nvidia split from the design group for adaptive sync: in early versions of the spec, no timing information was transferred and no one agreed on how to do this. The final version of the spec (which came out one year after G-sync hit the market) did mostly solve that problem but even then the actual implementation was quirky. Later revisions for the hardware pretty much did away with that though.
    Reply to mitch074
    Regardless of whether there is a module or not, part of the reason why G-Sync monitors are more expensive is also the tighter control and effort by Nvidia to make sure every G-Sync monitor out there provides the same experience. AnandTech recently published an article about AMD FreeSync 2, and in their discussion with AMD it seems AMD also agrees that if they want a more streamlined experience with FreeSync monitors, they need to work even more closely with panel makers instead of just letting panel makers do what they want, as with current Adaptive-Sync monitors. But working more closely means more effort and resources have to be spent on AMD's end. Right now AMD is thinking about charging royalties for FreeSync 2 (and upwards).

    Also, I think nothing stops monitor makers from making cheap G-Sync monitors, but the Nvidia branding does carry that premium "aura", and it seems monitor makers intend to fully exploit it. I still remember Nvidia saying in the beginning that they intended the first G-Sync monitors to cost no more than $400; in the end that turned out to be Nvidia's very own pipe dream. Don't underestimate board partners' desire to make a profit. Asus, for example, will not hesitate to charge premium dollars for their Nvidia-based products. Take their pricing for the GTX 1050 Ti Strix: they boldly priced it around $175-$180 despite knowing full well that AMD had dropped the price of the RX 470 4GB down to the $170 mark before Nvidia started selling their 1050s.
    Reply to renz496
  38. renz496 said:
    Regardless of whether there is a module or not, part of the reason why G-Sync monitors are more expensive is also the tighter control and effort by Nvidia to make sure every G-Sync monitor out there provides the same experience. AnandTech recently published an article about AMD FreeSync 2, and in their discussion with AMD it seems AMD also agrees that if they want a more streamlined experience with FreeSync monitors, they need to work even more closely with panel makers instead of just letting panel makers do what they want, as with current Adaptive-Sync monitors. But working more closely means more effort and resources have to be spent on AMD's end. Right now AMD is thinking about charging royalties for FreeSync 2 (and upwards).

    Also, I think nothing stops monitor makers from making cheap G-Sync monitors, but the Nvidia branding does carry that premium "aura", and it seems monitor makers intend to fully exploit it. I still remember Nvidia saying in the beginning that they intended the first G-Sync monitors to cost no more than $400; in the end that turned out to be Nvidia's very own pipe dream. Don't underestimate board partners' desire to make a profit. Asus, for example, will not hesitate to charge premium dollars for their Nvidia-based products. Take their pricing for the GTX 1050 Ti Strix: they boldly priced it around $175-$180 despite knowing full well that AMD had dropped the price of the RX 470 4GB down to the $170 mark before Nvidia started selling their 1050s.


    Looking it up, there are a few reasons why G-sync monitors are more expensive:
    • the G-sync module itself isn't cheap, and it's more expensive than a "normal" clock generator since it isn't produced in large enough quantities to become cheap; it also includes some electronics (RAM chips) that are found directly on the GPU's card in competitors' offerings
    • as it is a plug-in card and not simply soldered on, designing the screen's chassis is more difficult than a "standard" one as a connector always takes up more space than a handful of chips directly soldered on the PCB
    • DisplayPort, even without FreeSync, is expensive (that's the main reason behind AMD's FreeSync-over-HDMI initiative)
    • and, yes, since G-sync has a premium image, screen makers don't hesitate to bleed the consumer dry.

    I think the module costs $35, replaces $12 worth of components, but costs $20 more in design constraints - that's $43 extra over an Adaptive Sync screen. Double that for screen maker's margin on the feature, add taxes... You got your $100.
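    Taking those guesses at face value, the arithmetic does land near the typical premium (the ~15% tax rate here is just an example figure, not from the post):

    ```cpp
    #include <cstdio>

    int main() {
        // Figures guessed in the post above, taken at face value.
        double module = 35.0, replaced = 12.0, design = 20.0;
        double extraBom = module - replaced + design;  // $43 over an Adaptive Sync screen
        double retail   = extraBom * 2.0;              // doubled for the screen maker's margin
        double withTax  = retail * 1.15;               // example ~15% tax

        printf("Extra BOM: $%.0f, with margin: $%.0f, with tax: ~$%.0f\n",
               extraBom, retail, withTax);
        return 0;
    }
    ```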
    Reply to mitch074
  39. Isn't the RX 580 supposed to be launching today?

    Or is it delayed again:
    http://www.thetech52.com/amd-polaris-refresh-rx580-rx570-delayed-till-april-18th-ryzen-5-launch-ahead/

    Also, we have no sticky for the RX 500 series...

    Jay
    Reply to jaymc
  40. jaymc said:
    Isn't the RX 580 supposed to be launching today?
    The 570 and the 580 have launched: http://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020.html


    jaymc said:
    Also, we have no sticky for the RX 500 series...
    Because IMHO we don't need a sticky for the 500 series since it's just a refined 400 series with better manufacturing processes and higher clock speeds.
    Reply to AndrewJacksonZA
  41. Fair enough, so relevant posts go in there then...

    Maybe we could rename the 400 series sticky to cover both?

    It's a little confusing... especially for someone not too familiar with the architecture.
    Reply to jaymc
  42. AndrewJacksonZA said:
    jaymc said:
    Isn't the RX 580 supposed to be launching today?
    The 570 and the 580 have launched: http://www.tomshardware.com/reviews/amd-radeon-rx-580-review,5020.html


    jaymc said:
    Also, we have no sticky for the RX 500 series...
    Because IMHO we don't need a sticky for the 500 series since it's just a refined 400 series with better manufacturing processes and higher clock speeds.


    From many discussions I've heard, it is based on 14nm LPC instead of LPP. They said it is basically the same as LPP, only much cheaper to manufacture. For those hoping for higher performance at the same power consumption, there is not much improvement in that regard: officially, AMD lists the RX 580's TBP at 185W vs. 150W for the RX 480.
    Reply to renz496
  43. LPC and LPE? I thought that it was LPE and LPP:
    Low Power Early (for the first run of the process node,) and
    Low Power Performance/Plus (for the second/subsequent and more performant "refinements" of the process node.)
    https://www.globalfoundries.com/news-events/press-releases/globalfoundries-achieves-14nm-finfet-technology-success-for-next-generation-amd-products

    What are you referring to with "LPC" please?
    Reply to AndrewJacksonZA
  44. So the 500 series is not what we are hoping will compete with the 1080 Ti?
    Reply to axlrose
  45. axlrose said:
    So the 500 series is not what we are hoping will compete with the 1080 Ti?


    Could be, we don't know what the Vega cards will be called. I'm thinking there will be a 590, maybe a 590X and then whatever the Fury X successor is.
    Reply to Martell1977
  46. Hope it's called RX Vega. That name is just amazing.
    Reply to Gon Freecss
  47. They said they were gonna keep the codename for the GPU and call the final product "Vega". This was announced at Capsaicin & Cream.

    So "RX Vega" or just Radeon "Vega", I guess...
    Reply to jaymc
  48. jaymc said:
    They said they were gonna keep the codename for the GPU and call the final product "Vega". This was announced at Capsaicin & Cream.

    So "RX Vega" or just Radeon "Vega", I guess...


    I don't accept much of anything as set in stone until the launch event...especially with AMD. They might change it on a whim, you never know. But, Vega is a good name and we can hope it fares better than the Chevrolet Vega did, lol.
    Reply to Martell1977
  49. AndrewJacksonZA said:
    LPC and LPE? I thought that it was LPE and LPP:
    Low Power Early (for the first run of the process node,) and
    Low Power Performance/Plus (for the second/subsequent and more performant "refinements" of the process node.)
    https://www.globalfoundries.com/news-events/press-releases/globalfoundries-achieves-14nm-finfet-technology-success-for-next-generation-amd-products

    What are you referring to with "LPC" please?


    It's only from discussions I've heard. The "C" refers to cost, but some people also say only Samsung has an LPC, and instead of "cost" the C in Samsung's LPC refers to "Compact". Samsung's LPC was supposed to use much less power than LPP, but as we can see, the power consumption rating for this new 500 series did not improve much. So they say this is more like GF's own improvement on the initial LPP (some people refer to it as LPP+). It is a bit confusing, of course, but AMD themselves never really explained which exact process they use, and most reviewers did not have much interest in digging deeper in that regard beyond the cards' performance.
    Reply to renz496