AMD Vega MegaThread! FAQ and Resources - page 4

  1. mitch074 said:
    Thing is, with how DX11 is made, Nvidia is pretty much impossible to beat with its current chips - AMD built Mantle and GCN together to show how much of a dead end DX11 (and OpenGL) were, with wasted resources and bottlenecks all around, and is now pushing GCN and DX12/Vulkan to remove as many of them as they could.

    I'm impressed with AMD for managing to change the playground so much (they moved both Microsoft and Intel over to their side with their low-overhead APIs and managed to force Nvidia to leave their comfort zone) and actually managing to provide worthwhile products in an environment where juggernauts with ten times their budget just lug their weight around to win developers over. Managing to duke it out in GameWorks titles and kicking ass in games targeted at their architectures (Doom, DE:MD) is quite an accomplishment when they don't have half the R&D and production capacities their competitors have.


    Part of it is thanks to both the PS4 and Xbox One using AMD hardware, and that hardware uses the exact same architecture as AMD's current desktop GPUs. Right now many multi-platform games run faster on AMD hardware even when no low-level API is involved - some of them are even Nvidia-sponsored titles, like The Division, Titanfall 2 and RE7, to name a few. Nvidia knows this, and that's why they are back in the console business. It is also probably the reason Nvidia keeps releasing faster GPUs like the 1080 Ti while AMD doesn't even have a proper GPU to compete with the 1080. Those low-level APIs might favor AMD hardware, but Nvidia counters with faster GPUs that AMD can't reach even with the help of a low-level API.
    Reply to renz496
  2. renz496 said:
    Part of it is thanks to both the PS4 and Xbox One using AMD hardware, and that hardware uses the exact same architecture as AMD's current desktop GPUs. Right now many multi-platform games run faster on AMD hardware even when no low-level API is involved - some of them are even Nvidia-sponsored titles, like The Division, Titanfall 2 and RE7, to name a few. Nvidia knows this, and that's why they are back in the console business. It is also probably the reason Nvidia keeps releasing faster GPUs like the 1080 Ti while AMD doesn't even have a proper GPU to compete with the 1080. Those low-level APIs might favor AMD hardware, but Nvidia counters with faster GPUs that AMD can't reach even with the help of a low-level API.


    While Nvidia might be back in the console business, the Switch isn't exactly a graphics powerhouse - especially when you consider that they are forced to use their competitor's technology (i.e. Vulkan) to actually make it.

    Then, if GameWorks games work so well on AMD hardware, it could be because of AMD's versatility: a driver update solved The Witcher 3's speed problems a few weeks after the game came out, while we're still waiting for Nvidia's driver that enables async compute in Doom. AMD devs did say that several units inside their recent GCN revisions can be repurposed on the fly with a new driver. I personally think that's why AMD isn't letting go of GCN: the flexibility of that solution is much better IMHO than Nvidia's approach, which needs to re-spin a whole new architecture with every new card family to squeeze more horsepower for current apps.

    My take is that Nvidia may be leading when it comes to pure horsepower while AMD's solution is more flexible; I would put my bets on the latter's long-term viability. I just hope AMD is strong enough to see it really come to fruition.
    Reply to mitch074
  3. Quote:
    While Nvidia might be back in the console business, the Switch isn't exactly a graphics powerhouse - especially when you consider that they are forced to use their competitor's technology (i.e. Vulkan) to actually make it.


    It's not about how powerful the console is. The intention is for game developers to become more familiar with Nvidia's more recent architectures when using low-level APIs. And while Nintendo did say they are supporting Vulkan, the console has a specific low-level API developed by Nvidia (NVN). Some developers have even said that developing for the Switch is easier than, for example, for the PS4.

    Quote:
    Then, if GameWorks games work so well on AMD hardware, it could be because of AMD's versatility: a driver update solved The Witcher 3's speed problems a few weeks after the game came out,


    That's because CDPR allowed AMD (and the public) to access and override the in-game settings through AMD CCC. With Batman: Arkham Origins, for example, the developer did not give AMD the access it requested to override the tessellation setting in the game. So it still depends on how open the game developer/publisher is towards the IHVs.

    Quote:
    while we're still waiting for Nvidia's driver that enables async compute in Doom.


    Nvidia's async compute does not work the same way as AMD's async compute. From what I understand, in the case of Doom the developer is also using specific extensions to hit hardware in AMD's GCN that doesn't exist on Nvidia GPUs. That is one of the reasons it is faster on AMD hardware under Vulkan.
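
    As a loose analogy for what async compute is about - keeping independent work in flight so the shader units don't sit idle - here is a sketch using CUDA streams, simply because that's the API I can write from memory. This is not how Doom or any driver actually does it (GCN uses dedicated hardware queues, the ACEs, to overlap graphics and compute); it just shows the scheduling idea of submitting independent jobs to separate queues:

        #include <cuda_runtime.h>

        __global__ void graphics_like_work(float *a, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) a[i] = a[i] * 0.5f + 1.0f;   // stand-in for shading work
        }

        __global__ void compute_like_work(float *b, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) b[i] = b[i] * b[i];          // stand-in for post-processing/physics
        }

        void submit(float *a, float *b, int n) {
            cudaStream_t s1, s2;
            cudaStreamCreate(&s1);
            cudaStreamCreate(&s2);
            // Two independent queues: the hardware is free to overlap them and
            // fill idle units instead of running one job strictly after the other.
            graphics_like_work<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
            compute_like_work<<<(n + 255) / 256, 256, 0, s2>>>(b, n);
            cudaStreamSynchronize(s1);
            cudaStreamSynchronize(s2);
            cudaStreamDestroy(s1);
            cudaStreamDestroy(s2);
        }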

    Quote:
    I personally think that's why AMD isn't letting go of GCN: the flexibility of that solution is much better IMHO than Nvidia's approach, which needs to re-spin a whole new architecture with every new card family to squeeze more horsepower for current apps.


    I don't think it's like that. AMD has hardware in the two major consoles, so many game engines and games are developed with AMD hardware in mind first, and over time their hardware features get used more and more. Nvidia, for their part, needs to change their architecture to keep up with this, not just to increase raw power. Just look at Pascal itself: design-wise it is nearly identical to Maxwell, except for GP100. Why? Maybe because there haven't been many changes to AMD's GCN over the years, so Nvidia didn't really need to change the base design going from Maxwell to Pascal... except for the async-compute-related stuff. Then again, Nvidia's architectures generally don't really need async compute because their utilization is already good to begin with, unlike AMD's. That's why we often see Nvidia cards with much lower theoretical raw performance matching AMD cards that are supposed to have much higher theoretical performance.
    Reply to renz496
  4. AMD Radeon RX Vega Specifications Leak In Linux Driver Update - Alleged Benchmarks Surface:

    http://www.game-debate.com/news/22847/amd-radeon-rx-vega-specifications-leak-in-linux-driver-update-alleged-benchmarks-surface
    Reply to jaymc
  5. It has been discussed a lot these past few days on other forums, but here at Tom's it seems no one really cares about it. As usual, I will wait for reviews. Many people out there want AMD to take the crown from Nvidia so badly that, looking at the trend of AMD releases these past few years, the hype is crazy. In fact I'm starting to suspect that some people do this to AMD on purpose, so that when the product doesn't meet the hyped expectations it gets regarded as a "bad product" by many people. It doesn't help AMD that they also started hyping their products very early, beginning with the Polaris generation. Remember when Raja said they were confident they were a few months ahead of their competitor when it came to FinFET? In the end, Nvidia still beat them to market with a FinFET process.
    Reply to renz496
  6. current assessment by AMD about Vega's performance VS the 1080: "it's nice." Not exactly overhyping...
    Reply to mitch074
  7. mitch074 said:
    current assessment by AMD about Vega's performance VS the 1080: "it's nice." Not exactly overhyping...


    That "it's nice" was referring to how Vega compares to the 1080 Ti and the new Titan Xp, not the regular 1080.

    Check PandaNation's question and the direct answer to it in this AMA thread:

    http://www.tomshardware.com/forum/id-3378010/join-tom-hardware-amd-thursday-april-6th/page-3.html

    In AMD's case any kind of hype is very dangerous, no matter where it comes from. Last year AMD tried to let people know, a few months before launch, that Polaris was not going to compete with Nvidia's GP104 performance-wise, but even with the tip coming directly from AMD some people chose to ignore it and only wanted to believe the good things they heard about Polaris.
    Reply to renz496
  8. In-depth look at some of the more "Interesting Features" on Vega...

    There's a lot crammed into this video... found myself pausing and rewinding a few times to take it all in..

    Interesting stuff all the same.. the YouTuber states that where Ryzen is weak in AVX, Vega is strong..
    This is important, and with good reason: you can see the puzzle start to come together here with regard to AMD's server solution..
    He goes on to say that this synergy may give AMD the ability to take advantage of a hardware "niche" in HPC..
    I also believe this is the kicker that will hopefully have us start hearing of design wins in HPC for Naples & Vega.. but I digress.

    Also note at 25:05 how the Infinity Fabric connects the L2 cache on the GPU directly to the CPU and the PCI Express lanes - interesting..

    Vega & Zen together look like quite a team.

    https://youtu.be/m5EFbIhslKU
    Reply to jaymc
  9. As interesting as it might be, let's see the adoption first...
    Reply to renz496
  10. jaymc said:

    Also note at 25:05 how the Infinity Fabric connects the L2 cache on the GPU directly to the CPU and the PCI Express lanes - interesting..

    Vega & Zen together look like quite a team.


    Makes me wonder if AMD put something in Ryzen/Vega that will take advantage of that access. Could we see the GPU offloading work to the CPU automatically at a hardware level?
    Reply to grndzro7
  11. grndzro7 said:
    Makes me wonder if AMD put something in Ryzen/Vega that will take advantage of that access. Could we see the GPU offloading work to the CPU automatically at a hardware level?



    They've called that HSA, if I'm not mistaken. It already works in their APUs, I wonder if they're not trying to do that with Ryzen+Vega...
    Reply to mitch074
  12. mitch074 said:
    grndzro7 said:
    Makes me wonder if AMD put something in Ryzen/Vega that will take advantage of that access. Could we see the GPU offloading work to the CPU automatically at a hardware level?



    They've called that HSA, if I'm not mistaken. It already works in their APUs, I wonder if they're not trying to do that with Ryzen+Vega...


    You perhaps didn't read the phrase "hardware level"

    HSA doesn't necessarily work at a hardware level. It's mainly about data access. I'm talking about taking hardware blocks designed for processing specific common workloads normally targeted at GPUs and making them run on the CPU, similar to instruction set extensions.
    Reply to grndzro7
  13. grndzro7 said:
    You perhaps didn't read the phrase "hardware level"

    HSA doesn't necessarily work at a hardware level. It's mainly about data access. I'm talking about taking hardware blocks designed for processing specific common workloads normally targeted at GPUs and making them run on the CPU, similar to instruction set extensions.


    No, I did read it well enough. If that weren't the case and it were software-only, HSA would work as soon as you use an Athlon and a GCN card together - but it doesn't. Why? Because of common addressing and resource sharing: you'd need a memory controller on the CPU able to converse with the GPU's VRAM controller directly. Thus, hardware-level support. Moreover, your hardware block processing GPU tasks is... an integrated GPU, so you'd basically get an APU - HSA again.
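
    To show what that shared addressing buys you in practice, here's a minimal sketch of the concept. I'm writing it with CUDA's managed memory because that's the API I can quote from memory - it is not AMD's HSA runtime, but ROCm/HSA expose the same "one pointer, both processors" idea on their hardware:

        #include <cuda_runtime.h>
        #include <cstdio>

        // GPU kernel: scales every element in place.
        __global__ void scale(float *data, int n, float factor) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= factor;
        }

        int main() {
            const int n = 1 << 20;
            float *data = nullptr;

            // One allocation visible to both CPU and GPU - no explicit copies.
            // This shared address space is the core of the HSA idea.
            cudaMallocManaged(&data, n * sizeof(float));

            for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes
            scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU works on the same pointer
            cudaDeviceSynchronize();
            printf("data[0] = %f\n", data[0]);               // CPU reads the GPU's result

            cudaFree(data);
            return 0;
        }

    On an APU the "no copies" part is literal; on a discrete card the runtime still migrates pages behind the scenes, which is exactly why a hypothetical coherent Ryzen+Vega link would be interesting.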
    Reply to mitch074
  14. Rumor has it there are 3 different variations of Vega - 867FXX C1, C2, C3 - with the slowest being close to a 1070. The highest-end card is getting a Time Spy score of 9753 at a core clock of 1600MHz, and the lowest 5950 at a core clock of 1200MHz. A stock 1080Ti gets around 9500, a 1080 around 7400, and a 1070 around 6000. https://www.youtube.com/watch?v=Wb-kZlbPOqM
    Reply to goldstone77
  15. Yeah but when can we actually buy one?
    Reply to axlrose
  16. goldstone77 said:
    Rumor has it there are 3 different variations of Vega - 867FXX C1, C2, C3 - with the slowest being close to a 1070. The highest-end card is getting a Time Spy score of 9753 at a core clock of 1600MHz, and the lowest 5950 at a core clock of 1200MHz. A stock 1080Ti gets around 9500, a 1080 around 7400, and a 1070 around 6000. https://www.youtube.com/watch?v=Wb-kZlbPOqM


    AFAIK the only "real" one was the one that scored around 1070 performance. The score is available in the 3DMark database:

    http://www.3dmark.com/spy/1544741

    TBH I don't know why some people treat a prank from the WCCFTECH comment section as a rumor.
    Reply to renz496
  17. Well, that sucks! I guess we will have to wait till Computex to find out the deal.... :( AMD said it goes up for sale in June.

    also,
    "AMD will finally be disclosing more information about its next generation CPU & graphics architectures Vega, Navi and Zen+ in 10 days. The company is set to unveil its long-term CPU & graphics roadmaps for 2017 and beyond in a little over a week, sources close to AMD have told us. If you’ve been waiting to hear more about Vega, Navi & Zen+ make sure to tune in to wccftech on Tuesday May 16th."
    http://wccftech.com/amd-taking-the-covers-off-vega-navi-may-16th/
    If we can trust anything they say...
    Reply to goldstone77
  18. goldstone77 said:
    Rumor has it there are 3 different variations of Vega - 867FXX C1, C2, C3 - with the slowest being close to a 1070. The highest-end card is getting a Time Spy score of 9753 at a core clock of 1600MHz, and the lowest 5950 at a core clock of 1200MHz. A stock 1080Ti gets around 9500, a 1080 around 7400, and a 1070 around 6000. https://www.youtube.com/watch?v=Wb-kZlbPOqM


    Good video... Looks very promising indeed.. it says 1600MHz (maybe this is liquid cooled), maybe a cherry-picked chip.

    Or could we be looking at even more headroom, I wonder? Sounds like wishful thinking but I reckon it's possible, we shall see I guess.. especially as the process matures. It is designed from the ground up as a high-speed architecture, and we have Polaris coming in at 1400MHz.

    And then I hear there will be liquid cooling options as well...

    It seems they're being very careful not to over-hype this time round... but all in all, with the more info we're getting, it's starting to look more and more beastly.. Happy days!

    And FP16 coming in at 25 teraflops, WOW... if they can get devs to take advantage of this.. but they will have the option to avail of it in Polaris on the consoles.. so this should help, and give more oomph to the console games.. (and the ports to PC) :)
    Reply to jaymc
  19. goldstone77 said:
    Well, that sucks! I guess we will have to wait till Computex to find out the deal.... :( AMD said it goes up for sale in June.

    also,
    "AMD will finally be disclosing more information about its next generation CPU & graphics architectures Vega, Navi and Zen+ in 10 days. The company is set to unveil its long-term CPU & graphics roadmaps for 2017 and beyond in a little over a week, sources close to AMD have told us. If you’ve been waiting to hear more about Vega, Navi & Zen+ make sure to tune in to wccftech on Tuesday May 16th."
    http://wccftech.com/amd-taking-the-covers-off-vega-navi-may-16th/
    If we can trust anything they say...


    They will tell us a bit more information, but is AMD also going to launch Vega on the same day? In the end it could be another Vega T-shirt giveaway... just like they did at the late-February event.
    Reply to renz496
  20. Quote:
    Good video... Looks very promising indeed.. it says 1600MHz (maybe this is liquid cooled), maybe a cherry-picked chip.


    The score showing Vega matching 1080 Ti performance comes from the prank in the WCCFTECH comment section.

    Quote:
    And FP16 coming in at 25 teraflops, WOW... if they can get devs to take advantage of this.. but they will have the option to avail of it in Polaris on the consoles.. so this should help, and give more oomph to the console games.. (and the ports to PC)


    Hard to say, because according to many game developers most modern game development relies more and more on FP32, especially if you are aiming for console-quality graphics and above. Unless you want your game to look like one from 2005, relying heavily on FP16 is a no-go. AMD did demo TressFX using FP16 back at the February event: with FP16 they were able to render twice as many hairs for the same performance as with FP32. But that demo rendered only the hair, not an entire game - that should give you a hint as to why they did it that way.

    There is talk about mixing FP16 and FP32 (logically this is the only way to use FP16 in a current-generation game without severely affecting image quality), but special attention is needed on the optimization side, or else there will be no performance benefit from FP16. Worse, it might end up costing more effort while the performance difference compared to using FP32 everywhere is nonexistent. This is the major reason developers haven't done it, even though our GPUs have been capable of it for years.

    The one really pushing for FP16 in games is Imagination Technologies, but they push it because they are aware that games on mobile are not as complex as games on home consoles and PC. They do it so they can offer higher performance in the power- and bandwidth-limited environment of a mobile SoC.
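
    To make the packed-math idea concrete, here's a rough sketch written with CUDA's half2 intrinsics, simply because that's what I can write from memory - Vega's Rapid Packed Math exposes the same two-values-per-register idea, but this is not AMD's API and not shader code from any actual game:

        #include <cuda_fp16.h>
        #include <cuda_runtime.h>

        // FP32 path: one multiply-add per element per instruction.
        __global__ void shade_fp32(const float *a, const float *b, float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = a[i] * b[i] + 1.0f;
        }

        // Packed FP16 path: two half-precision values share one 32-bit register,
        // so each instruction does the work of two - that's where the "2x FLOPS"
        // figure comes from (needs a GPU with native FP16 arithmetic).
        __global__ void shade_fp16x2(const __half2 *a, const __half2 *b, __half2 *out, int n2) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n2) out[i] = __hfma2(a[i], b[i], __float2half2_rn(1.0f));
        }

    The catch described above is exactly this: the FP16 path only pays off for effects (hair, some post-processing) where the reduced precision doesn't visibly hurt, and the data has to be packed two-wide, which is extra optimization work for the developer.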
    Reply to renz496
  21. For consumers, for the time being, I don't think there's a real benefit to using FP32, but anyone running complex calculation programs that need the precision will suffer.

    This is starting to feel like the "32-bit vs 16-bit color depth" debate from the 3dfx vs nVidia days. The difference is that, at the time, nVidia did have a tangible difference to show; I'm not so sure this "FP16 vs FP32" is tangible for consumers playing games.

    Cheers!
    Reply to Yuka
  22. @renz496

    Yeah, that's what I meant: certain aspects of the game can be rendered in FP16... Actually, why does Nvidia have this feature disabled? I heard it's because of money, that they prefer to sell it as an extra in HPC?

    I believe it might hit 1550MHz on the reference boards; I was reading that this is roughly what's required to hit 12.5 TFLOPS.. we live in hope I guess.
    This is not really that much of a stretch considering the Sapphire RX 580 is hitting 1400MHz+ and the Vega chip is designed for faster clock speeds.
    Reply to jaymc
  23. While it's true that Nvidia limits their fast FP16 support to Tesla (for deep learning), that's also because the majority, if not all, of console/PC game development uses FP32 only, especially given how complex modern games look. The thing with FP16 is that it makes development more complicated on the optimization side; if it's not done properly you probably won't gain any of the advantage FP16 is supposed to give you. To save time and effort, game developers just use FP32 for everything.

    Also, one of the reasons to use FP16 is to save on power and bandwidth, but our hardware on PC is definitely not limited in that way - just look at how much raw performance is available on the current 1080 Ti. We only become performance-limited once we push crazy resolutions like 4K and above, and we expect another 15%-20% more performance on top of the 1080 Ti next year.

    And one of the biggest hurdles in game development is how rushed the majority of games are to market - a day-one patch is almost the norm for every game. Developers are already busy fixing issues in their own games, so adding more complexity to their optimization effort is definitely not something they want on their already busy to-do list.
    Reply to renz496
  24. Thanks for that, a very enlightening perspective for sure.

    Here's a "rumour" - don't know if you guys have seen it yet.. but it's alarming if true, to say the least..

    Check it out here: http://www.tweaktown.com/news/57418/amd-radeon-rx-vega-less-20-000-available-launch/index.html
    Reply to jaymc
  25. DDR4 had about a 300% price premium against DDR3 when it first came out. How long did that last? Barely a month if I recall correctly, then it started dropping pretty fast until the current supply crunch. The timing on that could be really unfortunate if it's causing supply/price constraints on the Vega launch. Even if it's a limited release, at least it will give benchmarkers a chance to test Ryzen against Intel with a high end AMD card.
    Reply to TMTOWTSAC
  26. Oh the pain... Here's a couple of links outlining the problems SK Hynix are obviously having with HBM2 2.0Gbps memory...

    https://videocardz.com/65649/sk-hynix-updates-memory-product-catalog-hbm2-available-in-q1-2017

    http://www.isportstimes.com/articles/22787/20170208/sk-hynix-hbm2-lower-bandwidth-affect-amd-bid-to-beat-nvidia.htm

    I don't see why they don't just go to Samsung for their HBM2 like Nvidia did for Quadro.. ??
    Reply to jaymc
  27. Seen that rumor at VCZ (though it originated from TweakTown). Is the yield for HBM2 really that bad at SK Hynix? Only 16k at launch? AMD could probably improve supply if they used much slower HBM - even for the Quadro GP100, the HBM2 used on the card is only rated at 1.4Gbps (and SK Hynix HBM2 was supposed to operate at 1.6Gbps at the very least).

    http://www.anandtech.com/show/11102/nvidia-announces-quadro-gp100

    Also, AMD's main partner in developing HBM is SK Hynix; Nvidia has been using Samsung's solution from the very beginning. If there really is an issue with HBM2 yields, Nvidia has probably ended up using all the HBM2 Samsung can produce to date. Demand for the Tesla P100 was very high even at launch because some clients placed orders (on the order of thousands of GPUs per client) before the card officially launched. AFAIK the majority, if not all, of the GP100s Nvidia was able to produce were fully booked in 2016 - that's why the Quadro GP100 only came out almost a year after Nvidia officially launched the Tesla GP100.
    Reply to renz496
  28. AMD Vega Dual GPU Liquid Cooled Graphics Card Spotted In Linux Drivers:

    http://wccftech.com/amd-dual-gpu-vega-liquid-cooled-graphics-card-spotted-in-linux-drivers/
    Reply to jaymc
  29. jaymc said:
    AMD Vega Dual GPU Liquid Cooled Graphics Card Spotted In Linux Drivers:

    http://wccftech.com/amd-dual-gpu-vega-liquid-cooled-graphics-card-spotted-in-linux-drivers/


    I like it! Now that looks like it would give the 1080Ti and Titan good competition!
    Reply to goldstone77
  30. Though AMD tends to price their dual-GPU cards very high. Right now AMD calls them Pro Duo. Dual Vega? Expect no less than $1500 - that's how much the dual-Fiji card cost last time. And recently AMD did release a dual-Polaris card, priced at $1k; you can get 4-5 RX 480s for that much :D
    Reply to renz496
  31. I have been hoping for this news for a long time.... I think at first it may be marketed as a server part (at a ridiculous price).. but eventually I think it will be released to the consumer market at a much more reasonable price.

    Also I think it's going to be utilized through CrossFire; I have seen lots and lots of CrossFire updates in driver support over the last 6 months or so.. kinda been waiting for this as a result... and it's working excellently, btw.
    Or it may be managed by a chip or firmware on the card itself which alternates work between the GPUs.. which would be great, but not that likely..

    Edit:
    There's code in the driver for a PLX chip.. which looks like it's there to increase the number of PCI Express lanes (16x/16x) going to the slots... Possibly aimed at X390/X399?
    I know that X370 only has 24 PCI Express lanes (16x/8x). I've heard nothing of PLX chips so far..
    Reply to jaymc
  32. I know this isn't the right subject for this thread but I wasn't sure where to put this. http://hothardware.com/news/nvidia-debuts-tesla-v100-volta-gpu-dgx-1-at-gdc-2017
    Reply to goldstone77
  33. goldstone77 said:
    I know this isn't the right subject for this thread but I wasn't sure where to put this. http://hothardware.com/news/nvidia-debuts-tesla-v100-volta-gpu-dgx-1-at-gdc-2017


    We have a specific thread for Nvidia here, though it might be time to make a new one since that thread is dedicated to Pascal.
    Reply to renz496
  34. For the dual GPU talk: AMD has always used special interconnects in their PCBs with dualies to increase effective bandwidth between the GPUs. The benefit has always been on the low side, since inter-GPU talk has never really been a thing, AFAIK. Maybe with DX12 and Vulkan that has changed? Not sure. In any case, extra PCI lanes on the MoBo are worse than having them on the GPU PCB directly, but I guess it's harder to code for that way (i.e. driver dependency for effective use of it).

    However it turns out, it has been proven that dual-GPU configs are good for numbers, but not that good for the real-life experience. I have to wonder if DX12 and Vulkan would alleviate that.

    Cheers!
    Reply to Yuka
  35. Yuka said:
    For the dual GPU talk: AMD has always used special interconnects in their PCBs with dualies to increase effective bandwidth between the GPUs. The benefit has always been on the low side, since inter-GPU talk has never really been a thing, AFAIK. Maybe with DX12 and Vulkan that has changed? Not sure. In any case, extra PCI lanes on the MoBo are worse than having them on the GPU PCB directly, but I guess it's harder to code for that way (i.e. driver dependency for effective use of it).

    However it turns out, it has been proven that dual-GPU configs are good for numbers, but not that good for the real-life experience. I have to wonder if DX12 and Vulkan would alleviate that.

    Cheers!


    The dual GPU setups have offered close to twice the performance in the past.




    http://hothardware.com/reviews/amd-radeon-pro-duo-benchmarks

    GeForce + Radeon: Previewing DirectX 12 Multi-Adapter with Ashes of the Singularity
    by Ryan Smith on October 26, 2015 10:00 AM EST


    http://images.anandtech.com/graphs/graph9740/78166.png

    "Ultimately as gamers all we can do is take a wait-and-see approach to the whole matter. But as DirectX 12 game development ramps up, I am cautiously optimistic that positive experiences like Ashes will help encourage other developers to plan for multi-adapter support as well."

    http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview

    I think we will see great performance from dual GPU cards with DirectX 12.

    EDIT: found a couple more benchmarks



    http://wccftech.com/amd-radeon-pro-duo-benchmark-results-leaked/
    Reply to goldstone77
  36. Imagine running two of these Dual GPU Vega's in Crossfire... (four Vega's) haha... wowzers :)
    Reply to jaymc
  37. jaymc said:
    Imagine running two of these Dual GPU Vega's in Crossfire... (four Vega's) haha... wowzers :)


    I'm willing to see the glass as half empty (rather than half full), just because the stuttering AMD has had in the past with their multi-GPU configs has been horrible.

    So, that being said, it doesn't really matter how good the internal shenanigans of the PCB are if the drivers are going to make the dualies suck. Hence, I'm wondering if Vulkan or DX12 will give the dual-GPU boards some advantage over what has been out there in the past (fully dependent on drivers).

    Cheers!

    EDIT: Striked bit.
    Reply to Yuka
  38. Yuka said:
    jaymc said:
    Imagine running two of these Dual GPU Vega's in Crossfire... (four Vega's) haha... wowzers :)


    I'm willing to see the glass as half empty (rather than half full), just because the stuttering AMD has had in the past with their multi-GPU configs has been horrible.

    So, that being said, it doesn't really matter how good the internal shenanigans of the PCB are if the drivers are going to make the dualies suck. Hence, I'm wondering if Vulkan or DX12 will give the dual-GPU boards some advantage over what has been out there in the past (fully dependent on drivers).

    Cheers!

    EDIT: Striked bit.


    Realistically, based on past performance, expect around a 50% gain in some games, DirectX 12 or not.
    Reply to goldstone77
  39. goldstone77 said:
    Yuka said:
    jaymc said:
    Imagine running two of these Dual GPU Vega's in Crossfire... (four Vega's) haha... wowzers :)


    I'm willing to see the glass as half empty (rather than half full), just because the stuttering AMD has had in the past with their multi-GPU configs has been horrible.

    So, that being said, it doesn't really matter how good the internal shenanigans of the PCB are if the drivers are going to make the dualies suck. Hence, I'm wondering if Vulkan or DX12 will give the dual-GPU boards some advantage over what has been out there in the past (fully dependent on drivers).

    Cheers!

    EDIT: Striked bit.


    Realistically, based on past performance, expect around a 50% gain in some games, DirectX 12 or not.


    Can DX12 break up the workload and farm it out to each GPU..?
    Reply to jaymc
  40. The thing with DX12 is that the job is supposed to be done by the game maker - that was the main pitch behind DX12: more direct control for game developers instead of relying on IHV support. But realistically, if you have been paying attention to what has happened since DX12 was introduced with Windows 10, the majority of game developers have no desire to do it themselves. Recently Gears of War 4 rolled out the long-awaited multi-GPU support everyone had been waiting for, but to use multi-GPU you still need one of Nvidia's latest drivers that includes multi-GPU support for the game. And here I thought DX12 multi-GPU was totally independent of IHV drivers (it can be, like it was done in Ashes, hence two 1060s are able to work together in that game despite Nvidia officially not supporting SLI on that card), meaning that even with old drivers multi-GPU should work as long as you patch the game to the latest update. That's why some people say DX12 will be the last nail in the coffin for multi-GPU: you're pushing the responsibility from the one that wants to push the tech (the GPU maker) to the one that tries to avoid using the tech as much as possible (the game maker).
    Reply to renz496
  41. renz496 said:
    The thing with DX12 is that the job is supposed to be done by the game maker - that was the main pitch behind DX12: more direct control for game developers instead of relying on IHV support. But realistically, if you have been paying attention to what has happened since DX12 was introduced with Windows 10, the majority of game developers have no desire to do it themselves. Recently Gears of War 4 rolled out the long-awaited multi-GPU support everyone had been waiting for, but to use multi-GPU you still need one of Nvidia's latest drivers that includes multi-GPU support for the game. And here I thought DX12 multi-GPU was totally independent of IHV drivers (it can be, like it was done in Ashes, hence two 1060s are able to work together in that game despite Nvidia officially not supporting SLI on that card), meaning that even with old drivers multi-GPU should work as long as you patch the game to the latest update. That's why some people say DX12 will be the last nail in the coffin for multi-GPU: you're pushing the responsibility from the one that wants to push the tech (the GPU maker) to the one that tries to avoid using the tech as much as possible (the game maker).


    That's not exactly true - a game had to be tailored for multi GPU in older revisions of DX, and the GPUs usually had to be identical (Nvidia's SLI) or at the very least from similar generations (AMD's CrossFire) to work at all - never mind the need for validated drivers, which were a requisite for it to work, or the limited kinds of load sharing you could perform: alternate frame rendering and scanline rendering were pretty much all you'd get. DX12 allows composited rendering, where GPUs work on different objects and then one composites them together on-frame.

    DX12 makes it so that you can mix and match whatever hardware resources you have, provided you go and make use of them the same way you'd go and detect what CPU cores you have, how fast they are and how many of them there actually are. Of course, that requires the game maker to detect and probably benchmark the capabilities of whatever GPU hardware it can find (it does add complexity), but this is neither an unknown (see CPU cores) nor repetitive: once the graphics engine is geared towards this kind of detection and balancing, it's DONE - no need to look further. So of course engine makers have some work ahead of them, but most of them (or at least, the good ones) actually enjoy having more capabilities: straightforward APIs geared towards harnessing more resources are much easier to work with than finding workarounds and hacks to do the same.
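
    To sketch the enumerate-then-dispatch pattern I mean, here it is with the CUDA runtime, since that's what I can write from memory - a DX12 engine would do the same dance with adapter enumeration and per-adapter queues, so treat this as the shape of the idea rather than anyone's actual engine code:

        #include <cuda_runtime.h>
        #include <vector>
        #include <cstdio>

        __global__ void render_chunk(float *out, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = float(i);   // stand-in for real per-object work
        }

        int main() {
            int count = 0;
            cudaGetDeviceCount(&count);        // "what GPUs do I actually have?"
            if (count == 0) return 1;

            const int total = 1 << 22;
            const int per_gpu = total / count; // naive even split; a real engine would
                                               // weight this by each GPU's measured speed
            std::vector<float*> bufs(count);
            for (int d = 0; d < count; ++d) {
                cudaSetDevice(d);                              // target one GPU...
                cudaMalloc(&bufs[d], per_gpu * sizeof(float));
                render_chunk<<<(per_gpu + 255) / 256, 256>>>(bufs[d], per_gpu); // ...queue its share
            }
            for (int d = 0; d < count; ++d) {  // wait for all of them, then composite
                cudaSetDevice(d);
                cudaDeviceSynchronize();
            }
            printf("split %d elements across %d GPU(s)\n", total, count);
            return 0;
        }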

    Now of course, a hardware maker simply shutting down its GPU when another one is used for rendering makes all of this moot: Nvidia didn't approve of the use of their GPUs for PhysX computations when the actual rendering was done on an AMD card and wrote a shutdown routine into their drivers, but this is exactly the kind of load balancing DX12 (and soon, Vulkan) would allow.
    Reply to mitch074
  42. Quote:
    That's not exactly true - a game had to be tailored for multi GPU in older revisions of DX


    I know that. UE4, for example, is not AFR-friendly. In the past people asked Epic how to take advantage of SLI when developing their games with UE4, and one Epic developer's response was to avoid SLI (or multi-GPU in general) if they want to use all of UE4's features.

    Quote:
    Of course, that requires the game maker to detect and probably benchmark the capabilities of whatever GPU hardware it can find (it does add complexity), but this is neither an unknown (see CPU cores) nor repetitive: once the graphics engine is geared towards this kind of detection and balancing, it's DONE - no need to look further.


    Except multi-GPU is not the same as a multi-core CPU. I was thinking something similar before: why can't games be optimized for multi-GPU down to the game-engine level, the same way it was done for multi-core CPUs? They said it might be possible to do such a thing with DX12, but so far I haven't seen any proof of it. I want to see a real-world implementation before we discuss this further.

    Quote:
    Now of course, a hardware maker simply shutting down its GPU when another one is used for rendering makes all of this moot: Nvidia didn't approve of the use of their GPUs for PhysX computations when the actual rendering was done on an AMD card and wrote a shutdown routine into their drivers, but this is exactly the kind of load balancing DX12 (and soon, Vulkan) would allow.


    Now, if you understand what exactly the issue with PhysX is, then you will know that Nvidia has no reason to block their cards from working in DX12 multi-GPU, even in mixed combinations.
    Reply to renz496
  43. renz496 said:
    I know that. UE4, for example, is not AFR-friendly. In the past people asked Epic how to take advantage of SLI when developing their games with UE4, and one Epic developer's response was to avoid SLI (or multi-GPU in general) if they want to use all of UE4's features.
    [...]
    Except multi-GPU is not the same as a multi-core CPU. I was thinking something similar before: why can't games be optimized for multi-GPU down to the game-engine level, the same way it was done for multi-core CPUs? They said it might be possible to do such a thing with DX12, but so far I haven't seen any proof of it. I want to see a real-world implementation before we discuss this further.
    [...]
    Now, if you understand what exactly the issue with PhysX is, then you will know that Nvidia has no reason to block their cards from working in DX12 multi-GPU, even in mixed combinations.


    I'm mentioning multi-core CPUs for one simple reason: if you take Bulldozer and Core with HT for example, you will have to balance your threads differently depending on the architecture. Loading a pair of Bulldozer cores with an FP128 task each, for example, is a great way to bog them down, while they won't bat an eye at being loaded with int computations; doing that on 2 logical cores of an HT Core system will bog that one down though. So usually you'll add routines in your code to detect what kind of core you're on, and then you'll dispatch your threads accordingly. DX12 allows you to do that, not with cores but with render targets, while DX11 and older only allowed you to mention which jobs were independent from the others, and the driver had to guess the best way to dispatch them.

    As for the "problem" with PhysX, considering that some people hacked Nvidia's drivers to work around the artificial limitations and managed to run games on AMD hardware with PhysX actually running on a secondary Nvidia GPU, then yes, I could see Nvidia refusing to let their cards run alongside other GPUs. Considering they're even now locking down SLI on anything but their most expensive hardware, and how little work they're doing on supporting DX12 properly, then yes, I think they don't want customers to use anything but their own hardware.
    Reply to mitch074
  44. mitch074 said:
    As for the "problem" with PhysX, considering that some people hacked Nvidia's drivers to work around the artificial limitations and managed to run games on AMD hardware with PhysX actually running on a secondary Nvidia GPU, then yes, I could see Nvidia refusing to let their cards run alongside other GPUs. Considering they're even now locking down SLI on anything but their most expensive hardware, and how little work they're doing on supporting DX12 properly, then yes, I think they don't want customers to use anything but their own hardware.


    Part of the problem with PhysX was licensing. If AMD paid for a PhysX license they would have direct access to optimize PhysX on their systems - not just in hybrid systems; running GPU PhysX natively on AMD GPUs would also be possible. There was a third-party effort called "Radeon PhysX" in the past to make this happen (Nvidia was even willing to help), but ultimately AMD had no desire to do it because PhysX is not their tech.

    Second, PhysX is Nvidia tech. They have to make sure it works without problems, be it in an Nvidia-only system or in a hybrid system. But the issues in a hybrid system are most definitely more complicated, because there is a chance the Nvidia driver might end up conflicting with AMD's drivers, and when that happens they have to figure it out themselves. AMD is not going to help, and most definitely not going to change the way they handle their drivers because of Nvidia tech that they have no desire to support. Imagine if there were an issue with a hybrid system that Nvidia refused to fix: some end users might take the issue to court because Nvidia did not want to support tech that they are selling to customers. So before it got that complicated for them (be it on the software side or the legal issues that can stem from it), they blocked hybrid PhysX systems. Sure, you can use hacked drivers to keep using a hybrid system, but that way, if there is a conflict, Nvidia is not responsible for your issue, because legally Nvidia supports PhysX on their own hardware setups only.

    Nvidia blocks hybrid systems because PhysX is their tech, not because they simply refuse to see their GPUs used together with other GPUs. DX12 multi-GPU is not Nvidia tech, so they have no reason to block it.
    Reply to renz496
  45. renz496 said:
    Part of the problem with PhysX was licensing. If AMD paid for a PhysX license they would have direct access to optimize PhysX on their systems - not just in hybrid systems; running GPU PhysX natively on AMD GPUs would also be possible. There was a third-party effort called "Radeon PhysX" in the past to make this happen (Nvidia was even willing to help), but ultimately AMD had no desire to do it because PhysX is not their tech.

    Second, PhysX is Nvidia tech. They have to make sure it works without problems, be it in an Nvidia-only system or in a hybrid system. But the issues in a hybrid system are most definitely more complicated, because there is a chance the Nvidia driver might end up conflicting with AMD's drivers, and when that happens they have to figure it out themselves. AMD is not going to help, and most definitely not going to change the way they handle their drivers because of Nvidia tech that they have no desire to support. Imagine if there were an issue with a hybrid system that Nvidia refused to fix: some end users might take the issue to court because Nvidia did not want to support tech that they are selling to customers. So before it got that complicated for them (be it on the software side or the legal issues that can stem from it), they blocked hybrid PhysX systems. Sure, you can use hacked drivers to keep using a hybrid system, but that way, if there is a conflict, Nvidia is not responsible for your issue, because legally Nvidia supports PhysX on their own hardware setups only.

    Nvidia blocks hybrid systems because PhysX is their tech, not because they simply refuse to see their GPUs used together with other GPUs. DX12 multi-GPU is not Nvidia tech, so they have no reason to block it.


    What did Nvidia disallowing the use of their cards for PhysX computations have to do with the license? I sure hope Nvidia hardware is licensed to run Nvidia software! I am talking about off-screen computations, which are separate from rendering, and which were allowed regardless of the GPU on PhysX standalone cards before Nvidia bought the company. Nvidia artificially restricts the use of off-screen PhysX computations when it's not an Nvidia GPU doing the actual rendering - even though both operations are independent! Thus driver compatibility has nothing to do with it.

    And as I was saying, what some people wanted was to get a big, fat AMD card for rendering and a small, cheap Nvidia card for PhysX; Nvidia disallowed this in their drivers, but some managed to hack the drivers and it worked well enough: see here.

    Of course neither Nvidia nor AMD would support their competitors' hardware! But to go as far as actively disabling features is bad. And what about Intel? Considering that when you buy an Intel CPU, half the silicon is taken up by a GPU, DX12 is finally a way for game makers to make use of it (whether it be through DirectCompute for physics computations like TressFX or DX12 for managing, say, the game's HUD), who would say "no" to 5-10% more computing resources by using hardware you actually own?
    Reply to mitch074
  46. Back in 2009 Nvidia said openly that they had no problem with AMD licensing their PhysX tech, but back then AMD said it was not needed since they were working with Bullet to offer a vendor-neutral solution from the get-go. Only after AMD's response did Nvidia directly block hybrid systems from working. True, with a hybrid system Nvidia software still runs on Nvidia hardware, but if AMD had a PhysX license they could directly address any issue arising from a hybrid system. Remember what AMD's problem with GameWorks is? They said they can't optimize for the game because GameWorks is a black box: AMD doesn't have access to the GameWorks source code because they did not pay the licensing fee for it. In fact, if AMD did license PhysX from Nvidia, end users could stop using hybrid systems altogether, because with the license AMD could legally implement GPU PhysX to run natively on their GPUs.

    Yes, hybrid PhysX systems work well enough with hacked drivers, but can you guarantee they will work 100% without problems all the time? That's the problem here, and that's not even counting the possibility of AMD making their drivers conflict with the Nvidia PhysX driver on purpose. AMD could always say "we optimize our drivers in the best way for our hardware; if our drivers conflict with Nvidia PhysX then it's not our issue, it's their issue for putting that tech inside the game." Then users would force Nvidia to make it work, even knowing it is not compatible with an AMD hardware combination, because PhysX is Nvidia tech that Nvidia sells to consumers and they have to take responsibility for it.

    From the consumer's point of view, Nvidia blocking hybrid systems is indeed bad. But Nvidia, for their part, is also trying to avoid more complicated issues arising on their end, be it from consumers or from AMD. To my knowledge Nvidia never went after the people that hacked their drivers to ask them to stop, which indirectly tells people that if they still want to use a hybrid system they can do so at their own risk, since Nvidia is not going to address their issues.

    With DX12 it's developer freedom: they are the ones deciding how to implement things in their games. Look at Ashes itself - they made it possible for a 980 Ti and a Fury X to work together. They also made it possible for two 1060s to work together despite Nvidia not supporting SLI on the 1060. Did Nvidia patch in new drivers to stop that from working?
    Reply to renz496
  47. renz496 said:
    Back in 2009 Nvidia said openly that they had no problem with AMD licensing their PhysX tech, but back then AMD said it was not needed since they were working with Bullet to offer a vendor-neutral solution from the get-go. Only after AMD's response did Nvidia directly block hybrid systems from working. True, with a hybrid system Nvidia software still runs on Nvidia hardware, but if AMD had a PhysX license they could directly address any issue arising from a hybrid system. Remember what AMD's problem with GameWorks is? They said they can't optimize for the game because GameWorks is a black box: AMD doesn't have access to the GameWorks source code because they did not pay the licensing fee for it. In fact, if AMD did license PhysX from Nvidia, end users could stop using hybrid systems altogether, because with the license AMD could legally implement GPU PhysX to run natively on their GPUs.

    Yes, hybrid PhysX systems work well enough with hacked drivers, but can you guarantee they will work 100% without problems all the time? That's the problem here, and that's not even counting the possibility of AMD making their drivers conflict with the Nvidia PhysX driver on purpose. AMD could always say "we optimize our drivers in the best way for our hardware; if our drivers conflict with Nvidia PhysX then it's not our issue, it's their issue for putting that tech inside the game." Then users would force Nvidia to make it work, even knowing it is not compatible with an AMD hardware combination, because PhysX is Nvidia tech that Nvidia sells to consumers and they have to take responsibility for it.

    From the consumer's point of view, Nvidia blocking hybrid systems is indeed bad. But Nvidia, for their part, is also trying to avoid more complicated issues arising on their end, be it from consumers or from AMD. To my knowledge Nvidia never went after the people that hacked their drivers to ask them to stop, which indirectly tells people that if they still want to use a hybrid system they can do so at their own risk, since Nvidia is not going to address their issues.

    With DX12 it's developer freedom: they are the ones deciding how to implement things in their games. Look at Ashes itself - they made it possible for a 980 Ti and a Fury X to work together. They also made it possible for two 1060s to work together despite Nvidia not supporting SLI on the 1060. Did Nvidia patch in new drivers to stop that from working?


    Who can guarantee that a graphics card can work with all motherboards? Nobody does. Why would it be any different for what amounts to a physics coprocessor? Because that's exactly what PhysX is: a dedicated circuit for physics computations; the results are rerouted through the CPU to the display controller afterwards, never mind what branding the coprocessor bears - AGEIA or NVIDIA - so why should it matter what graphics card I use for display?

    The fact that Nvidia finally lifted the restrictions in their drivers might just be an indication of how much developers are looking for more open solutions: physics in DirectCompute seems to work quite well on whatever GPU one uses.
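
    For what it's worth, the kind of GPU physics being discussed is just a data-parallel kernel. Here's a rough sketch of the idea written as a CUDA kernel, only because that's what I can type from memory - a real DirectCompute/TressFX implementation would be an HLSL compute shader, so take this as the shape of the computation rather than anyone's actual code:

        #include <cuda_runtime.h>

        struct Particle { float3 pos, vel; };

        // One thread per particle: integrate velocity and position for one timestep.
        // A TressFX-style hair sim does the same thing per hair vertex, plus
        // constraint passes to keep the strands attached and untangled.
        __global__ void step_particles(Particle *p, int n, float dt) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            p[i].vel.y += -9.81f * dt;     // gravity
            p[i].pos.x += p[i].vel.x * dt;
            p[i].pos.y += p[i].vel.y * dt;
            p[i].pos.z += p[i].vel.z * dt;
        }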
    Reply to mitch074
  48. https://videocardz.com/69475/amd-radeon-vega-spotted-with-16gb-memory-and-1600-mhz-clock

    The 4096 core version of Vega might have a boost clock of 1600MHz, meaning 13.1 Tflops of raw compute performance.
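
    (For reference, that figure is just shader count × 2 FLOPs per clock for a fused multiply-add × clock speed: 4096 × 2 × 1.6 GHz ≈ 13.1 TFLOPS FP32 - and roughly double that, ~26 TFLOPS, if packed FP16 is used.)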

    Although whether or not this is the RX Vega is unknown. It could be anything. But it does have 64 CUs and 4096 SPs.

    I wonder what it will overclock to? Maybe 1700MHz? It will be nice when we finally find out.
    Reply to BurgerandChips66
  49. Quote:
    Who can guarantee that a graphics card can work with all motherboards?


    There are thousands of combinations, but at the very least GPU makers and motherboard makers "hope" that by using a common standard such as the PCI-E interface they can make all of them work together without too much issue. Remember when AMD cards using the PCI-E 2.1 standard had issues in PCI-E 1.1 slots? Back then, as long as a motherboard was still supported by its maker, they released an updated BIOS for it to solve the issue.

    Quote:
    The fact that Nvidia finally lifted the restrictions in their drivers might just be an indication of how much developers are looking for more open solutions: physics in DirectCompute seems to work quite well on whatever GPU one uses.


    Well, when it comes to GPU-accelerated physics it doesn't matter whether there are more open solutions or not, because ultimately game developers have no interest in utilizing it. We have had vendor-neutral solutions for GPU-accelerated physics for almost 7 years now, and I have yet to see them used in a game.
    Reply to renz496
  50. renz496 said:
    There are thousands of combinations, but at the very least GPU makers and motherboard makers "hope" that by using a common standard such as the PCI-E interface they can make all of them work together without too much issue. Remember when AMD cards using the PCI-E 2.1 standard had issues in PCI-E 1.1 slots? Back then, as long as a motherboard was still supported by its maker, they released an updated BIOS for it to solve the issue.
    [...]
    Well, when it comes to GPU-accelerated physics it doesn't matter whether there are more open solutions or not, because ultimately game developers have no interest in utilizing it. We have had vendor-neutral solutions for GPU-accelerated physics for almost 7 years now, and I have yet to see them used in a game.

    There are at least two: Rise of the Tomb Raider and Deus Ex: Mankind Divided. Both use TressFX 3, which is MIT-licensed and works through DirectCompute.
    Reply to mitch074