Analysis: PhysX On Systems With AMD Graphics Cards

Summary And Conclusion

CPU-Based PhysX summary

To sum up the headlines of the last few months and our own test results, we can conclude the following:

  • The CPU-based PhysX mode mostly uses only the older x87 instruction set instead of SSE2.
  • Testing other builds of the Bullet benchmark shows a maximum performance increase of only 10% to 20% when SSE2 is used.
  • The optimization performance gains would thus only be marginal in a purely single-core application.
  • Contrary to many reports, CPU-based PhysX supports multi-threading.
  • There are scenarios in which PhysX is better on the CPU than the GPU.
  • A game like Metro 2033 shows that CPU-based PhysX could be quite competitive.

Then why is the performance picture so dreary right now?

  • With CPU-based PhysX, the game developers are largely responsible for fixing thread allocation and management, while GPU-based PhysX handles this automatically.
  • This is a time and money issue for the game developers.
  • The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.
  • With SSE2 optimizations and good threading management for the CPU, modern quad-core processors would be highly competitive compared to GPU PhysX. Predictably, Nvidia’s interest in this is lackluster.

The AMD graphics card + Nvidia graphics card (as dedicated PhysX card) hybrid mode

Here, too, our verdict is a bit more moderate compared to the recent hype. We conclude the following:

One can fairly claim that adding the extra card yields a huge performance gain wherever PhysX would otherwise run on the CPU. In such cases, a Radeon HD 5870 paired with a dedicated PhysX card is far superior to a single GeForce GTX 480; even if you combine the GTX 480 with the same dedicated PhysX card, its lead is very small. GPU-based PhysX is thus within reach of all AMD users, provided the dedicated Nvidia PhysX-capable board is powerful enough. Mafia II shows that there are times when even a single GeForce GTX 480 reaches its limits, and that “real” PhysX at highly playable frame rates is only possible with a dedicated PhysX card.

On the other hand, we have the fact that Nvidia incorporates strategic barriers in its drivers to prevent these combinations and performance gains if non-Nvidia cards are installed as primary graphics solutions.

It's good that the community does not take this lying down, but instead continues to produce pragmatic countermeasures. But there are more pressing drawbacks. In addition to the high costs of buying an extra card, we have added power consumption. If you use an older card, this is disturbingly noticeable, even in idle mode or normal desktop operation. Everyone will have to decide just how much money an enthusiast project like this is worth. It works, and it's fun. But whether it makes sense for you is something only you can decide for yourself.

Comments from the forums
  • david__t
    This has been going on for ages now and I don't think AMD are going to try and counter nVidia just yet, because they obviously think that the limited numbers of games that support this makes the issue not worthy of much R&D money. Also, unless they are going to produce drivers that 'fool' the nVidia drivers in to making PhysX work, they will have to come up with their own Physics solution - which is another bit of code that the developers will have to tackle causing even more hassle. Dedicated Physics cards that work with any GPU was the way to go, it was just brought to market too early before the software was out to make it a 'must have' purchase.
    Personally I find it ridiculous that you can have an Extreme Edition CPU sat in your PC which costs £1000 and still they cannot make Physics work on it properly. Whether this is due to nVidia bias or lack of funds during development remains to be seen.
  • mi1ez
    If you install an extra Nvidia GPU for PhysX, just think of the folding ppd! Brucey Bonus!
  • jamie_macdonald
    Nvidia stated themselves some time ago that "PhysX is old and clunky and will soon have a complete re-write to bring it to the modern age and make it CPU friendly" ...

    ...I'd rather wait for that to happen, pretty sure they will make it more usable soon.

    I have a decent Nvidia card so I do not need to offload it, but I do understand it is high time it was updated. :D
  • swamprat
    The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.

    Aside from the use of 'architected' as a word, isn't that a generally levied accusation rather than something you've actually proven? The following comment that Nvidia CBA to work on it would seem to possibly explain the position. It might be deliberate on Nvidia's part and you'd see why (although getting a decent albeit smaller advantage with GPU but having a wider base of games using physx might do them better in some ways) if you can't prove it then you didn't ought to report it as fact.
    Besides, if everyone had better physx then there could be more and more use of it - so having extra GPU umph would probably come back into play (?)
  • gdilord
    Thank you for the article Igor, it was a very interesting read.

    I hope that Tom's does more articles and features on what I consider the enthusiast/indie/homebrew sector. I really do enjoy reading these articles.
  • LePhuronn
    What about running CUDA with Radeons? Can I drop in a (say) GTX 460 next to my (say) HD 6970 Crossfire and still use the GTX 460 for CUDA apps?

    Same workarounds? Non-issue? Impossible?
  • hanrak
    Excellent article! Great read and very interesting. I may just go and get that Nvidia card to go with my 5970 now :)
  • wild9
    I think you've got more chance of resolving the American/Mexican border sham, than you have seeing a unified Physics standard. Corporate interests vs. a clear, workable and altogether fair solution.
  • Rab1d-BDGR
    In addition to the high costs of buying an extra card, we have added power consumption. If you use an older card, this is disturbingly noticeable, even in idle mode or normal desktop operation.

    Not necessarily: say you had a GeForce 9800 Green edition. Those cards can run off the PCIe bus power with no additional connectors, yet provide 112 CUDA cores. Running PhysX on that will be barely noticeable as it quietly sips a few watts here and there, while the Radeon 5970 or dual GTX 470s doing the graphics are happily guzzling away and the dial on your electric meter whizzes round.
  • ben BOys
    Awesome investigating, I never knew this! This further helps the cause of ATI, since you can get a powerful card for a cheap price and then a cheap Nvidia card for the PhysX when PhysX becomes mainstream. Get a 6870 and a 9xxxGT Nvidia card and have the best price/performance combo!
  • monkeymanuk
    We have PhysX running on our Gaming rigs for a few customers using Radeon Hardware.

    Take a look.
  • Gonemad
    Did anybody think of a PCIe card that could house an extra, completely functional, Intel or AMD CPU? All the way around... I bet there are some situations where it would trump having a physx card, a new /other GPU, or a full-blown CPU + motherboard upgrade.

    Well, too bad I don't have any Nvidia cards containing PhysX laying around.
  • kaprikawn
    Nvidia own PhysX, why shouldn't they be able to nerf it on ATI-based solutions? It isn't healthy to have a company who so clearly has a conflict of interest controlling something so fundamental as physics in gaming.

    Instead of demonising Nvidia who are only doing what is in their commercial interests, people should be looking to someone like Microsoft to implement a more platform agnostic approach in DirectX.

    The better solution would be an open-source alternative of course, but that's just wishful thinking.