
Analysis: PhysX On Systems With AMD Graphics Cards


Rarely does an issue divide the gaming community like PhysX has. We go deep into explaining CPU- and GPU-based PhysX processing, run PhysX with a Radeon card from AMD, and put some of today's most misleading headlines about PhysX under our microscope.

The history and development of game physics is often compared to that of the motion picture. The comparison might be a bit exaggerated, but there’s some truth to it. As 3D graphics have evolved to almost photo-realistic levels, the lack of truly realistic and dynamic environments becomes increasingly noticeable. The better games look, the more jarring their lack of realistic animation and movement becomes.

When comparing early VGA games with today's popular titles, it’s amazing how far we’ve come in 20 to 25 years. Instead of animated pixel sprites, we now measure graphics quality by looking at breathtaking natural phenomena like water, reflections, fog, and smoke, and the way they move and animate. Since all of these effects are based on highly complex calculations, most game developers use so-called physics engines: prefabricated libraries containing, for example, character animations (ragdoll effects) or complex movements (vehicles, falling objects, water, and so on).

Of course, PhysX is not the only physics engine. To date, Havok has been used in far more games. But while both the 2008 edition of the Havok engine and the PhysX engine support CPU-based physics calculations, PhysX is the only established platform in the gaming sector that also supports faster GPU-based calculations.

This is where our current dilemma begins. There is only one official way to take advantage of PhysX (with Nvidia-based graphics cards) but two GPU manufacturers. This creates a potential for conflict, or at least enough for a bunch of press releases and headlines. Like the rest of the gaming community, we’re hoping that things pan out into open standards and sensible solutions. But as long as the gaming industry is stuck with the current situation, we simply have to make the most of what’s supported universally by publishers: CPU-based physics.

Preface

Why did we write this article? You may have seen conflicting news stories and articles on this topic, and we want to shed some light on the details of recent developments, especially for readers without any programming background. To do so, we will have to simplify and skip a few things. On the following pages, we’ll investigate whether and to what extent Nvidia is limiting PhysX CPU performance in favor of its own GPU-powered solutions, whether CPU-based PhysX is multi-thread-capable (which would make it competitive), and finally whether all physics calculations really can be implemented in GPU-based PhysX as easily and with as many benefits as Nvidia claims.

Additionally, we will describe a clever tweak that lets users of AMD graphics cards add a secondary Nvidia board as a dedicated PhysX card. We also look at which combinations of cards work best and which slots to use for each of them.
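The multi-threading question raised above is easy to illustrate with a toy example. The following is a minimal sketch in Python, not PhysX's actual (proprietary) API: because each rigid body's integration step depends only on that body's own state, the per-frame workload can be split across CPU worker threads, or, at a much finer grain, across thousands of GPU threads. The body format and chunking scheme here are our own simplifications for illustration.

```python
# Illustrative sketch only (not PhysX code): rigid-body integration is
# embarrassingly parallel, because each body's update is independent of
# every other body's. That is why the same workload maps to multiple CPU
# threads or to a GPU with no data dependencies between work items.
from concurrent.futures import ThreadPoolExecutor

GRAVITY = -9.81
DT = 1.0 / 60.0  # one simulation step at 60 Hz

def integrate_chunk(bodies):
    """Semi-implicit Euler step for one slice of the scene.

    Each body is a (height, vertical_velocity) tuple.
    """
    out = []
    for y, vy in bodies:
        vy += GRAVITY * DT   # update velocity first...
        y += vy * DT         # ...then position (symplectic Euler)
        out.append((y, vy))
    return out

def step_parallel(bodies, workers=4):
    """Partition the scene into slices and integrate them concurrently."""
    chunk = (len(bodies) + workers - 1) // workers
    slices = [bodies[i:i + chunk] for i in range(0, len(bodies), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(integrate_chunk, slices)  # order is preserved
    return [b for part in results for b in part]

bodies = [(10.0, 0.0)] * 1000  # 1000 bodies dropped from a height of 10 m
after = step_parallel(bodies)
```

Since the per-body arithmetic is identical in the serial and parallel paths, the parallel step produces exactly the same result as integrating the whole scene in one go; only the scheduling changes. That independence is the property Nvidia exploits on the GPU, and it is also why a multi-threaded CPU implementation is technically straightforward.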

This thread is closed for comments
  • 1
    david__t , 18 November 2010 17:43
    This has been going on for ages now and I don't think AMD are going to try and counter nVidia just yet, because they obviously think that the limited numbers of games that support this makes the issue not worthy of much R&D money. Also, unless they are going to produce drivers that 'fool' the nVidia drivers in to making PhysX work, they will have to come up with their own Physics solution - which is another bit of code that the developers will have to tackle causing even more hassle. Dedicated Physics cards that work with any GPU was the way to go, it was just brought to market too early before the software was out to make it a 'must have' purchase.
    Personally I find it ridiculous that you can have an Extreme Edition CPU sat in your PC which costs £1000 and still they cannot make Physics work on it properly. Whether this is due to nVidia bias or lack of funds during developement remains to be seen.
  • 1
    mi1ez , 18 November 2010 18:21
    If you install an extra Nvidia GPU for PhysX, just think of the folding ppd! Brucey Bonus!
  • 0
    jamie_macdonald , 18 November 2010 21:07
    Nvidia stated themselves sometime ago that "physX is old and clunky and will soon have a complete re-write to bring it to the modern age and make it CPU freindly" ...

    ...I'd rather wait for that to happen, pretty sure they will make it more usable soon.

    I have a decent Nvidia card so i do not need to offload it but i do understand it is high time it was updated. :D 
  • 0
    swamprat , 19 November 2010 15:05
    Quote:
    The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.

    Aside from the use of 'architected' as a word, isn't that a generally levied accusation rather than something you've actually proven? The following comment that Nvidia CBA to work on it would seem to explain the position. It might be deliberate on Nvidia's part, and you'd see why (although getting a decent, albeit smaller, advantage with the GPU while having a wider base of games using PhysX might do them better in some ways), but if you can't prove it, you ought not to report it as fact.
    Besides, if everyone had better PhysX then there could be more and more use of it - so having extra GPU oomph would probably come back into play (?)
  • 1
    gdilord , 19 November 2010 18:12
    Thank you for the article Igor, it was a very interesting read.

    I hope that Tom's does more articles and features on what I consider the enthusiast/indie/homebrew sector. I really do enjoy reading these articles.
  • 1
    LePhuronn , 19 November 2010 19:14
    What about running CUDA with Radeons? Can I drop in a (say) GTX 460 next to my (say) HD 6970 Crossfire and still use the GTX 460 for CUDA apps?

    Same workarounds? Non-issue? Impossible?
  • 0
    hanrak , 19 November 2010 20:18
    Excellent article! Great read and very interesting. I may just go and get that Nvidia card to go with my 5970 now :) 
  • 0
    wild9 , 20 November 2010 03:47
    I think you've got more chance of resolving the American/Mexican border sham, than you have seeing a unified Physics standard. Corporate interests vs. a clear, workable and altogether fair solution.
  • 0
    Rab1d-BDGR , 21 November 2010 17:22
    Quote:
    In addition to the high costs of buying an extra card, we have added power consumption. If you use an older card, this is disturbingly noticeable, even in idle mode or normal desktop operation.


    Not necessarily; say you had a GeForce 9800 Green Edition - those cards can run off PCIe bus power with no additional connectors, yet provide 112 CUDA cores. Running PhysX on one will be barely noticeable as it quietly sips a few watts here and there, while the Radeon 5970 or dual GTX 470s doing the graphics are happily guzzling away and the dial on your electric meter whizzes round.
  • 0
    ben BOys , 24 November 2010 02:11
    Awesome investigating, I never knew this! This further helps the cause of ATI, since you can get a powerful card for a cheap price and then a cheap Nvidia card for the PhysX when PhysX becomes mainstream. Get a 6870 and a 9xxxGT Nvidia card and have the best price/performance combo!
  • 0
    monkeymanuk , 26 November 2010 18:50
    We have PhysX running on our Gaming rigs for a few customers using Radeon Hardware. http://www.southampton-computers.co.uk/shop/gamer-systems-c-7.html

    Take a look.
  • 0
    Gonemad , 26 November 2010 21:32
    Did anybody think of a PCIe card that could house an extra, completely functional Intel or AMD CPU? The other way around, so to speak... I bet there are some situations where it would trump having a PhysX card, a new/other GPU, or a full-blown CPU + motherboard upgrade.

    Well, too bad I don't have any Nvidia cards containing PhysX lying around.
  • 0
    kaprikawn , 27 November 2010 14:43
    Nvidia own PhysX, so why shouldn't they be able to nerf it on ATI-based systems? Still, it isn't healthy to have a company with so clear a conflict of interest controlling something as fundamental as physics in gaming.

    Instead of demonising Nvidia who are only doing what is in their commercial interests, people should be looking to someone like Microsoft to implement a more platform agnostic approach in DirectX.

    The better solution would be an open-source alternative of course, but that's just wishful thinking.