Analysis: PhysX On Systems With AMD Graphics Cards

CPU PhysX: Multi-Threading?

Does CPU PhysX Really Not Support Multiple Cores?

Our next problem is that, in almost all previous benchmarks, only one CPU core has really been used for PhysX in the absence of GPU hardware acceleration--or so some say. Again, this seems like something of a contradiction, given our measurements of fairly good CPU-based PhysX scaling in the Metro 2033 benchmarks.

Test setup:
Graphics card: GeForce GTX 480 1.5 GB
Dedicated PhysX card: GeForce GTX 285 1 GB
Graphics driver: GeForce 258.96
PhysX runtime: 9.10.0513


First, we measure CPU core utilization. We switch to DirectX 11 mode with its multi-threading support to get a real picture of performance. The top section of the graph below shows that CPU cores are rather evenly utilized when extended physics is deactivated.

To minimize the graphics card as a limiting factor, we start out at a resolution of just 1280x1024. The less the graphics card acts as a bottleneck, the better the game can scale across additional cores. This would change in DirectX 9 mode, which limits scaling to two CPU cores.

We notice a small increase in CPU utilization when activating GPU-based PhysX because the graphics card needs to be supplied with data for calculations. However, the increase is much larger with CPU-based PhysX activated, indicating a fairly successful parallelization implementation by the developers.
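
For reference, here is a minimal sketch of how a developer might request this kind of multi-threaded, CPU-based simulation with the 2.8-era PhysX SDK that games of this period ship with (the thread count is illustrative, and error handling is omitted):

    #include <NxPhysics.h> // PhysX 2.8 SDK

    // Create the SDK object; default allocator and output stream.
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = NX_SIMULATION_SW;        // software (CPU) simulation
    sceneDesc.flags |= NX_SF_ENABLE_MULTITHREAD; // allow internal worker threads
    sceneDesc.internalThreadCount = 3;           // workers beyond the main thread

    NxScene* scene = sdk->createScene(sceneDesc);

Left at its defaults, the same scene simulates on a single thread, which would square with the single-core behavior reported in so many PhysX titles.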

Looking at Metro 2033, we also see that a reasonable dose of PhysX effects remains playable even when no PhysX acceleration is available. This is because Metro 2033 is mostly limited by the main graphics card and its 3D performance, rather than by the added PhysX effects. There is one exception, though: the simultaneous explosion of several bombs. In that case, frame rates drop seriously with CPU-based PhysX, although the game is still playable. Most people won’t want to play at such low resolutions, so we switched to the other extreme.

Performing these benchmarks with a powerful main graphics card and a dedicated PhysX card was a deliberate choice, given that a single Nvidia card normally suffers from some performance penalties with GPU-based PhysX enabled. Things would get quite bad in this already-GPU-constrained game. In this case, the difference between CPU-based PhysX on a fast six-core processor with well-implemented multi-threading and a single GPU is almost zero.

Assessment

Contrary to some headlines, the Nvidia PhysX SDK actually does offer multi-core support on CPUs. When used correctly, it even comes dangerously close to the performance of a single-card, GPU-based solution. There is still a catch, however: PhysX handles thread distribution automatically, moving the load away from the CPU and onto the GPU whenever a compatible graphics card is active. It is up to game developers to explicitly shift some of that load back onto the CPU.

Why does this so rarely happen?

The effort and expense required to implement these coding changes obviously work as a deterrent. We still think that developers should be honest and openly admit this, though. Studying certain games (with a certain logo in the credits) raises the question of whether this additional expense was skipped for commercial or marketing reasons. On one hand, Nvidia supports developers, helping them integrate compelling effects that gamers can enjoy and that might not have made it into the game otherwise. On the other hand, Nvidia wants to prevent (and with good reason) prejudices from getting out of hand. According to Nvidia, SDK 3.0 already offers these capabilities, so we look forward to seeing developers put them to use.
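
As a rough sketch of what that might look like in the 3.x API: the developer hands the scene an explicit pool of CPU worker threads through the SDK's default dispatcher (the four-thread count here is illustrative, and the foundation and physics objects are assumed to be created elsewhere):

    #include <PxPhysicsAPI.h> // PhysX 3.x

    using namespace physx;

    // 'physics' is an already-created PxPhysics instance.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity      = PxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;

    // Four CPU worker threads; the SDK spreads simulation tasks across them.
    sceneDesc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4);

    PxScene* scene = physics->createScene(sceneDesc);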

Comments from the forums
  • david__t
    This has been going on for ages now, and I don't think AMD are going to try to counter Nvidia just yet, because they obviously think that the limited number of games supporting this makes the issue not worth much R&D money. Also, unless they are going to produce drivers that 'fool' the Nvidia drivers into making PhysX work, they will have to come up with their own physics solution - which is another bit of code that developers will have to tackle, causing even more hassle. Dedicated physics cards that work with any GPU were the way to go; they were just brought to market too early, before the software was out to make them a 'must have' purchase.
    Personally, I find it ridiculous that you can have a £1000 Extreme Edition CPU sat in your PC and still they cannot make physics work on it properly. Whether this is due to Nvidia bias or a lack of funds during development remains to be seen.
  • mi1ez
    If you install an extra Nvidia GPU for PhysX, just think of the folding ppd! Brucey Bonus!
  • jamie_macdonald
    Nvidia stated themselves some time ago that "PhysX is old and clunky and will soon have a complete re-write to bring it to the modern age and make it CPU friendly"...

    ...I'd rather wait for that to happen; I'm pretty sure they will make it more usable soon.

    I have a decent Nvidia card, so I do not need to offload it, but I do understand it is high time it was updated. :D
  • swamprat
    Quote:
    The current situation is also architected to help promote GPU-based PhysX over CPU-based PhysX.

    Aside from the use of 'architected' as a word, isn't that a generally levied accusation rather than something you've actually proven? The following comment, that Nvidia CBA to work on it, would seem to explain the position. It might be deliberate on Nvidia's part, and you'd see why (although getting a decent, albeit smaller, advantage with the GPU while having a wider base of games using PhysX might do them better in some ways), but if you can't prove it then you ought not to report it as fact.
    Besides, if everyone had better PhysX then there could be more and more use of it - so having extra GPU oomph would probably come back into play (?)
  • gdilord
    Thank you for the article Igor, it was a very interesting read.

    I hope that Tom's does more articles and features on what I consider the enthusiast/indie/homebrew sector. I really do enjoy reading these articles.
  • LePhuronn
    What about running CUDA with Radeons? Can I drop in a (say) GTX 460 next to my (say) HD 6970 Crossfire and still use the GTX 460 for CUDA apps?

    Same workarounds? Non-issue? Impossible?
  • hanrak
    Excellent article! Great read and very interesting. I may just go and get that Nvidia card to go with my 5970 now :)
  • wild9
    I think you've got more chance of resolving the American/Mexican border sham, than you have seeing a unified Physics standard. Corporate interests vs. a clear, workable and altogether fair solution.
  • Rab1d-BDGR
    Quote:
    In addition to the high costs of buying an extra card, we have added power consumption. If you use an older card, this is disturbingly noticeable, even in idle mode or normal desktop operation.

    Not necessarily; say you had a GeForce 9800 Green edition - those cards can run off PCIe bus power with no additional connectors, yet provide 112 CUDA cores. Running PhysX on one will be barely noticeable as it quietly sips a few watts here and there, while the Radeon 5970 or dual GTX 470s doing the graphics are happily guzzling away and the dial on your electric meter whizzes round.
  • ben BOys
    Awesome investigating, I never knew this! This further helps the cause of ATI, since you can get a powerful card for a cheap price and then get a cheap Nvidia card for the PhysX when PhysX becomes mainstream. Get a 6870 and a 9xxx GT Nvidia card and have the best price/performance combo!
  • monkeymanuk
    We have PhysX running on our gaming rigs for a few customers using Radeon hardware. http://www.southampton-computers.co.uk/shop/gamer-systems-c-7.html

    Take a look.
  • Gonemad
    Did anybody think of a PCIe card that could house an extra, completely functional Intel or AMD CPU? All the way around... I bet there are some situations where it would trump having a PhysX card, a new/other GPU, or a full-blown CPU + motherboard upgrade.

    Well, too bad I don't have any PhysX-capable Nvidia cards lying around.
  • kaprikawn
    Nvidia own PhysX, so why shouldn't they be able to nerf it on ATI-based solutions? Still, it isn't healthy to have a company with such a clear conflict of interest controlling something as fundamental as physics in gaming.

    Instead of demonising Nvidia, who are only doing what is in their commercial interests, people should be looking to someone like Microsoft to implement a more platform-agnostic approach in DirectX.

    The better solution would be an open-source alternative, of course, but that's just wishful thinking.