Nvidia just announced the new RTX 2080 and I see a lot of weird, confused comments on the Internet.
So for the record:
“Nobody uses PhysX”:
wrong.
PhysX is the default physics engine in both Unity and Unreal. Which means it is used in tons of games, on a lot of different platforms (PC, Xbox, PS4, Switch, mobile phones, you name it).
“PhysX” is not just the GPU effects you once saw in Borderlands. It has also always been a regular CPU-based physics engine (similar to Bullet or Havok).
When your character does not fall through the ground in Fortnite, it’s PhysX. When you shoot a bullet in PayDay 2, it’s PhysX. Ragdolls? Vehicles? AI? PhysX does all that in a lot of games. It is used everywhere and it is not going away.
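To make the “regular CPU physics” point concrete, here is a toy sketch (plain Python with made-up names, not actual PhysX code) of the kind of work a physics engine does every frame: integrate gravity, then detect and resolve the collision that keeps a character from falling through the ground.

```python
# Toy illustration of one physics tick - NOT real PhysX code.
GRAVITY = -9.81
GROUND_Y = 0.0

def physics_step(pos_y, vel_y, dt):
    """Advance one body by one timestep with a simple ground constraint."""
    vel_y += GRAVITY * dt    # integrate acceleration into velocity
    pos_y += vel_y * dt      # integrate velocity into position
    if pos_y < GROUND_Y:     # collision detection against the ground plane
        pos_y = GROUND_Y     # collision response: clamp to the surface...
        vel_y = 0.0          # ...and stop the fall
    return pos_y, vel_y

# Drop a "character" from 2 meters; after 2 seconds at 60 Hz it rests
# on the ground instead of falling through it.
y, v = 2.0, 0.0
for _ in range(120):
    y, v = physics_step(y, v, 1.0 / 60.0)
print(y)  # 0.0
```

A real engine does this for thousands of bodies with proper shapes, solvers and substepping, but the per-frame loop above is the essence of what “CPU physics” means here.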
______________________
“That game does not use PhysX because it is not listed on X or Y”:
potentially wrong.
Some people pointed to the lists of “PhysX games” found online as “proof” that PhysX was dead and not used anymore. Unfortunately these lists are incomplete. Even the one on Nvidia’s own website is not up-to-date. There are a couple of reasons for this:
- PhysX is kind of “done” now, and more-or-less in maintenance mode. The team shrank a lot while the focus moved to raytracing, autonomous cars, AI, etc. There is nobody tracking PhysX titles anymore, and sometimes we don’t even have matching PhysX documentation online. In fact, right now the online doc is for version 3.4.0, and it was built more than a year ago. It does not match the latest build on GitHub.
- It became a lot more difficult to track PhysX titles after we put the code on GitHub. Anybody can register, download and use PhysX these days. And we don’t require the game to include a PhysX logo anywhere, so there are a lot of them we don’t even know about - we hear about them when they ask for console builds or when they ask for help.
- There is little value for Nvidia to track and list all the games using CPU PhysX. So, we don’t.
______________________
“PhysX is crippled”:
wrong.
I already posted about that one. You can read about it here and here for example.
In short: no it’s not.
______________________
“The only reason devs use PhysX is because Nvidia pays them”:
wrong.
It’s the opposite. PhysX is a middleware like the others. Unless you get some kind of special mutually beneficial deal, you actually have to pay for support and/or to use the console libs. In the old days you got it for free in exchange for adding GPU PhysX effects to your game. But it was always a choice left to devs, and few actually took it. See next point.
______________________
“GPU PhysX is dead”:
debatable.
“GPU PhysX” can mean different things, and admittedly the differences are not always clear or obvious.
If we’re talking GPU particles or cloth, they have indeed been deprecated in recent PhysX versions, replaced with FLEX (an entirely different engine, unrelated to PhysX so far). The FLEX implementation is better, so there was no need to keep supporting both.
If we’re talking GPU rigid bodies, it is wrong. PhysX 3.4 (the most recent public version of PhysX) even introduced a brand new GPU rigid body codepath - the most efficient we ever built. So it is the opposite of dead: that new version was basically just born yesterday.
However it is fair to say that the GPU stuff is not used much (although a bunch of projects do use it). I suspect this is both for the obvious reason everybody points out (it would only run on Nvidia hardware) and because the CPU stuff is fast enough for regular physics in regular games.
______________________
“Raytracing is the new PhysX”:
LOL
This is such a ridiculous statement…
Raytracing has been the Holy Grail of computer graphics for as long as I can remember. I was already playing with GFA Raytrace back on the Atari ST in, what, 1991? There have been dreams of “realtime raytracing” for decades. It’s been used in movies for years. There is, and always has been, more interest in raytracing than there ever was in accelerated physics. It’s just so weird to compare the two.
Now if you mean that this is an Nvidia-only thing that will only run on Nvidia hardware, this is (hopefully) wrong again. The main difference is that there is already a Microsoft-provided, hardware-agnostic raytracing API, called DXR. There has never been anything similar for “direct physics”. So you should not compare it to PhysX; instead you might compare it to the introduction of programmable shaders when everybody was using a fixed-function rendering pipeline. That comparison would make a lot more sense.
Beyond that, you might remember how some games needed a card that supported “shader model 3” in order to run. Or before that, before shaders, how the D3D caps told the game whether your card supported cube-mapping, fog, “TnL”, you name it. It has always been like this: different hardware from different companies has different capabilities.
______________________
“Nvidia is forcing raytracing down devs’ and gamers’ throats”:
wrong.
If they could control/force devs and gamers, you would have GPU physics everywhere by now. Your own statement that “GPU PhysX is dead” is not compatible with the statement that “Nvidia is forcing people”.
______________________
“X or Y can do raytracing already”:
right, but…
You can do raytracing everywhere in software. GFA Raytrace did raytracing back on the Atari ST. PhysX can do raytracing. Firing a bullet in any game is done with a “raycast”, which is basically the same as raytracing - for a single ray.
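To illustrate the point, here is a minimal single-ray query (plain Python, hypothetical helper name): the same ray-vs-geometry intersection math sits behind a gameplay raycast (one ray per bullet) and a raytracer (at least one ray per pixel, millions per frame).

```python
import math

def raycast_sphere(origin, direction, center, radius):
    """Return the distance to the first hit of a ray against a sphere,
    or None on a miss. One such query is a "raycast"; a raytracer runs
    queries like this for every pixel of the image."""
    # Solve |origin + t*direction - center|^2 = radius^2 for smallest t >= 0.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                          # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearest intersection
    return t if t >= 0.0 else None

# Firing a "bullet" along +Z at a unit sphere 10 units away:
hit = raycast_sphere((0, 0, 0), (0, 0, 1), (0, 0, 10), 1.0)
print(hit)  # 9.0 - the bullet hits the near surface of the sphere
```

The difference is purely one of scale: a game fires a handful of these per frame, while rendering a single raytraced 1080p frame means millions of them, plus secondary rays for shadows and reflections.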
You could also do raytracing in hardware before, as in Nvidia’s own OptiX library, or on AMD with Radeon Rays. All true.
The difference, and the thing that gamers usually do not fully grasp, is that raytracing is really, really, REALLY slow. In the movie industry for example, they’re not talking in frames-per-second, but rather seconds/minutes/hours-per-frame. So while it is true that one could do raytracing before, the combination of dedicated RT Cores (to trace the rays) and Tensor Cores (to denoise the final image) is indeed new, and the results are closer than ever to the “realtime raytracing” people have been calling “the future” for so long.
______________________
“Raytracing is a proprietary tech”:
wrong.
Look, you just said that AMD could do raytracing as well, so how is this “proprietary” now? These two statements are again incompatible.
“Raytracing” is an algorithm. It has been around forever, and it has nothing to do with Nvidia.
Now Nvidia’s implementation of Microsoft’s raytracing API is of course “proprietary” if you want. But that is exactly the same situation as for the regular Direct3D rasterization API, or for Nvidia’s implementation of the OpenGL API: the implementation is specific to Nvidia, just because the underlying hardware is different from AMD’s. You cannot avoid that.
The ball is now in AMD’s court. The API is here, it’s hardware-agnostic, they “just” have to implement it for their hardware. You cannot blame Nvidia for not doing AMD’s job, now can you?
______________________
“It’s a gimmick until it’s supported by consoles”:
right.
Or probably right, I don’t know, but in my opinion this is a very fair statement. As with shaders before, it will probably only survive if consoles eventually support it.
But the raytracing API itself comes from Microsoft, so it is easy to imagine that it will eventually appear in some future Xbox consoles. And this time they will really deserve their “next-gen” moniker.
______________________
“It’s a waste of time”:
debatable.
As an old demomaker who spent years doing graphics stuff in assembly for free on old hardware, I am impervious to the “waste of time” argument.
But you can see it as an investment in the future anyway. Realtime raytracing has long been the holy grail of computer graphics. It’s high time it became the present rather than the elusive future. If you never “waste time” like this, nothing ever happens.