NVIDIA's Gelato: Hardware-accelerated renderer!!

Software-rendering film-quality images and movies is an extremely heavy workload. It employs some of the heaviest-duty technology around... and it benefits greatly from multithreading and parallel execution.

Current processors render quite well in software - P4s with Hyper-Threading and Athlon 64s both turn in decent rendering performance. However, the execution resources on these CPUs are generic... What if the heavy-duty hardware on mainstream/performance graphics cards - which was, after all, designed for rendering tasks - were flexible enough to allow part of the software rendering to be done in hardware?

Compare the GeForce 6800 Ultra's execution resources to those of a typical high-end P4: 3.2GHz, dual-channel DDR-400 giving 6.4GB/s of memory bandwidth, two logical processors on a single physical core, and some 55 million transistors. Now enter the 6800 Ultra: 220 million transistors, GDDR3 at a hefty 32GB/s of throughput, 16 parallel pixel pipelines, several hardware encoders and decoders, and enough floating-point throughput to put current CPUs to shame.

Long introduction, but NVIDIA just released software that uses an NVIDIA Quadro FX 4000's hardware to accelerate film-quality software rendering. It's called <A HREF="http://www.nvidia.com/object/IO_12820.html" target="_new">Gelato</A>.

Nice, isn't it? Even more so when you consider how devastatingly powerful these graphics processors are nowadays... superscalar architectures and all. Great and interesting news, even if it only applies to workstations... isn't it?

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font> - Buckminster Fuller</i>
  1. This has been a long time coming. There have been many experiments that used GPUs to do heavy math instead of the CPU. Now, with the new DX9.0c specification, GPUs can truly rival CPUs as powerful, programmable processors.

    Of course, while GPUs are becoming more and more CPU-like, the other side of the coin is that CPUs are becoming more and more GPU-like. Most modern CPUs are moving away from plain superscalar designs toward the SIMD/VLIW/CMT/SMT-style throughput computing that GPUs have been doing all along. Another feature CPUs have borrowed from GPUs is moving memory closer to the processor itself. GPUs have their memory controllers on-die and memory chips mounted on the same PCB, close to the core. CPUs have seen constant increases in cache to make memory seem "closer", and the K8 took this one step further by moving the memory controller on-die.
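
    The "throughput computing" idea above can be sketched in a few lines of Python. This is a toy illustration only - the lane width and function names here are made up for the example, not any real ISA or API:

```python
# Toy sketch of SIMD-style throughput computing: instead of one multiply
# per step (scalar execution), apply the same operation to a whole group
# of values per step, the way a 4-wide SIMD unit would.
LANES = 4  # illustrative lane width, like a 4-wide vector unit

def scalar_scale(values, factor):
    # One element per "instruction" -- plain scalar execution.
    return [v * factor for v in values]

def simd_scale(values, factor):
    # Same result, but operating on LANES elements per "instruction".
    out = []
    for i in range(0, len(values), LANES):
        lane_group = values[i:i + LANES]          # load a vector register
        out.extend(v * factor for v in lane_group)  # one vector operation
    return out

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert scalar_scale(data, 2.0) == simd_scale(data, 2.0)
```

    Both paths compute the same thing; the point is that the vector path issues a quarter as many "instructions", which is exactly the trade a GPU makes across its sixteen pixel pipelines.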

    I suspect in the future we will see a convergence of GPU and CPU technologies to the point where there really won't be much of a difference and we won't even need both.

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.
  2. Heh, and about bloody time too!

    It's so annoying watching a CPU take ages to render something that a GPU could render in seconds.

    Axis of Stupid = coop, Kanavit, FUGGER, SoDNighthawk, and ninkey.
  3. A worthy thing to note is that while GPUs are made for rendering, they were not made for cinematic-quality rendering. The DX9-compatible cards (specifically the NV30 and NV40) have made leaps over previous cards in this respect (with full 32-bit floating point throughout the entire pipeline); however, that's still not enough for much production-quality rendering.
    Most professional rendering software uses 64-bit double-precision FP. This is not implemented on GPUs (nor will it be anytime soon, I suspect). So you can't just offload everything onto the GPU: most of the heavy math will still be done on the processor, while the GPU may be able to handle the single-precision floating-point operations.
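
    The precision gap is easy to demonstrate. Here is a small Python sketch: Python's own floats are 64-bit doubles, and `struct` is used to round each intermediate value to a 32-bit single, emulating what a float32-only GPU pipeline would keep. The increment and iteration count are arbitrary choices for the example:

```python
import struct

def to_float32(x):
    # Round a 64-bit double to the nearest 32-bit single-precision value
    # by packing and unpacking it as a C float.
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate a small increment many times at both precisions.
increment = 1e-4
total64 = 0.0  # 64-bit double, as professional renderers use
total32 = 0.0  # every intermediate rounded to 32-bit, as on a GPU
for _ in range(100_000):
    total64 += increment
    total32 = to_float32(total32 + to_float32(increment))

print("double:", total64)  # stays essentially at 10.0
print("single:", total32)  # typically drifts measurably from 10.0
```

    Over a hundred thousand operations the single-precision error compounds - which is exactly why a misplaced finger or a skin tone that's slightly off matters at film quality.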

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.
  4. Hoooo! That's awesome......and now I need a tissue!

    SEX is like math. Add the bed, subtract the clothes, divide the legs, and hope you don't multiply
  5. I don't expect GPUs to do the lighting calculations, but is there any reason why geometry work like z-buffering and mesh/object preparation can't be offloaded onto the GPU?

    Axis of Stupid = coop, Kanavit, FUGGER, SoDNighthawk, and ninkey.
  6. That kind of stuff usually requires a lot of precision, and movie companies don't settle for 32-bit FP. You want that finger placed in exactly the right spot, the skin to show exactly as you want it, etc.

    "We are Microsoft, resistance is futile." - Bill Gates, 2015.