Let’s take a trip back in time, way back to 2003, when Intel and AMD became locked in a fierce struggle to offer increasingly powerful processors. That competition drove clock speeds up rapidly in just a few years, especially after Intel released the Pentium 4.
But the clock speed race would soon hit a wall. After riding the wave of sustained clock speed boosts (between 2001 and 2003 the Pentium 4’s clock speed doubled from 1.5 to 3 GHz), users now had to settle for improvements of a few measly megahertz that the chip makers managed to squeeze out (between 2003 and 2005 clock speeds only increased from 3 to 3.8 GHz).
Even architectures optimized for high clock speeds, like Prescott, ran into the problem, and for good reason: this time the challenge wasn’t merely an industrial one. The chip makers had come up against the laws of physics. Some observers were even prophesying the end of Moore’s Law, but that was far from being the case. Although its original meaning is often misinterpreted, Moore’s Law is really about the number of transistors that fit on a given surface area of silicon. For a long time, the growth in the number of transistors in a CPU was accompanied by a concomitant increase in performance, which no doubt explains the confusion.

But then the situation became complicated. CPU architects had run into the law of diminishing returns: the number of transistors that had to be added to achieve a given gain in performance kept growing, and the approach was headed for a dead end.
- Long Live the GeForce FX!
- The advent of GPGPU
- The CUDA APIs
- A Few Definitions
- The Theory: CUDA from the Hardware Point of View
- The Theory: CUDA from the Software Point of View
- In Practice
- Conclusion