Nvidia: Moore's Law is Dead, Multi-core Not Future

Bill Dally, the chief scientist and senior vice president of research at Nvidia, has written an article for Forbes arguing that Moore's Law, the observation that transistor count, and with it performance, would double roughly every 18 months, is dead.

The problem, according to Dally's Forbes article, is that current CPU architectures are still serial processors, while he believes the future lies in parallel processing. He gives the example of reading an essay: a single reader can only take in one word at a time, but a group of readers, each assigned a paragraph, would get through it far faster.
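Dally's analogy maps naturally onto data-parallel code. As a rough sketch (this example is ours, not Dally's; the four-worker pool and the dummy text are arbitrary), the same word count can be done by one "reader" or farmed out to a pool of worker processes:

```python
# Minimal illustration of the essay analogy: one reader versus a pool of readers,
# each counting the words in its own paragraph. Worker count and text are arbitrary.
from multiprocessing import Pool

def count_words(paragraph):
    return len(paragraph.split())

if __name__ == "__main__":
    essay = ["lorem ipsum dolor sit amet"] * 1000  # stand-in for 1,000 paragraphs

    # Serial: a single reader walks through every paragraph in turn.
    serial_total = sum(count_words(p) for p in essay)

    # Parallel: four readers share the paragraphs between them.
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(count_words, essay))

    assert serial_total == parallel_total
    print(serial_total)
```

The work divides cleanly because no paragraph's count depends on any other; that independence is what makes throughput-style hardware attractive for such problems.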

"To continue scaling computer performance, it is essential that we build parallel machines using cores optimized for energy efficiency, not serial performance. Building a parallel computer by connecting two to 12 conventional CPUs optimized for serial performance, an approach often called multi-core, will not work," he wrote. "This approach is analogous to trying to build an airplane by putting wings on a train. Conventional serial CPUs are simply too heavy (consume too much energy per instruction) to fly on parallel programs and to continue historic scaling of performance."

"Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance," Dally added.

Dally also posited that focusing on parallel computing architectures will help resurrect Moore's Law: "Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance--at a tremendous expense in energy."
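The "twice as fast" figure only holds for programs whose work is almost entirely parallelisable. Amdahl's law, which Dally does not cite but which is the standard way to reason about this, caps the speedup by the serial fraction; a small sketch with an arbitrarily chosen 95% parallel fraction:

```python
# Amdahl's law: speedup on n processors when a fraction p of a program is parallel.
# The 0.95 figure below is an illustrative assumption, not a number from Dally.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16):
    print(f"{n} cores: {amdahl_speedup(0.95, n):.2f}x")
# Doubling the processor count roughly doubles performance only while the serial
# 5% stays negligible; the speedup can never exceed 1 / (1 - 0.95) = 20x.
```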

One big driver of current processor design is the software written to run on today's chips. Dally said that 40 years of ingrained serial programming practice will be hard to change, and that programmers trained in parallel programming are scarce.

"The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs," Dally concluded. "Let's enable the future of computing to fly--not rumble along on trains with wings."

Comment from the forums
  • Tomtompiper
    A good analogy, but why then is his company pushing Fermi, a throwback to the era of the valve: hot, power-hungry and inefficient?
  • Silmarunya
    As good as his ideas might sound, does he take into account how much effort it would take to turn them into reality? How many programs written today could run on the system he proposes? The gains from more efficient computing won't outweigh the effort needed for a transition, at least not for home use. Servers and supercomputers are a different matter.
    And that's not to mention the many years in which serial and parallel systems would run alongside each other, meaning every app would have to be coded twice...

    And a company that took years to design the piece of junk that Fermi really is isn't exactly in a position to dictate what computing should evolve into. If there is a single company not in a position to speak about energy efficiency, it's Nvidia. 480s using nearly as much power as 5970s, but at significantly lower performance, aren't good examples of efficient computing.
  • Clintonio
    Tomtompiper: "A good analogy, but why then is his company pushing Fermi, a throwback to the era of the valve: hot, power-hungry and inefficient?"

    He said it himself: not many are proficient in parallel programming. I expect the support doesn't exist. Nvidia sending out a product that doesn't work with anything would be... dumb?