Intel Announces The Nervana Neural Network Processor

Intel announced the Nervana Neural Network Processor (NNP), a custom application-specific integrated circuit (ASIC) built to maximize machine learning performance.

Leapfrogging Or Catching Up?

After playing catch-up with Nvidia on machine learning for the past few years, Intel has begun acquiring multiple companies working on machine learning technology. One of those acquisitions was Nervana, which at the time was offering a software-as-a-service (SaaS) platform for customers wanting to create their own custom deep learning software that would then run on Nervana’s Nvidia Titan X GPU farm.

However, before Intel acquired the company, Nervana was also working on building a custom ASIC with the goal of squeezing as much machine learning training performance as possible out of its silicon. This is a route Google also took with its Tensor Processing Unit (TPU). Even Nvidia has recently started moving in a more custom direction, adding Tensor Core machine learning accelerators to its latest GPUs.

Nervana NNP Technical Details

The Nervana Neural Network Processor (NNP) doesn’t have a standard cache hierarchy; instead, on-chip memory is managed directly by software. This allows the chip to achieve higher compute density per die, which translates into faster training times for neural networks.

Because neural network training on a single chip is largely constrained by memory bandwidth and power, the team invented a new, more efficient numeric format called Flexpoint.

According to Intel, Flexpoint allows scalar computations to be implemented as fixed-point math. Fixed-point arithmetic requires smaller circuits than floating point, which in turn reduces power consumption. However, fixed-point math typically limits numerical range and software flexibility, so it remains to be seen how attractive this trade-off will look to Intel’s customers.
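To make the idea concrete, below is a minimal Python sketch of a shared-exponent fixed-point format in the spirit of Flexpoint. It is purely illustrative: the function names, the 16-bit mantissa width, and the encoding choices are assumptions made for the example, not Intel’s actual implementation.

```python
import numpy as np

def flexpoint_encode(x, mantissa_bits=16):
    """Encode a float tensor as integer mantissas plus ONE shared exponent.

    Illustrative sketch of a shared-exponent fixed-point format, not Intel's
    actual Flexpoint implementation; bit widths and names are assumptions.
    """
    # Pick a single exponent for the whole tensor so the largest magnitude
    # still fits in a signed mantissa of the given width.
    max_mag = np.max(np.abs(x))
    exponent = int(np.ceil(np.log2(max_mag))) - (mantissa_bits - 1) if max_mag > 0 else 0
    scale = 2.0 ** exponent
    mantissas = np.clip(np.round(x / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1).astype(np.int32)
    return mantissas, exponent

def flexpoint_decode(mantissas, exponent):
    """Recover approximate float values from the mantissas and shared exponent."""
    return mantissas.astype(np.float64) * (2.0 ** exponent)

# Example: the multiply-accumulate work stays in integer arithmetic; only the
# shared exponents are combined, which is where the circuit savings come from.
a = np.random.randn(4, 4).astype(np.float32)
b = np.random.randn(4, 4).astype(np.float32)
ma, ea = flexpoint_encode(a)
mb, eb = flexpoint_encode(b)
approx = (ma.astype(np.int64) @ mb.astype(np.int64)) * 2.0 ** (ea + eb)
print(np.max(np.abs(approx - a.astype(np.float64) @ b.astype(np.float64))))
```

Because every value in the tensor shares one exponent, the hardware only needs integer multipliers and adders in the inner loop, which is what allows smaller, lower-power circuits at the cost of dynamic range within a tensor.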

More To Come

Intel hasn’t revealed any performance numbers yet, but Nervana previously said it expected its first chip to deliver roughly 10x the efficiency of Nvidia’s Maxwell architecture. However, both Google and Nvidia have since surpassed that mark with their own chips, and Nvidia has already teased another large jump in performance (at least for inference) with the GPU generation following Volta.

It therefore remains to be seen whether Intel’s new chip can keep pace with those advancements, too, perhaps by using a more advanced process node than the 28nm the Nervana team originally planned to target.

Intel said it will ship the first-generation Nervana NNP by the end of the year and that it already has a roadmap spanning multiple NNP generations. That signals a longer-term commitment to the product line, which may convince customers to buy into the platform and invest in learning how to use it.
