Nvidia held its 3Q 2017 earnings call today, and as expected, the company had a stellar quarter. Top-line numbers include a record $2.64 billion in revenue, led by 25% growth in gaming and a whopping 109% growth in the data center. Gross margins weighed in at 59.7%, so it's clear the company is executing well.
But aside from the financial numbers, what everyone really wants to know is Nvidia's take on the recent industry announcements that will certainly have a profound and lasting impact on the GPU industry.
First came Intel and AMD's bombshell announcement that AMD's Radeon Graphics are worming their way into Intel's eighth-generation H-Series processors. As surprising as that is considering the acrimonious history between the two companies, it was just the start.
A day later, Raja Koduri, AMD's senior vice president and chief architect of the AMD Radeon Technologies Group (RTG), announced he was leaving the company. This comes on the heels of Koduri's extended leave of absence from the company shortly after the launch of the Vega graphics cards.
Twenty-four hours later, Intel announced that it had brought on Koduri to head its newly formed Core and Visual Computing Group business unit with the intention of developing high-end discrete graphics cards for a "broad range of computing segments." That's shocking because Nvidia and AMD have been the only two discrete GPU producers for the last 20 years.
All of these announcements have a tremendous impact on the wildly successful Nvidia, but the company avoided addressing the recent news during its opening statements on the financial call. At the end of the Q&A portion, however, in response to a question regarding Intel's renewed interest in developing a discrete GPU and its newfound partnership with AMD, CEO Jensen Huang responded:
"Yeah, there's a lot of news out there....first of all, Raja leaving AMD is a great loss for AMD, and it's a recognition by Intel probably that the GPU is just incredibly important now. The modern GPU is not a graphics accelerator, we just left the letter 'G' in there, but these processors are domain-specific parallel accelerators, and they are enormously complex, they are the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the world's largest supercomputers, [and] that's the reason why every single cloud, every major server around the world has adopted Nvidia GPUs."
Huang's statement aligns with our thoughts that Intel's return to the discrete graphics industry is more centered on capturing some of the burgeoning use cases for parallelized workloads, such as AI, than it is about gaming. Nvidia's push into the data center has been fruitful, as evidenced by the 109% revenue growth, but it's really just the beginning of GPU penetration into several other segments, such as autonomous driving.
Huang dove further into the company's design process:
"The amount of software engineering that goes on top of it is significant as well. So, if you look at the way that we do things, we plan our roadmap about five years out. It takes about three years to build a new generation, and we build multiple GPUs at the same time, and on top of that, there are some 5,000 engineers working on system software and numerics libraries, and solvers, and compilers, and graph analytics, and cloud platforms, and virtualization stacks in order to make this computing architecture useful to all of the people we serve.
So when you think about it from that perspective, it's just an enormous undertaking. Arguably the most significant undertaking of any processor in the world today. And that's why we are able to speed up applications by a factor of 100."
This statement highlights the complexity of developing a new GPU. Intel will face similarly significant challenges.
Huang also addressed the new Intel H-Series processors that feature AMD's semi-custom Radeon Graphics chip:
"And lastly, with respect to the chip that they built together, I think it goes without saying, now that the energy efficiency of Pascal GeForce and the Max-Q design technology and all of the software we have created has really set a new design point for the industry, it is now possible to build a state-of-the-art gaming notebook with the most leading-edge GeForce processors, and we want to deliver gaming experiences many times that of a console in 4K and have that be in a laptop that is 18mm thin. The combination of Pascal and Max-Q has really raised the bar, and that's really the essence of it."
That was the last of Huang's statements on the matter, though some of his responses to other questions during the call are telling. Huang prominently repeated the statement that "We are a one architecture company." Huang said the company has a singular focus on one architecture so it can ensure broad compatibility with all aspects of the software ecosystem, not to mention support longevity, stating, "We support our software for as long as we shall live."
Huang also pressed the point that investing in five different architectures dilutes focus and makes it impossible to support them forever, which has long-term implications for customers. Earlier in the call, Huang had pressed another key point:
"If you have four or five different architectures to support, that you offer to your customers, and they have to pick the one that works the best, you essentially are saying that you don't know which one is the best [.....] If there's five architectures, surely over time, 80% of them will be wrong. I think that our advantage is that we are singularly focused."
Huang didn't specifically name Intel in this statement, but Nvidia's focus on a single architecture stands in stark contrast to Intel's approach of offering five (coincidence?) different solutions, such as CPUs, Xeon Phi, FPGAs, ASICs, and now GPUs, for parallel workloads.
As others have opined, Intel's announcement that it's building a discrete GPU is tantamount to an open declaration of war on Nvidia. It wouldn't make sense for Intel to telegraph its intentions to its rivals several years before a product comes to market, so the real question is just how far along Intel's GPU already is in development. The company could already have a new architecture in hand, though it's possible it is just a scaled-up iteration of its existing iGPU technology.
All these questions and more hang thick in the air, but it's anyone's guess how long we will have to wait for answers. If Intel is just beginning the effort, it could be years before a product makes its way to market.