Earnings Labs

NVIDIA Corporation (NVDA)

Q4 2017 Earnings Call · Thu, Feb 9, 2017



Stock reaction: Same-Day -2.41% · 1 Week -7.90% · 1 Month -12.71% · vs S&P -15.45%

Transcript

Operator

Operator

Good afternoon. My name is Victoria, and I'm your conference operator for today. Welcome to NVIDIA's Financial Results Conference Call. Thank you. I'll now turn the call over to Arnab Chanda, Vice President of Investor Relations to begin your conference.

Arnab K. Chanda - NVIDIA Corp.

Management

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter and fiscal 2017. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It's also being recorded. You can hear a replay via telephone until February 16, 2017. The webcast will be available for replay up until next quarter's conference call to discuss Q1 financial results. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 9, 2017, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette M. Kress - NVIDIA Corp.

Management

Thanks, Arnab. We had a stellar Q4 and fiscal 2017, with records in all of our financial metrics: revenue, gross margin, operating margin and EPS. Growth was driven primarily by Datacenter tripling with the rapid adoption of AI worldwide. Quarterly revenue reached $2.17 billion, up 55% from a year earlier, up 8% sequentially, and above our outlook of $2.1 billion. Fiscal 2017 revenue was just over $6.9 billion, up 38% and nearly $2 billion more than fiscal 2016. Growth for the quarter and fiscal year was broad-based, with record revenue in each of our four platforms: Gaming, Professional Visualization, Datacenter and Automotive. Our full-year performance demonstrates the success of our GPU platform-based business model. From a reporting segment perspective, Q4 GPU revenue grew 57% to $1.85 billion from a year earlier. Tegra Processor revenue was up 64% to $257 million.

Let's start with our Gaming platform. Q4 Gaming revenue was a record $1.35 billion, rising 66% year-on-year and up 8% from Q3. Gamers continued to upgrade to our new Pascal-based GPUs. Adding to our gaming lineup, we launched GTX 1050-class GPUs for notebooks, bringing eSports and VR capabilities to mobile at great value. The GTX 1050 and 1050 Ti were featured in more than 30 new models launched at last month's Consumer Electronics Show. To enhance the gaming experience, we announced G-SYNC HDR, a technology that enables displays brighter and more vibrant than any other gaming monitor. Our partners have launched more than 60 G-SYNC-capable monitors and laptops, enabling smooth play without screen-tear artifacts. eSports, too, continues to attract new gamers. Major tournaments with multi-million-dollar purses are drawing enormous audiences. This last quarter, Dota 2 held its first major tournament of the season in Boston. Tickets sold out in minutes. The prize…

Operator

Operator

Certainly. Your first question comes from the line of C.J. Muse with Evercore. C.J., your line is open.

C.J. Muse - Evercore Group LLC

Analyst · Evercore

Can you hear me? Yeah, my apologies, stuck on a plane here. Great, great, great results. I guess I was hoping to get a little more color on the Datacenter side. Now that we've completed a full fiscal year 2017, I would love to get some clarity on the different moving parts and contributions there. And then, looking into 2018, how do you see the growth unfolding thereafter? Thank you.

Jen-Hsun Huang - NVIDIA Corp.

Management

Yeah, C.J., first of all, thanks a lot. Well, the single biggest mover would have to be Datacenter. I mean, when you look back on last year and we look forward, there's a lot of reasons why the Datacenter business overall grew 3x, grew by a factor of three. And I would expect that to continue. There are several elements of our Datacenter business. There's the high performance computing part. There's the AI part. There's GRID, which is graphics virtualization. There's cloud computing, which is providing our GPU platform up in the cloud for startups and enterprises and all kinds of external customers to access in the cloud. And there's a brand new AI supercomputing appliance that we created last year for anybody who would like to engage in deep learning and AI, but doesn't have the skills, or doesn't have the resources, or doesn't have the desire to build their own high performance computing cluster. And so we integrated all of that, with all of the complicated software stacks, into an appliance that we maintain over the cloud. We call that DGX-1. And so these pieces, AI, high performance computing, cloud computing, GRID and DGX, all contributed to our growth in Datacenter quite substantially. And so my sense is that, as we look forward to next year, we're going to continue to see that major trend. Of course, gaming was a very large and important factor, and my expectation is that gaming is going to continue to do that. And then longer-term, our position in self-driving cars, I think, is becoming more and more clear to people over time. And I expect that self-driving cars will be available on the road starting this year with early movers, and no later than 2020 for Level 4 by the majors, and you might even see some of them pull into 2019. And so those are some of the things that we're looking forward to.

Operator

Operator

Your next question is from Vivek Arya with Bank of America.

Vivek Arya - Bank of America Merrill Lynch

Analyst · Bank of America

Thanks. I actually had one question for Jen-Hsun and one sort of clarification for Colette. So, Jen-Hsun, where are we in the gaming cycle? It's been very strong the last few years. What proportion of your base do you think has upgraded to Pascal, and where does that usually peak before you launch your next-generation products? And then for Colette, inventory dollars and days picked up; if you could give us some comment on that. And then just on OpEx productivity, you did a very good job last year, but this time you're saying OpEx will go up mid-teens. Do you still think there is operating leverage in the model? Thank you.

Jen-Hsun Huang - NVIDIA Corp.

Management

Well, let's say we typically assume that we have an installed base of a couple of hundred million GeForce gamers, and we've upgraded about two quarters' worth of them, as in two operating quarters out of four years. It takes about three to four years to upgrade the entire installed base. And we started ramping Pascal, as you know, a few quarters ago. And our data would suggest that the upgrade cycle is going well, and we have plenty to go.

Colette M. Kress - NVIDIA Corp.

Management

Thanks, Vivek. On your question on inventory, as you know, in many of our businesses we are still carrying more than one architecture, and a broad list of different products across those architectures. We feel comfortable with our level of inventory as we look forward into fiscal year 2018 and our sales going forward. Your second question was regarding OpEx, comparing where we finished fiscal year 2017 to fiscal year 2018. We do have some great opportunities, large businesses, for us to go capture the overall TAMs, and we are going to continue to invest in the Datacenter, specifically in AI, in self-driving cars, as well as in gaming. And so rather than focus on what the specific operating margin is, we're going to focus primarily on growing the overall TAM, and capturing that TAM on the top line.

Operator

Operator

Your next question comes from the line of Mark Lipacis from Jefferies.

Mark Lipacis - Jefferies LLC

Analyst · Jefferies

Thanks for taking my question. A question back on the Datacenter; the growth was impressive. You mentioned that the hyperscale players really have embraced the products first, and I'm wondering if you could share with us the extent to which you think they're embracing it for their own use, versus deploying it for services such as machine learning as a service, with enterprises really tapping into this through the hyperscale guys. And you mentioned that enterprise, in healthcare, retail, transport, finance, is where you expect to see the technology embraced next; I'm wondering if you could share how you feel about that visibility, and where you're getting that visibility from. Thank you.

Jen-Hsun Huang - NVIDIA Corp.

Management

Well, on hyperscale, you're absolutely right that there's internal use, what we call internal use for deep learning, and then there's hosting GPUs in the cloud for external high-performance computing use, which includes deep learning. Inside the hyperscalers, the early adopters are moving obviously very, very fast. But everybody has to follow. Everybody has to follow. Deep learning has proven to be too effective, and everybody knows now that every hyperscaler in the world is investing very heavily in deep learning. And so my expectation is that, over the coming years, deep learning and AI will become the essential tool by which they do their computing. Now, when they host it in the cloud, people out in the cloud use it for a variety of applications, and one of the reasons why the NVIDIA GPU is such a great platform is its broad utility. We've been working on GPU computing now for coming up on 12 years, and industry after industry, our GPU computing architecture has been embraced for high-performance computing, for data processing, for deep learning and such. And so when somebody hosts it up in the cloud, for example, Amazon putting our GPUs up in the cloud, that instance has the ability to do everything from molecular dynamics to deep learning training to deep learning inferencing. Companies could use it for offloading their computation; start-ups are able to build their company and build their application, and then host it for hundreds of millions of people to use. And so I think the hyperscalers are going to continue to adopt GPUs both for internal consumption and cloud hosting for some time to come. And we're just at the beginning of that cycle, and that's one of the reasons why we have quite a fair amount of enthusiasm…

Operator

Operator

Your next question comes from the line of Atif Malik with Citigroup.

Atif Malik - Citigroup Global Markets, Inc.

Analyst · Citigroup

Hi. Thanks for taking my question, and congratulations to the team on great results and guide. My first question is for Jen-Hsun. Jen-Hsun, on the adoption of VR for gaming: if I look at the price points of the headset and the PC, they're a little bit high for wider adoption. Could the use of GPUs in the cloud, like the GeForce NOW service you're introducing, be a way for the price points on VR to come down? And then I have a follow-up for Colette.

Jen-Hsun Huang - NVIDIA Corp.

Management

The first year of VR has sold several hundred thousand units, many hundreds of thousands of units. And our VRWorks SDK, which allows us to process graphics at very low latency, handling all of the computer vision processing, whether it's lens warping and such, has delivered really excellent results. Early VR is really targeted at early adopters. And I think the focus on ensuring an excellent experience that surprises people, that delights people, by Oculus and by Valve and by Epic and by Vive, by ourselves, by the industry, has really been a good focus. And I think that we've delivered on the promise of a great experience. The thing that we have to do now is make the headsets easier to use, with fewer cables. We have to make them lighter, we have to make them cheaper. And so those are all things that the industry is working on, and as the applications continue to come online, you're going to see that they're going to find success. I think the experience makes it very, very clear that VR is exciting. However, remember that we also brought VR to computer-aided design and to professional applications. In this particular area, the cost is simply not an issue. In fact, many of the applications previously ran on power walls or VR caves that cost hundreds of thousands of dollars. And now you can put that same experience, if not even better, on the desk of designers and creators. And so I think that you're going to find that creative use and professional use of VR is going to grow quite rapidly. And just recently, we announced a brand new Quadro P5000 with VR, the world's first VR notebook, which went to market with HP and Dell. And they're doing terrifically. And so I would think about VR in the context of both professional applications as well as consumer applications, but I think the first year was absolutely a great success.

Operator

Operator

Your next question comes from the line of Romit Shah with Nomura.

Romit Shah - Instinet, LLC

Analyst · Nomura

Yes. Thank you, and first of all, congratulations on a strong fiscal 2017. If I may, Jen-Hsun, the revenue beat this quarter wasn't as big as we've seen the last several periods, and most of it came from Datacenter. I totally understand that when the Gaming business expands as much as it has, it becomes harder to beat expectations by the same margin. But I was wondering if you could just spend some time talking about gaming demand, and how you think it was during the holiday season.

Jen-Hsun Huang - NVIDIA Corp.

Management

Well, the global PC gaming market is still vibrant and growing. And the number of eSports gamers around the world is growing. You guys know that Overwatch is a home run. Activision Blizzard's Overwatch is raging all over Asia, and eSports fans all over the world are picking it up, and it's graphically very intensive. Without a 1050-class GPU and above, it's simply a non-starter, and to really enjoy it, you need at least a 1060. And so this last quarter we launched the 1050 and 1050 Ti all over the world, and we're seeing terrific success with them. And my expectation going into next year is that Overwatch is going to continue to spread all over the world. It's really basically just started. It started in the West and it's now moving into the East, where the largest eSports markets are. And so Overwatch is going to be a huge success. League of Legends is going to continue to be a huge success. And my expectation is that eSports, along with the AAA titles that are coming out this year, is going to keep PC gaming growing. And so I quite frankly thought Q4 was pretty terrific, and we had a record quarter. We had a record year, and I don't remember the last time that a large business the size of ours, and certainly the size of our Datacenter business, grew by a factor of three. And so I think we're in a great position going into next year.

Operator

Operator

Your next question comes from the line of Raji Gill with Needham & Company.

Rajvindra S. Gill - Needham & Co. LLC

Analyst · Needham & Company

Yeah, thanks. Jen-Hsun, can you talk a little bit about the evolution of artificial intelligence, and kind of make a distinction between artificial intelligence versus machine learning versus deep learning? There are different kinds of categorizations and implementations of those different sub-segments. So I wanted to get a sense from you of how NVIDIA's end-to-end computing platform kind of dominates machine learning relative to, say, the competition. Then I have a question on the gross margins, if I could.

Jen-Hsun Huang - NVIDIA Corp.

Management

Yes. First of all, thanks for the question. The way to think about it is: deep learning is a breakthrough technique in the category of machine learning, and machine learning is an essential tool to enable AI, to achieve AI. If a computer can't learn, and if it can't learn continuously and adapt with the environment, there's no way to ever achieve artificial intelligence. Learning, as you know, is a foundational part of intelligence, and deep learning is a breakthrough technique where the software can write software by itself, by learning from a large quantity of data. Prior to deep learning, other techniques, like expert systems and rule-based systems, relied on hand-engineered features, where engineers would write algorithms to figure out how to detect a cat, and then figure out how to write another algorithm to detect a car. You can imagine how difficult that is and how imperfect that is. It basically kind of works, but it doesn't work well enough to be useful. And then deep learning came along. The reason deep learning took a long time to come along is that its singular handicap is that it requires an enormous amount of data to train the network, and it requires an enormous amount of computation. And that's why a lot of people credit the work that we've done with our programmable GPUs and our GPU computing platform, and the early collaboration with deep learning AI researchers, as the big bang, if you will, the catalyst that made modern AI possible. We made it possible to crunch through an enormous amount of data to train these very deep neural networks. Now, the reason deep learning has swept the world: it started with convolutional neural networks, but reinforcement networks and…

Operator

Operator

Your next question comes from the line of Matt Ramsay with Canaccord.

Matthew D. Ramsay - Canaccord Genuity, Inc.

Analyst · Canaccord

Thank you very much. Jen-Hsun, you guys obviously have won some business with your automotive supercomputer at Tesla in recent periods. And I was curious if you could comment on some of the application porting, and moving features from the previous architecture onto your architecture? And I guess how that's gone, and what you guys have learned through that process, and how it might be applied to some of your future partnerships. Thank you.

Jen-Hsun Huang - NVIDIA Corp.

Management

Sure. First of all, you know that we are a full-stack platform. The way we think about all of our platforms is from the application all the way back to the fundamental architecture in a semiconductor device. And so in the case of DRIVE PX, we created the architecture, optimized for neural nets, for sensor fusion, for high-speed processing; the semiconductor design, in the case of DRIVE PX 2 a chip called Tegra Parker; and the system software for high-speed sensor fusion and moving data all the way around the car, where the better you do that, the lower cost the system will be. The neural networks on top of that sit on top of our deep learning SDK, cuDNN and TensorRT, basically frameworks for AI. And then on top of that are the actual algorithms for figuring out how to use that information, from perception to localization to action planning. And then on top of that, we have an API and an SDK that is integrated into map makers, and we integrate into every single HD map service in the world, from HERE to TomTom to ZENRIN in Japan to Baidu in China. So this entire stack is a ton of software. But your question specifically has to do with the perception layer. And that perception layer, quite frankly, is just a small part of the self-driving car problem. And the reason is that, in the final analysis, you've got video coming in and you want to detect lanes; you have video coming in and you want to detect the car in front of you. And all we have to do, and it's not trivial, but it's also not monumental, is detect and sense the lanes and the cars, and we train our networks…

Operator

Operator

Your next question comes from the line of Joe Moore with Morgan Stanley.

Joe L. Moore - Morgan Stanley & Co. LLC

Analyst · Morgan Stanley

Great. Thank you for taking the question. I wondered if you could talk a little bit about the inference market. Where are you in terms of hyperscale adoption for specialized inference-type solutions, and how big do you think that market can ultimately be? Thank you.

Jen-Hsun Huang - NVIDIA Corp.

Management

Yes. The inference market is going to be very large. And as you know very well, in the future almost every computing device will have inferencing on it. A thermostat will have inferencing on it, a bicycle lock will have inferencing on it, cameras will have inferencing on them, and self-driving cars will have a large amount of inferencing on them. Robots, vacuum cleaners, you name it, smart microphones, smart speakers, all the way into the data center. And so I believe that long-term there will be a trillion devices that have inferencing, connected to edge computing devices near them, connected to cloud computing servers. So that's basically the architecture. And so the largest inferencing platform will likely be Arm devices. I think that goes without saying. Arm devices will likely be running inferencing networks: 1-bit XNOR, 8-bit, and even some floating-point. It just depends on what level of accuracy you want to achieve, what level of perception you want to achieve, and how fast you want to perceive it. And so the inferencing market is going to be quite large. We're going to focus on markets where the inferencing precision, the perception scenario, and the performance at which you have to do it are mission-critical. And of course, self-driving cars are a perfect example of that. Robots, manufacturing robots, will be another example. In the future you're going to see at our GTC, if you have a chance to see it, that we're working with AI City partners all over the world on end-to-end video analytics, and that requires very high throughput, a lot of computation. And so the examples go on and on, all the way back into the data center. In the data center, there are several areas where inferencing is quite vital. I mentioned one number earlier: just mapping the earth, mapping the earth at the street level, mapping the earth in HD, at a three-dimensional level, for self-driving cars.

Now, that process is going to require, well, just a pile of GPUs running continuously as we continuously update the information that needs to be mapped. There's also what's called offline inferencing, where you have to retrain a network after you've deployed it, and you would likely retrain and re-categorize, reclassify the data using the same servers that you used for training. And so even the training servers will be used for inferencing. And then lastly, all of the nodes in the cloud will be inferencing nodes in the future. I've said before that I believe every single node in the cloud data center will have inferencing capability, accelerated inferencing capability, in the future. I continue to believe that, and these are all opportunities for us.

Operator

Operator

Your next question comes from the line of Charles Long from Goldman Sachs.

Toshiya Hari - Goldman Sachs & Co.

Analyst · Goldman Sachs

Hello. Can you hear me?

Jen-Hsun Huang - NVIDIA Corp.

Management

Sure.

Toshiya Hari - Goldman Sachs & Co.

Analyst · Goldman Sachs

Hi. This is Toshiya from Goldman. Thanks for taking the question, and congrats on the results. I had a question on gross margins. I think you're guiding Q1 gross margins only mildly below the levels you saw in fiscal Q4, despite the royalty stream from Intel rolling over. And I'm guessing improvement of mix in Datacenter and parts of Gaming is driving this. But A, is that kind of the right way to think about the puts and takes going into Q1? And B, if that is indeed the case, should we expect gross margins to edge higher in future quarters and future years as Datacenter becomes a bigger percentage of your business?

Colette M. Kress - NVIDIA Corp.

Management

Yeah, this is Colette. Let me see if I can help answer that. So you're correct in terms of how to look at Q1. The delta from Q4 to Q1 is that we only have partial recognition of the Intel royalty, which stops in the middle of March. So as we move forward into Q2, we will also have the absence of what we had in Q1. I'm not here to give guidance on Q2, because we just give guidance out one quarter, but keep in mind that there's a partial amount of the Intel royalty still left in Q1 and then it depletes in Q2. If you think about our overall business model, it has moved to higher, value-added platforms, and that's what we're selling. So our goal is absolutely to continue to concentrate on providing those higher-value platforms; that gives us the opportunity on gross margin as we make those investments in terms of OpEx. We'll see what that mix looks like as we go into Q2, but just leaving you with an understanding of the Intel royalty is probably what we can do here. Okay?

Operator

Operator

Your next question comes from the line of Stephen Chin from UBS.

Stephen Chin - UBS Securities LLC

Analyst · UBS

Hi, thanks for taking my questions. The first one is on Datacenter: given the expected sequential growth in that business during the April quarter, can you talk about which products are helping to drive that? Is it possibly the DGX-1 computer box, or is it more GPUs for training purposes at the hyperscale cloud datacenters?

Jen-Hsun Huang - NVIDIA Corp.

Management

It would have to be Tesla processors used in the cloud. There are several SKUs of Tesla processors. There's the Tesla processor used for high-performance computing; it has FP64, FP32, ECC, it has CUDA of course, and it has been optimized for molecular dynamics, astrophysics, quantum chemistry, fluid dynamics; the list goes on and on. The vast majority of the world's high-performance supercomputing applications, imaging applications, 3D reconstruction applications have been ported onto our GPUs over the course of the last decade and some, and that's a very large part of our Tesla business. Then of course, we introduced on top of the architecture our deep learning stack. Our deep learning stack starts with cuDNN, the numerics kernels, with a lot of algorithms inside optimized for numerical processing at all kinds of different precisions. It's integrated into frameworks of different kinds. There are so many different frameworks, from TensorFlow to Caffe to Torch to Theano to MXNet to CNTK, the work that we did with Microsoft, which is really excellent, scaling it up from one GPU to many GPUs across multiple racks. That's our deep learning stack, and that's also very important. And then the third is GRID. GRID is a completely different stack. It's the world's first graphics virtualization stack, fully integrated into Citrix, integrated into VMware. Every single workstation and PC application has been verified, tested, and has the ability to be streamed from a datacenter. And then last year, we started shipping, I think in August, our DGX-1, the world's first AI supercomputer appliance, which integrates a whole bunch more software of all different types. We introduced our first NVIDIA Docker; it containerizes applications and makes it possible for a whole bunch of users to share one DGX. They could all be running different frameworks, because most environments are heterogeneous.

And so that's DGX-1. It's got an exciting pipeline ahead of it, and it's really designed for companies and workgroups who don't want to build their own supercomputer like the hyperscalers do, and aren't quite ready to move into the cloud because they have too much data to move. And so everybody basically can easily buy a DGX-1, fully integrated, fully supported, and get to work on deep learning right away. Each one of these is part of our Datacenter business. Tesla is the largest, because it's been around the longest, but they're all growing, every single one of them.

Operator

Operator

Your next question comes from the line of Steve Smigie with Raymond James.

J. Steven Smigie - Raymond James Financial, Inc.

Analyst · Raymond James

Great. Thanks a lot for the time. Just a quick question on the auto market. At CES, you had some solutions you were demonstrating that showed a pretty significant reduction in terms of size relative to what was previously offered. You really shrunk it down a lot, yet still have great performance. If you think out to the Level 4 solution that you talked about for 2020, how small can you ultimately make that? It seems like, relative to the size of the car, it could be pretty small. So just curious to hear your comments on that, and what impact having this system in the car makes.

Jen-Hsun Huang - NVIDIA Corp.

Management

We currently have DRIVE PX. DRIVE PX today is a one-chip solution for Level 3. With two chips, two processors, you can achieve Level 4. And with many processors, you could achieve Level 5 today. Some people are using many processors to develop their Level 5, and some people are using a couple of processors to develop their Level 4. That's all based on the Pascal generation. In our next generation, the processor is called Xavier. We announced that recently. Xavier basically takes four processors and shrinks them into one. And so we'll be able to achieve Level 4 with one processor. That's the easiest way to think about it. So we achieve Level 3 with one processor today. Next year, we'll achieve Level 4 with one processor, and with several processors, you could achieve Level 5. But I think the number of processors is really interesting, because we need to do the processing of sensor fusion, and we've got to do perception. We have to do localization. We have to do driving. There are a lot of functional-safety aspects to it, failover functionality. There are all kinds of black box recorders, all kinds of different functionality that goes into the processor. And I think it's really quite interesting. But in the final analysis, what's really, really hard, and this is one of the reasons why our positioning in the autonomous driving market is becoming more and more clear, is that it's really a software problem. And it's an end-to-end software problem. It goes all the way from the perception processing in the car, to AI processing to help you drive, connected to HD clouds for HD map processing all over…

Operator

Operator

Your next question comes from the line of Craig Ellis with B. Riley & Company.

Craig A. Ellis - B. Riley & Co. LLC

Analyst

Thanks for sneaking me in, and congratulations on the very good execution. Jen-Hsun, I wanted to come back to the Gaming platform. You've now got the business running at a $5 billion annualized run rate. So congratulations on the growth there. I think investors look at that as a business that's been built on the strength of a vibrant enthusiast market. But at CES, you announced the GeForce NOW offering, which really allows you to tap into the more casual potential gamer. So the question is, what will GeForce NOW do incrementally for the opportunity that you have with your Gaming platform?

Jen-Hsun Huang - NVIDIA Corp.

Management

Yeah, I appreciate that. I think, first of all, the PC gaming market is growing because of a dynamic that nobody expected 20 years ago. And that's basically how video games went from being a game to becoming a sport. And not only is it a sport, it's a social sport. In order to play some of these modern eSports games, it's five-on-five, and so you kind of need four other friends. And so, in order to enjoy, to be part of this phenomenon that's sweeping the world, it's rather sticky. That's one of the reasons why Activision Blizzard is doing so well, and that's one of the reasons why Tencent is doing so well. These two companies have benefited tremendously from the eSports dynamic, and we're seeing it all over the world. And although it's free to play for some people, of course, you need to have a reasonably good computer to run it. And that's one of the reasons why you need GeForce in your PC, so that you can enjoy these sports. When it's a sport, nobody likes to lose, and surely nobody likes to blame their equipment when they do lose. And so having GeForce gives you confidence, and it gives you an edge, and for a lot of gamers it's just the gold standard. And so I think that, number one, eSports is one of the reasons why gaming continues to grow. And I think at this point it's fair to say that it's now the second most-watched spectator sport on the planet behind the Super Bowl, and it is also the second highest-paying winning sport behind football. It will soon be the largest sport in the world, and I…

Operator

Operator

Unfortunately, that is all the time we have for questions. Do you have any closing remarks?

Jen-Hsun Huang - NVIDIA Corp.

Management

I want to thank all of you guys for following us. We had a record year, a record quarter. And most importantly, we're at the beginning of the AI computing revolution. This is a new form of computing, a new way of computing, where parallel data processing is vital to success, and GPU computing, which we've been nurturing for the last decade and some, is really the perfect computing approach. We're seeing tremendous and exciting growth in the data center market. Data center revenue grew 3x year-over-year, and it's on its way to becoming a very significant business for us. Gaming is a significant business for us, and longer term, self-driving cars is going to be a really exciting growth opportunity. The thing that has really changed our company, what really defines how our company goes to market today, is the platform approach: instead of just building a chip that is industry standard, we create software stacks on top of it to serve vertical markets that we believe will be exciting long term and that we can serve. And we find ourselves incredibly well positioned now in gaming, in AI, and in self-driving cars. I want to thank all of you guys for following NVIDIA, and have a great year.

Operator

Operator

This concludes today's conference call. You may now disconnect.