This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
spk07: Good afternoon. My name is Christina, and I will be your conference operator today. Welcome to NVIDIA's Financial Results Conference Call. All lines have been placed on mute. After the speakers' remarks, there will be a question and answer period. At this time, if you'd like to ask a question, please press star, then the number one on your telephone keypad. To withdraw your question, press the pound key. And I'll turn the call over to Simona Jankowski from Investor Relations to begin your conference.
spk12: Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the first quarter of fiscal 2020. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2020. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 16, 2019, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
spk13: Thanks, Simona. Q1 revenue was $2.2 billion, in line with our outlook, and down 31% year-on-year and up 1% sequentially. Starting with our gaming business, revenue of $1.05 billion was down 39% year-on-year and up 11% sequentially, consistent with our expectations. We are pleased with the initial ramp of Turing and the reduction of inventory in the channel. During the quarter, we filled out our Turing lineup with the launch of mid-range GeForce products that enable us to delight gamers with the best performance at every price point, starting at $149. New product launches this quarter included the GeForce GTX 1660 Ti, 1660, and 1650, which bring Turing to the high-volume PC gaming segments for both desktop and laptop. These GPUs deliver up to 50% performance improvement over their Pascal-based predecessors, leveraging new shader innovations such as concurrent floating point and integer operations, a unified cache and adaptive shading, all with an incredibly power-efficient architecture. We expect continued growth in gaming laptops this year. GeForce gaming laptops are one of the shining spots of the consumer PC market. This year, OEMs have built a record of nearly 100 GeForce gaming laptops. GeForce laptops start at $799 and go all the way up to amazing GeForce RTX 2080 4K laptops that are more powerful than even next-generation consoles. The content ecosystem for ray-traced games is gaining significant momentum. At the March Game Developers Conference, ray-tracing sessions were packed. Support for ray tracing was announced by the industry's most important game engines and APIs: Microsoft DXR, Epic's Unreal Engine, and Unity. Ray tracing will be the standard for next-generation games. In March, at our GPU Technology Conference, we also announced more details on our cloud gaming strategy through our GeForce Now service and the newly announced GFN Alliance. GeForce Now is a GeForce gaming PC in the cloud.
It expands our reach well beyond today's 200 million GeForce gamers to the one billion PCs that are not game-ready. It's an open platform that allows gamers to play the games they own instantly in the cloud, on any PC or Mac, anywhere they like. The service currently has 300,000 monthly active users, with one million more on the wait list. To scale out to millions of gamers worldwide, we announced the GeForce Now Alliance, expanding GFN through partnerships with global telecom providers. SoftBank in Japan and LG Uplus in South Korea will be among the first to launch GFN later this year. NVIDIA will develop the software and manage the service, and share the subscription revenue with Alliance partners. GFN runs on NVIDIA's edge computing servers. As telcos race to offer new services for their 5G networks, GFN is an ideal new 5G application. Moving to data center. Revenue was $634 million, down 10% year-on-year and down 7% sequentially, reflecting the pause in hyperscale spending. While demand from some hyperscale customers bounced back nicely, others paused or cut back. Despite the uneven demand backdrop, the quarter had significant positives, consistent with the growth drivers we outlined on our previous earnings call. First, inference revenue was up sharply both year-on-year and sequentially, with broad-based adoption across a number of hyperscale and consumer internet companies. As announced at GTC, Amazon and Alibaba joined other hyperscalers such as Google, Baidu and Tencent in adopting the T4 in their data centers. A growing list of consumer internet companies is also adopting our GPUs for inference, including LinkedIn, Expedia, Microsoft, PayPal, Pinterest, Snap and Twitter. The contribution of inference to our data center revenue is now well into the double-digit percent. Second, we expanded our reach in enterprise, teaming up with major OEMs to introduce T4 enterprise and edge computing servers.
These are optimized to run the NVIDIA CUDA-X AI acceleration libraries for AI and data analytics. With an easy-to-deploy software stack from NVIDIA and our ecosystem partners, this wave of NVIDIA edge AI computing systems enables companies in the world's largest industries, including transportation, manufacturing, industrial, retail, healthcare and agriculture, to bring intelligence to the edge where their customers operate. And third, we made significant progress in data center rendering and graphics. We unveiled a new RTX server configuration packing 40 GPUs into an 8U space and up to 32 servers in a pod, providing unparalleled density, efficiency and scalability. With a complete stack, this server design is optimized for three data center graphics workloads: rendering, remote workstation and cloud gaming. The rendering opportunity is starting to take shape with early RTX server deployments at leading studios, including Disney, Pixar and Weta. In the quarter, we announced our pending acquisition of Mellanox for $125 per share in cash, representing a total enterprise value of approximately $6.9 billion, which we believe will strengthen our strategic position in data center. Once complete, the acquisition will unite two of the world's leading companies in high-performance computing. Together, NVIDIA's computing platform and Mellanox's interconnects power over 250 of the world's top 500 supercomputers and have as customers every major cloud service provider and computer maker. Data centers of the future will be architected as giant compute engines with tens of thousands of compute nodes, designed holistically with their interconnects for optimal performance. With Mellanox, NVIDIA will optimize data center-scale workloads across the entire computing, networking and storage stack to achieve higher performance, greater utilization and lower operating cost for customers. Together, we can create better AI computing systems from the cloud to the enterprise to the edge.
As stated at the time of the announcement, we look forward to closing the acquisition by the end of this calendar year. Moving to pro visualization. Revenue reached $266 million, up 6% from the prior year and down 9% sequentially. Year-on-year growth was driven by both desktop and mobile workstations, while the sequential decline was largely seasonal. Areas of strength included the public sector, oil and gas, and manufacturing. Emerging applications such as AI, AR and VR contributed an estimated 38% of pro visualization revenue. The real-time ray-tracing capabilities of RTX are a game changer for the visual effects industry, and we are seeing tremendous momentum in the ecosystem. At GTC, we announced that the world's top 3D application providers have adopted NVIDIA RTX in their product releases set for later this year, including Adobe, Autodesk, Chaos Group, Dassault and Pixar. With this rich software ecosystem, NVIDIA RTX is transforming the 3D market. For example, Pixar is using NVIDIA RTX ray tracing on its upcoming films, Weta Digital is using it for upcoming Disney projects, and Siemens NX Ray Traced Studio users will be able to generate rendered images up to four times faster in their product design workflows. We are excited to see the tremendous value NVIDIA RTX is bringing to the millions of creators and designers served by our ecosystem partners. Finally, turning to automotive. Q1 revenue was $166 million, up 14% from a year ago and up 2% sequentially. Year-on-year growth was driven by growing adoption of next-generation AI cockpit solutions and autonomous vehicle development deals. At GTC, we had major customer and product announcements. Toyota selected NVIDIA's end-to-end platform to develop, train and validate self-driving vehicles. This broad partnership includes advancements in AI computing infrastructure using NVIDIA GPUs, simulation using the NVIDIA Drive Constellation platform, and in-car AV computers based on Drive AGX Xavier or Pegasus.
We also announced the public availability of Drive Constellation, which enables millions of miles to be driven in virtual worlds across a broad range of scenarios, with greater efficiency, cost-effectiveness and safety than what's possible to achieve in the real world. Constellation will be reported in our data center market platform. And we introduced NVIDIA Safety Force Field, a computational defensive driving framework that shields autonomous vehicles from collisions. Mathematically verified and validated in simulation, Safety Force Field will prevent a vehicle from creating, escalating or contributing to an unsafe driving situation. We continue to believe that every vehicle will have autonomous capability one day, whether with a driver or driverless. To help make that vision a reality, NVIDIA has created an end-to-end platform for autonomous vehicles, from AI computing infrastructure to simulation to in-car computing. And Toyota is our first major win that validates the strategy. We see this as a $30 billion addressable market by 2025. Moving to the rest of the P&L and balance sheet. Q1 GAAP gross margin was 58.4% and non-GAAP was 59%, down year-on-year due to lower gaming margins and mix, and up sequentially from Q4, which had a $128 million charge for DRAM boards and other components. GAAP operating expenses were $938 million and non-GAAP operating expenses were $753 million, up 21% and 16% year-on-year, respectively. We remain on track for high single-digit OPEX growth in fiscal 2020 while continuing to invest in the key platforms driving our long-term growth, namely graphics, AI and self-driving cars. GAAP EPS was 64 cents and non-GAAP EPS was 88 cents. We did not make any stock repurchases in the quarter following the announcement of the pending Mellanox acquisition. We remain committed to returning $3 billion to shareholders through the end of fiscal 2020 in the form of dividends and repurchases.
So far, we have returned $800 million through share repurchases and quarterly cash dividends. With that, let me turn to the outlook for the second quarter of fiscal 2020. While we anticipate substantial quarter-over-quarter growth, our Q2 outlook is somewhat lower than our expectation earlier in the quarter, when our outlook for fiscal 2020 revenue was flat to down slightly from fiscal 2019. The data center spending pause around the world will likely persist in the second quarter, and visibility remains low. In gaming, the CPU shortage, while improving, will affect the initial ramp of our laptop business. For Q2, we expect revenue to be $2.55 billion, plus or minus 2%. We expect a stronger second half than first half, and we are returning to our practice of providing a revenue outlook one quarter at a time. Q2 GAAP and non-GAAP gross margins are expected to be 59.2% and 59.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $985 million and $765 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of approximately $27 million. GAAP and non-GAAP tax rates are both expected to be 10%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $120 million to $140 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We'll be presenting at the Bank of America Global Technology Conference on June 5th, at the RBC Future of Mobility Conference on June 6th, and at the NASDAQ Investor Conference on June 13th. Our next earnings call, to discuss financial results for the second quarter of fiscal 2020, will take place on August 15th. We will now open the call for questions. Operator, will you please poll for questions? Thank you.
spk07: At this time, if you'd like to ask a question, please press star, then the number one on your telephone keypad. That is star, then the number one. And your first question comes from the line of Aaron Rakers with Wells Fargo.
spk05: Yeah, thanks for taking the question. Collette, I was wondering if you could give a little bit more color or discussion around what exactly you've seen in the data center segment and whether or not or what you're looking for in terms of signs that we can kind of return to growth or maybe this pause is behind us. I guess what I'm really asking is kind of what's changed over the last, let's call it, three months relative to your prior commentary from a visibility perspective and just demand perspective within that segment.
spk13: Sure, thanks for the question as we start out here. I think when we discussed our overall data center business three months ago, we did indicate that our visibility as we turned into the new calendar year was low. We had a challenge in terms of completing some of the deals at the end of that quarter. As we moved into Q1, I think we felt solid in terms of how we completed. We saw probably a combination of those moving forward, continuing with their CapEx expenditures and building out what they need for their data centers, while some others are still in a pause. So as we look at Q2, I think we see a continuation of what we have in terms of visibility. It's not the best visibility going forward, but we're still rock solid on what we think are the benefits of the platform that we provide. Our overall priorities are aligned to what we see with the hyperscalers as well as the enterprises as they think about using AI in so many of their different workloads, but we'll just have to see how this turns out as we go forward. Right now, visibility probably just remains about the same as where we were when we started three months ago.
spk05: Okay, and then as a quick follow up on the gaming side, last quarter you talked about that being down, I think it was termed as being down slightly for the full year. Is that still the expectation or how has that changed?
spk13: So at this time, we don't plan on giving full-year overall guidance. I think in terms of gaming, all of the drivers that we thought about earlier in the quarter, that we talked about at our investor day, and that we have continued to talk about are still definitely in line. The drivers of our gaming business, Turing, and RTX for the future are still on track, but we're not providing guidance at this time for the full year.
spk07: And your next question comes from the line of Harlan Sur with JP Morgan.
spk04: Good afternoon, thanks for taking my question. On the last earnings call, you had mentioned China gaming demand as a headwind. At the analyst day in mid-March, I think Jensen had mentioned that the team was already starting to see better demand trends out of China, maybe given the relaxed stance on gaming bans. Do you anticipate continued China gaming demand on a go-forward basis, and maybe talk about some of the dynamics driving that demand profile in the China geography?
spk14: Sure. China looks fine. I think China has stabilized. The gaming market in China is really vibrant and it continues to be vibrant. Tencent's releasing new games. I think you might have heard that Epic Store's now open in Asia and games are available from the West. So there are all kinds of positive signs in China. There's some 300 million PC gamers in China and people are expecting it to grow. We're expecting the total number of gamers to continue to grow from the one plus billion PC gamers around the world to something more than that. Things look fine.
spk04: Thanks for that. Then as a follow-up, a big part of the demand profile in the second half of the year for the gaming business is always the lineup of AAA games. Obviously you guys have a very close partnership with all of the game developers. How does the pipeline of new games look as they get launched in the October, November timeframe, both in the total number of blockbuster games and in games supporting real-time ray tracing as well as some of your DLSS capabilities?
spk14: Yeah, well, it's seasonal. In the second half of the year, we expect to see some great games. We won't pre-announce anybody else's games for them, but this is a great PC cycle because it's the end of the console cycle, and PC is where the action's at these days, with Battle Royale and esports and so much social going on. The PC gaming ecosystem is just really vibrant. Our strategy with RTX was to take a lead and move the world to ray tracing. At this point, I think it's fairly safe to say that the leadership position that we've taken has turned into a movement that has turned next-generation gaming ray tracing into a standard. Almost every single game platform will have to have ray tracing, and some of them already announced it. And the partnerships that we've developed are fantastic. Microsoft DXR is supporting ray tracing. Unity is supporting ray tracing. Epic is supporting ray tracing. Leading publishers like EA have adopted RTX and are supporting ray tracing. And movie studios: Pixar has announced that they've adopted RTX and will use it to accelerate their rendering of films. And Adobe and Autodesk have jumped onto RTX and will bring ray tracing to their content and their tools. And so I think at this point it's fair to say that ray tracing is the next generation, and it's going to be adopted all over the world.
spk07: And your next question comes from the line of Timothy Arcuri with UBS.
spk08: Thank you. I guess the first question is for Colette. So what went into this decision to pull full-year guidance versus just cutting it? Is it really just fear around how long it could take for data center to come back? Thank you.
spk13: Yeah, I'll start off here and kind of go back to where our thoughts were in Q4, and what the reason was that we chose to provide full-year guidance when we were in Q1. When we looked at Q1 and what we were guiding, we understood that it was certainly an extraordinary quarter, something that we didn't feel was truly representative of our business, and we wanted to give a better view of the trajectory of our business going forward. We are still experiencing, I think, uncertainty as a result of the pause with the overall hyperscale data centers, and we do believe that's gonna extend into Q2. However, we do know and expect that our Q2, or excuse me, our H2 will likely be sizably larger than our overall H1. And the core dynamics of our business at every level are exactly what we expected. That said, though, we're going to return to just quarterly guidance at this time.
spk08: Okay, thanks. And then just as a follow up, can you give us some even qualitative, if not quantitative sense of the $320 million incremental revenue for July, how that breaks out? Is the thinking sort of that data center's gonna be flat to maybe up a little bit and pretty much the remainder of the growth comes from gaming? Thanks.
spk13: Yeah, so when you think about our growth between Q1 and Q2, yes, we do expect our gaming to increase. We do expect our Nintendo Switch shipments to start again in sizable amounts once we move into Q2. And we do, at this time, expect our data center business to probably grow.
spk07: And your next question comes from the line of Toshiya Hari with Goldman Sachs.
spk02: Thanks for taking the question. Jensen, I had a follow-up on the data center business. I was hoping you could provide some color in terms of what you're seeing, not only from your hyperscale customers, which you've talked about extensively, but also on the enterprise and HPC side of your business. And specifically on the hyperscale side, you guys talk about this pause that you're seeing from your customer base. When you're having conversations with your customers, do they give you a reason as to why they're pausing? Is it too much inventory of GPUs and CPUs and so on and so forth? Or is it optimization giving them extra capacity? Is it caution on their own business going forward? Or is it a combination of all of the above? Any color on that would be helpful too. Thank you.
spk14: Hyperscalers are digesting the capacity they have. At this point, I think it's fairly clear that in the second half of last year, they took on a little bit too much capacity, and so everybody has paused to give themselves a chance to digest. However, our business on inference is doing great, and we're working with CSPs all over the world to accelerate their inference models. Now, the reason why the inference activity has recently gotten just off the charts is because of breakthroughs in what we call conversational AI. In fact, today, I think I just saw it today, but I've known about this work for some time, Harry Shum's group, the Microsoft AI Research group, announced their multitask DNN, a general language understanding model. And it broke benchmark records all over the place. And basically what this means is that the three fundamental components of conversational AI are now put together: speech recognition; natural language understanding, which this multitask DNN is a breakthrough in, and which is based on a piece of work that Google did recently called BERT; and text-to-speech. Of course, it's gonna continue to evolve, but these models are gigantic to train. In the case of Microsoft's network, it was trained on Volta GPUs. And these systems require large amounts of memory. The models are enormous. It takes an enormous amount of time to train these systems. And so we're seeing a breakthrough in conversational AI. And across the board, internet companies would like to make their AI much more conversational, so that you can access it through phones and smart speakers and be able to engage AI practically everywhere. The work that we're doing in industries makes a ton of sense. We're seeing AI adoption in just about all the industries, from transportation to healthcare, to retail, to logistics, industrials, agriculture. And the reason for that is because they have a vast amount of data that they're collecting.
And I heard a statistic just the other day, from a talk that Satya gave, that some 90% of today's data was created in just the last two years. And it's being created and gathered by these industrial systems all over the world. And so if you wanna put that data to work, you can create the models using our systems, our GPUs, for training, and then you can extend that all the way out to the edge. This last quarter, we started to talk about our enterprise server based on T4. This inference engine that has been really successful for us at the CSPs is now going out into the edge, and we call them edge servers and enterprise servers. And these edge systems are gonna do AI basically instantaneously. It's too much data to move all the way to the cloud. You might have data sovereignty concerns. You want it to have very, very low latency. Maybe it needs to have multi-sensor fusion capabilities so it understands the context better. For example, what it sees and what it hears has to be harmonious. And so you need that kind of AI, those kinds of sensor computing, at the edge. And so we're seeing a ton of excitement around this area. Some people call it the intelligent edge. Some people call it edge computing. And now, with 5G networks coming, we're seeing a lot of interest around the edge computing servers that we're making. And so those are kind of the activities that we're seeing.
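[The three-stage conversational AI pipeline Jensen describes (speech recognition, natural language understanding, then text-to-speech) can be sketched as a sequence of model stages. The function names and return values below are illustrative stubs for this sketch only, not NVIDIA or Microsoft APIs.]

```python
# Illustrative sketch of the conversational AI pipeline discussed on the
# call: ASR -> NLU -> TTS. Each stage is a stand-in stub; in a real
# system each would be a large neural network (e.g. a BERT-style NLU model).

def speech_to_text(audio: bytes) -> str:
    # Stand-in for an automatic speech recognition (ASR) model.
    return "what is the weather today"

def understand(text: str) -> dict:
    # Stand-in for a natural language understanding model,
    # producing an intent the application can act on.
    return {"intent": "weather_query", "slots": {"when": "today"}}

def text_to_speech(text: str) -> bytes:
    # Stand-in for a text-to-speech model producing an audio waveform.
    return text.encode("utf-8")

def conversational_ai(audio: bytes) -> bytes:
    # Chain the three stages end to end, as described on the call.
    text = speech_to_text(audio)
    intent = understand(text)
    reply = f"Handling intent: {intent['intent']}"
    return text_to_speech(reply)
```

[Because each stage feeds the next, latency and model size compound across the pipeline, which is why the call emphasizes how large these systems are to train and serve.]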
spk02: Thank you. And as a quick follow-up on the gaming side, Colette, can you characterize the product mix within gaming that you saw in the current quarter? You cited mix as one of the key reasons why gross margins were down year over year, albeit off a high base. Going into Q2 and the back half, would you expect the SKU mix within gaming to improve or stay the same? I ask because it's important for gross margins, obviously. Thank you.
spk13: Yeah, when you look at our sequential gross margin increase, that will be influenced by our larger revenue and better mix, which, you're correct, is our largest driver for overall gross margin. However, we will be ramping Nintendo Switch back up, and that does have lower gross margins than the company average, influencing the Q2 gross margin guidance that we have provided. As we look toward the rest of the year, we think mix and the higher revenue will again influence and likely raise our overall gross margins for the full year.
spk07: And your next question comes from the line of Joe Moore with Morgan Stanley.
spk09: Great, thank you. You've talked quite a bit about GeForce Now in the remarks and at the analyst day, and it seems like cloud gaming is gonna be a big topic at E3. Is that gonna be your preferred way to go to market with cloud gaming, and do you expect to sell GPUs to sort of traditional cloud vendors in non-GeForce Now fashion?
spk14: Yes, our strategy for cloud gaming is to extend our PC position for GeForce gamers into the cloud. And our strategy for building out our network is partnerships with telcos around the world. And so we'll build out some of it and on top of the service, we have our entire PC gaming stack. And when we host the service, we'll move to a subscription model. And with our telcos around the world who would like to provision the service at their edge servers, and many of them would like to do so, in conjunction with their 5G telco services to offer cloud gaming as a differentiator in all of these different countries where PC exposure has been relatively low, we have an opportunity to extend our platform out to that billion PC gamers. And so that's our basic strategy. We also offer our edge server platform to all of the cloud service providers. Google has Nvidia GPU graphics in the cloud. Amazon has Nvidia GPU graphics in the cloud. And Microsoft has Nvidia GPU graphics in the cloud. And these GPUs will be fantastic also for cloud gaming and workstation graphics and also ray tracing. And so the platform is capable of running all of the things that Nvidia runs. And we try to put it in every data center, in every cloud, from every region that's possible.
spk09: Thank you very much.
spk07: And your next question comes from the line of Vivek Arya with Bank of America Merrill Lynch.
spk01: Thanks for taking my question. I actually had a clarification for Colette and a question for Jensen. Colette, are you now satisfied that the PC gaming business is operating at normal levels when you look at the Q2 guidance? Like are all the issues regarding inventory and other issues, are they over? Or do you think that the second half of the year is more the normalized run rate for your PC gaming business? And then Jensen, on the data center, Nvidia has dominated the training market. Inference sounds a lot more fragmented and competitive. There's a lot of talk of software being written more at the framework level. How should we get the confidence that your lead in training will help you maintain a good lead in inference also? Thank you.
spk13: Thanks for the question. So let's start with the first part of your question regarding whether we have reached overall normalized gaming levels. When we look at our overall inventory in the channel, we believe that this is largely behind us, and moving forward it will not be an issue. Going forward, we will probably reach normalized levels for gaming somewhere between Q2 and Q3, similar to the discussion that we had back at Analyst Day at the beginning of the quarter.
spk14: NVIDIA's strategy is accelerated computing. It is very different than an accelerator strategy. For example, if you were building a smart microphone, you would need an accelerator for speech recognition, ASR. Our company is focused on accelerated computing, and the reason for that is because the world's body of software is really gigantic, and the world's body of software continues to evolve. And AI is nowhere near done. We're probably in the first couple of innings of AI. And so the amount of software and the size of the models are gonna have to continue to evolve. Our accelerated computing platform is designed to enable the computer industry to bring forward into the future all the software that exists today, whether it's TensorFlow or Caffe or PyTorch, or classical machine learning algorithms like XGBoost, which is actually right now the most popular framework in machine learning overall. And there are so many different types of classical algorithms, not to mention all of the hand-engineered algorithms written by programmers. And those hand-engineered algorithms also would like to be mixed in with all of the deep learning or otherwise classical machine learning algorithms. This whole body of software doesn't run on a single-function accelerator. If you would like that body of software to run on something, it would have to be sufficiently general-purpose. And so the balance that we struck was that we invented this thing called a Tensor Core that allows us to accelerate deep learning to the speed of light. Meanwhile, it has the flexibility of CUDA, so that we can bring forward everything in classical machine learning, as people have started to see with RAPIDS, which is being integrated into machine learning pipelines in the cloud and elsewhere.
And then also all of the high-performance computing applications, or computer vision and image processing algorithms, that don't have deep learning or machine learning alternatives. And so our company is focused on accelerated computing. And speaking of inference, that's one of the reasons why we're so successful in inference right now. We're seeing really great pickup. And the reason for that is because of the type of models that people wanna run. Let's just use one application, one very, very exciting one: conversational AI. You would have to do speech recognition. You would have to then do natural language understanding to understand what the speech is. You might have to translate to another language. Then you have to do something related to maybe making a recommendation or making a search. And then after that, you have to convert that recommendation and search and the intent into speech. While some of it could be 8-bit integer, some of it really wants to be 16-bit floating point. And some of it, because of its state of development, may wanna be in 32-bit floating point. And so the mixed-precision nature and the computational and algorithmic flexibility of our approach make it possible for cloud providers and people who are developing AI applications to not have to worry about exactly what model it runs. We run every single model, and if it doesn't currently run, we'll help you make it run. And so the flexibility of our architecture and the incredible performance in deep learning is really a great balance, and it allows customers to deploy it easily. So our strategy is very different than an accelerator. I think the only accelerators that I really see being successful at the moment are the ones that go into smart speakers. And surely there are a whole bunch being talked about, but I think the real challenge is how to make them run real workloads.
And we're gonna keep cranking along in our current strategy and keep raising the bar as we have in the past.
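The mixed-precision tradeoff described in the answer above, eight-bit integer for some pipeline stages and 16- or 32-bit floating point for others, comes down to bounded rounding error. As a minimal, hypothetical sketch (not NVIDIA code, and the values are illustrative), symmetric int8 quantization of a small set of fp32 activations looks like this:

```python
# Hypothetical sketch of symmetric int8 quantization (not NVIDIA's
# implementation): map fp32 values onto [-127, 127] with a single scale.

def quantize_int8(values):
    # One scale per tensor; "or 1.0" guards against an all-zero input.
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

activations = [0.02, -1.5, 0.73, 3.14, -0.001]
q, scale = quantize_int8(activations)
restored = dequantize(q, scale)

# Rounding error per element is at most half a quantization step,
# which is why int8 suffices for some stages but not for stages that
# need finer resolution (16- or 32-bit floating point).
max_err = max(abs(a - r) for a, r in zip(activations, restored))
```

Stages whose outputs tolerate this half-step error can run in int8 for throughput; stages still under development, or sensitive to small perturbations, stay in fp16 or fp32, which is the flexibility argument made above.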
spk07: Thank you. Your next question comes from the line of Stacey Rasgon with Bernstein Research.
spk11: Hi guys, thanks for taking my question. This is a question for Collette. Collette, you said inference and rendering within data center were both up very strongly. But I guess that has to imply that the training slash acceleration piece is quite weak, even weaker than the overall. Given those should be additive, I'm just surprised it's down that much. Is this truly just digestion? I mean, is it share? I mean, your competitor is now shipping some parts here. I guess, how do we get confidence that we haven't seen a ceiling on this? I mean, do you think, given the trajectory, you can exit the year above the prior peaks? I guess you kind of have to, given at least the qualitative outlook in the second half. I guess maybe just any color you can give us on any of those trends would be super helpful.
spk13: Sure. As we discussed, Stacey, we are seeing many of the hyperscalers definitely purchasing in terms of inferencing, and that continues. Also in terms of training, the training instances that they will need for their cloud or for internal use, absolutely. But we have some that have paused and are going through those periods. So we do believe this will come back. We do believe, as we look out into the future, that they will need that overall deep learning for much of their research, as well as many of their workloads. So no concern on that, but right now we do see a pause. I'll turn it over to Jensen to see if he has additional comments.
spk14: Let's see. I think that when it comes down to training, if your infrastructure team tells you not to buy anything, the things that suffer are time to market and some amount of experimentation that allows you to get better, and waiting longer. I think that for computer vision type of algorithms and recommendation type of algorithms, that posture may not be impossible. However, the type of work that everybody is now jumping on top of, which is natural language understanding and conversational AI and the breakthrough that Microsoft just announced, if you wanna keep up with that, you're gonna have to buy much, much larger machines. And I'm looking forward to that. And I expect that that's gonna happen. But in the latter part of last year, Q4 and Q1 of this year, we did see a pause from the hyperscalers, but I don't expect it to last.
spk11: Just as a quick follow up, I just wanted to ask about the regulatory around Mellanox in the context of what we're seeing out of China now. How do we sort of gauge the risk of, I guess, potential further deterioration in relationships sort of spilling over on the regulatory front around that deal? We've seen that obviously with some of the other large deals in the space. What are your thoughts on that?
spk14: Well, first principles, the acquisition is going to enable data centers around the world, whether it's US or elsewhere, China, to be able to advance much, much more quickly. We're gonna invest in building infrastructure technology. And as a combined company, we'll be able to do that much better. And so this is good for customers, and it's great for customers in China. The two matters that we're talking about are different. And one is related to competition with respect to our acquisition, to competition in the market, and the other one is related to trade. And so the two matters are just different. And in our particular case, we bring so much value to the marketplace in China. And I'm confident that the market will see that.
spk07: And your next question comes from the line of CJ Muse with Evercore ISI.
spk06: Yeah, good afternoon. Thank you for taking my question. I guess a question on the non-cloud part of your data center business. So if you think about the trends you're seeing in enterprise virtualization and HPC, and all the work you're doing around Rapids, rendering, et cetera, can you kind of talk through the visibility you have today for that part of your business? I think that's roughly 50% of the mix. So is that a piece that you feel confident can grow in 2019? And any color around that would be appreciated.
spk14: We expect it to grow in 2019. A lot of our T4 inference work is related to what people call edge computing. And it has to be done at the edge because the amount of data that otherwise would be transferred to the cloud is just too much. It has to be done at the edge because of data sovereignty issues and data privacy issues. And it has to be done at the edge because the latency requirement is really, really high. It has to respond basically like a reflex to make a prediction or make a suggestion or stop a piece of machinery instantaneously. And so a lot of that work, a lot of the work that we're doing in T4 inference is partly in the cloud, a lot of it is at the edge. T4 servers for enterprise were announced, I guess about halfway through the quarter. And the OEMs are super excited about that because the number of companies in the world who want to do data analytics, predictive data analytics is quite large. And the size of the data is growing so significantly. And with Moore's law ending, it's really hard to power through terabytes of data at a time. And so we've been working on building the software stack from the new memory architectures and storage architectures all the way to the computational middleware. And it's called Rapids and I appreciate you saying that. And that's being put together and the activity in GitHub is just fantastic. And so you can see all kinds of companies jumping in to make contributions because they would like to be able to take that open source software and run it in their own data center on our GPUs. And so I expect the enterprise side of our business, both for enterprise big data analytics or for edge computing to be a really good growth driver for us this year.
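The predictive data analytics workload described in the answer above is largely dataframe work, group-bys and aggregations over large columns, which Rapids (via its cuDF library) exposes through a pandas-style API on GPUs. As a hedged, CPU-only stand-in (this is not Rapids code, and the column names and data are hypothetical), the core operation is the kind of thing below:

```python
# Hypothetical stand-in for the groupby-aggregate step that GPU dataframe
# libraries such as RAPIDS cuDF accelerate; in cuDF this would be roughly
# df.groupby("store")["sales"].sum() on a GPU-resident dataframe.
from collections import defaultdict

# Toy rows standing in for terabytes of tabular enterprise data.
rows = [
    {"store": "east", "sales": 120.0},
    {"store": "west", "sales": 80.0},
    {"store": "east", "sales": 45.5},
]

# Sum the "sales" column per "store" key.
totals = defaultdict(float)
for row in rows:
    totals[row["store"]] += row["sales"]
```

The point of the GPU version is that this same logic, applied column-wise across billions of rows, becomes memory-bandwidth bound, which is where the accelerated memory and storage architectures mentioned above come in.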
spk06: As a follow up, real quickly on auto, it's a business that you've talked about more R&D focus, but clearly I think it's surprised positively. What's the visibility like there and how should we think about growth trajectory into the second half of the year?
spk14: Our automotive strategy has several components. There's the engineering component of it, where our engineers and their engineers have to co-develop the autonomous vehicles. And then there's three other components. There's the component of AI computing infrastructure we call DGX, and/or any of the OEM servers that include our GPUs, that are used for developing the AIs. The cars are collecting a couple of terabytes per day per test car. And all of that data has to be powered through and crunched through in the data center. And so we have an infrastructure we call DGX that people could use. And we're seeing a lot of success there. We just announced this last quarter a new infrastructure called Constellation that lets you essentially drive thousands and thousands of test cars in your data center. And they're all going through pseudo-random or directed scenarios that allow you to either test untestable scenarios or regress against previous scenarios. And we call that Constellation. And then lastly, after working on a car for several years, we would install the computer inside the car, and we call that DRIVE. And so these are the four components of opportunities that we have in the automotive industry. We're doing great in China. There's a whole bunch of electric vehicles being created. The robot taxi developments around the world largely use Nvidia's technology. We recently announced a partnership with Toyota. There's a whole bunch of stuff that we're working on, and I'm anxious to announce them to you. But this is an area that is the tip of the iceberg of a larger space we call robotics and computing at the edge. But if you think about the basic computational pipeline of a self-driving car, it's no different essentially than smart retail or the future of computational medical instruments, agriculture, industrial inspection, you know, delivery drones. All of them basically use essentially the same technique.
And so this is the foundational work that we're gonna do for a larger space that people call the intelligent edge or computing at the edge.
spk07: Your next question comes from the line of Chris Caso with Raymond James.
spk03: Yes, thank you, good afternoon. First question is on notebooks. And just to clarify what's been different from your expectations this year, is it simply that the OEMs didn't launch the new models you'd expected given the shortage or is it more just about unit volume? And then, you know, just following up on that, you know, what's your level of confidence in that coming back to be a driver as you go to the second half of the year?
spk14: In Q2, we had to deal with some CPU shortage issues at the OEMs. It's improving, but the initial ramp will be affected. And so the CPU shortage situation has been described fairly broadly, and it affected our initial ramp. We don't expect it to affect our ramp going forward. And, you know, the new category of gaming notebooks that we created called Max-Q has made it possible for really amazing gaming performance to fit into a thin and light. And these new generations of notebooks with our Max-Q design and the Turing GPU, which is super energy efficient, in combination made it possible for OEMs to create notebooks that are both affordable, all the way down to $799, thin and really delightful, all the way up to something incredible with an RTX 2080 and a 4K display. And these are thin notebooks that are really beautiful, that people would love to use. With the invention of the Max-Q design method and all the software that went into it, which we announced last year, I think last year we had some 40 notebooks or so, maybe a little bit less than that. And this year we have some hundred notebooks that are being designed at different price segments, by different OEMs, across different regions. And so I think this year is gonna be quite a successful year for notebooks. And it's also the most successful segment of consumer PCs. It's the fastest growing segment. It is very largely under-penetrated, because until Max-Q came along, it wasn't really possible to design a notebook that is both great in performance and experience and also something that a gamer would like to own. And so finally we've been able to solve that difficult puzzle and created powerful gaming machines that are inside a notebook that's really wonderful to own and carry around. And so this is gonna be a fast growing segment, and all the OEMs know it. And that's why they put so much energy into creating all these different types of designs and styles and sizes and shapes.
And we have a hundred Turing GPU notebooks, gaming PCs ramping right now.
spk03: That's very helpful, thank you. As a follow-up, I just wanted to follow up on some of the previous questions on the automotive market. And we've been talking about it for a while. Obviously the design cycles are very long, so you do have some visibility. And I guess the question is, when can we expect an acceleration of auto revenue, is it next year, the year after? And then what would be the driver of that in terms of dollar contribution? I presume some of the level two plus things you've been talking about would probably be most likely there, given the amount of volume there. If you can confirm that and just give some color on the expectations for drivers.
spk14: Yeah, level two plus, call it 2020, late 2021, or 2022-ish, so that's level two plus. I would say 2019, very, very, very early for robot taxis. Next year, substantially more volume for robot taxis; 2021, bigger volumes for robot taxis. On the ASP differences, the amount of computation you put into a robot taxi, because of sensor resolutions, sensor diversity and redundancy, the computational redundancy and the richness of the algorithms, all of it put together, is probably an order of magnitude plus in computation. And so the economics would reflect that. And so robot taxi is kind of like next year, year after ramp, and then think of level two plus as 2021, 2022. Overall, remember that our economics come from four different parts. There's the NRE component of it, there's the AI development infrastructure, computing infrastructure part of it, the simulation part of it called Constellation, and then the economics of the car. And so we just announced Constellation, and the enthusiasm around it is really great. Nobody should ever ship anything they don't simulate. And my expectation is that billions of miles will get simulated inside a simulator long before they ship. And so that's a great opportunity for Constellation.
spk07: And the next question comes from the line of Matt Ramsey with Cowen.
spk10: Thank you very much, good afternoon. I had two questions, one for Jensen and one for Collette. I guess, Jensen, you've said in many forums that the move down to the new process node at seven nanometers across the business was not really sufficient to have a platform approach. And I agree with that, but maybe you could talk a little bit about your product plans, at least in general terms, around seven nanometers in the gaming franchise and also in your training accelerator program. And I wonder if waiting for some of those products, or at least anticipation of those, might be the cause of a little bit of a pause here. And secondly, Collette, maybe you could talk us through your expectations. I understand there's a lack of visibility in certain parts of the business on revenue, but maybe you could talk about OPEX trends through the rest of the year, where you might have a little more visibility. Thank you.
spk14: The entire pause in Q4 and Q1 is attributed to oversupply in the channel as a result of cryptocurrency. It has nothing to do with Turing. In fact, Turing is off to a faster start than Pascal was. And it continues to be on a faster pace than Pascal was. And so the pause in gaming is now behind us. We're on a growth trajectory with gaming. RTX took the lead on ray tracing, and it's now gonna become the standard for next generation gaming, with support from basically every major platform and software provider on the planet. And our notebook growth is gonna be really great because of the Max-Q design that we invented. And the last couple of quarters also intersected with, overlapped with, the seasonal slowdown in, not sell-through, but the seasonal builds of the Nintendo Switch. And we're gonna go back to the normal build cycle. And as Collette said earlier, somewhere between Q2 and Q3, we'll get back to normal levels for gaming. And so we're off to a great start with Turing, and I'm super excited about that. And in the second half of the year, we will have fully ramped up, from top to bottom, our Turing architecture, spanning everything from $179 to as high performance as you like. And we have the best performance and best GPU at every single price point. And so I think we're in pretty good shape. In terms of process nodes, we tend to design our own process with TSMC. If you look at our process and you measure its energy efficiency, it's off the charts. And in fact, if you take our Turing and you compare it against a seven nanometer GPU on energy efficiency, it's incomparable. In fact, a seven nanometer GPU already exists in the world. And it's easy to go and pull that and compare the performance and energy efficiency against one of our Turing GPUs. And so the real focus for our engineering team is to engineer a process that makes sense for us and to create an architecture that is energy efficient.
And the combination of those two, the combination of those two things allows us to sustain our leadership position. Otherwise, buying off the shelf process is something that we can surely do, but we wanna do much more than that.
spk13: Okay. And to discuss your question regarding the OPEX trajectory for the rest of the year, we're still on track with our thoughts of leaving the fiscal year with year-over-year growth in overall OPEX, on a non-GAAP basis, in the high single digits. We'll probably see an increase sequentially, quarter to quarter, along there, but our year-over-year growth will start to decrease and decline, as we will not be growing at the speed that we did in this last year. But I do believe we're on track to meet that goal.
spk07: And I'll now turn the call back over to Jensen for any closing remarks.
spk14: Thanks, everyone. We're glad to be returning to growth. We are focused on driving three growth strategies. First, RTX ray tracing. It's now clear that ray tracing is the future of gaming and digital design, and Nvidia RTX is leading the way. With the support of Microsoft DXR, Epic, Unity, Adobe, and Autodesk, game publishers like EA, movie studios like Pixar, industry support has been fantastic. Second, accelerated computing and AI computing. The pause in hyperscale spending will pass. Accelerated computing and AI are the greatest forces in computing today, and Nvidia is leading these movements. Whether cloud or enterprise or AI at the edge for 5G or industries, Nvidia's one scalable architecture from cloud to edge is a focal-point platform for the industry to build AI upon. Third, robotics. Some call it embedded AI, some edge AI, or autonomous machines. The same computing architecture is used for self-driving cars, pick-and-place robotic arms, delivery drones, and smart retail stores. Every machine that moves, or machines that watch other things that move, whether with driver or driverless, will have robotics and AI capabilities. Our strategy is to create an end-to-end platform that spans Nvidia DGX AI computing infrastructure, to Nvidia Constellation simulation, to Nvidia AGX embedded AI computing. And finally, we're super excited about the pending acquisition of Mellanox. Together we can advance cloud and edge architectures for HPC and AI computing. See you next quarter.
spk07: And this concludes today's conference call. You may now disconnect.