NVIDIA Corporation

Q2 2022 Earnings Conference Call

8/18/2021

spk08: Good afternoon. My name is Mel, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second quarter earnings call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press the pound key. Thank you. Simona Jankowski, you may begin your conference.
spk07: Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2022. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2022. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 18, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
spk06: Thanks, Simona. Q2 was another strong quarter, with revenue of $6.5 billion and year-on-year growth of 68%. We set records for total revenue as well as for gaming, data center, and professional visualization. Starting with gaming, revenue of $3.1 billion was up 11% sequentially and up 85% from a year earlier. Demand remained exceptionally strong, outpacing supply. We are now four quarters into the Ampere architecture product cycle for gaming, and it continues to be our best ever. At Computex in June, we announced two powerful new GPUs for gamers and creators, the GeForce RTX 3080 Ti and RTX 3070 Ti, delivering 50% faster performance than the prior generation with acclaimed features such as real-time ray tracing, NVIDIA DLSS AI rendering, Reflex, and Broadcast. Laptop demand was also very strong. OEMs adopted Ampere architecture GPUs in a record number of designs, from top-of-the-line gaming laptops to those at mainstream price points as low as $799 that bring the power of GeForce GPUs to gamers, students, and creators on the go. Ampere architecture-powered laptops feature our third-generation Max-Q power optimization technology that enables ultra-thin designs, such as the new Alienware X15, the world's most powerful sub-16mm gaming laptop. NVIDIA RTX technology has reset computer graphics and spurred our biggest-ever refresh cycle. Ampere has been our fastest-ramping gaming GPU architecture on Steam, and the combination of Turing and Ampere RTX GPUs has upgraded only about 20% of our installed base; 80% have yet to upgrade to RTX. The audience for global esports will soon approach half a billion people, while the number of those who live-stream games is expected to reach over 700 million. The number of PC gamers on Steam is up almost 20% over the past year. More than 60 RTX games now support NVIDIA's RTX ray tracing or DLSS, including today's biggest game franchises, such as Minecraft, Fortnite, and Cyberpunk. New RTX games this quarter include Red Dead Redemption 2, one of the top-rated games of all time, popular titles like Rainbow Six Siege and Rust, and Minecraft RTX in China, with over 400 million players. For competitive gamers, NVIDIA Reflex, which reduces latency, is now supported by 20 games. Let me say a few words on cryptocurrency mining. In an effort to address the needs of miners and direct GeForce to gamers, we increased the supply of cryptocurrency mining processors, or CMP, and introduced low hash rate GeForce GPUs with limited Ethereum mining capability. Over 80% of our Ampere architecture-based GeForce shipments in the quarter were low hash rate GPUs. The contribution of crypto mining to gaming revenue is difficult to quantify. CMP revenue, which is recognized in OEM, was $266 million, lower than our original $400 million estimate on reduced mining profitability, and we expect a minimal contribution from CMP going forward. GeForce Now reached a new milestone this quarter, surpassing 1,000 PC games, more than any other cloud gaming service. The premium tier is available for a subscription of $10 per month, giving gamers access to RTX-class performance even on an underpowered PC, Mac, Chromebook, iOS, or Android device. Moving to pro visualization, Q2 revenue was a record $519 million, up 40% sequentially and up 156% year-on-year. Strong sequential revenue growth was led by desktop workstations, driven by demand to outfit design offices at home as remote work becomes the norm across industries.
This was also the first big quarter of the Ampere architecture ramp for pro visualization. Key verticals driving Q2 demand include automotive, public sector, and healthcare. At SIGGRAPH last week, we announced an expansion of NVIDIA Omniverse, our simulation and collaboration platform that provides the foundation of the metaverse. Through new integrations with Blender, the world's leading open-source 3D animation tool, and Adobe, we're opening the Omniverse platform to millions of additional users. We are also collaborating with Apple and Pixar to bring advanced physics capabilities to Pixar's Universal Scene Description framework, embracing open standards to provide 3D workflows to billions of devices. Omniverse Enterprise software is in the early access stage and will be generally available later this year on a subscription basis from NVIDIA's partners, including Dell, HP, Lenovo, and many others. Over 500 companies are evaluating Omniverse Enterprise, including BMW, Volvo, and Lockheed Martin. And more than 50,000 individual creators have downloaded Omniverse since it entered open beta in December. Moving to automotive, Q2 revenue was $152 million, down 1% sequentially and up 37% year on year. Sequential revenue declines in infotainment were largely offset by growth in self-driving. Looking further out, we have substantial design wins set to ramp that we expect will drive a major inflection in revenue in the coming years. This quarter, we announced several additional wins. Self-driving startup AutoX unveiled its latest autonomous driving platform for robotaxis powered by NVIDIA Drive. The performance and safety capabilities of the software-defined NVIDIA Drive platform have enabled AutoX to become one of the first companies in the world to provide full self-driving mobility services without the need for a safety driver. In autonomous trucking, Drive ecosystem partner Plus signed a deal with Amazon to provide at least 1,000 self-driving systems to Amazon's fleet of delivery vehicles. These systems are powered by NVIDIA Drive for high-performance, energy-efficient, and centralized AI compute. And autonomous trucking startup Embark is building on NVIDIA Drive. Its system is being developed for trucks from four major OEMs, Freightliner, Navistar International, Paccar, and Volvo, representing the vast majority of Class 8, or largest-size, trucks in the U.S. The NVIDIA Drive platform is being rapidly adopted across the transportation industry, from passenger-owned vehicles to robotaxis to trucking and delivery vehicles. We believe everything that moves will be autonomous someday. Moving to data center, revenue of $2.4 billion grew 16% sequentially and 35% from the year-ago quarter, which was our first quarter to include Mellanox. Growth was driven by both hyperscale customers and vertical industries, each of which had record revenues. Our flagship A100 continued to ramp across hyperscale and cloud computing customers, with Microsoft Azure announcing general availability in June, following AWS and Google Cloud Platform's general availability in prior quarters. Vertical industry demand was strong, with sequential growth led by financial services, supercomputing, and telecom customers. We also had exceptional growth in inference, which reached a record, more than doubling year on year. Revenue from inference-focused processors includes the new A30 GPU, which provides four times the inference performance of the T4.
Customers are also turning to NVIDIA GPUs to take AI to production, shifting from CPUs to GPUs driven by the stringent performance, latency, and cost requirements of deploying and scaling deep learning AI workloads. NVIDIA networking products posted solid results. We see momentum across regions, driven by our technology leadership with upgrades to high-speed products such as ConnectX-6, as well as new customer wins across cloud service providers, enterprise, and high-performance computing. We extended our leadership in supercomputing. The latest TOP500 list shows that NVIDIA technologies power 342 of the world's top 500 supercomputers, including 70% of all new systems and eight of the top 10. To help companies harness the new industrial high-performance computing revolution, we deliver a turnkey AI data center solution with the NVIDIA DGX SuperPOD, the same technology that powers our new Cambridge-1 supercomputer in the UK and a number of others in the TOP500. We expanded our AI software and subscription offerings, making it easier for enterprises to adopt AI from the initial development stage through to deployment and operations. We announced NVIDIA Base Command, our software-as-a-service offering for operating and managing large-scale, multi-user, and multi-team AI development workloads on DGX SuperPODs. Base Command is the operating and management system software for distributed training clusters. We also announced general availability of NVIDIA Fleet Command, our managed edge AI software-as-a-service offering. Fleet Command helps companies solve the problem of securely deploying and managing AI applications across thousands of remote locations, combining the efficiency and simplicity of central management with the cost performance and data sovereignty benefits of real-time processing at the edge. Early adopters of Fleet Command include some of the world's leading retail, manufacturing, and logistics companies and the specialty software companies that work with them. The new NVIDIA Base Command and Fleet Command software and subscription offerings followed last quarter's announcement of the NVIDIA AI Enterprise software suite, which is in early access with general availability expected soon. Our enterprise software strategy is supported by the NVIDIA-Certified Systems program with server OEMs, which are bringing to market over 55 systems ready to run NVIDIA's AI software out of the box to help enterprises simplify and accelerate their AI deployments. The NVIDIA ecosystem keeps getting stronger. NVIDIA Inception, our acceleration platform for AI startups, just surpassed 8,500 members. With cumulative funding of over $60 billion and members in 90 countries, Inception is one of the largest AI startup ecosystems in the world. CUDA has now been downloaded 27 million times since it launched 15 years ago, with 7 million of those in the last year alone. TensorRT, for inference, has been downloaded nearly 2.5 million times across more than 27,000 companies. And the total number of developers in the NVIDIA ecosystem now exceeds 2.6 million, up four times in the past four years. Let me give you a quick update on Arm. In the nearly one year since we initially agreed to combine with Arm, we have gotten to know the company, its business, and its people much better. We believe more than ever in the power of our combination and the benefits it would deliver for Arm, for the UK, and for its customers across the world in the era of AI. Arm has great potential.
We love Arm's business model and are committed to keeping its open licensing approach. And with NVIDIA's scale and capabilities, Arm will make more IP, and sooner, for its mobile and embedded customers while expanding into data center, IoT, and other new markets. NVIDIA accelerates computing, which starts with the CPU. Whatever new markets are opened with the CPU are accelerated computing opportunities. We've announced accelerated platforms for Amazon Graviton, Ampere Computing, MediaTek, and Marvell, spanning cloud computing, AI, cloud gaming, supercomputing, and edge AI, to Chrome PCs. We plan to invest in the UK, and we have with the Cambridge-1 supercomputer, and through Arm, making the UK a global center in science, technology, and AI. We are working through the regulatory process, although some Arm licensees have expressed concerns and objected to the transaction, and discussions with regulators are taking longer than initially thought. We are confident in the deal and that regulators should recognize the benefits of the acquisition to Arm, its licensees, and the industry. Moving to the rest of the P&L, GAAP gross margin of 64.8% for the second quarter was up 600 basis points from a year earlier, reflecting the absence of certain Mellanox acquisition-related costs. GAAP gross margin was up 70 basis points sequentially. Non-GAAP gross margin was 66.7%, up 70 basis points from a year earlier and up 50 basis points sequentially, reflecting higher ASPs within desktop GeForce GPUs on continued growth in high-end Ampere architecture products, partially offset by a mix shift within data center. Q2 GAAP EPS was $0.94, up 276% from a year earlier. Non-GAAP EPS was $1.04, up 89% from a year earlier, adjusted for the four-for-one stock split effective this quarter. Q2 cash flow from operations was a record $2.7 billion. Let me turn to the outlook for the third quarter of fiscal 2022. We expect another strong quarter, with sequential growth driven largely by accelerating demand in data center. In addition, we expect sequential growth in each of our three other market platforms. Gaming demand is continuing to exceed supply, and we expect channel inventories to remain below target levels as we exit Q3. The contribution of CMP to our revenue outlook is minimal. Revenue is expected to be $6.8 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 65.2% and 67%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.96 billion and $1.37 billion, respectively. GAAP and non-GAAP other income and expenses are both expected to be an expense of approximately $60 million, excluding gains and losses on equity securities. GAAP and non-GAAP tax rates are expected to be 11%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $200 million to $225 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We will be attending the following virtual events: the BMO Technology Summit on August 24th, the New Street Big Ideas in Semiconductors Conference on September 9th, the Citi Global Tech Conference on September 13th, the Piper Sandler Global Technology Conference on September 14th, and the Evercore ISI Auto Tech and AI Forum on September 21st. Our earnings call to discuss third quarter results is scheduled for Wednesday, November 17th.
We will now open the call for questions. Operator, would you please poll for questions?
spk08: Thank you. At this time, I would like to remind everyone, in order to ask questions, press star, then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. And as a reminder, please limit yourself to one question. Your first question comes from the line of Vivek Arya of Bank of America. Your line is now open. You may ask your question.
spk09: Thanks for taking my question. I actually had a near-term and a longer-term question on the data center. I think near term you mentioned the possibility of accelerating data center growth from the 35% rate. I was hoping you could give us some more color around that confidence and visibility. And then longer term, you know, Jensen, we have seen a lot of announcements from NVIDIA about your enterprise software opportunity. I honestly don't know how to model that. It sounds very promising, but how should we model it? What problem are you trying to solve? You know, is it cannibalizing demand you might have otherwise seen from your public cloud customers, or is this incremental to growth? So just any guidance or any insights into how to think about NVIDIA's enterprise software opportunity longer term. Thank you.
spk13: Yes, thanks for the question. We are seeing accelerated growth. As we've already reported, we had record revenues in both hyperscale and cloud and industrial enterprise this last quarter, and we're seeing accelerated growth. The acceleration in hyperscale and cloud comes from the transition of the cloud service providers in taking their AI applications, which are now heavily deep learning driven, into production. There are several things that we've spoken about in the past that really make NVIDIA the ideal platform to scale out with. And if my line is a little wobbly, let me apologize. The several elements of our platform are, number one, the Ampere GPU, which is now a universal GPU for AI, for training, but incredibly good for inference. It's terrific in its throughput, it's terrific in its fast response time as well, and therefore the cost of deployment, the cost of operating the AI applications, is the lowest. The second is the introduction of TensorRT, which is our optimizing compiler that makes it possible for us to compile and optimize any AI application to our GPUs, whether it's computer vision or natural language understanding, conversational AI, or recommender systems. The types of applications that are deploying AI are really quite vast. And then lastly, there's the software inference server that we offer called Triton, which supports every one of our GPUs. It supports CPUs as well as GPUs, so every internet service provider could deploy and operate their entire data center using Triton. These several things are really accelerating our growth. So the first element is the deployment, the transition of deep learning AI applications into large-scale deployment. In the enterprise, the thing that's driving AI is, as you know, that every enterprise wants to race towards being a tech company and take advantage of connected clouds and connected devices and artificial intelligence to achieve it. They have an opportunity to deploy AI services out at the edge. And in order to do so, there are several things that have to happen. First, we have to create a computing platform that allows them to do training in the IT environment that they understand, which is virtualized, which is largely managed by VMware. And our collaboration with VMware on creating a new type of system that could be integrated into the enterprise has been quite a significant effort, and it's in volume production today. The second is a server that allows the enterprise customers to deploy their AI models out to the edge. And the AI engine, the software suite that we've been developing over the last 10 years, has now been integrated into this environment and allows the enterprises to basically run AI out of the box. There are three elements of our software product there. First is NVIDIA AI Enterprise, and that basically takes all of the state-of-the-art AI solvers and engines and libraries that we've industrialized and perfected over the years and makes them available under an enterprise license. Second is an operating system platform called Base Command that allows for distributed, scaled-up software development for training and developing models. And then the third is Fleet Command, which is an operating system software product that lets you operate and deploy and manage the AI models out to the edge. These three software products, in combination with the servers called NVIDIA-Certified, taken out through our network of partners, are our strategy to accelerate the adoption of AI by enterprise customers.
And so we're really enthusiastic about entering into the software business model. This is an opportunity that could represent, of course, tens of millions of servers. We believe all of them will be GPU accelerated. We believe that enterprises will be deploying and taking advantage of AI to revolutionize their industries. And using quite a traditional enterprise software licensing business model, this could represent billions of dollars of business opportunity for us.
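As a rough, hypothetical illustration of the inference-serving path described above, and not NVIDIA's own example, a client querying a running Triton Inference Server from Python might look like the sketch below. The server address, the model name "resnet50", and the tensor names "input" and "output" are assumptions for illustration only.

```python
# Hypothetical sketch: querying a Triton Inference Server over HTTP.
# Assumes a server at localhost:8000 already serves a model named
# "resnet50" with an FP32 input tensor "input" and an output tensor
# "output"; all names and shapes here are illustrative.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy batch of one 224x224 RGB image.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input", batch.shape, "FP32")
infer_input.set_data_from_numpy(batch)

# Run inference and fetch the requested output tensor.
response = client.infer(
    model_name="resnet50",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output")],
)
print(response.as_numpy("output").shape)
```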
spk08: Thank you. Next question comes from the line of Stacy Rasgon of Bernstein. Your line is now open. You may ask your question.
spk12: Hi, guys. Thanks for taking my questions. I wanted to go back, Colette, to the sequential guidance. You gave a little bit of color by segment. If I look at your gaming revenues, it's kind of like three quarters in a row you've been up, you know, call it ballpark 10% or 11%. And my understanding is that was sort of a function of your ability to bring on supply. So I guess, what does the supply addition look like as you're going from Q2 into Q3? Do you think you can still maintain that kind of sequential growth, or does it dial down? I also would play that against your other commentary suggesting that the sequential growth, and I assume on a dollar basis, was being driven primarily by data center. So how do I think about the interplay within those comments of sequential growth of gaming, especially given the trajectory we've had over the last several quarters?
spk06: Yeah, so let me start and I'll let Jensen add a bit, Stacy, to your question. Yes, we're providing the guidance for Q3 of $6.8 billion in revenue. Now, excluding CMP, we expect our revenue to grow over $500 million sequentially. The lion's share of that sequential revenue increase will be coming from data center. We do expect gaming to be up slightly on a sequential basis, but remember, we are still supply constrained. Automotive and ProViz are also expected to be up slightly quarter over quarter. And from a CMP perspective, we'll probably just have minimal amounts in Q3. So our Q3 results don't have seasonality in them for gaming, and are really about the supply that we believe we can have for Q3. I'll see if Jensen wants to add any more color.
spk13: Thank you. Thanks for the question. As you know, RTX is a fundamental reset of computer graphics. This is a technology called ray tracing that has been the holy grail of computer graphics for quite a long time, for 35 years. With our NVIDIA research over 10 years, we finally made it possible to do real-time ray tracing with RTX. RTX demand is quite incredible. And as you know, we have a large installed base of PC gamers. They use an architecture called GTX based on programmable shaders that we invented some 20 years ago. And now we've reset the entire installed base. And Ampere is off to just an incredible start. It's the best-selling GPU architecture in the history of our company. And yet we've only upgraded some 20%, less than 20%, of our total installed base. So there's another 80% of the world's PC gaming market that we have yet to upgrade to RTX. Meanwhile, the number of PC gamers in the world grew substantially. Steam grew 20% this last year. And so I think we're right at the beginning of our RTX transition. Meanwhile, computer graphics has expanded into so many different new markets. RTX, we've always believed, would reinvent the way that people did design. And we're seeing that happening right now as we speak, as the workstation market is growing faster than ever and has achieved record revenues. And at the same time, because of all of our work with cloud gaming, we now see public clouds putting cloud graphics, whether it's workstations or PCs or cloud gaming consoles, up in the cloud. So we're seeing strong demand in PCs, in laptops, in workstations, in mobile workstations, in the cloud. And so RTX is really doing great. Our challenge there is that demand is so much greater than supply. And as Colette said, we're going to be supplying these things.
spk08: Thank you. Next question comes from the line of Matt Ramsay of Cowen. Your line is now open.
spk03: Yes, thank you very much. Good afternoon, everybody. Before my question, Jensen, I just wanted to say congrats on the Noyce Award. That's a big honor. For my question, I wanted to follow on from Stacy's question about supply. And Colette, maybe you could give us a little bit of commentary around supply constraints in gaming across the different tiers or price tiers of your gaming cards. I'm just trying to get a better understanding as to how you guys are managing supply across the different price tiers. And I guess it translates into a question of, are the gaming ASPs that we're seeing in the October quarter guidance what you would call sustainable going forward? Or do you feel like that mix may change as supply comes online? Thank you.
spk06: So I'll start here. Thanks for the question on our overall mix as we go forward. First, our supply constraints in our gaming business are largely attributable to desktop and notebook. That can mean a lot of different things in terms of the components that are necessary to build so many of our products. But our mix is really important. As we are also seeing many of our gamers very interested in our higher-end, higher-performance products, we will continue to see that mix as a driver that lifts both our revenue and can lift our overall gross margins. So there are quite a few different pieces of our supply that we have to think about, but we are going to try and provide the best solutions for our gamers at this time.
spk08: Thank you. Next question comes from the line of CJ Muse of Evercore. Your line is now open.
spk02: Yeah, thank you. Good afternoon. I guess a follow-up question on the supply constraints. When do you think that they'll ease, and how should we think about gaming into the January quarter vis-a-vis typical seasonality, given I would assume you would continue to be supply constrained? Thank you.
spk13: I could take it or you can, either one of us.
spk06: Go ahead, Jensen, and I'll follow up if there are some other things.
spk13: Okay. Yeah, we're supply constrained in graphics, and we're supply constrained while we're delivering record revenues in graphics. Cloud gaming is growing. Cloud graphics is growing. RTX made it possible for us to address the design and creative workstations. Historically, rendering ray-traced and photorealistic images has largely been done on CPUs, and for the very first time you can actually accelerate it with NVIDIA GPUs, these RTX GPUs. And so the workstation market is really doing great. The backdrop of that, of course, is that people are building offices in their homes. And for many of the designers and creators around the world, some 20 million of them, they have to build a workstation or an office at home as well as the one at work, because remote work is the new norm. And meanwhile, of course, RTX has reset all of our consumer graphics and the few hundred million installed-base PC gamers, and it's time to upgrade. And so there's a whole bunch of reasons we're achieving record revenues while we're supply constrained. We have enough supply to meet our second half company growth plans. And next year, we expect to be able to achieve our company's growth plans for next year. Meanwhile, we have secured and are securing pretty significant long-term supply commitments as we expand into all these different market initiatives that we've set ourselves up for. And so I would expect that we will see a supply-constrained environment for the vast majority of next year, is my guess at the moment. But a lot of that has to do with the fact that our demand is just so great. You know, RTX is really a once-in-a-generation reset of modern computer graphics. Nothing like this has happened since the beginning of computer graphics. And so the invention is really quite groundbreaking, and you can see its impact.
spk08: Thank you. Next question comes from the line of Harlan Sur of J.P. Morgan. Your line is now open.
spk01: Good afternoon. Congratulations on the strong results, outlook, and execution. The Mellanox networking franchise has been a really strong and synergistic addition to the NVIDIA compute portfolio. I think kind of near to midterm, the team is benefiting from the transition to 200 and 400 gig networking connectivity in cloud and hyperscale. And then I think in addition to that, you guys are getting some good traction with the BlueField SmartNIC products. Can you just give us a sense of how the business is trending year over year, and do you expect continued quarter-over-quarter networking momentum into the second half of this year, especially as the cloud and hyperscalers are going through a server and CapEx spending cycle?
spk13: Yeah, I really appreciate that question. Mellanox had a solid growth quarter, and the Mellanox networking business is really growing incredibly. There are three dynamics happening all at the same time. The first is the transition that you're talking about. You know that the world's data centers, hyperscale data centers, use this form of computing called disaggregated computing, which basically means that a single application is running on multiple servers at the same time. This is what makes it possible for them to scale out. The more users for an AI application or a service, they just have to add more servers. And so the ease of scale-out that disaggregated computing provides also puts enormous pressure on the network. And Mellanox has the world's lowest latency and the highest bandwidth and performance networking on the planet. And so the ability to scale out and the ability to provision disaggregated applications is really much, much better with Mellanox networking. So that's number one. Number two, almost every company in the world has to be a high-performance computing company now. You see that the cloud service providers, one after another, are building effectively supercomputers. What historically was InfiniBand in supercomputing centers, the cloud service providers now have to build supercomputers themselves. And the reason for that is because of artificial intelligence and training these gigantic models. The rate of growth of network sizes, of AI model sizes, is doubling every two months. It's doubling not every year or two years, it's doubling every two months. And so you could imagine the size. We're now talking about training AI models that are 100 trillion parameters large. The human brain has 150-plus trillion synapses, or neurons. And so that gives you a sense of the scale of AI models that people are developing. And so you're going to see supercomputers that are built out of Mellanox InfiniBand and its high-speed networking, along with NVIDIA GPU computing, in more and more cloud service providers. You're also seeing it in enterprises for use in the discovery of drugs. There's a digital biology revolution going on, as the large-scale computing that we're able to do now and AI let us better understand biology and better understand chemistry, bringing both of those fields into the field of information sciences. And so you're seeing large supercomputers being built in enterprises around the world as well. And so the second dynamic has to do with our incredibly great networking, InfiniBand networking, that is the de facto standard in high-performance computing. And the third dynamic is data centers going software-defined. In order to orchestrate and run a data center with just a few people, essentially running an entire data center, hundreds of thousands of servers, as if it's just one computer in front of you, that entire data center has to be software-defined. And the amount of software that goes into that software-defined data center running on today's CPUs is the networking stack, the storage stack, and now, because of zero trust, the security stack. All of that is putting enormous pressure on the available computing capacity for applications, which is ultimately what data centers are designed to do.
And so the software-defined data center needs to have a place to take that infrastructure software and accelerate it, to offload it, to accelerate it, and very importantly, to isolate it from the application plane so that intruders can't jump into the operating system, if you will, of your data center, the fabric of your data center. And so the answer to that is BlueField: the ability to offload, accelerate, and isolate the data center software infrastructure, and to free up all of the CPUs to run what they're supposed to run, which is the applications. Just about every data center in the world is moving towards a zero-trust model, and BlueField is just incredibly well positioned. So these three dynamics: disaggregated computing, which means really strong and fast networking; every company needing high-performance computing; and lastly, software-defined data centers going to zero trust. These are really important dynamics, and I appreciate the opportunity to tell you all that. You can just tell how super excited I am about the prospects of the networking business and the importance it has in building modern data centers.
spk08: Thank you. Next question comes from the line of Aaron Rakers of Wells Fargo. Your line is open.
spk04: Yeah, thanks for taking the question. I think you hit on a lot of my questions around the data center in that last response. So maybe I'll just ask on a P&L basis. You know, one of the things that I see in the results, or more importantly the guide, is that you're now, Colette, guiding to over a 67% gross margin potentially. I'm curious, as we move forward, how do you think about the incremental gross margin upside still from here, and how are you thinking about the operating margin leverage for the company from here through the P&L? Thank you.
spk13: Yeah, Colette, if you don't mind, let me take that, and if you could just follow up with some details, that would be great. I think at the highest level, and I really appreciate the question, at the highest level the important thing to realize is that artificial intelligence is the single greatest technology force that the computer industry has ever seen, and potentially the world's ever seen. The automation opportunity, which drives productivity, which translates directly to cost savings for companies, is enormous. And it opens up opportunities for technology and computing companies like never before. And let me just give you some examples. The fact that we could apply so much technology to warehouse logistics, retail automation, customer call center automation is really quite unprecedented. The fact that we could automate truck driving and last-mile delivery, providing an automated chauffeur, those kinds of services and benefits and products were never imaginable before. And so the size of the IT industry, the industry that computer companies like ourselves are part of, has expanded tremendously. And so the thing that we want to do is to invest as smartly but as quickly as we can to go after these large business opportunities where we can make a real impact, and while doing so, to do so in a way that is architecturally sensible. One of the things that is really an advantage of our company is the nature of the way that we build products, the nature of the way that we build software, our discipline around the architecture, which allows us to be so efficient while addressing climate science on the one hand, digital biology on the other, artificial intelligence, and robotics and self-driving cars. And, of course, we already talked about computer graphics and video games. Using one architecture, and having had that discipline now for almost 30 years, has given us incredible operating leverage. That's where the vast majority of our operating leverage comes from, which is architectural. The technology is architectural, our products are architectural in that way, and the company has been built architecturally in that way. And so hopefully, as we go after these large, large market opportunities that AI has provided us, and we do so in a smart and disciplined way with great leverage through our architecture, we can continue to drive really great operating leverage for the company and for our shareholders.
spk08: Thank you. We have the next question. It comes from the line of John Pitzer of Credit Suisse. Your line is open.
spk05: Yeah, good afternoon, guys. Thanks for letting me ask the question. I apologize for the short-term nature of the question, but it's what I get asked most frequently. I kind of want to return to the impact of crypto, or the potential impact of crypto. Colette or Jensen, is there any way to kind of gauge the effectiveness of the low hash rate GeForce? Why only 80% and not 100%? And how confident are you that the CMP business being down is a reflection of crypto cooling off versus perhaps LHR not being that effective? And I bring it up because there are a lot of blogs out there that would suggest, as much as you guys are trying to limit the ability of miners to use GeForce, there are some workarounds.
spk13: Yeah. Go ahead.
spk06: Yeah. Let me start there and answer a couple of the questions about the strategy that we've put in place over these last couple of quarters. As you recall, what we put in place were the low hash rate cards as well as the CMP cards. The low hash rate cards were to provide more supply for our GeForce gamers that are out there. We articulated that one of the metrics we were looking at is what percentage of those Ampere cards we were able to sell as low hash rate cards. Almost all of our cards in Ampere are low hash rate, but we are also selling other types of cards as well. At this time, as we move forward, we're much higher than 80%, but just at the end of this last quarter, we were approximately at 80%. So, yes, that is moving up. So the strategy is in place and will continue as we move into Q3. I'll move it to Jensen here to see if he can discuss further.
spk13: Yeah, there was a question about the strategy of how we're steering GeForce supply to gamers. We moved incredibly fast this time with CMP and with our LHR settings for GeForce. And our entire strategy is about steering GeForce supply to gamers. And we have every reason to believe that, because of the rise in Steam, which is really a measure of gamers, the rate of growth of Steam adoption of Ampere GPUs, there's some evidence that we're successful. But there are several reasons why it's just different this time. The first reason, of course, is LHR, which is new, and the speed at which we responded with CMP to steer GeForce supply to gamers. The second is we're at the very beginning of the Ampere and RTX cycles. As I mentioned earlier, RTX is a complete reinvention of computer graphics, and every evidence is that gamers and game developers are incredibly excited about ray tracing. This form of computer rendering, graphics rendering, is just dramatically more beautiful. And we're at the beginning of that cycle, and only 20% has been upgraded so far. So we have 80% to go in a market where the installed base is already quite large, but has also grown. Last year, gamers grew 20%, just measured by Steam. The third reason is that our demand is strong and our channel is lean. And you can see that every day with the shortage of supply. As quickly as we're shipping it, there are still great shortages all over the world. And then lastly, we just have more growth drivers today because of RTX than ever. We have the biggest wave of NVIDIA laptops. Laptops are our fastest-growing segment, and we have the largest wave of laptops coming. The demand for RTX in workstations, whereas previously the workstation market was a slow-growing market, is now a fast-growing market and has achieved records. And after more than a decade of working on cloud graphics, cloud graphics is in great demand. And so all of these segments are seeing high demand while we continue to be supply limited. So I think the situations are very different, and RTX is making a huge difference.
spk08: Thank you. The next question comes from the line of Chris Caso of Raymond James. Your line is now open.
spk10: Yes, thank you. Good evening. My question is about the split between the hyperscale and the vertical customers in the data center business and the trends you're seeing in each. I think in your prepared remarks you said both would be up in the October quarter, but I'm interested to see if you're seeing any differing trends there, particularly in the vertical business as perhaps business conditions normalize and companies return to the office and they adjust their spending plans accordingly.
spk06: Let me start out with the question, and I'll let Jensen take the tail end. So far in our data center business, with our Q2 results, our vertical industries are still quite a strong percentage. Essentially 50% of our data center business is going to our vertical industries. Our hyperscales make up the other portion of that, slightly below the 50%. And then we also have our supercomputing business at a very small percentage, but doing quite, quite well. As we move into Q3, as we discussed, we will see an acceleration of both our vertical industries and our hyperscales. With that backdrop, we'll see if Jensen has additional commentary.
spk13: There's a fundamental difference in hyperscale use of HPC or AI versus the industrial use of HPC and AI. In the world of hyperscalers and internet service providers, they're making recommendations on movies and songs and articles and search results and so on and so forth. And the improvement in accuracy that deep learning and artificial intelligence, large recommender systems, can provide is really needle-moving for them. In the world of industries, the reason why artificial intelligence is transformative is different: most of the things that I just mentioned aren't really the dynamic in the world's largest industries, whether it's healthcare or logistics or transportation or retail. Rather, the majority of the reason is that in some of the physical sciences industries, whether it's energy or transportation and such or healthcare, the simulation of physics, the simulation of the world, was not achievable using traditional first-principles simulation approaches. But artificial intelligence, or data-driven approaches, has completely shaken that up and turned it on its head. Some examples: fusing in artificial intelligence so that you could speed up the simulation or prediction of the protein structure, the 3D structure of proteins, which was recently achieved by a couple of very important networks, is groundbreaking. And by understanding the protein's 3D structure, we can better understand its function and how it would adapt to other proteins and other chemicals. And it's a fundamental step of the process in drug discovery, and that has just taken a giant leap forward. In the area of climate science, it is now possible to consider using data-driven approaches to create models that accelerate and make it possible for us to run much, much larger multi-physics, geometry-aware simulations, which is basically climate science. These are really important fields of work that wouldn't have been possible for another decade, at least. And just as we've made possible, using artificial intelligence, the realization of real-time ray tracing, in every field of science, whether it's climate simulation, energy discovery, or drug discovery, we're starting to see the industry recognizing that the fusion of first-principles simulation and data-driven artificial intelligence approaches is going to get a giant leap up. And that is a second dynamic. The other dynamic for industries is that for the very first time they can deploy AI models out to the edge to do a better job with agriculture, to do a better job with asset protection in warehouses, to do a better job with automating retail. AI is going to make it possible for all of these types of automations to finally be realized. And so the dynamics are all very different. That last one has to do with edge AI, which is made possible by putting AI right at the point of data and right at the point of action, because you need it to be low cost, you need it to be high performance and instantly responsive, and you can't afford to stream all of the data to the cloud all the time. And so each one of them has a slightly different dynamic.
spk08: Thank you. Your final question comes from the line of William Stein of Truist Securities. Your line is open.
spk11: Great. Thanks so much for taking my question. Jensen, I'm wondering if you can talk for a moment about Omniverse. This looks like a really cool technology. I tend to get very few questions from investors about it, but it looks to me like this could be potentially very meaningful technology for you longer term. Can you explain perhaps what capabilities and what markets this is going after? It looks like perhaps this is going to position you very well in augmented and virtual reality, but maybe it's a sort of different market or group of markets. It's a bit confusing to us, so if you could maybe help us understand it, I think we'd really appreciate it. Thank you.
spk13: I really appreciate the question, and it's one of the most important things we're doing. Omniverse, first of all, just what is it? It's a simulator. It's a simulator that's physically accurate and physically based. And it was made possible because of two fundamental technologies we invented. One of them is, of course, RTX, the ability to physically simulate light behavior in the world, which is ray tracing. The second is the ability to compute or simulate the physics of, and the artificial intelligence behavior of, agents and objects inside a world. So we have the ability now to simulate physics in a realistic way, and we created an architecture that allows us to do it in the cloud, in a distributed computing way, and to be able to scale it out to a very large system. But the question is, what would you do with such a thing? This simulator is a simulation of virtual worlds with portals, we call them connectors, portals based on an open industry standard that was pioneered by Pixar and, as we mentioned earlier, that we're partnering with Pixar and Apple to make even more broadly adopted. It's called USD, Universal Scene Description. They're basically portals or wormholes into virtual worlds. And this virtual world could be simulating a concert for consumers. It could be a theme park for consumers. In the world of industries, you could use it for simulating robots so that robots could learn how to be robots inside these virtual worlds before they're downloaded from the simulated world into the real world. You could use it to simulate factories, which is one of the early works that we've done with BMW that I showed at GTC. A factory of the future that is designed completely in Omniverse, simulated in Omniverse, robots trained in Omniverse, with goods and materials, their original CAD data, put into the factory. The logistics plan, like an ERP system, except this is an ERP system of physical goods and physical simulation, is simulated through this Omniverse world, and you could plan the entire factory in Omniverse. This entire factory now becomes what is called a digital twin. It could be a factory, it could be a stadium, it could be an airport, it could be an entire city, it could be a fleet of cars. These digital twins would allow us to simulate new algorithms, new AIs, new optimization algorithms before we deploy them into the physical world. And so what is Omniverse? Well, Omniverse is going to be an overlay, if you will, of virtual worlds that increasingly people call the metaverse. And you've now heard several companies talk about the metaverse. We all come from different perspectives, some of it from a social perspective, some of it from a gaming perspective, some of it, in our case, from an industrial and design and engineering perspective. But the omniverse is essentially an overlay of the internet, an overlay of the physical world, and it's going to fuse all these different worlds together long term. You mentioned VR and AR. You'll be able to go into the Omniverse world using virtual reality. So you wormhole into the virtual world using VR. You could have an AI or an object portal into our world using augmented reality. So you could have a beautiful piece of art that you've purchased, and it belongs to you because of NFTs, and it's only enjoyed in the virtual world, and you can overlay it into your physical world using AR. And so I'm fairly sure that at this point, the omniverse or the metaverse is going to be a new economy that is larger than our current economy.
And we'll enjoy a lot of our time in the future in Omniverse and in metaverses. And we'll do a lot of our work there, and we'll have a lot of robots there doing a lot of our work on our behalf. And we'll wake up in the morning, and they'll show us the results. And so Omniverse to us is an extension of our AI strategy. It's an extension of our high-performance computing strategy, and it makes it possible for companies and industries to be able to create digital twins that simulate their physical version before they deploy it and while they operate it.
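As a small, hypothetical illustration of the USD open standard Jensen references, and not an Omniverse-specific API, Pixar's open-source pxr Python bindings can author a minimal scene that any USD-aware tool, Omniverse connectors included, can then open. The file name and prim paths below are assumptions for illustration only.

```python
# Minimal, hypothetical USD authoring example using Pixar's open-source
# pxr bindings (pip package "usd-core"). File name and prim paths are
# illustrative; this is not NVIDIA's Omniverse API.
from pxr import Usd, UsdGeom

# Create a new USD stage (the container for a scene / "virtual world").
stage = Usd.Stage.CreateNew("factory_twin.usda")

# Define a root transform and a placeholder object beneath it.
world = UsdGeom.Xform.Define(stage, "/World")
robot = UsdGeom.Cube.Define(stage, "/World/RobotPlaceholder")
robot.GetSizeAttr().Set(2.0)  # a 2-unit cube standing in for a robot

# Mark the root prim and save; any USD-aware tool can now open the file.
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```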
spk08: Thank you. I will now turn the call back over to Mr. Jensen Huang for closing remarks.
spk13: Thank you. We had an excellent quarter fueled by surging demand for NVIDIA computing. Our pioneering work in accelerated computing continues to advance graphics, scientific computing, and AI. Enabled by NVIDIA accelerated computing, developers are creating the most impactful technologies of our time, from natural language understanding and recommender systems to autonomous vehicles and logistics centers, to digital biology and climate science research, to a metaverse world that obeys the laws of physics. This quarter, we announced NVIDIA Base Command and Fleet Command to develop, deploy, scale, and orchestrate the AI workloads that run on the NVIDIA AI Enterprise software suite. With our new enterprise software, wide range of NVIDIA-powered systems, and global network of systems and integration partners, we can accelerate the world's largest industries as they race to benefit from the transformative power of AI. We are thrilled to have launched NVIDIA Omniverse, a simulation platform nearly five years in the making that runs physically realistic virtual worlds and connects to other digital platforms. We imagine engineers, designers, and even autonomous machines connecting to Omniverse to create digital twin simulated worlds that help train robots, operate autonomous factories, simulate fleets of autonomous vehicles, and even predict human impact on Earth's climate. The future will have artificial intelligence augmenting our own and the metaverse augmenting our physical world. It will be populated by real and AI visitors and open new opportunities for artists, designers, scientists, and even businesses. A whole new digital economy will emerge. Omniverse is a platform for building the metaverse vision. We're doing some of our best and most impactful work in our history. I want to thank all of NVIDIA's employees for their amazing work and the exciting future we are inventing together. Thank you. See you next time.
spk08: Thank you. This concludes today's conference call. You may now disconnect.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
