This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
NVIDIA Corporation
5/26/2021
Good afternoon. My name is Sunidra, and I will be your conference operator today. At this time, I would like to welcome everyone to the NVIDIA Financial Results conference call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press the pound key. Thank you. Simona Jankowski, you may begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2022. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2022. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 26, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette. Thanks, Simona.
Q1 was exceptionally strong, with revenue of $5.66 billion and year-on-year growth accelerating to 84%. We set records in total revenue, in gaming, data center, and professional visualization, driven by our best-ever product lineups and structural tailwinds across our businesses. Starting with gaming, revenue of $2.8 billion was up 11% sequentially and up 106% from a year earlier. This is the third consecutive quarter of accelerating year-on-year growth, beginning with the fall launch of our GeForce RTX 30 series GPUs. Based on the Ampere GPU architecture, the 30 series has been our most successful launch ever, driving incredible demand and setting records for both desktop and laptop GPU sales. Channel inventories are still lean, and we expect to remain supply constrained into the second half of the year. With our Ampere GPU architecture now ramping across the stack in both desktops and laptops, we expect the RTX upgrade cycle to kick into high gear, as the vast majority of our GPU installed base needs to upgrade. Laptops continued to drive strong growth this quarter, as we started ramping the Ampere GPU architecture across our lineup. Earlier this month, all major PC OEMs launched GeForce RTX 30 series laptops based on the 3080, 3070, and 3060 as part of their spring refresh. In addition, mainstream versions based on the 3050 and 3050 Ti will be available this summer, just in time for back to school, starting at price points as low as $799. This is the largest-ever wave of GeForce gaming laptops, over 140 in total, as OEMs address the rising demand from gamers, creators, and students for NVIDIA-powered laptops. The RTX 30 series delivers our biggest generational leap in performance ever. It also features our second-generation ray tracing technology and frame-rate-boosting, AI-powered DLSS. RTX is a reset for graphics, with over 60 RTX-accelerated games. This quarter, we added many more, including Call of Duty: Modern Warfare, Crysis Remastered, and Outriders.
We also announced that DLSS is now available in Unreal Engine 4 and soon in the Unity game engine, enabling game developers to accelerate frame rates with minimal effort. The RTX 30 series also offers NVIDIA Reflex, a new technology that reduces system latency. Reflex is emerging as a must-have feature for esports gamers who play competitive titles like Call of Duty: Warzone, Fortnite, Valorant, and Apex Legends. We estimate that about 75% of GeForce gamers play esports games, and 99% of esports pros compete on GeForce. We believe gaming also benefited from crypto mining demand, although it's hard to determine to what extent. We've taken actions to optimize GeForce GPUs for gamers while separately addressing mining demand with cryptocurrency mining processors, or CMPs. Last week, we announced that newly manufactured GeForce RTX 3080, RTX 3070, and RTX 3060 Ti graphics cards will have their Ethereum mining capabilities reduced by half and carry a low hash rate, or LHR, identifier. Along with the updated RTX 3060, this should allow our partners to get more GeForce cards into the hands of gamers at better prices. To help address mining demand, CMP products launched this quarter, optimized for mining performance and efficiency. Because they don't meet the specifications required of a GeForce GPU, they don't impact the supply of GeForce GPUs to gamers. CMP revenue was $155 million in Q1, reported as part of the OEM and other category, and our Q2 outlook assumes CMP sales of $400 million. Our GeForce Now cloud gaming platform passed 10 million registered members this quarter. GFN offers nearly 1,000 PC games from over 300 publishers, more than any other cloud gaming service, including 80 of the most popular free-to-play games. GFN expands the reach of GeForce to billions of underpowered Windows PCs, Macs, Chromebooks, Android devices, iPhones, and iPads. GFN is offered in over 70 countries, with our latest expansions including Australia, Singapore, and South America.
Moving to ProVis. Q1 revenue was $372 million, up 21% both sequentially and year-on-year. Strong notebook growth was driven by new sleek and powerful RTX-powered mobile workstations with Max-Q technology, as enterprises continued to support remote workforce initiatives. Desktop workstations rebounded as enterprises resumed the spending that had been deferred during the lockdown, with continued growth likely as offices open. Key verticals driving Q1 demand include manufacturing, healthcare, automotive, and media and entertainment. At GTC, we announced the coming general availability of NVIDIA Omniverse Enterprise, the world's first technology platform that enables global 3D design teams to collaborate in real time in a shared space, working across multiple software suites. This incredible technology builds on NVIDIA's entire body of work and is supported by a large, rapidly growing ecosystem. Early adopters include sophisticated design teams at some of the world's leading companies, such as BMW Group, Foster & Partners, and WPP. Over 400 companies have been evaluating Omniverse, and nearly 17,000 users have downloaded the open beta. Omniverse is offered as a software subscription on a per-user and a per-server basis. As the world becomes more digital, virtual, and collaborative, we see a significant revenue opportunity for Omniverse. We also announced powerful new Ampere architecture GPUs for next-generation desktop and laptop workstations. The new RTX-powered workstations will be available from all major OEMs. Moving to automotive, Q1 revenue was $154 million, up 6% sequentially and down 1% year-on-year. Growth in AI cockpit revenue was partially offset by the expected decline in legacy infotainment revenue. We extended our technology leadership with the announcement of the next-generation NVIDIA Drive Atlan SoC.
Atlan will deliver an unrivaled 1,000 trillion operations per second of performance and integrate data-center-class NVIDIA Bluefield networking and security technologies to enhance vehicle performance and safety, making it a true data center on wheels. Atlan, which targets automakers' 2025 models, will follow the NVIDIA Drive Orin SoC, which delivers 254 TOPS and has been selected by leading vehicle makers for production timelines starting next year. The NVIDIA Drive platform has achieved global adoption across the transportation industry. Our automotive design win pipeline now exceeds $8 billion through fiscal 2027. Most recently, Volvo Cars announced that it will use NVIDIA Drive Orin, building on our great momentum with some of the largest automakers, including Mercedes-Benz, SAIC, and Hyundai Motor Group. In robo-taxis, we added GM Cruise to the growing number of companies adopting the NVIDIA Drive platform, which includes Amazon Zoox and DiDi. We've also had great traction with new energy vehicle makers. Our latest wins include Faraday Future, R Auto, IM Motors, and VinFast, which join previously announced wins with SAIC, NIO, Xpeng, and Li Auto. In trucking, Navistar has partnered with TuSimple in selecting NVIDIA Drive for autonomous driving, joining previously announced Volvo Autonomous Solutions and Plus. NVIDIA is helping to revolutionize the transportation industry. Our full-stack, software-defined AV and AI cockpit platform spans silicon, systems, software, and AI data center infrastructure, enabling over-the-air upgrades to enhance safety and the joy of driving throughout the vehicle's lifetime. Starting with our lead partner, Mercedes-Benz, NVIDIA Drive can transform the automotive industry with amazing technologies delivered through new software and services business models. Moving to data center. Revenue topped $2 billion for the first time, growing 8% sequentially and up 79% from the year-ago quarter, which did not include Mellanox.
Hyperscale customers led our growth this quarter as they built infrastructure to commercialize AI in their services. In addition, cloud providers have adopted the A100 to support growing demand for AI from enterprises, startups, and research organizations. Customers are deploying NVIDIA's A100 and DGX platforms to train deep neural networks with rising computational intensity, led by two of the fastest-growing areas of AI, natural language understanding and deep recommender systems. In March, Google Cloud Platform announced general availability of the A100, with early customers including Square for its Cash App and Alphabet's DeepMind. The A100 is deployed across all major hyperscale and cloud service providers globally, and we see strengthening demand in the coming quarters. Every industry is becoming a technology industry and accelerating investments in AI infrastructure, both through the cloud and on-premises. Our vertical industries grew both sequentially and year-on-year, led by consumer internet companies. For example, Naver, a leading internet technology company in Korea and Japan, is training giant AI language models at scale on DGX SuperPod to pioneer new services across e-commerce, search, entertainment, and payment applications. We continue to gain traction in inference with hyperscale and vertical industry customers across a broadening portfolio of GPUs. We had record shipments of GPUs used for inference. Inference growth is driving not just the T4, which was up strongly in the quarter, but also the universal A100 Tensor Core GPU, as well as the new Ampere architecture-based A10 and A30 GPUs, all excellent at training as well as inferencing. Customers are increasingly migrating from CPUs to GPUs for AI inference for two chief reasons. First, GPUs can better keep up with the exponential growth in the size and complexity of deep neural networks and respond with the required low latency.
In April's MLPerf AI inference benchmark, NVIDIA achieved the top results across every category, spanning computer vision, medical imaging, recommender systems, speech recognition, and natural language processing. And second, NVIDIA's full-stack inference platform, including Triton, reduces the complexity of deploying AI applications by supporting models from all major frameworks and optimizing for different query types, including batch, real-time, and streaming. Triton is supported by several partners in their cloud services, including Amazon, Google, Microsoft, and Tencent. Examples of how customers use NVIDIA's inference platform include Microsoft for grammar checking in Office, the United States Postal Service for real-time package analytics, T-Mobile for customer service, Pinterest for image search, and GE Healthcare for heart disease detection. We also had strong results with Mellanox networking products. Like our compute business, strong growth was driven by hyperscale customers across both Ethernet and InfiniBand. We achieved key design wins and proof-of-concept trials for the NVIDIA Bluefield 2 DPU with cloud service providers and consumer internet companies. We also unveiled Bluefield 3, the first DPU built for AI and accelerated computing, with support from VMware, NetApp, Splunk, Cloudflare, and others. Bluefield 3 is the industry's first 400-gigabit DPU and delivers the equivalent data center services of up to 300 CPU cores. It transforms traditional server infrastructure into zero-trust environments in which every user is authenticated, by offloading and isolating data center services from business applications. With Bluefield 3, our DPU roadmap will deliver an unrivaled 100x performance increase over a three-year period. As we look back at the first full year since closing the Mellanox acquisition, we are extremely pleased with how the business has performed.
Mellanox has not only exceeded our financial projections, but it has been instrumental in key new platforms like the DGX SuperPod and the Bluefield DPU, enabling our data-center-scale computing strategy. In April, we held our largest-ever GPU Technology Conference, with more than 200,000 registrants from 195 countries. Jensen's keynote had over 14 million views. At GTC, we announced our first data center CPU, NVIDIA Grace, targeted at processing massive next-generation AI models with trillions of parameters. The ARM-based processor will enable 10x the performance and energy efficiency of today's fastest servers. With Grace, NVIDIA has a three-chip strategy with GPU, DPU, and now CPU. The Swiss National Supercomputing Centre and the US Department of Energy's Los Alamos National Laboratory are the first to announce plans to build Grace-powered supercomputers. Grace will be available in early 2023. GTC is first and foremost for developers. We announced NVIDIA-developed and optimized pre-trained model availability on the NVIDIA GPU Cloud registry. Developers can choose a pre-trained model and adapt it to fit their specific needs using NVIDIA TAO, our transfer learning software. TAO fine-tunes the model with customers' own small data sets to give models a custom fit without the cost, time, and massive data sets required to train a neural network from scratch. Once a model is optimized and ready for deployment, users can integrate it with an NVIDIA application framework that fits their use. For example, the NVIDIA Jarvis framework for interactive conversational AI is now generally available and used by customers such as T-Mobile and Snap. And the NVIDIA Merlin framework for deep recommender systems is in open beta with customers such as Snap and Tencent. With the chosen application framework, users can launch NVIDIA Fleet Command software to deploy and manage the AI application across a variety of NVIDIA GPU-powered devices.
For enterprise customers, we unveiled a new enterprise-grade software offering available as a perpetual license or subscription. NVIDIA AI Enterprise is a comprehensive suite of AI software that speeds development and deployment of AI workloads and simplifies management of enterprise AI infrastructure. Through our partnership with VMware, hundreds of thousands of vSphere customers will be able to purchase NVIDIA AI Enterprise with the same familiar pricing model that IT managers use to procure VMware infrastructure software. We also made several announcements at GTC about accelerating the delivery of both NVIDIA AI and accelerated computing to enterprises and edge users among the world's largest industries. Leading server OEMs launched NVIDIA-Certified Systems, which are industry-standard servers based on the NVIDIA EGX platform. They run NVIDIA AI Enterprise software and are supported by the NVIDIA A30 and A10 GPUs. Initial customers include Lockheed Martin and Mass General Brigham. In addition, we announced the NVIDIA AI on 5G platform, supported on NVIDIA EGX servers, to enable high-performance 5G RAN and AI applications. The AI on 5G platform leverages the NVIDIA Aerial software and the NVIDIA Bluefield 2 A100 converged card, which combines our GPUs and DPUs. We are teaming with Fujitsu, Google Cloud, Mavenir, Radisys, and Wind River in developing solutions based on our AI on 5G platform to speed the creation of smart cities and factories, advanced hospitals, and intelligent stores. Another highlight at GTC was the announcement of a broad range of initiatives to strengthen the ARM ecosystem across cloud data centers, HPC, enterprise and edge, and PCs. In the cloud, we are bringing together AWS Graviton2 processors and NVIDIA GPUs to provide a range of benefits, including lower cost, support for richer game streaming experiences, and greater performance for ARM-based workloads.
In HPC, we are bringing together the Ampere Altra CPU with NVIDIA GPUs, DPUs, and the NVIDIA HPC software development kit. Initial supercomputing centers deploying it include Oak Ridge and Los Alamos National Labs. In the enterprise and edge, we are bringing together Marvell's ARM-based OCTEON processors and NVIDIA GPUs to accelerate video analytics and cybersecurity solutions. And in PCs, we are bringing together MediaTek's ARM-based processors with NVIDIA's RTX GPUs to enable realistic ray-traced graphics and cutting-edge AI in a new class of ARM-based laptops. On our ARM acquisition, we are making steady progress in working with the regulators across key regions. We remain on track to close the transaction within our original timeframe of early 2022. ARM's IP is widely used, but the company needs a partner that can help it achieve new heights. NVIDIA is uniquely positioned to enhance ARM's capabilities. And we are committed to investing in developing the ARM ecosystem, enhancing R&D, adding IP, and turbocharging its development to grow into new markets in the data center, IoT, and embedded devices, areas where it only has a light footprint, or in some cases, none at all. Moving to the rest of the P&L. GAAP gross margin for the first quarter was down 100 basis points from a year earlier and up 100 basis points sequentially. Non-GAAP gross margin was up 40 basis points from a year earlier and up 70 basis points sequentially. The sequential non-GAAP increase was largely driven by a more favorable mix within data center and the addition of CMP products. Q1 GAAP EPS was $3.03, up 106% from a year earlier. Non-GAAP EPS was $3.66, up 103% from a year ago. Q1 cash flow from operations was $1.9 billion. Let me turn to the outlook for the second quarter of fiscal 2022. We expect broad-based sequential and year-on-year revenue growth in all of our market platforms. Our outlook includes $400 million in CMP revenue.
Aside from CMP, the sequential revenue increase in our Q2 outlook is driven largely by data center and gaming. In data center, we expect sequential growth in both compute and networking. In gaming, with the move to low hash rate GeForce GPUs and an increasing amount of CMP products, we are making a significant effort to serve miners with CMPs and provide more GeForce cards to gamers. If there is additional CMP demand, we have supply flexibility to support it. We believe these actions, combined with strong gaming demand, will drive an increase in our core gaming business for Q2. Now, to look at our outlook for Q2: revenue is expected to be $6.3 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.6% and 66.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.76 billion and $1.26 billion, respectively. GAAP and non-GAAP other income and expenses are both expected to be an expense of approximately $50 million. GAAP and non-GAAP tax rates are both expected to be 10%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $300 million to $325 million. Further financial details are included in the CFO commentary and other information available on our IR website. Let me highlight that Jeff Fisher and Manuvir Das will keynote Computex on the evening of May 31st U.S. time. We also have several upcoming events for the financial community: we'll be virtually attending the Evercore TMT Conference on June 7th, the BofA 2021 Global Technology Conference on June 9th, and the NASDAQ Virtual Investor Conference on June 16th. Our earnings call to discuss our second quarter results is scheduled for Wednesday, August 18th. With that, we will open the call for questions. Operator, would you please poll for questions?
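The guidance arithmetic in the prepared remarks can be sanity-checked with a short sketch. All inputs come from the call itself (Q1 revenue of $5.66 billion, a Q2 outlook of $6.3 billion plus or minus 2%, Q1 CMP revenue of $155 million, and a Q2 CMP assumption of $400 million); the residual split between gaming and data center is illustrative, not company disclosure.

```python
# Back-of-the-envelope check of the Q2 FY22 guidance described above.
# All inputs are figures quoted on the call; the residual split is illustrative.

q1_revenue = 5.66e9        # Q1 actual revenue
q2_revenue_mid = 6.3e9     # Q2 outlook midpoint, plus or minus 2%
q1_cmp = 155e6             # Q1 CMP revenue (OEM and other category)
q2_cmp = 400e6             # Q2 outlook CMP assumption

sequential_growth = q2_revenue_mid - q1_revenue          # ~$640M total
cmp_contribution = q2_cmp - q1_cmp                       # ~$245M from CMP
non_cmp_growth = sequential_growth - cmp_contribution    # ~$395M, largely gaming + data center

guidance_low = q2_revenue_mid * 0.98
guidance_high = q2_revenue_mid * 1.02

print(f"Sequential growth:  ${sequential_growth / 1e6:.0f}M")
print(f"CMP contribution:   ${cmp_contribution / 1e6:.0f}M")
print(f"Non-CMP growth:     ${non_cmp_growth / 1e6:.0f}M")
print(f"Guidance range:     ${guidance_low / 1e9:.2f}B to ${guidance_high / 1e9:.2f}B")
```

This matches the analyst math in the Q&A that follows: roughly $250 million of the sequential increase is CMP, leaving about $400 million split between gaming and data center.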
At this time, I would like to remind everyone, in order to ask a question, press star then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. And your first question comes from with UBS.
Thanks a lot. Colette, I was wondering if you can double-click a little more on the guidance. I know of the roughly $600 million to $650 million in growth, you said about $250 million is coming from CMP, and both gaming and data center will be up. Can we assume that they're up about equally, so you're getting about $200 million roughly from each of those? And I guess the second part of that is, within data center, I'm wondering, can you speak to the networking piece? It sounds like maybe it was up a bit more modestly than it's been the past few quarters. I'm just wondering what the outlook is there. Thanks.
Yeah, thanks so much for the question on our guidance. So I first want to start off with: we see demand really across all of our markets, all of our different market platforms. We do plan to grow sequentially. You are correct that we are expecting an increase in our CMP. And outside of our CMP growth, we expect the lion's share of our growth to come from our data center and gaming. In our data center business, right now our product lineup couldn't be better. We have a strong overall portfolio, both for training and for inferencing, and we're seeing strong demand across our hyperscalers and vertical industries. We've made a deliberate effort on the gaming perspective to supply to our gamers the cards that they would like, given the strong demand that we see. So that will also support the sequential growth that we're expecting. So you are correct that we do see growth sequentially coming from data center and gaming, both contributing quite well to our growth.
Thanks a lot, Colette.
Oh, I didn't answer your second question, my apologies, on Mellanox. Additionally, Mellanox is an important part of our data center. It is quite integrated with our overall products. We did continue to see growth this last quarter, and we are also expecting them to sequentially grow as we move into Q2. They are a smaller part of our overall data center business, but again, we do expect them to grow.
And your next question comes from C.J. Muse with Evercore ISI.
Yeah, good afternoon. Thank you for taking the question. In your prepared remarks, I think I heard you talk about a vision for acceleration in data center as we go through the year. And as you think about the purchase obligations, they were reported up 45% year-on-year. How much of that is related to long-lead-time data center? And how should we interpret that in terms of what kind of ramp we could see in the second half, particularly as you think about perhaps adding more growth from enterprise on top of what was hyperscale-driven growth in the April quarter? Thank you.
So let me take the first part of your question regarding our purchasing, our purchasing of inventory, and what we're seeing in both our purchase commitments and our inventory. The market has definitely changed to where long lead times are required to build out our data center products. So we're on a steady stream to commit longer term so that we can make sure that we can serve our customers with the great lineup of products that we have. So yes, a good part of those purchase commitments is really about those long lead times of the components to create the full systems. I'll turn the second part of the question over to Jensen.
What was the second part of the question?
The second part of the question was: what do we see in the second half as it relates to the lineup of enterprise? We articulated in our prepared remarks that we see an acceleration. Thank you.
Yeah, we're seeing strength across the board in data centers, and we're seeing strengthening demand. C.J., our data center, as you know, accelerates a range of applications: scientific computing, both physical and life sciences; data analytics and classical machine learning; cloud computing and cloud graphics, which is becoming more important because of remote work; and very importantly, AI, both for training as well as inferencing, for classical machine learning models like XGBoost all the way to deep-learning-based models like conversational AI, natural language understanding, recommender systems, and so on. And so we have a large suite of applications, and our NVIDIA AI and NVIDIA HPC SDKs accelerate these applications in data centers. They run on systems that range from HGX for the hyperscalers, to DGX for on-prem, to EGX for enterprise and edge, all the way out to AGX autonomous systems. This quarter, at GTC, we announced one of our largest initiatives, and it's taken us several years. You've seen me working on it out in the open over the course of the last several years. It's called EGX, our enterprise AI platform. We're democratizing AI. We're bringing it out of the cloud. We're bringing it to enterprises, and we're bringing it out to the edge. And the reason for that is because the vast majority of the world's automation that has to be done has data sovereignty issues or data rate issues that can't move to the cloud easily. And so we have to move the computing to their premises, and oftentimes all the way out to the edge. The platform has to be secure, has to be confidential, it has to be remotely manageable, and of course it has to be high performance, and it has to be cloud native. It has to be built like the cloud, the modern way of building cloud data centers.
And so these stacks have to be modern on the one hand, and have to be integrated into classical enterprise systems on the other hand, which is the reason why we've worked so closely with VMware and accelerated VMware's operating system, their data center operating system, their software-defined data center stacks, on Bluefield. Meanwhile, we ported NVIDIA AI and NVIDIA HPC onto VMware so that they could run distributed, large-scale, accelerated computing for the very first time. That partnership was announced at VMworld, it was announced at GTC, and we're in the process of going to market with all of our enterprise partners: their OEMs, their value-added resellers, their solution integrators, all over the world. And so this is a really large endeavor, and the early indications of it are really exciting. And the reason for that is because, as you know, our data center business is more than 50% vertical industry enterprises already. And by creating this easy-to-adopt and easy-to-integrate stack, it's going to allow them to move a lot faster. And so this is the next major wave of AI. This is a very exciting part of our initiative, and it's something that we've been working on for quite a long time. So I'm delighted with the launch this quarter at GTC. The rest of the data center is doing great, too. As Colette mentioned, hyperscale demand is strengthening. We're seeing that for computing and networking. You know that the world's cloud data centers are moving to deep learning, because every small percentage that they get out of predictive inference drives billions and billions of dollars of economics for them. And so the movement towards deep learning shifts the data center workload away from CPUs, because accelerators are so important. And so in hyperscale, we're seeing great traction and great demand. And then lastly, supercomputing.
Supercomputing centers all over the world are building out, and we're really in a great position there to fuse, for the very first time, simulation-based approaches as well as data-driven approaches, what is called artificial intelligence. And so across the board, our data center is gaining momentum. We just see great strength right now, and it's growing strength. And we've really set up for years of growth in the data center. This is the largest segment of computing, as you know, and this segment of computing is going to continue to grow for some time to come.
And your next question comes from Aaron Rakers with Wells Fargo.
Yeah, thanks for taking the question. Congratulations on the results. I'm going to try to slip in two of them here. First of all, Colette, I think in the past you've talked about how much of your gaming installed base is kind of on the pre-ray-tracing and ray tracing platforms, or really kind of the context behind the upgrade cycle that's still in front of us. That's kind of question one. On the heels of the last question, I'm just curious: things like VMware's Project Monterey, as we think about the Bluefield 2 product and Bluefield 3, how should we think about those starting to become, or when should they become, really material incremental revenue growth contributors for the company? Thank you.
So, yeah, we have definitely discussed the great opportunity that we have in front of us of folks moving to our ray-traced GPUs, and we're in the early stages of that. We've had a strong cycle already, but still we probably have approximately 15%, moving up a little bit from that at this time. So it's a great opportunity for us to continue to upgrade a good part of that installed base, not only just with our desktop GPUs, but the RTX laptops are also a great driver of growth and of upgrading folks to RTX.
Colette, do you want me to take the second one? Yes, please. Aaron, a great question on Bluefield. First of all, the modern data center has to be re-architected for several reasons. There are several fundamental reasons that make it very, very clear that the architecture has to change. The first is that it's cloud native, which means that a data center is shared by everybody. It's multi-tenant. You don't know who's coming and going, and it's exposed to everybody on the internet. Number two, you have to assume that it's a zero-trust environment, because you don't know who's using it. It used to be that we had perimeter security, but those days are gone, because it's cloud native, it's remote access, it's multi-tenant, it's public cloud. The infrastructure is used for internal and external applications. So number two, it has to be zero trust. The third reason is something that started a long time ago, which is software defined in every way, because you don't want a whole bunch of bespoke, custom gear inside the data center. You want to operate the data center with software. You want it to be software defined. The software-defined data center movement enabled this one pane of glass, a few IT managers orchestrating millions and millions of nodes of computers in one place. And the software runs what used to be storage, networking, security, virtualization, and all of those things have become a lot larger and a lot more intensive. And it's consuming a lot of the data center. In fact, the estimate, depending on how you want to think about it and how much security you want to put on it, if you assume that it's a zero-trust data center, is that probably half of the CPU cores inside that data center are running not applications. And that's kind of strange, because you created the data center to run services and applications, which is the only thing that makes money.
The other half of the computing is completely soaked up running the software-defined data center just to provide for those applications. And you could imagine even accepting that, if you like, as the cost of doing business. However, it commingles the infrastructure, the security plane, and the application plane, and it exposes the data center to attackers. And so you fundamentally want to change the architecture as a result: to offload that software-defined virtualization, the infrastructure operating system, if you will, and the security services, and to accelerate them, because Moore's Law has ended, and moving software that was running on one set of CPUs, which is really, really good already, to another set of CPUs isn't going to make it more effective. Separating it doesn't make it more effective. And so you want to offload that software and accelerate it using accelerators, a form of accelerated computing. These things are fundamentally what BlueField is all about, and we created the processor that allows us to do it. BlueField-2 replaces approximately 30 CPU cores; BlueField-3 replaces approximately 300 CPU cores, just to give you a sense of it. And BlueField-4 we're in the process of building already. So we've got a really aggressive pipeline to do this. Now, how big is this market? The way to think about it is that every single networking chip in the world will be a smart networking chip. It will be a programmable, accelerated infrastructure processor. And that's what the DPU is. It's a data center on a chip. And I believe every single server node will have one. It will replace today's NICs with something like BlueField.
And it will offload about half of the software processing that's consuming data centers today. But most importantly, it will enable this future world where every single packet and every single application is being monitored in real time, all the time, for intrusion. And so how big is that market? You know, 25 million servers a year, that's the size of the market. And we know that servers are growing, so let me give you a feeling for that. In the future, servers are going to move out to the edge, and all of those edge devices will have something like BlueField. And then, how are we doing? We're doing POCs now with just about every internet company. We're doing really exciting work there. We've included it in high-performance computing so that it's possible for supercomputers in the future to be cloud native, to be zero trust, to be secure, and still be a supercomputer. And we expect next year to have meaningful, if not significant, revenue contribution from BlueField. This is going to be a really large growth market for us. You can tell I'm excited about this, and I put a lot of my energy into it. The company's working really hard on it. This is a form of accelerated computing that's going to really make a difference.
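The figures quoted above can be put together in a quick back-of-envelope sketch. This is purely illustrative: the only inputs taken from the call are the 25 million servers a year, the roughly half of CPU cores consumed by infrastructure, and the 30 and 300 core-equivalents for BlueField-2 and BlueField-3; the per-server core count is an assumption.

```python
# Back-of-envelope sketch of the DPU offload opportunity described on the call.
# All inputs other than the quoted figures are illustrative assumptions.

SERVERS_PER_YEAR = 25_000_000   # "25 million servers a year" (quoted)
CORES_PER_SERVER = 64           # assumed typical server core count (NOT from the call)
INFRA_SHARE = 0.5               # "probably half of the CPU cores" on infrastructure (quoted)

# CPU cores per year consumed by infrastructure work (networking, storage,
# security, virtualization) rather than revenue-generating applications.
infra_cores = SERVERS_PER_YEAR * CORES_PER_SERVER * INFRA_SHARE

BLUEFIELD2_CORE_EQUIV = 30      # "BlueField-2 replaces approximately 30 CPU cores"
BLUEFIELD3_CORE_EQUIV = 300     # "BlueField-3 replaces approximately 300 CPU cores"

print(f"Infra cores shipped per year (sketch): {infra_cores:,.0f}")
print(f"BlueField-2 equivalents: {infra_cores / BLUEFIELD2_CORE_EQUIV:,.0f}")
print(f"BlueField-3 equivalents: {infra_cores / BLUEFIELD3_CORE_EQUIV:,.0f}")
```

Under these assumptions, a single BlueField-3 per server roughly covers the infrastructure share of a 64-core node, which is consistent with the "every single server node will have one" framing.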
And your next question comes from Vivek Arya with Bank of America Securities.
Thanks for taking my question. Jensen, is NVIDIA able to ring-fence this crypto impact in your CMP product? So even if, let's say, crypto goes away for whatever reason, the decline is a lot more predictable and manageable than what we saw in the 2018-19 cycle. And then part B of that is, how do you think about your core PC gamer demand? Because when we see these kinds of 106% year-on-year growth rates, it raises questions of sustainability. So give us your perspectives on these two topics: how does one ring-fence the crypto effect, and what do you think about the sustainability of your core PC gamer demand? Thank you.
Sure. Thanks a lot. First of all, it's hard to estimate exactly how much and where crypto mining is being done. However, we can only assume that the vast majority of it is contributed by professional miners, especially when the amount of mining increases tremendously like it has. And so we created CMP, and CMP and GeForce are not fungible. You could use GeForce for mining, but you can't use CMP for gaming. CMP yields better, and producing those doesn't take away from the supply of GeForce. And so it protects our GeForce supply for the gamers. And the question that you had is, what happens on the tail end of this? There are several things that we hope. We learned a lot from the last time, but you never learn enough about this dynamic. What we hope is that the CMPs will satisfy the miners and will stay in the professional mines. We're trying to produce a fair amount of them, we've secured a lot of demand for the CMPs, and we'll fulfill it. What makes it different this time is several things. We're in the beginning of our RTX cycle, whereas Pascal was the last GTX. Not only that, it was at the tail end of the GTX cycle. We're at the very beginning of the RTX 30 cycle. And because we reinvented computer graphics, we reset the computer industry, and after three years, the entire graphics industry has followed. Every game developer has moved to ray tracing. Every content developer and every content tool has moved to ray tracing. With ray tracing, these applications are so much better, and they simply run too slow on GTXs. So we're seeing a reset of the installed base, if you will, at a time when the gaming market is the largest ever. You know, we've got this incredible installed base of GeForce users.
We've reinvented computer graphics, and we've reset the installed base and created an upgrade opportunity that's really exciting, at a time when the gaming market, the gaming industry, is really large. And what's really exciting on top of that is that gaming is no longer just gaming. It's infused into sports. It's infused into art. It's infused into social. Gaming has such a large cultural impact now; it's the largest form of entertainment. And I think the experience we're going through is going to last a while. And so, one, I hope that CMP will steer miners away and keep our GeForce supply for gamers. We see strong demand, and I expect to see strong demand for quite some time because of the dynamics that I've described. And hopefully, in the combination of those two, we'll see strong growth in our core gaming business through the year.
And your next question comes from John Pitzer with Credit Suisse.
Yeah, good afternoon, guys. Thanks for letting me ask the question. I had two hopefully quick questions. First, you know, I hearken back to the mantra you guys put out a couple of analyst days ago: the more you spend, the more you save. You've always been very successful, as you brought down the cost of doing something, at really driving penetration growth. And so I'm curious, with the NVIDIA AI Enterprise software stack, is there a sense you can give us of how much that brings down the cost of deploying AI inside the enterprise? And do you think, whether COVID-lockdown related or cost related, there's pent-up demand that this unlocks? And then my second question is just around government subsidies. There's a lot of talk out of Washington about subsidizing the chip industry. A lot of that goes toward, you know, building fabs domestically. But when I look at AI, I can't think of anything more important to maintain leadership in relative to national security. How do we think about NVIDIA and the impact that these government subsidies might have on you, your customers, or your business trends?
The more you buy, the more you shall save. There's no question about that. And the reason is that we're in the business of accelerated computing. We don't accelerate every application. However, for the applications we do accelerate, the acceleration is so dramatic. And because we sell a component, the TCO of the entire system, and all the services and all the people and the infrastructure and the energy cost, is reduced by X factors: sometimes five times, sometimes 10 times, sometimes 15 times. And so we set our minds on accelerating certain classes of applications. Recently we worked on cuQuantum so that we could help the quantum computing industry accelerate their simulators, so that they could discover new algorithms and invent future computers, even though that won't happen until 2030 or beyond. For the next 15 to 20 years, we're going to have some really, really great work that we can do using NVIDIA GPUs to do quantum simulations. We recently did a lot of work in natural language understanding and in computational biology, so that we could decode biology, understand it, infer from it, and predictably improve upon it and design new proteins. That work is so vital, and that's what accelerated computing is all about. Our enterprise software, and I really appreciate the question, used to be just about the vGPU, which virtualizes the GPU inside the VMware environment or inside the Red Hat environment and makes it possible for multiple users to use one GPU, which is the nature of enterprise virtualization.
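The "more you buy, the more you save" arithmetic above can be made concrete with a tiny sketch. The 5x, 10x, and 15x TCO reduction factors are from the remarks; the baseline cost figure is a hypothetical placeholder, not a number from the call.

```python
# Illustrative sketch of the TCO argument: if acceleration reduces the total
# cost of a workload by an X factor, the absolute savings scale with the size
# of the deployment. The 5x/10x/15x factors are quoted; the baseline is made up.

def accelerated_tco(baseline_cost: float, reduction_factor: float) -> float:
    """Cost of running the same workload after an X-factor TCO reduction."""
    return baseline_cost / reduction_factor

baseline = 10_000_000  # hypothetical annual cost of a CPU-only deployment, USD
for factor in (5, 10, 15):
    saved = baseline - accelerated_tco(baseline, factor)
    print(f"{factor:>2}x reduction: save ${saved:,.0f} of ${baseline:,.0f}")
```

The point of the sketch is simply that at a 10x reduction, 90% of the baseline cost is recovered, so a larger accelerated footprint yields proportionally larger savings.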
But now, with NVIDIA AI, NVIDIA Omniverse, and NVIDIA Fleet Command, whether you're doing collaboration or virtual simulations for robotics and digital twins, designing your factory, or doing data analytics, learning the predictive features that could create an AI model, a predictive model, that you can deploy out at the edge using Fleet Command, we now have an end-to-end suite of software that is consistent with today's enterprise service agreements and with today's enterprise business models, and that allows us to support customers directly and provide them with the service promises they expect, because they're trying to build mission-critical applications on top. And more importantly, by productizing our software, we give our large network of partners, OEM partners, value-added resellers, system integrators, solution providers, this large network of hundreds of thousands of IT sales professionals that we are connected to, a product that they can take to market. And so the sales channel of VMware, the sales channel of Cloudera, the sales channels of all of our partners in EDA and design, Autodesk, Dassault, and so on and so forth, all of these sales channels and all of these partners are now partners in taking our stacks to market. And we have a fully integrated system that is open to the OEMs, so that they can create systems that run the stack. And it's all certified, all tested, all benchmarked, and, very importantly, all supported. So this is a new way of taking our products to market. Our cloud business is going to continue to grow, and that part of AI is going to continue to grow; that business is direct. We sell components directly to them, and we support them directly. But there are 10 of those customers in the world. For enterprises, there are thousands, in industries far and wide.
And so I think we now have a great software stack that allows us to take it to the world's markets, so that everybody can buy more and save more.
And your final question comes from Stacy Rasgon with Bernstein.
Hi, guys. Thanks for taking my questions. This is for Colette. So, Colette, last quarter you had suggested that Q1 would be the trough for gaming, as well as for the rest of the company, but gaming in particular, and that it would grow sequentially through the year. Given the strength we're seeing in the first half, do you still believe that is the case? I heard you guys dance around that point a little bit in response to one of the other questions, but could you clarify it? Is it still your belief that the core gaming business can grow sequentially through the rest of the year? And I guess the same question for data center, especially since it sounds like hyperscale is now coming back after a few quarters of digestion, plus all of the other tailwinds you've talked about. I mean, is there any reason to think data center shouldn't also grow sequentially through the rest of the year?
Yeah, Stacy, thanks for the question. So I first want to start with when we talked about our Q1 results. When we were looking at Q1, we were really discussing what we expected between Q4 and Q1. Given what we knew was still high demand for gaming, we believed we would continue to grow between Q4 and Q1, which often we don't, and we absolutely had the strength of overall demand to grow. What that then led to, again, was continued growth from Q1 to Q2, as we are working hard to provide more supply for the strong demand that we see. We have talked about having additional supply coming, and we expect to continue to grow as we move into the second half of the year for gaming as well. Now, we only guide one quarter at a time, but our plan is to take the supply, serve the overall gamers, and work on building up the channel. As we know, the channel is quite lean. So, yes, we do still expect growth in the second half of the year, particularly when we see the lineup of games, the holiday season coming, and back to school, all very important cycles for us, and there's a great opportunity to upgrade this installed base to RTX. Now, in terms of data center, looking at our guidance, we have growth from Q1 to Q2 planned in our overall guidance, and we do see, as things continue to open up, a time to accelerate in the second half of the year for data center. We have, again, a great lineup of products here; it couldn't be a better lineup, now that we've also added the inference products and the host of overall applications that are using our software. So this could be an opportunity as well to see that continued growth. We will work on serving the supply that we need for both of these markets. But yes, we can definitely see growth in the second half of the year.
There are no further questions at this time. CEO Jensen Huang, I'll turn the call back over to you.
Well, thank you. Thank you for joining us today. NVIDIA's computing platform is accelerating. Launched at GTC, we are now ramping new platforms and initiatives. There are several that I'll mention. First, enabled by the fusion of NVIDIA RTX, NVIDIA AI, and NVIDIA PhysX, we built Omniverse, a platform for virtual collaboration and virtual worlds, to enable tens of millions of artists and designers to create together in their own metaverses. Second, we laid the foundation to be a three-chip, data-center-scale computing company with GPUs, DPUs, and CPUs. Third, AI is the most powerful technology force of our time. We partner with cloud and consumer internet companies to scale out and commercialize AI-powered services, and we're democratizing AI for every enterprise and every industry. With NVIDIA EGX certified systems, the NVIDIA AI Enterprise suite, pre-trained models for conversational AI, language understanding, and recommender systems, and our broad partnerships across the IT industry, we are removing the barriers for every enterprise to access state-of-the-art AI. Fourth, the work of NVIDIA Clara in using AI to revolutionize genomics and biology is deeply impactful for the healthcare industry, and I look forward to telling you a lot more about this in the future. And fifth, the electric, self-driving, software-defined car is coming. With NVIDIA Drive, we are partnering with the global transportation industry to reinvent the car architecture, reinvent mobility, reinvent driving, and reinvent the business model of the industry. Transportation is going to be one of the world's largest technology industries. From gaming, metaverses, cloud computing, AI, robotics, self-driving cars, genomics, and computational biology, NVIDIA is doing important work and innovating in the fastest-growing markets today. As you can see, on top of our computing platforms that span PC, HPC, cloud, and enterprise to the autonomous edge, we've also transformed our business model beyond chips.
NVIDIA vGPU, NVIDIA AI Enterprise, NVIDIA Fleet Command, and NVIDIA Omniverse add enterprise software license and subscription to our business model. And NVIDIA GeForce Now and NVIDIA Drive, with Mercedes-Benz as the lead partner, add end-to-end services on top of that. I want to thank all of the NVIDIA employees and partners for the amazing work you're doing. We look forward to updating you on our progress next quarter. Thank you.
This concludes today's conference call. You may now disconnect.