Advanced Micro Devices, Inc.

Q1 2023 Earnings Conference Call


spk15: Hello, and welcome to the AMD first quarter 2023 earnings conference call. If anyone should require operator assistance, please press star zero on your telephone keypad. A question and answer session will follow the formal presentation. As a reminder, this conference is being recorded. It's now my pleasure to turn the call over to Ruth Cotter. Please go ahead, Ruth.
spk09: Thank you and welcome to AMD's first quarter 2023 financial results conference call. By now, you should have had the opportunity to review a copy of our earnings press release and accompanying slides. If you've not reviewed these documents, they can be found on the investor relations page of our website. We will refer primarily to non-GAAP financial measures during this call. The full non-GAAP to GAAP reconciliations are available in today's press release and slides posted on our website. Participants on today's conference call are Dr. Lisa Su, our Chair and Chief Executive Officer, and Jean Hu, our Executive Vice President, Chief Financial Officer, and Treasurer. This is a live call and will be replayed via webcast on our website. Before we begin today's call, we would like to note that Jean Hu will attend the JPMorgan Annual Technology, Media, and Communications Conference on Tuesday, May 23rd. Dan McNamara, Senior Vice President and General Manager, Server Business Unit, will attend the Bank of America Global Technology Conference on Tuesday, June 6th. And our second quarter 2023 quiet time is expected to begin at the close of business on Friday, June 16th. Today's discussion contains forward-looking statements based on current beliefs, assumptions, and expectations that speak only as of today, and as such involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially. Now with that, I'll hand the call over to Lisa. Lisa?
spk08: Thank you, Ruth, and good afternoon to all those listening in today. We executed very well in the first quarter as we delivered better than expected revenue and earnings in a mixed demand environment, launched multiple leadership products across our businesses, and made significant progress accelerating our AI roadmap and customer engagements across our portfolio. First quarter revenue was $5.4 billion, a decrease of 9% year over year. Sales of our data center and embedded products contributed more than 50% of overall revenue in the quarter as cloud and embedded revenue grew significantly year over year. Looking at the first quarter business results, data center segment revenue of $1.3 billion was flat year over year, with higher cloud sales offset by lower enterprise sales. In cloud, the quarter played out largely as we expected. EPYC CPU sales grew by a strong double-digit percentage year over year but declined sequentially, as elevated inventory levels with some cloud customers resulted in a lower sell-in TAM for the quarter. Against this backdrop, we were pleased that the largest cloud providers further expanded their AMD deployments in the quarter to power a larger portion of their internal workloads and public instances. 28 new AMD instances launched in the first quarter, including multiple confidential computing offerings from Microsoft Azure, Google Cloud, and Oracle Cloud that take advantage of the unique security features of our EPYC processors. In total, we now have more than 640 AMD-powered public instances available. Enterprise sales declined year over year and sequentially as end customer demand softened due to near-term macroeconomic uncertainty. We continued growing our enterprise pipeline and closed multiple wins with Fortune 500 automotive, technology, and financial companies in the quarter. We made strong progress in the quarter ramping our Zen 4 EPYC CPU portfolio.
All of our large cloud customers have Genoa running in their data centers and are on track to begin broad deployments to power their internal workloads and public instances in the second quarter. For the enterprise, Dell, HPE, Lenovo, Supermicro, and other leading providers entered production on new Genoa server platforms that complement their existing third-gen EPYC platforms. We are on track to launch Bergamo, our first cloud-native server CPU, and Genoa-X, our fourth-gen EPYC processor with 3D chiplets for leadership in technical computing workloads, later this quarter. Although we expect server demand to remain mixed in the second quarter, we are well positioned to grow our cloud and enterprise footprint in the second half of the year based on the strong customer response to the performance and TCO advantages of Genoa, Bergamo, and Genoa-X. Now looking at our broader data center business. In networking, Microsoft Azure launched their first instance powered by our Pensando DPU and software stack that can significantly increase application performance for networking-intensive workloads by enabling 10x more connections per second compared to non-accelerated instances. We expanded our data center product portfolio with the launch of our first ASIC-based Alveo data center media accelerator that supports four times the number of simultaneous video streams compared to our prior generation. In supercomputing, the Max Planck Society announced plans to build the first supercomputer in the EU powered by fourth-gen EPYC CPUs and Instinct MI300 accelerators that is expected to deliver a three times increase in application performance and significant TCO improvements compared to their current system. Our AI activities increased significantly in the first quarter, driven by expanded engagements with a broad set of data center and embedded customers.
We expanded the software ecosystem support for our Instinct GPUs in the first quarter, highlighted by the launch of the widely used PyTorch 2.0 framework, which now offers native support for our ROCm software. In early April, researchers announced they used the Lumi supercomputer, powered by third-gen EPYC CPUs and Instinct MI250 accelerators, to train the largest Finnish language model to date. Customer interest has increased significantly for our next-generation Instinct MI300 GPUs for both AI training and inference of large language models. We made excellent progress achieving key MI300 silicon and software readiness milestones in the quarter, and we're on track to launch MI300 later this year to support the El Capitan exascale supercomputer win at Lawrence Livermore National Laboratory and large cloud AI customers. To execute our broad AI strategy and significantly accelerate this key part of our business, we brought together multiple AI teams from across the company into a single organization under Victor Peng. The new AI group has responsibility for owning our end-to-end AI hardware strategy and driving development of a complete software ecosystem, including optimized libraries, models, and frameworks spanning our full product portfolio. Now turning to our client segment. Revenue declined 65% year-over-year to $739 million as we shipped significantly below consumption to reduce downstream inventory. As we stated on our last earnings call, we believe the first quarter was the bottom for our client processor business. We expanded our leadership desktop and notebook processor portfolio significantly in the quarter. In desktops, we launched the industry's fastest gaming processors with our Ryzen 7000 X3D series CPUs that combine our Zen 4 core with industry-leading 3D chiplet packaging technology.
In mobile, the first notebooks powered by our Dragon Range CPUs launched to strong demand, with multiple third-party reviews highlighting how our 16-core Ryzen 9 7945HX CPU is now the fastest mobile processor available. We also ramped production of our Zen 4-based Phoenix Ryzen 7040 series CPUs in the first quarter for ultra-thin and gaming notebooks to support the more than 250 ultra-thin, gaming, and commercial notebook design wins on track to launch this year from Acer, Asus, Dell, HP, and Lenovo. Looking at the year, we continue to expect the PC TAM to be down approximately 10% for 2023 to approximately 260 million units. Based on the strength of our product portfolio, we expect our client CPU sales to grow in the second quarter and in the seasonally stronger second half of the year. Now turning to our gaming segment. Revenue declined 6% year-over-year to $1.8 billion as higher semi-custom revenue was offset by lower gaming graphics sales. Semi-custom SoC revenue grew year-over-year as demand for premium consoles remained strong following the holiday cycle. In gaming graphics, channel sell-through of our Radeon 6000 and Radeon 7000 series GPUs increased sequentially. We saw strong sales of our high-end Radeon 7900 XTX GPUs in the first quarter, and we're on track to expand our RDNA 3 GPU portfolio with the launch of new mainstream Radeon 7000 series GPUs this quarter. Looking at our embedded segment, revenue increased significantly year over year to a record $1.6 billion. We saw strength across the majority of our embedded markets, led by increased demand from industrial, vision and healthcare, test and emulation, communications, aerospace and defense, and automotive customers. Demand for our adaptive computing solutions continues to grow as industrial, vision, and healthcare customers actively work to add more advanced compute capabilities across their product lines.
We also released new Vitis AI software libraries to enable advanced visualization and AI capabilities for our medical customers, and launched our next-generation Kria platform that provides a turnkey solution to deploy our leadership adaptive computing capabilities for smart camera, industrial, and machine vision applications. In communications, we saw strength with wired customers as new infrastructure design wins ramped into production. We also launched Zynq RFSoC products to accelerate 4G and 5G radio deployments in cost-sensitive markets and formed our first telecom solutions lab to validate end-to-end solutions based on AMD CPUs, adaptive SoCs, FPGAs, DPUs, and software. In automotive, deployments of our adaptive silicon solutions for high-end ADAS and AI features grew in the quarter, highlighted by Subaru rolling out its AMD-based EyeSight 4 platform across their full range of vehicles. In addition, we expanded our embedded processor portfolio with the launches of Ryzen 5000 and EPYC 9000 embedded series processors with leadership performance and efficiency as we focus on growing share in the security, storage, edge server, and networking markets. Looking more broadly across our embedded business, we are making great progress in bringing together our expanded portfolio and scale to drive deeper engagements with our largest embedded customers. In summary, I'm pleased with our operational and financial performance in the first quarter. In the near term, we continue to see a mixed demand environment based on the uncertainties in the macro environment. Based on customer demand signals, we expect second quarter revenue will be flattish sequentially, with growth in our client and data center segments offset by modest declines in our gaming and embedded segments.
We remain confident in our ability to grow in the second half of the year, driven by adoption of our Zen 4 product portfolio, improving demand trends in our client business, and the early ramp of our Instinct MI300 accelerators for HPC and AI. Looking longer term, we have significant growth opportunities ahead based on successfully delivering our roadmaps and executing our strategic data center and embedded priorities, led by accelerating adoption of our AI products. We are in the very early stages of the AI computing era, and the rate of adoption and growth is faster than any other technology in recent history. And as the recent interest in generative AI highlights, bringing the benefits of large language models and other AI capabilities to cloud, edge, and endpoints requires significant increases in compute performance. AMD is very well positioned to capitalize on this increased demand for compute based on our broad portfolio of high-performance and adaptive compute engines, the deep relationships we have established with customers across a diverse set of large markets, and our expanding software capabilities. We are very excited about our opportunity in AI. This is our number one strategic priority, and we are engaging deeply across our customer set to bring joint solutions to the market, led by our upcoming Instinct MI300 GPUs, Ryzen 7040 series CPUs with Ryzen AI, Zynq UltraScale+ MPSoCs, Alveo V70 data center inference accelerators, and Versal AI adaptive data center and edge SoCs. I look forward to sharing more about our AI progress over the coming quarters as we broaden our portfolio and grow this strategic part of our business. Now I'd like to turn the call over to Jean to provide some additional color on our first quarter results. Jean?
spk01: Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results for the first quarter and then provide our current outlook for the second quarter of fiscal 2023. As a reminder, for comparative purposes, first quarter 2022 results included only a partial quarter of financial results from the acquisition of Xilinx, which closed in February 2022. Revenue in the first quarter was $5.4 billion, a decrease of 9% year-over-year, as embedded segment strength was offset by lower client segment revenue. Gross margin was 50%, down 2.7 percentage points from a year ago, primarily impacted by client segment performance. Operating expenses were $1.6 billion, increasing 18% year-over-year, primarily due to the inclusion of a full quarter of expenses from the Xilinx and Pensando acquisitions. Operating income was $1.1 billion, down $739 million year-over-year, and the operating margin was 21%. Interest expense, taxes, and other was $128 million. For the first quarter, diluted earnings per share was $0.60, due to better-than-expected revenue and operating expenses. Now turning to our reportable segments for the first quarter. Starting with the data center segment, revenue was $1.3 billion, flat year-over-year, driven primarily by higher sales of EPYC processors to cloud customers, offset by lower enterprise server processor sales. Data center segment operating income was $148 million, or 11% of revenue, compared to $427 million, or 33%, a year ago. Lower operating income was primarily due to product mix and increased R&D investments to address larger opportunities ahead of us. Client segment revenue was $739 million, down 65% year-over-year, as we shipped significantly below consumption to reduce downstream inventory. We expect improvement in second-quarter client segment revenue and a seasonally stronger second half. Client segment operating loss was $172 million, compared to operating income of $692 million a year ago, primarily due to lower revenue.
Gaming segment revenue was $1.8 billion, down 6% year-over-year. Semi-custom revenue grew a double-digit percentage year-over-year, which was more than offset by lower gaming graphics revenue. Gaming segment operating income was $314 million, or 18% of revenue, compared to $358 million, or 19%, a year ago. The decrease was primarily due to lower gaming graphics revenue. Embedded segment revenue was $1.6 billion, up $967 million year-over-year, primarily due to a full quarter of Xilinx revenue and strong performance across multiple end markets. Embedded segment operating income was $798 million, or 51% of revenue, compared to $277 million, or 46%, a year ago, primarily driven by the inclusion of a full quarter of Xilinx. Turning to the balance sheet and cash flow. During the quarter, we generated $486 million in cash from operations, reflecting our strong financial model despite the mixed demand environment. Free cash flow was $328 million. In the first quarter, we increased inventory by $464 million, primarily in anticipation of the ramp of new data center and client products in advanced process nodes. At the end of the quarter, cash, cash equivalents, and short-term investments were $5.9 billion, and we returned $241 million to shareholders through share repurchases. We have $6.3 billion in remaining authorization for share repurchases. In summary, in an uncertain macroeconomic environment, the AMD team executed very well, delivering better than expected top-line revenue and earnings. Now turning to our second quarter 2023 outlook. We expect revenue to be approximately $5.3 billion, plus or minus $300 million, a decrease of approximately 19% year-over-year and approximately flat sequentially. Year-over-year, we expect the client, gaming, and data center segments to decline, partially offset by embedded segment growth. Sequentially, we expect client and data center segment growth to be offset by modest gaming and embedded segment declines.
In addition, we expect non-GAAP gross margin to be approximately 50%, non-GAAP operating expenses to be approximately $1.6 billion, and the effective tax rate to be 13%. The diluted share count is expected to be approximately 1.62 billion shares. In closing, I'm pleased with our strong top-line and bottom-line execution. We have a very strong financial model and will continue to invest in our long-term strategic priorities, including accelerating our AI offerings to drive sustainable value creation over the long term. With that, I'll turn it back to Ruth for the Q&A session.
spk09: Thank you, Jean. And Kevin, we're happy to poll the audience for questions.
spk15: Thank you. We'll now be conducting a question and answer session. We ask that you please ask one question and one follow-up, then return to the queue. If you'd like to be placed into the question queue, please press star one at this time. A confirmation tone will indicate your line is in the question queue. You may press star two if you'd like to remove your question from the queue. One moment, please, while we poll for questions. Our first question today is coming from Vivek Arya from Bank of America. Your line is now live.
spk13: Thanks for the question. For my first one, Lisa, when I look at your full year data center outlook for some growth, that implicitly suggests data center, right, could be up 30% in the second half versus the first half, right? And I'm curious, what is your confidence and visibility, and some of the assumptions that go into that view? Is it, you know, you think there is a, you know, much bigger ramp in the new products? Is it enterprise recovery? Is it pricing? So just give us a sense for how we should think about the confidence and visibility of this strong ramp that is implied in your second half data center outlook.
spk08: All right. So, Vivek, thanks for the question. Maybe let me, you know, give you some context on what's going on in the data center right now. You know, first of all, we have said that it's a mixed environment in the data center. So in the first half of the year, there are some of the larger cloud customers that are working through some inventory and optimization, as well as a weaker enterprise. As we go into the second half of the year, we see a couple things. You know, first, you know, our roadmap is very strong. So, you know, the feedback that we're getting working with our customers on Genoa is that it's ramping well. It is very differentiated in terms of TCO and overall performance, so we think it's very well positioned. Much of the work that we've done in the first half of the year, in the first quarter and here in the second quarter, is to ensure that we complete all of that work such that we can ramp across a broader set of workloads as we go into the second half of the year. And then I would say, you know, from an overall market standpoint, I think, you know, enterprise will still be mixed with the notion that, you know, we expect some improvement. Depends a little bit on the macro, you know, situation. And then as we go into the second half of the year, in addition to Genoa, we're also ramping Bergamo. So that's on track to launch here in the second quarter, and we'll ramp in the second half of the year. And then as we get towards the end of the year, we also have our GPU ramp of MI300. So with that, we start the ramp in the fourth quarter of our supercomputing wins, as well as our early cloud AI wins. So those are all the factors. Of course, we'll have to see how the year plays out. But we feel very good about how we're positioned from an overall product and roadmap standpoint for data center.
spk13: And for my follow-up, Lisa, how do you see market share evolving in the data center in the second half? Do you think that the competitive gap between your and your competitors' products has narrowed, or do you still think that in the second half you have a chance to gain market share in the data center?
spk08: Yeah, absolutely, Vivek. Well, I mean, we've gained share nicely over the last four years when you look at our data center progression. It's actually been pretty steady. As we go into the second half of this year, I think we continue to believe that we have a very strong competitive position. So we do think that positions us well to gain share. In the conversations that we're having with customers, I think they're enthusiastic about Zen 4 and what it can bring into, you know, cloud workloads as well as enterprise workloads. I think actually Genoa is extremely well positioned for enterprise, where we have been underrepresented. So we feel good about the roadmap. I mean, obviously it's competitive, but we feel very good about our ability to continue to gain share.
spk15: Thank you, Lisa. Thank you. Next question is coming from Toshiya Hari from Goldman Sachs. Your line is now live.
spk11: Hi, good afternoon. Thank you so much for taking the question. Lisa, I wanted to ask about the embedded business. It's been a really strong business for you since the acquisition of Xilinx. You're guiding the business down sequentially in Q2. Is this sort of the macro slash cycle kicking in, or is it something supply-related? If you can kind of expand on the Q2 outlook there and your expectations for the second half, that would be helpful. Thanks.
spk08: Yeah, absolutely, Toshiya. So first, thanks for the question. I mean, I think the embedded business has performed extremely well over the last four or five quarters. Q1 was another record for the embedded business. When we look underneath it, there is a broad set of market segments that we have exposure to, and the majority of them are actually doing very well. Our thought process for, you know, sort of a modest decline into Q2 is that, you know, we did have a bunch of backlog that we're in the process of clearing, and that backlog will clear in Q2. And then we expect that the growth will moderate a bit. We still very much like the positioning of sort of our aerospace and defense, our industrial, our test and emulation business, our automotive business. We expect wireless trends to be a little bit weaker, as well as consumer trends. So those are kind of the puts and takes in the market. But I would say the business has performed well above our expectations.
spk11: That's helpful. Thank you. And then as my follow-up, maybe one for Jean on the gross margin side of things. In your slide deck, I think you're guiding gross margins up half over half in the second half. Can you maybe speak to the puts and takes and the drivers as you think about gross margins over the next six to nine months? Thank you.
spk01: Yeah, thank you for the question. Our gross margin is primarily driven by mix. If you look at the first quarter performance and the second quarter guide, we are very pleased with the strong gross margin performance in both the data center and the embedded segment. We have been seeing headwinds from the client segment impacting our gross margin. Going into the second half, we do expect gross margin improvement because the data center is going up and embedded continues to be relatively strong. The pace of improvement in the second half actually will be largely dependent on the client segment. We think the client segment gross margin is also going to improve, but overall it's going to be below corporate average. So the pace of improvement of gross margin could be dependent on the pace of the client business recovery in the second half. But in the longer term, right, when we look at our business opportunities, the largest incremental revenue opportunities are going to come from the data center and the embedded segment. So we are feeling very good about longer-term gross margin going up continuously.
spk11: Very helpful. Thank you.
spk15: Thank you. Next question is coming from Aaron Rakers from Wells Fargo. Your line is now live.
spk07: Yeah, thanks for taking the question. I've got two as well. I guess the first question, going back on just the cadence of the server CPU cycle with Genoa and Bergamo, and I think it's great to hear that you guys are on track to launch Bergamo here, but there's been some discussion throughout this last quarter around some DDR5 challenges. I think there's PMIC issues. I'm just curious if those issues have presented themselves, or how you would characterize the cadence of the ramp cycle of Genoa at this point?
spk08: Yeah, sure, Aaron. So, yeah, look, I think Genoa, you know, we always said as we launched it that it would be a little bit more of a longer transition compared to Milan because it is a new platform. So it is the new DDR5. It's PCIe Gen 5. And for many of our top customers, they're also doing other things other than upgrading the CPUs. So from that standpoint, I would say the ramp is going about as we expected. We've seen, you know, a lot of interest, a lot of, you know, customer engineering work that we're doing together in the data centers with our customers. You know, we feel great about the set of workloads and, you know, we see expansion in the workloads going forward. So, you know, overall, you know, our expectation is, particularly as we go into the second half, we'll see Genoa ramp more broadly, but Genoa and Milan are going to coexist throughout the year, just given sort of the breadth of platforms that we have.
spk07: And anything specific on the DDR5 questions that come up? And I'm curious, my second question, just real quick, is the MI300. If we look out beyond just the El Capitan deployment through the course of this year, how do you guys think about success in that data center GPU market beyond just El Capitan?
spk08: Yeah, sure, Aaron. So back to the DDR5 question. We haven't seen anything specific on DDR5. It's just normal platform bring-up that we're seeing. Now, as it relates to your question about MI300, look, we're really excited about the AI opportunity. I think success for us is having a significant part of the overall AI opportunity. You know, AI for us is broader than cloud. I mean, it also includes what we're doing in client and embedded. But specifically as it relates to MI300, you know, MI300 is actually very well positioned for both, you know, HPC or supercomputing workloads as well as for AI workloads. And with the recent interest in generative AI, I would say the pipeline for MI300 has expanded considerably here over the last few months, and we're excited about that. We're putting a lot more resources on it. I mentioned in the prepared remarks the work that we're doing, sort of taking our Xilinx and sort of the overall AMD AI efforts and collapsing them into one organization. That's primarily to accelerate our AI software work as well as platform work. So success for MI300 is, for sure, a significant part of, you know, sort of the growth in AI in the cloud. And, you know, I think we feel good about how we're positioned there.
spk07: That's great. Thank you.
spk15: Thank you. Next question is coming from Matt Ramsay from TD Cowen. Your line is now live.
spk06: Oh, yes. Thank you. Good afternoon, everybody. Lisa, my first question, I think just the way the business has trended, right, with enterprise and with China, in the data center market being a bit softer recently, and it seems like that's kind of continuing into the second quarter. It occurs to me that a big percentage of your data center business, particularly in server in the second half of the year, is going to be driven by US hyperscale. And I guess my question is, the level of visibility you have to unit volumes, to pricing, to timing of ramps, if you could walk us through that a little bit given it's customer concentrated. I imagine you'd have some level of visibility and you mentioned growth for the year in data center. If you could be a little bit more precise there. I understand there's market dynamics, but it's a bit of a vague comment and folks have just pushed me to ask about quantifying the growth for the year. Thanks.
spk08: Sure, Matt. Thanks for the question. Look, I think as we work with our largest cloud customers in the data center segment, particularly with our EPYC CPUs, we have very good conversations in terms of what their ramp plans are, what their qualification plans are, which workloads, which instances. So I feel that we have good visibility. Obviously, some of this is still dependent on the overall macro situation and overall demand. But our view is that there is a lot of good progress in the data center. Now, in terms of quantification, as I said, there's a lot of puts and takes. My view is that enterprise will improve as we go into the second half, and we're even seeing, I would say, some very early signs of some improvement in China as well. So, you know, our view is, I think, double-digit data center growth is what we currently see. And, you know, certainly we would like to, you know, ramp, you know, Genoa and Bergamo as, you know, a large piece, you know, given the strength of those products. You know, we'd like to see them, you know, grow share here over the next couple of quarters.
spk06: Thank you for that, Lisa. That's helpful. For my follow-up question, I think in the prepared remarks, you're obviously under-shipping sell-through to clear the channel in the client business in the first half of the year. And I think the language that was used was seasonal improvements in the second half. So are you guys expecting to come back to shipping in line with sell-through, so to stop under-shipping demand, and then, on top of that, seasonal improvements in the market? And if you could just kind of help me think about the magnitudes and the moving pieces in client for the second half. Thanks.
spk08: Yeah. So, you know, we've been under-shipping sort of consumption in the client business for about three quarters now. And, you know, certainly our goal has been to normalize the inventory in the supply chain so that, you know, shipments would be closer to consumption. We expect that that will happen in the second half of the year, and that's what the comment meant: we believe that there will be improvements in the overall inventory positioning. And then we also believe that the client market is stabilizing. So Q1 was the bottom for our business as well as for the overall market. From what we see, although it will be a gradual set of improvements, we do see that the overall market should be better in the second half of the year. We like our product portfolio a lot. I'm excited about having AI enabled on our, you know, Ryzen 7000 series. We have leadership notebook platforms with Dragon Range. You know, our desktop roadmap is also quite strong with our, you know, new launch of the Ryzen 7000 X3D products. And so I think, you know, here in the second quarter, we'll still under-ship consumption a bit. And by the second half of the year, we should be more normalized between shipments and consumption. And, you know, we expect some seasonal improvement into the second half.
spk06: Thanks, Lisa.
spk15: Thank you. Next question is coming from Joe Moore from Morgan Stanley. Your line is now live.
spk04: Thank you. Yeah, I guess same question in terms of the cloud business. You mentioned some combination of kind of digestion of spending and inventory reduction. Can you give us a sense of how much inventory was there in hyperscale, how much it has come down, and how much you are maybe undershipping demand in that segment?
spk08: Yeah, I think, Joe, this is a bit harder because every customer is different. What we're seeing is that different customers are at a different place in their overall cycle. But let me say it this way, though. I think we have good visibility with all of our large customers in terms of what they're trying to do for the quarter, for the year. Obviously, some of that will depend on how the macro plays out. But from our viewpoint, I think we're also going through a product transition between Milan and Genoa in some of these workloads. So if you put all those things into the conversation, that's why our comment was that we do believe the second quarter will grow modestly, and then there'll be more growth in the second half of the year as it relates to the data center business. So lots of puts and takes, and every customer's in a bit of a different cycle. But overall, the number of workloads that they're going to be using AMD on, we believe, will expand as we go through the next few quarters.
spk04: Great. And then my follow-up, I mean, you mentioned interest in MI300 around generative AI. Can you talk to, you know, is that right now kind of a revenue pipeline with major hyperscalers? Or is that sort of more indication of interest level? I'm just trying to figure out, you know, where you are in terms of establishing yourself in that market.
spk08: Yeah, I would say, Joe, we've been at this for quite some time. AI has been very much a strategic priority for AMD for quite some time. With MI250, we've actually made strong progress. We mentioned in the prepared remarks some of the work that was done on the LUMI supercomputer with generative AI models. We've continued to do quite a bit of library optimization and software optimization with MI250 to really ensure that we could increase the overall performance and capabilities. MI300 looks really good, I think, from everything that we see. The workloads have also changed a bit: whereas a year ago much of the conversation was primarily focused on training, today that has migrated to large language model inferencing, which is particularly good for GPUs. So from an MI300 standpoint, we do believe that we will start ramping revenue in the fourth quarter with cloud AI customers, and then it'll be more meaningful in 2024. Thank you. Thank you.
spk15: Next question is coming from Harlan Sur from J.P. Morgan. Your line is now live.
spk14: Hi, good afternoon. Thanks for taking my question. Good to see the strong dynamics in embedded, you know, very diverse end markets. And given their strong market share position here, the Xilinx team is in a really good position to catalyze EPYC attach or Ryzen attach to their FPGA and adaptive compute solutions, right? I think embedded x86 is like a $6 to $8 billion per year market opportunity. So, Lisa, given your year with Xilinx in the portfolio, can you just give us an update on the synergy unlock and driving higher AMD compute attach to Xilinx sockets?
spk08: Yeah, thanks, Harlan. It's a great question. The Xilinx portfolio has done extremely well with us, very strong. I would say we continue to get more content attached to the FPGAs and the adaptive SoCs. We have seen the beginnings of good traction with the cross-selling, and that is an opportunity to take both Ryzen and EPYC CPUs into the broader embedded market. I think the customers are very open to that, and I think we have a sales force and a go-to-market capability across this customer set that is very helpful for that. So I do believe this is a long-term opportunity for us to continue to grow our embedded business, and we've already seen some design wins as a result of the combination of the Xilinx portfolio and the AMD portfolio. I think we'll see a lot more of that going forward.
spk14: Great. Thanks for that. And in terms of other opportunities, you know, there appears to be this trend towards more of your cloud and hyperscale customers opting to do their own silicon solutions around accelerated compute or AI offload engines, right? And if I look at it, right, there are less than a handful of the world's semiconductor companies that have the compute graphics connectivity IP portfolio that you guys have, as well as the capabilities to design these very complex offload SOCs, right? Does the team have a strategy to try and go after some of these semi-custom or full-blown ASIC-based hyperscale programs?
spk08: We do, Harlan, and I would put it more broadly. The broader point is I think we have a very complete IP portfolio across CPUs, GPUs, FPGAs, adaptive SoCs, and DPUs, and a very capable semi-custom team. And beyond hyperscalers, when you look at higher-volume opportunities beyond game consoles, we think there are custom opportunities available. So I think that combination of IP is very helpful. I think it's a long-term opportunity for us, and it's one of the areas where we think we can add value to our largest customers.
spk15: Perfect. Thank you, Lisa.
spk08: Thanks.
spk15: Thank you. Next question is coming from Ross Seymour from Deutsche Bank. Your line is now live.
spk03: Hi. Thanks for letting me ask a question. Lisa, I just want to talk about the pricing environment in a general sense. You guys have done a great job of increasing the benefits to your customers and being able to raise prices, pass along cost increases, those sorts of things. But the competitive intensity and the weakness in the market, at least currently, seems that it could work against that. So in the near term and then perhaps exiting this year into the next couple of years, can you just talk about where you think pricing is going to go across both your data center market, most importantly, but then also in your client market?
spk08: Yeah. I think, Ross, what I would say is a couple of things. In the data center market, the pricing is relatively stable. And what that comes from is that our goal is to add more capability, right? So it's a TCO equation: as we're going from Milan to Genoa, we are adding more cores, more performance, and the performance per dollar that we offer to our customers is one where it's advantageous for them to adopt our technologies and our solutions. So I expect that to continue. I think in the client business, given some of the inventory conditions there, it's a more competitive environment. From my standpoint, we're focused on normalizing the inventory levels, and with that normalization, the most important thing is to ensure that we get the shipments more in line with consumption, because I think that's a healthier business environment overall. And then again, it's back to product value, right? So we have to ensure that our products continue to offer superior performance per dollar and performance per watt capabilities in the market.
spk03: Thanks for that. And pivoting from my follow-up onto the AI side and the MI300, I just wanted to know what you would describe as your competitive advantages. Everybody knows it's a market that's exploding right now. There's tons of demand. You guys have all the IP to be able to attack it. But there's a very large incumbent in that space as well. So, When you think about what AMD can bring to the market, whether it's hardware, software, heterogeneity of the products you can bring, etc., what do you think is the core competitive advantage that can allow you to penetrate that market successfully?
spk08: Yeah, there's a couple of aspects, Ross. Since we haven't yet announced all of the MI300 specifications, some of those will come over the coming quarters. MI300 is the first solution that has both the CPU and GPU together, and that has been very positive for the supercomputing market. As it relates to generative AI, we think we have a very strong value proposition from a hardware standpoint, and again, it's a performance-per-dollar conversation. I think there's a lot of demand in the market, and, given our deep customer relationships on the EPYC side, there's actually a lot of synergy between the customer set for the EPYC CPUs and the MI300 GPU customers. So when we look at all these together, our view is that demand is strong for AI, and I think our position is also very strong, given there are very, very few products that can really satisfy these large language model needs. And I think we feel confident that we can do that.
spk15: Thank you. Thank you. Next question is coming from Timothy Arcuri from UBS. Your line is now live.
spk02: Thanks a lot. Lisa, there was a lot more talk on this call about AI. And obviously PyTorch 2.0 now supporting ROCm is a great step forward. But how much would you say software is going to dictate how successful you can be for these workloads? You had mentioned that you're forming this new AI group. Do you have the internal software capabilities to be successful in AI?
spk08: Tim, I think the answer is yes. I think we have made significant progress even over the last year in terms of our software capabilities. And the way you should think about our AI portfolio is that it's really a broad AI portfolio across client and edge as well as cloud. And with that, I think the Xilinx team brings a lot of background and capability, especially in inference. We've added significant talent in our AI software as well. And the beauty of, particularly, the cloud opportunity is that it's not that many customers and it's not that many workloads. So when you have very clear customer targets, we're working very, very closely with our customers on optimizing for a handful of workloads that generate significant volume. That gives us a very clear target for what winning is in the market. So we feel good about our opportunities in AI. And I'd like to say that it's a multi-year journey, so this is the beginning of what we think is a very significant market opportunity for us over the next three to five years.
spk02: Thanks a lot. And I guess as my follow-up, so can you just give us a sense of sort of the overall, you know, profile that you see for revenue into the back half? I know you said that data center and embedded will be up this year. It sounds like data center, you know, probably up double digits. But I also wanted to confirm that you think that total revenues also will be up this year, year over year?
spk08: Right, Tim. So as we said, we're not guiding the full year, just given all the puts and takes. We see Q2 as flattish, with a second-half return to growth. We'll have to see exactly how the macro plays out across PCs and enterprise. But yes, we feel good about growth in embedded, growth in data center, and on the data center side, double-digit growth year over year. And then we'll see how the rest of the segments play out.
spk15: Thank you. Thank you. Next question is coming from Ambrish Srivastava from BMO Capital Markets. Your line is now live.
spk12: Thank you very much. Lisa, actually, I wanted to come back to the first quarter for data center. That's a pretty big gap, on a Q-over-Q basis, between your business and Intel's, I think almost 2x. So this is the first time you would have lost share on a Q-over-Q basis in a long time. So could you please address that? And I acknowledge that quarters can be pretty volatile, but it seems to be a pretty large gap. And then for my follow-up, just remind us again, please, for the full-year growth for data center, what's embedded in the assumptions for cloud as well as enterprise. Thank you.
spk08: Yeah, let me make sure I get your question, Ambrish. So you're asking about Q1 data center and whether we think we've lost share on a sequential basis?
spk12: Right. If I look at just your report versus what Intel reported on a Q over Q basis, and clearly on a year-over-year basis, you have gained share. But I'm just comparing, you know, down 14% versus what you reported, and so that would imply that you had a share loss versus them, unless the data center GPU and the Xilinx business was down significantly also in the Q over Q basis. Yeah.
spk01: Ambrish, maybe I'll give you a little bit of color. It's definitely the case that in Q1 the other businesses, including networking and GPU, were down. But from a share perspective, when we look at the overall Q1 reported revenue from both sides and analyze the data, we don't believe we lost share.
spk08: Yeah, that's right. So I think you just have to go through each of the pieces. But I think from an EPYC or server standpoint, we don't believe we lost share. If anything, we might have gained a little bit. But I think overall, I wouldn't look at it so closely on a quarter-by-quarter basis because there are puts and takes. From what we see overall, we believe that we have a good overall share progression as we go through the year.
spk12: And then the underlying assumptions for full year for data center?
spk08: Underlying assumptions for the full year. I think the key pieces that I talked about are: Q2, let's call it modest growth, with some cloud optimization still expected to be happening as we go into the second half of the year. We'll see a stronger ramp of Genoa and the beginnings of the ramp of Bergamo. We think enterprise is still more dependent on macro, but we do believe that improves as we go into the second half of the year. And then we'll have the beginnings of our MI300 ramp in the fourth quarter for both supercomputing and some early AI workloads.
spk12: Got it. Thank you very much.
spk08: Sure.
spk15: Thank you. Next question is coming from Stacy Rasgon from Bernstein Research. Your line is now live.
spk16: Hi, guys. Thanks for taking my questions. For my first one, Lisa, can you just clarify this explicitly for me? You said double-digit data center. Was that a full-year statement, a second-half year-over-year statement, or a half-over-half statement for data center?
spk08: Yeah, let me be clear. That was a year over year statement. So double-digit data center growth for the full year of 2023 versus 2022.
spk16: Got it. Which, just given what you did in Q1 and sort of are implying for Q2, you need something like 50% year-over-year growth in the second half to get there. So you're endorsing that now?
spk01: I am. Yeah, your math is right.
spk16: Okay. Thank you. For my second question, Jean, you made a comment on gross margins, where you said the increase in gross margins in the second half was dependent on gross margins in client getting better. I just want to make sure, did I hear that right? Yes. And why should I expect client margins would get better, especially given what Intel has been doing in that space? Why is that something that's going to happen?
spk01: Yes, Stacy, that's a good question. The way to think about it is, if you look at our Q1 gross margin and the Q2 guide, it's around 50%. And as you know, both our data center and embedded businesses have very strong gross margin performance. So the headwind impacting our gross margin is really the client PC side, where, as we talked about, we are shipping significantly under consumption and also digesting inventory in the downstream supply chain. As you know, typically that's the time you get significant pressure on the ASP side and on the channel funding side. That's why our gross margin in the client segment has been challenged. In the second half, we expect it to normalize. That's a very important point: when you normalize the demand and the supply, it will be better because you are not digesting the inventory and the channel funding, so those kinds of price reductions will be much less. We continue to plan for a very competitive environment, so don't get us wrong on that front. But we do think the client gross margin in the second half will be better than the first half.
spk16: Got it. Thank you. And I apologize, I misspoke as well. It's 50% half over half in data center, not year over year. So we're all doing it. Thank you very much. Appreciate it.
spk09: Operator, we'll take two more questions, Steve.
spk15: Certainly. Our next question is coming from Blaine Curtis from Barclays. Your line is now live.
spk05: Hey, thanks for taking my question. I wanted to maybe just start with following up on Stacy's prior question. Could you just comment on what client ASPs did in the March quarter? I'm going to assume they're down a decent amount. Your competitor was down, but any color you could provide on what the environment was in March?
spk08: Yeah, sure, Blaine. So the ASPs were down quite a bit on a year-over-year basis, if you're talking about the overall client business. And part of what that is, is that client ASPs were higher in the first half of '22, if you just think about what the supply environment, or the demand environment, was then. And given that we're undershipping in the first quarter, the ASPs are lower.
spk05: Gotcha. And then I just wanted to ask you on the data center business, the operating profit is down a ton sequentially. And you talked about enterprise being down. I think that's part of it, but it's a big drop, and it looks like gross margin probably is down a bunch, too. Can you just comment on why that drop in profitability in data center?
spk01: Yeah, Blaine, that's a good question. I think when you look at the year-over-year, you're absolutely right: revenue is largely flattish, but operating margin dropped significantly. There are two major drivers. The first is that we have increased the investment significantly, especially in networking and AI. As you may recall, we closed the Pensando acquisition last May or June, so this is a full quarter of Pensando expenses versus last year. Plus, we also increased the GPU investment and the AI investment. That's all under the data center bucket. The second is product mix. Lisa said that year over year, cloud sales grew double digits, significantly, and enterprise actually declined. So in Q1, our revenue in data center is heavily indexed to the cloud market versus Q1 last year. Typically, cloud gross margin is lower than enterprise. We do expect that even in Q2 it will be more balanced. And going forward, we do think the enterprise side will come back.
spk05: Thanks. But I guess the big decline was sequential. So I'm assuming cloud was down sequentially.
spk01: Yeah, sequentially, if you look at the revenue, it was down very significantly, right? And the mix is also a little bit more indexed to cloud sequentially, too.
spk08: Yeah, it's the same factors.
spk01: Yeah.
spk08: So both the mix to cloud as well as the R&D, you know, expense has increased just given the large opportunities that we have across the data center and especially AI.
spk05: Okay. Thank you.
spk15: Thank you. Our final question today is coming from Harsh Kumar from Piper Sandler. Your line is now live.
spk10: Yeah, hey, guys. Thank you for squeezing me in. Lisa, I had a question. I wanted to ask you about your views on the inferencing market for generative AI. Specifically, I wanted to ask because I think there are some cross-currents going on. We're hearing that CPUs are the best way to do inferencing, but then we're hearing that the timeliness of CPUs is not there to be able to enable these kinds of instances. So I was curious what you guys think, and then I had a follow-up.
spk08: Well, I think, Harsh, today CPUs are used a lot for inference. Now, where the demand is highest right now is for generative AI and large language model inferencing. You need GPUs to have the horsepower to train the most sophisticated models. So I think those are the two, as you say, cross-currents. I think inference becomes a much more important workload just given the adoption rate of AI across the board. And I think we'll see that for smaller tasks on CPUs, but for the larger tasks on GPUs.
spk10: Okay, so it still defers back to GPUs for those. And then a similar question on the MI300 series. I know that you talked a lot about success on the HPC side, but specifically, I was curious if you could talk about any wins or any kind of success stories you might have on the generative AI side with the MI300 or MI250 series.
spk08: Yeah, so as we said earlier, we've done some really good work on MI250 with AI and large language models. The example that is public is what we've done with LUMI and the training of some of the Finnish language models. We're doing quite a bit of work with large customers on MI300, and what we're seeing is very positive results. So we think MI300 is very competitive for generative AI, and we'll be talking more about that customer and revenue evolution as we go over the next couple of quarters.
spk10: Thank you, Lisa.
spk09: Great, operator. That concludes today's call.

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.