This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
10/31/2023
Greetings, and welcome to the AMD Third Quarter 2023 Earnings Conference Call. At this time, all participants are in a listen-only mode. A brief question and answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star zero on your telephone keypad. As a reminder, this conference is being recorded. It is now my pleasure to introduce to you Mitch Haws, Vice President, Investor Relations. Thank you, Mitch. You may begin.
Thank you, John, and welcome to AMD's third quarter 2023 financial results conference call. By now, you should have had the opportunity to review a copy of our earnings press release and the accompanying slides. If you have not had the chance to review these materials, they can be found on the investor relations page of AMD.com. We will refer primarily to non-GAAP financial measures during today's call, and the full non-GAAP to GAAP reconciliations are available in today's press release and slides posted on our website. Participants on today's conference call are Dr. Lisa Su, our Chair and Chief Executive Officer, and Jean Hu, our Executive Vice President, Chief Financial Officer, and Treasurer. This is a live call and will be replayed via webcast on our website. Before we begin, I would like to note that Forrest Norrod, Executive Vice President and General Manager, Data Center Solutions Business Unit, will attend the UBS Technology Conference on Tuesday, November 28th. AMD will host its Advancing AI event on December 6th, when AMD and its key ecosystem partners and customers will showcase the AMD products and partnerships that will shape the advancement of AI. The event will be live streamed on our website. Jean Hu, Executive Vice President, Chief Financial Officer, and Treasurer, will attend the Barclays Global Technology Conference on Thursday, December 7th. And our fourth quarter 2023 quiet time is expected to begin at the close of business on Friday, December 15th. Today's discussion contains forward-looking statements based on current beliefs, assumptions, and expectations, speaking only as of today, and as such involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause actual results to differ materially. With that, I will hand the call over to Lisa. Lisa?
Thank you, Mitch, and good afternoon to all those listening in today. We executed well in the third quarter, delivering strong top line and bottom line growth, achieving multiple milestones on our AI hardware and software roadmaps, and significantly accelerating our momentum with customers for our AI solutions. In PCs, there are now more than 50 notebook designs powered by Ryzen AI in market, and we are working closely with Microsoft on the next generation of Windows that will take advantage of our on-chip AI engine to enable the biggest advances in the Windows user experience in more than 20 years. In the data center, multiple large hyperscale customers committed to deploy Instinct MI300 accelerators, supported by our latest ROCm software suite and the growing adoption of an open, hardware-agnostic software ecosystem. Looking at the third quarter financial results, revenue grew 4% year-over-year and 8% sequentially to $5.8 billion, driven by record server CPU revenue and strong Ryzen processor sales. Turning to the segment results, data center segment revenue of $1.6 billion was flat year-over-year and up 21% sequentially, as solid demand for both third and fourth gen EPYC processor families resulted in record quarterly server processor revenue. We gained server CPU revenue share in the quarter as fourth gen EPYC CPU revenue grew more than 50% sequentially, crossing over to represent a majority of our server processor revenue and unit shipments. In cloud, while the demand environment remained mixed in the quarter, EPYC CPU revenue grew by a strong double-digit percentage sequentially as hyperscalers expanded deployments of EPYC processors to power their internal workloads and public instances while optimizing their infrastructure spend.
Nearly 100 new AMD-powered cloud instances launched in the quarter from Amazon, Google, Microsoft, Oracle, Tencent, and others, including multiple Genoa instances that deliver leadership performance for general purpose, HPC, bare metal, and memory-optimized workloads. In enterprise, while overall demand remains soft, we are seeing strong indications that the significant performance and TCO advantages of Genoa and our expanded go-to-market investments are paying off, as enterprise revenue grew by a double-digit percentage sequentially. We closed multiple new wins with leading automotive, aerospace, financial services, pharmaceutical, and technology customers, and the number of enterprise customers actively testing EPYC platforms on-prem increased significantly quarter-on-quarter. We also expanded our fourth gen EPYC processor portfolio with the launch of our Siena processors that deliver leadership energy efficiency and performance for intelligent edge and telco applications. Dell, Lenovo, Supermicro, and others launched new platforms that expand our EPYC CPU TAM to address telco, retail, and manufacturing applications. With the launch of Siena, we now offer the industry's most performant and most energy-efficient portfolio of server processors across cloud, enterprise, technical, HPC, and edge computing. I am very pleased with the momentum we have built for our EPYC CPU portfolio. We are building on this momentum with our next-gen Turin server processors based on our new Zen 5 core that delivers significant performance and efficiency gains. Turin is in the labs of our top customers and partners now, customer feedback has been very strong, and we're on track to launch in 2024.
Looking at our broader data center portfolio, we made significant progress in our data center GPU business in the third quarter, as the multi-year investments we have made in our hardware and software roadmaps resulted in significant customer traction for our next generation Instinct MI300 accelerators, in particular our Instinct MI300X GPU that delivers leadership inferencing and training performance. On the hardware side, bring-up and validation of our MI300A and MI300X accelerators continued progressing to plan, with performance now meeting or exceeding our expectations. Production shipments of Instinct MI300A APUs started earlier this month to support the El Capitan exascale supercomputer, and we are on track to begin production shipments of Instinct MI300X GPU accelerators to lead cloud and OEM customers in the coming weeks. On the software side, we further expanded our AI software ecosystem and made great progress enhancing the performance and features of our ROCm software in the quarter. In addition to ROCm being fully integrated into the mainline PyTorch and TensorFlow ecosystems, Hugging Face models are now regularly updated and validated to run on Instinct accelerators and other supported AMD AI hardware. AI startup Lamini announced it achieved software parity with CUDA for LLMs running on Instinct MI250 GPUs, enabling enterprise customers to easily deploy production-ready LLMs fine-tuned for their specific data on Instinct MI250 GPUs with minimal code changes. We also strengthened our AI software capabilities with the strategic acquisitions of Mipsology and Nod.ai. Mipsology is a longstanding partner with proven expertise delivering AI software and solutions running on top of our adaptive SoCs for data center, edge, and embedded markets.
Nod.ai adds a highly experienced team with a track record of substantial contributions to open source AI compilers and industry-leading software already used by many of the largest cloud, enterprise, and AI companies. Nod's compiler-based automation software can significantly accelerate the deployment of highly performant AI models optimized for our Instinct, Ryzen, EPYC, Versal, and Radeon processors. Based on the rapid progress we are making with our AI roadmap execution and purchase commitments from cloud customers, we now expect data center GPU revenue to be approximately $400 million in the fourth quarter and exceed $2 billion in 2024 as revenue ramps throughout the year. This growth would make MI300 the fastest product to ramp to a billion dollars in sales in AMD history. I look forward to sharing more details on our progress at our December AI event. Turning to our client segment, revenue increased 42% year-over-year and 46% sequentially to $1.5 billion. Sales of our Ryzen 7000 processors featuring our industry-leading Ryzen AI on-chip accelerator grew significantly in the quarter as inventory levels in the PC market normalized and demand began returning to seasonal patterns. Revenue for our latest generation client CPUs powered by our Zen 4 core more than doubled sequentially as we saw strong demand for our Ryzen 7000 series notebook and desktop processors that deliver both leadership energy efficiency and performance across a wide range of workloads. In commercial, we launched our first Threadripper Pro workstation CPUs based on our Zen 4 core that deliver unmatched performance for multi-threaded professional design, rendering, and simulation applications. Dell, HP, and Lenovo announced an expanded set of workstations powered by new Threadripper Pro processors as we focus on growing this margin-accretive portion of our client business.
Looking forward, we are executing a multi-year Ryzen AI roadmap to deliver leadership compute capabilities built on top of Microsoft's Windows software ecosystem to enable the new generation of AI PCs that will fundamentally redefine the computing experience over the coming years. Now turning to our gaming segment, revenue declined 8% year-over-year and 5% sequentially to $1.5 billion as lower semi-custom revenue was partially offset by increased sales of Radeon GPUs. Although semi-custom SoC sales declined in line with our projections for this point in the console cycle, overall revenue for this console generation continues tracking significantly higher than the prior generation based on strong demand for Microsoft and Sony consoles. In gaming graphics, revenue grew both year-over-year and sequentially, driven by increased demand in the channel. We expanded our Radeon 7000 series with the launch of new RX 7000 series enthusiast desktop GPUs that offer leadership price performance for 1440p gamers. Turning to our embedded segment, as we expected, revenue decreased 5% year-over-year to $1.2 billion. Sequentially, revenue declined 15% as lead times normalized and customers focused on reducing inventory levels. We expanded our leadership Versal SoC portfolio in the quarter with the launches of our first adaptive SoCs with on-chip HBM memory that deliver significant performance and efficiency for memory-bound data center, network, test, and aerospace applications. We also announced our next-generation space-grade Versal SoC that integrates an enhanced AI engine and is the industry's only solution that supports unlimited reprogramming during development and after deployment. For the fintech market, we launched our latest Alveo accelerator card that delivers a 7x improvement in latency compared to our prior generation and has already been deployed by multiple trading firms in their ultra-low latency trading platforms.
Since closing our acquisition of Xilinx a little over a year and a half ago, our embedded business has grown significantly, driven by our leadership products. Looking ahead, based on our current visibility, we expect embedded segment revenue to decline sequentially as customers continue working through elevated inventory levels through the first half of 2024. Over the medium term, we see strong growth opportunities for our embedded business based on our significant design win traction and our broad and differentiated portfolio of embedded FPGAs, CPUs, GPUs, and adaptive SoCs that can address a larger portion of our customers' compute needs. In summary, I'm pleased with our third quarter financial results, driven by the significant acceleration of Zen 4 server and client processor sales. Looking at the next couple of quarters, we expect strong growth in our data center business, driven by both EPYC and Instinct processors. This growth will be partially offset by softening demand in our embedded business and lower semi-custom revenue, given where we are in the console cycle. As the PC market returns to seasonal patterns, we believe we are well positioned to gain profitable share in the premium and commercial portions of the market, based on the strength of our product offerings. We are focused on accelerating our leadership AI capabilities across our entire product portfolio, executing on our hardware and software roadmaps, and expanding our enterprise computing footprint. I look forward to sharing more details on our AI progress in a few weeks at our Together We Advance AI event. Now I'd like to turn the call over to Jean to provide additional color on our third quarter results and our outlook for Q4. Jean?
Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results for the third quarter and then provide our current outlook for the fourth quarter of fiscal 2023. We delivered better than expected third quarter results with revenue of $5.8 billion and diluted earnings per share of $0.70. On a year-over-year basis, revenue increased 4% as growth in the client segment revenue was partially offset by lower gaming and embedded segment revenue. Revenue was up 8% sequentially, driven by growth in both the client and the data center segments. Gross margin was 51%, up approximately one percentage point year-over-year, primarily driven by stronger client segment revenue and the product mix. Operating expenses were $1.7 billion, an increase of 12% year-over-year, primarily driven by higher R&D investment to support our significant AI growth opportunity. Operating income was $1.3 billion, representing a 22% operating margin. Taxes, interest expense, and other was $141 million. For the third quarter, diluted earnings per share was $0.70, compared to $0.67 in the same period last year. Now turning to our reportable segments. Starting with the data center segment, revenue was $1.6 billion, flat year-over-year, as growth in EPYC processor sales was offset by a decline in adaptive SoC product sales. Data center revenue grew 21% sequentially, primarily driven by strong sales of our fourth-generation EPYC processors to both cloud and enterprise customers. Data center segment operating income was $306 million, or 19% of revenue, compared to $505 million, or 31%, a year ago. Lower operating income was primarily due to increased R&D investment to support future AI revenue growth and product mix. Client segment revenue was $1.5 billion, up 42% year-over-year, primarily driven by higher sales of Ryzen mobile processors. On a sequential basis, revenue grew 46%.
PC market conditions continued to improve, and we ramped our Ryzen 7000 series to meet strong demand. Client segment operating income was $140 million, or 10% of revenue, compared to an operating loss of $26 million a year ago, driven by higher revenue and disciplined OPEX management. We are pleased that the client segment returned to profitability in the third quarter. Gaming segment revenue was $1.5 billion, down 8% year-over-year, primarily due to a decrease in semi-custom revenue, partially offset by an increase in Radeon GPU sales. On a sequential basis, gaming segment revenue declined 5%, in line with our expectations, as we are now in the fourth year of the console cycle. Gaming segment operating income was $208 million, or 14% of revenue, compared to $142 million, or 9%, a year ago, primarily driven by higher Radeon GPU revenue. Embedded segment revenue was $1.2 billion, down 5% year-over-year, primarily due to lower sales to the communication market. On a sequential basis, embedded segment revenue declined 15%, primarily due to inventory correction at customers in several end markets. Embedded segment operating income was $612 million, or 49% of revenue, compared to $635 million, or 49%, a year ago. Turning to the balance sheet and the cash flow, during the quarter, we generated $421 million in cash from operations, and free cash flow was $297 million. In the fourth quarter, we expect to pay approximately $550 million in cash taxes, primarily due to previously deferred taxes from California disaster relief efforts made available by the IRS. Inventory decreased sequentially by $122 million to $4.4 billion. At the end of the quarter, cash, cash equivalents, and short-term investments were strong at $5.8 billion. We returned $511 million to shareholders, repurchasing 4.8 million shares, and we have $5.8 billion in remaining share repurchase authorization. Now, turning to our fourth quarter 2023 outlook.
We expect revenue to be approximately $6.1 billion, plus or minus $300 million, an increase of approximately 9% year-over-year and 5% sequentially. Year-over-year, we expect revenue for the data center and the client segments to be up by a strong double-digit percentage, the gaming segment to decline given where we are in the console cycle, and the embedded segment to decline due to additional softening of demand in the embedded market. Sequentially, we expect the data center segment to grow by a strong double-digit percentage, client segment revenue to increase, and the gaming and embedded segments to decline by a double-digit percentage. We expect non-GAAP gross margin to be approximately 51.5%, non-GAAP operating expenses to be approximately $1.74 billion, non-GAAP effective tax rate to be 13%, and the diluted share count is expected to be approximately 1.63 billion shares. In closing, I'm pleased with our execution in the third quarter, with year-over-year growth in revenue, gross margin, and earnings per share. In the fourth quarter, we expect to benefit from strong data center and client momentum, driven by the MI300 AI accelerator ramp and the strength of our high-performance leadership Zen 4 family of products, despite lower sales in the gaming segment and additional softening of demand in the embedded market. Looking ahead, the investments we are making in AI across our data center, client, gaming, and embedded segments enable us to offer one of the industry's broadest portfolios, targeting the most compelling opportunities and positioning us to drive long-term profitable growth. With that, I'll turn it back to Mitch for the Q&A session.
Thank you, Jean. John, we're happy to poll the audience for questions.
Thank you, Mitch. We will now be conducting a question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad. A confirmation tone will indicate that your line is in the queue. You may press star 2 if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. We ask that you please limit yourself to one question and one follow-up. Thank you. One moment, please, while we poll for questions. And the first question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.
Great. Thank you so much. Lisa, I had two questions. My first one is on the data center GPU business. You talked about 2024 revenue potentially exceeding $2 billion. I was hoping you could provide a little bit more color. What percentage of this is AI versus supercomputing or other applications? Within AI, maybe talk about the breadth of your customer lineup, and how should we think about which workloads you're addressing, again, within the context of AI? Is it primarily training or inference or both?
Great. Thanks, Toshiya, for the question. So, look, we've made significant progress on the overall MI300 program. I think we're very happy with how the technical milestones look, and then also we've made significant progress from a customer side. Now, your question as to, you know, how the revenue evolves. The way to think about it is in the fourth quarter, we said revenue would be approximately $400 million, and that's mostly HPC with the start of our AI ramp. And then as we go into the first quarter, we actually expect revenue to be approximately similar in that $400 million range, and that will be mostly AI, so with a very small piece being HPC. And as we go through 2024, we would expect revenue to continue to ramp quarterly. And again, it will be mostly AI. Within the AI space, we've had very good customer engagement across the board from hyperscalers to OEMs, enterprise customers, and some of the new AI startups that are out there. From a workload standpoint, we would expect MI300 to be on both training and inference workloads. We're very pleased with the inference performance on MI300, especially for large language model inference given some of our memory bandwidth and memory capacity. We think that's going to be a significant workload for us, but I think we would see a broad set of workloads as well as broad customer adoption.
Thank you. And then as my follow-up, a question on the server CPU side. You talked about Genoa growing really nicely in the quarter. I think you talked about both units and revenue being bigger than its predecessor. Is the growth that you're seeing or the growth that you saw in Q3 and the growth that you're guiding to for Q4, is this primarily a function of share growth, or are you actually seeing a pickup in the overall market? And I ask the question because, obviously, year-to-date there's been a significant shift away from traditional compute to accelerated computing, but are you actually starting to see signs of stabilization or even improvement on the traditional compute side? Thank you.
Sure. So the way I would frame it is we're very pleased with our third quarter performance as it relates to EPYC overall. I think the fourth gen EPYC, so that's Genoa plus Bergamo, actually ramped very nicely. We got to a crossover in the third quarter, which is a little bit ahead of what we had previously forecasted. And when I look underneath that, I would say there was strong growth in both cloud and enterprise. Cloud was strong, so strong double digits. The adoption is pretty broad across first-party and third-party workloads and new instances. And then on the enterprise side, we've also seen some nice growth across our OEMs. And so from the standpoint of is it the market recovery or is it share gain, I think it's some of both. From a market standpoint, I would say it's still mixed. I think enterprise is still a little bit mixed depending on sort of which region from a macroeconomic standpoint. Cloud depends a bit on the customer set. But overall, I think we're pleased with the progress, and the leadership of EPYC has ended up allowing us to grow substantially in the third quarter and then into the fourth quarter.
And the next question comes from the line of Aaron Rakers with Wells Fargo. Please proceed with your question.
Yeah, thank you for taking the question. Just to build off that last question, you know, Jean, I think last quarter you kind of endorsed the notion that your data center business would grow. I think it was in the high single-digit range. I think you started the year thinking like 10. So I guess the question is, do you still see that kind of growth rate, you know, set up? And how has that $400 million evolved, you know, underneath of that? Was it $300 million now going to $400 million? Just how has that changed over the course of the last quarter, you know, just to level set that data center expectation?
Yeah, so I think for the second half, we said we expect the data center business to grow approximately 50% versus the first half. Right now, based on what we are seeing, we continue to see that similar range of 50%. So we are very happy and pleased about the strong momentum of our data center business. On the GPU side, Lisa mentioned about $400 million, around $400 million. As we went through the quarter, we had strong engagement with the customers, so we do see the progress continuing, and we see customers placing orders. That's why, as we went through the quarter, we became increasingly confident about the revenue profile in Q4 we are guiding.
Yeah, and Aaron, if I can just add to that, I think what we've seen is the adoption rate of our AI solutions has given us confidence in not just the Q4 revenue number, but also sort of the progression as we go through 2024.
Yeah, that's helpful. And maybe just the follow-up, how would you characterize the supply side of the equation? You know, as you look at that 2 billion number, do you feel confident that you've got adequate visibility in the supply side to hit those expectations? You know, any update on that side?
Sure, Aaron. So, you know, we've been, you know, planning the supply chain for the last year, and, you know, we're always planning for success. So certainly for the current forecast of greater than $2 billion, we have adequate supply. But we have also planned for a supply chain forecast that could be significantly higher than that, and we would continue to work with customers to build that out.
And the next question comes from the line of Joe Moore with Morgan Stanley. Please proceed with your question.
Great. Thank you. Following up on the data center GPU, can you talk about the breadth of customers that you might see there? I assume it's fairly concentrated in year one, but you also did mention multiple hyperscalers. Can you just give us a sense for how concentrated that might be?
Yeah, sure, Joe. So we've been engaging broadly with, you know, the customers that I think in the last earnings call we said that our engagements had increased seven times. And so there is a lot of interest in MI300. We will start, let's call it more concentrated in cloud, you know, sort of several large hyperscalers. But we're also very engaged across the enterprise, and there's a lot of interest. Our partnerships with the OEMs are quite strong. And when we think about sort of the breadth of customers who are looking for AI solutions, we certainly see an opportunity, especially as we get beyond the initial ramp to broaden the customer set.
Great. And now that you're getting a look at volume in that space, can you talk about are the gross margins there going to be comparable to your other data center businesses?
Yeah, so on the gross margin side, we do expect our GPU gross margin to be accretive to the corporate average. Of course, right now we're at the very, very early beginning of the ramp of the product. As you probably know, typically when you ramp a new product, it takes some time to improve yield, testing time, and manufacturing efficiency. So typically, it takes a few quarters to ramp the gross margin to a normalized level. But we are quite confident that our team is executing really well.
And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.
Thank you. Lisa, I also wanted to ask about that $2 billion number for data center GPU next year. That's still a pretty small portion, obviously, of the total TAM. Where do you think that share can go? Do you think when we look at this out a couple years, do you think you can be 15%, 20% share for total data center GPU, or do you have aspirations to be even larger than that?
Yeah, Tim, I mean, I would say that, first of all, this is an incredibly exciting market, right? I think we all see the growth in generative AI workloads, and the fact is we're just at the very early innings of people truly adopting it for enterprise business productivity applications. So I think we are big believers in the strength of the market. We previously said we believe that the compound annual growth rate could be 50% over the next three or four years. And so we think the market is huge, and there will be multiple winners in this market. Certainly from our standpoint, we're playing to win, and we think MI300 is a great product, but we also have a strong roadmap beyond that for the next couple of generations. And we would like to be a significant player in this market, so we'll see how things play out. But overall, I would say that I am encouraged with the progress that we're making on hardware and software, and certainly with the customer set.
Thanks a lot. And then, Jean, I just wanted to ask on March. I know that there's a lot of moving parts. It sounds like data center is up, but PC is going to be down normal seasonal, and embedded and, you know, gaming sound down as well. So can you just help us shape sort of how to think about March? Is it down a smidge? Is it flat? Could it be up a little bit? And maybe then how to think about first half versus back half next year, if you even wanted to go there. Thanks.
Tim, we're guiding one quarter at a time. But just to help you with some of the color, as Lisa mentioned earlier, we said the data center GPU revenue will be flat-ish sequentially. That's the first thing, right? The mix will shift from El Capitan, majority in Q4, to predominantly more for AI in Q1. So because of the long lead time manufacturing cycle, we feel like it's going to be a similar level of revenue for the data center GPU. But in general, if you look at our business, we do have a seasonality, typically in Q1: the client business, the server business, and the gaming business are seasonally down. Of course, right now we definitely have a little bit more than seasonality given the embedded and gaming dynamics we are seeing right now. Server and client typically are down sequentially, seasonally, too. But overall, I think we are really focused on just execution. We probably can provide more color when we get close to Q1 2024. And especially, Lisa, please add if we have any color we can provide on the whole year 2024.
Yeah, no, I think that covers it. When we look at the various pluses and minuses, I think we feel very good about the data center business. It continues to be a strong growth driver for us as we think about 2024, for both server as well as our MI300. Client, as well, we think incrementally improves from a market standpoint, and we believe we can gain share given the strength of our product portfolio. And then we have the headwinds of embedded and the inventory correction that we'll go through in the first half, and the console cycle. So I think those are the puts and takes.
And our next question comes from the line of Vivek Arya with Bank of America. Please proceed with your question.
Thanks for taking my question. Lisa, on the MI300, many of your hyperscaler customers have internal ASIC solutions ready or in the process of getting them ready. So if inference is the primary workload for MI300, do you think it is exposed to replacement by internal ASICs over time, or do you think both MI300 and ASICs can coexist, right, along with the incumbent GPU solution?
Yeah, I think, Vivek, you know, when we look at the set of AI workloads going forward, we actually think they're pretty diverse. I mean, you have sort of the large language model training and inference, then you have what you might do in terms of fine-tuning off of a foundational model, and then you have, let's call it straight inference, what you might do there. So I think within that framework, we absolutely believe that MI300 has a strong place in the market, and that's what our customers are telling us, and we're working very closely with them. So yes, I think there will be other solutions, but I think particularly for the LLMs, I think GPUs are going to be the processing of choice, and MI300s are very capable. Got it.
And then a question, Lisa, on just this interplay between AI and traditional computing. You know, it seems like, especially when it relates to ASPs and units, it seems like server CPU makers are kind of holding the line on price per core. But at the same time, the cloud players are extending the depreciation and replacement cycles of traditional server CPUs. So I'm just curious to get your take. What do you think is the interplay between units and ASPs? If you were to take a snapshot of what you have seen in '23 and how it kind of informs you as you look at '24, that is, is it possible that maybe unit growth in servers is not that high, but you are able to make up for it on the ASP side? So just give us some color on, you know, one, what is happening to traditional computing deployments, and secondly, is there a difference in kind of the unit and ASP interplay on the server CPU side?
Yeah, I think it's a good point, Vivek. So if I take a look at 2023, I think it's been a mixed environment. There was a good amount of, let's call it, caution in the overall server market. There was a bit of inventory digestion at some of the cloud guys, and then some optimization going on with enterprise, again somewhat mixed. I think as we go forward, we've returned to growth in the server CPU market. Within that realm, because, for example, 4th Gen EPYC has somewhere between 96 and 128 cores, you just get a lot of compute for that. So I do think there is the framework that unit growth may be more modest, but ASP growth, given the core count and the compute capability, will contribute to overall growth. So from a traditional server CPU standpoint, I think we do see those trends. 2023 was a mixed environment, and I think it improves as we go into 2024.
And the next question comes from the line of Blaine Curtis with Barclays. Please proceed with your question.
Thanks for taking my question. I want to ask on the embedded side. I think last quarter you kind of talked about the headwinds being mostly in the communications end market, and you're guiding it down in December. I'm just curious, you know, if that weakness has spread. And then your competitor talked about kind of a reset getting back to pre-pandemic levels. I'm just kind of curious how you frame that reset. You said it'd be weak through the first half.
Yeah, absolutely, Blaine. So, I think when we look at end markets, I think communications was weak last quarter and it certainly continues to be weak. We see 5G capex just down overall. The other market where we see a little bit of, let's call it, soft end market demand would be industrial, and that's a little bit more geographic, a little bit worse in Europe than in other geographies. The other end markets are actually relatively good. And what we just see is that inventory is high, just given where lead times were coming through the pandemic and with the high demand that was out there. As the lead times have normalized, you know, people are drawing down their inventories, and they have an opportunity to do that given the normalization. So from an overall standpoint, we think demand is solid. And what we view is that we have a very strong portfolio at Embedded. We like the combination of, let's call it, the classic Xilinx portfolio together with the embedded processing capabilities that we add. Customers have seen that portfolio come together, and we've gotten some nice design win traction as a result of that. So we have to get through the next couple of quarters of inventory correction, and then we believe we'll return to growth in the second half of the year.
Thanks. And then I just wanted to ask on the PC market. You know, I think you and Intel were undershipping in the first half, and maybe you're kind of overshipping a little bit now, restocking. I'm just kind of curious your perspective on what the normalized run rate is in terms of the size of the PC market and, you know, any perspective on whether inventory levels are starting to move back up.
Yeah, I would say, again, Blaine, when we look at the third quarter and the environment that we're in now, I think inventory levels are relatively normalized. And so, you know, sell-in and consumption are fairly close. We were building up for a holiday season that is a strong season for us overall. When I think about the size of the market, I think from a consumption standpoint this year is probably somewhere like 250 to 255 million units or so. We expect some growth going into 2024 as we think about the AI PC cycle and some of the Windows refresh cycles that are out there. And I think the PC market returns to, let's call it, typical seasonality, and underneath that we have a strong product portfolio, and we are very much focused on growing in places like high-end gaming, ultra-thins, premium consumer, as well as commercial. So that's how we see the PC market.
And the next question comes from the line of Matt Ramsey with TD Cowen. Please proceed with your question.
Thank you very much. Good afternoon. Lisa, I wanted to maybe ask the AI question a little bit differently, not just focused on your GPU portfolio, but more broadly. I think one of the big surprises to a lot of us is how quickly the AI market changed from accelerator cards to selling full servers or full systems for your primary competitor. And they've done a lot of innovation, not just on GPU, but on CPU, on their own custom interconnect, etc. So what I'd like to hear is a little bit of an update on just how you think about your roadmap going forward across CPU, GPU, and networking, and particularly the networking part, as you look to continue to advance your AI portfolio. Thanks.
Yeah, thanks, Matt. I think it's an important point. What we're seeing with these AI systems is they are truly complicated when you think about putting all of these components together. We are certainly working very closely with our partners in putting together the full system: CPUs, GPUs, as well as the networking capability. Our Pensando acquisition has actually been really helpful in this area. I think we have a world-class team of experts there, and we're also partnered with some of the networking ecosystem overall. So going forward, I don't think we're going to sell full systems, let's call it AMD-branded systems. We believe that there are others who are more set up for that. But from a definition standpoint, when we're doing development, we're certainly doing development with the notion of what that full system will look like. And we'll work very closely with our partners to ensure that that's well-defined so that it's easy for customers to adopt our solutions.
Thank you for that perspective. As my second question, Jean, I wanted to dig into gross margin a little bit, and, I guess, compliment you and the team on being able to guide gross margin up sequentially for the fourth quarter. If we rewound the clock back to the beginning of the year and knew the embedded segment would be down from the peak to where you're guiding the fourth quarter, maybe down by a third, I wouldn't have thought gross margin would have hung in as well and grown sequentially each quarter through the year. Obviously, client margins got better, but maybe you could walk us through some of the puts and takes on gross margin inside of each segment where you're making progress, because I imagine some of that progress is pretty positive underneath.
Yeah, Matt, thank you for the question. Yeah, there are a few puts and takes, especially in a mixed demand environment. So let me just comment on Q3 first. We are very pleased with our gross margin expansion sequentially, 140 basis points, as you mentioned, even though the embedded segment revenue actually declined double digits sequentially. There are two primary drivers. The first one is definitely that the data center grew 21% sequentially, which provides a tailwind to our gross margin. Secondly, as we went through the inventory correction in the PC market, we did encounter some headwinds in the client segment gross margin, and in Q3 we saw very significant improvement in our client segment gross margin. I think going forward, the pace of client segment improvement will moderate, but it will continue to drive incremental gross margin improvement in the client segment. So that really is why we are able to drive sequential growth in Q3. And in Q4, I would say the major dynamic is very strong double-digit growth in the data center business. We definitely have that tailwind, which more than offsets the embedded segment declining sequentially double digits again. I think going forward, it's really mix. Primarily, mix is driving our gross margin. We feel pretty good about the second half of next year, when we can expand the data center significantly and especially the embedded segment starts to recover. We should be able to drive more meaningful gross margin improvement in the second half.
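[Editor's note: Jean's point that mix drives the blended gross margin can be sketched with a small, purely illustrative calculation. All segment revenues and margins below are made-up numbers for illustration, not AMD's actual figures.]

```python
def blended_gross_margin(segments):
    """segments: list of (revenue, gross_margin) pairs; returns the blended margin."""
    total_revenue = sum(revenue for revenue, _ in segments)
    total_gross_profit = sum(revenue * margin for revenue, margin in segments)
    return total_gross_profit / total_revenue

# Hypothetical quarter-over-quarter mix shift: data center grows while
# embedded and gaming decline (per-segment margins held constant).
# Order: data center, client, embedded, gaming.
q3 = [(1600, 0.55), (1500, 0.48), (1200, 0.65), (1500, 0.30)]
q4 = [(2000, 0.55), (1500, 0.48), (1000, 0.65), (1200, 0.30)]

print(round(blended_gross_margin(q3), 3))  # 0.488
print(round(blended_gross_margin(q4), 3))  # 0.496
```

Even with the higher-margin embedded segment shrinking in this toy example, growth in data center revenue more than offsets it, which is the mix dynamic being described.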
And the next question comes from the line of Ross Seymour with Deutsche Bank. Please proceed with your question.
Hi, Lisa. I have a question on the MI300 side of things. When you go to market, obviously there have been shortages of GPU accelerators this year, and so a second source is definitely needed. But beyond just filling that second-source role, can you walk us through some of the competitive advantages that the customer list you're going to talk about on the 6th is finding to be so attractive relative to your primary competitor?
Yeah, I think there's a couple of different things, Ross. I mean, if we start with, it's just a very capable product. The way it's designed from a chiplet standpoint, we have very strong compute as well as memory capacity and memory bandwidth. In inference in particular, it's very helpful. And the way to think about it is, on these large language models, you can't fit the model on one GPU. You actually need multiple GPUs. And if you have more memory, you can actually use fewer GPUs to infer those models, and so it's very beneficial from a total cost of ownership standpoint. From a software standpoint, this has been perhaps the area where we've had to invest more and do more work. Our customers and partners are actually moving towards an area where they're more able to move across different hardware, really optimizing at the higher-level frameworks, and that's reducing the barrier to entry of taking on a new solution. And we're also talking very much about what the roadmap is going forward. It's very similar to our EPYC evolution. When you think about our closest partners in the cloud environment, we've worked very closely to make each generation better. So I think MI300 is an excellent product, and we'll keep evolving on that as we go through the next couple of generations.
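[Editor's note: the memory-capacity argument for inference can be illustrated with back-of-the-envelope arithmetic. The sketch below assumes a 1.2x overhead factor for activations and KV cache, and the 192 GB and 80 GB HBM capacities are chosen only as illustrative per-GPU sizes.]

```python
import math

def gpus_needed(params_billion, bytes_per_param, hbm_gb, overhead=1.2):
    """Minimum GPU count whose combined HBM can hold the model weights.

    overhead is a rough, assumed allowance for activations and KV cache.
    """
    weights_gb = params_billion * bytes_per_param  # 1e9 params * bytes each = GB
    return math.ceil(weights_gb * overhead / hbm_gb)

# A 70B-parameter model served at 16-bit precision (2 bytes per parameter):
print(gpus_needed(70, 2, 192))  # 1 GPU at 192 GB of HBM
print(gpus_needed(70, 2, 80))   # 3 GPUs at 80 GB of HBM
```

Fewer GPUs per served model is where the total-cost-of-ownership benefit comes from; a real sizing exercise would also account for batch size, context length, and the parallelism strategy.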
For my follow-up, I'm going to focus on the OPEX side of things. You guys have kept that pretty tight over the years. Jean, I just wondered what the puts and takes on that might be heading into 2024. I think you're exiting this year at about kind of high single digits, maybe 10% year over year. Any sort of unique puts and takes, especially as you guys are driving for all that MI300 success, as we think about OPEX generally in 2024?
Thanks for the question. Our team has done an absolutely great job in reallocating resources within our budget envelope to really invest in the most important areas in AI and the data center. We are actually in the planning process for 2024, so I can comment at a very high level: given the tremendous opportunities we have in AI and the data center, we definitely will increase both R&D investment and go-to-market investment to address those opportunities. I think the way to think about it is our objective is to drive top-line revenue growth much faster than OPEX growth, so our investment can drive long-term growth, and we also can leverage our operating model to expand earnings much faster than revenue. That's really how we think about running the company and driving operating margin expansion.
And the next question comes from the line of Harsh Kumar with Piper Sandler. Please proceed with your question.
Yeah. Hi, Lisa. I had a strategic one for you and then somewhat of a tactical one. On the strategic side, as your key competitor is sort of getting their act together on manufacturing technology and nodes, would it not be feasible to think that their manufacturing costs could be significantly better, let's say, than yours? And so if that's the case down the line, a year, two years out, I'm curious what kind of value-add offerings AMD would have to provide to a customer to keep the market share that you have in the server space, the data center space, and then keep that growing as well?
Yeah, Harsh, maybe I should just take a step back and talk about the engagement that we have with our data center customers. When we think about the EPYC portfolio and what we've been able to build over the last few generations, and what we have going forward with Zen 5 and beyond, process technology is only one piece of the optimization. It's really about process technology, packaging (we're leading the usage of chiplets and 2.5D and 3D integration), and then architecture and design. So it's really the holistic product. And from a pricing standpoint, actually, price is only one aspect of the conversation. Much of the conversation is on how much performance can you give me at what efficiency. So from an overall efficiency standpoint, I think we've developed fantastic products. We are working closely with our customers to ensure that we continue to evolve our overall portfolio. So from a value-added standpoint, providing the best TCO is what our customers are looking for, and that's where our roadmap is headed. Going forward, I think having the CPU, the GPU, the FPGAs, the DPUs gives us a nice portfolio to optimize not just on a single-component basis, but across all of the different workloads that you need in the data center.
Very helpful, Lisa. And then for my follow-up, a lot of folks that we talk to think that compute game is shifting completely from CPUs to GPUs. So it was actually very encouraging to hear you talk about your core EPYC CPUs and the traction that you're seeing with the new generation of CPUs. So I'm curious if I was to ask you, you know, how you think your long-term growth prospects for the next, call it two to three to four years are for your CPU business, not the GPU, but the CPU business. I'm curious what the answer would be.
Yeah, so look, I'm a big believer that you need all types of compute in the data center, especially when you look at the diverse set of workloads. There's a lot of excitement around AI, and we are very clear that that is the number one priority from a growth standpoint going forward. But on the EPYC CPU business, we feel like we've consistently gained share throughout the last few years. And even with that, we're still underrepresented in large portions of the market. We're underrepresented in enterprise. We've seen some nice sequential growth and nice prospects there, but there's a lot more we can do in enterprise. And we're still underrepresented in cloud third-party workloads, which, again, you have to sell through the cloud providers. So I think overall, we feel good about our EPYC leadership and also our go-to-market efforts that will help us continue to grow that business in 2024 and beyond.
Operator, we have time for two more questions.
Okay, and the next question comes from the line of Stacy Rasgen with Bernstein Research. Please proceed with your question.
Hi, guys. Thanks for taking my questions. First, I wanted to just, like, dial in on the Q4 guidance. If you are going to grow data center 50% half over half, and I assume client is up sequentially, it implies gaming and embedded both likely down sequentially in the 20% range. I know you said double digits. But is that right, and if that is true, especially for embedded, what does that mean going forward into next year? I know you said it's going to be weak in the first half. Does that mean, I mean, is it stable at these levels, or does it continue to decline through the first half until things stabilize? Just how do I think about that in the context of the guidance that you've given for Q4?
Yeah, sure, Stacy. Let me take that, and then Jean might add a few comments. So without getting very specific, I would say I think your comments about data center and client are correct. And then from an embedded and gaming standpoint, we would say embedded, think about it down at similar levels, sort of in the teens: Q3 was down in the teens, and Q4 will be down in the teens. And then gaming, from a console standpoint, we do expect that to be down a bit more than that. And then as we go into Q1, again, without being specific, there are lots of things that need to happen. We would expect that both gaming and embedded would be down into Q1 as well. And the other comments would be more around seasonality. Does that help?
That does help. Thank you. For my follow-up, I wanted to ask about gross margin. So I know that margins have been expanding through the year, but for the full year, they're actually down. And I get the mix dynamics and everything else. But as I look into next year, how do I think about this? Because it sounds like embedded is going to be pretty weak next year, client is what it is, and data center is growing, but it does feel like even if the GPUs become accretive, they're not accretive yet, and it's going to take them a while to get to be accretive. How much do you think you can expand gross margins year over year, like in '24 versus '23, given the trends that we have entering the year?
Yeah, hi, Stacy. I'll say the first thing is, if you look at 2023, it's a very unusual year for the industry, right? Especially the PC market; it's one of the worst down cycles of the last three decades. So during that kind of down cycle, definitely we had headwinds on the gross margin side in our client business, on which we have made significant progress in Q3 and Q4, in the second half. Going into next year, mix primarily is the driver of our gross margin. The way to think about it is that the data center is going to be the largest incremental revenue contributor next year, with both gaming and embedded facing continued sequential decline. I think it's all about the mix. We do expect next year we'll improve gross margin versus 2023, especially in the second half. So that's how we think about it right now.
And our final question comes from the line of Christopher Roland with Susquehanna. Please proceed with your question.
Thanks for the question. There was an article suggesting that you guys could be interested in doing some Arm-based CPUs. I guess I'd love any thoughts that you have on that architecture for PC. But also, Apple has their M3 out now, and it seems pretty robust. Qualcomm has its new X Elite chip, and it was rumored NVIDIA might be doing that as well. I'd love your expectations for this market, and what does that mean for the TAM for AMD moving forward?
Yeah, sure, Chris. Thanks for the question. So, look, the way we think about Arm: Arm is a partner in many respects. So we use Arm throughout parts of our portfolio. I think as it relates to PCs, x86 is still the majority of the volume in PCs. And if you think about the ecosystem around x86 and Windows, I think it's been a very robust ecosystem. What I'm most excited about in PCs is actually the AI PC. I think the AI PC opportunity is an opportunity to redefine what PCs are in terms of a productivity tool, really operating on user data. And so I think we are at the beginning of a wave there. We are investing heavily in Ryzen AI and the opportunity to really broaden the AI capabilities of PCs going forward. And I think that's what the conversation is going to be about. It's going to be less about what instruction set you are using, and more about what experience you are delivering to customers. And from that standpoint, I think that we have a very exciting portfolio that I feel good about over the next couple of years.
Thank you, Lisa. And one quick one on FPGA for the data center in particular. That was a really cool fintech win, I understand. Can you talk about where we stand in data center FPGA and the outlook for FPGA in AI? And could we even mix an FPGA into the MI300 tiles at some point, or is there really, at this point, not an AI market for FPGA?
Yeah, I mean, Chris, the way I think about FPGAs in the data center: it's another compute element. We do use FPGAs, or there are FPGAs, in a number of the systems. I would say from a revenue contribution standpoint, it's still relatively small in the near term. We have some design wins going forward where we would see that content grow, but that won't be so much in 2024; it will be beyond that. And part of our value proposition, I think, to our data center partners is: look, whatever compute element you need, whether it's CPUs or GPUs or FPGAs or DPUs, we have the ability to bring those components together. And that is a strong point as we think about just how heterogeneous these data centers are going forward. So thank you for that.
At this time, we have reached the end of the question and answer session, and I would like to turn the floor back over to Mitch for any closing comments.
Great, John. That concludes today's call. Thank you to everyone for joining us today.
Ladies and gentlemen, this does conclude today's teleconference. You may disconnect your lines at this time. Thank you for your participation.