2/4/2025

speaker
Operator
Operator

Greetings and welcome to the AMD fourth quarter and full year 2024 conference call. At this time, all participants are in a listen-only mode. A question and answer session will follow the formal presentation. If anyone should require operator assistance, please press star zero on your telephone keypad. And as a reminder, this conference is being recorded. It is now my pleasure to introduce to you Matt Ramsey, Vice President of Investor Relations. Thank you, Matt. You may begin.

speaker
Matt Ramsey
Vice President of Investor Relations

Thank you and welcome to AMD's fourth quarter and full year 2024 financial results conference call. By now, you should have had the opportunity to review a copy of our earnings press release and accompanying slides. If you have not had the chance to review these materials, they can be found on the investor relations page of AMD.com. We will refer primarily to non-GAAP financial measures during today's call. The full non-GAAP to GAAP reconciliations are available in today's press release and slides posted on our website. Participants in today's conference call are Dr. Lisa Su, our Chair and Chief Executive Officer, and Jean Hu, our Executive Vice President and Chief Financial Officer and Treasurer. This is a live call and will be replayed via webcast on our website. Before we begin, I would like to note that Jean Hu will attend the Morgan Stanley Global TMT Conference on Monday, March 3rd. Today's discussion contains forward-looking statements based on our current beliefs, assumptions, and expectations. These statements speak only as of today and involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on factors that could cause our actual results to differ materially. With that, I will hand the call over to Lisa. Lisa?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Thank you, Matt, and good afternoon to all those listening today. 2024 was a transformative year for AMD. We successfully established our multi-billion dollar data center AI franchise, launched a broad set of leadership products, and gained significant server and PC market share. As a result, we delivered record annual revenue, grew net income 26% for the year, and more than doubled free cash flow from 2023. Importantly, the data center segment contributed roughly 50% of annual revenue as Instinct and EPYC processor adoption expanded significantly with cloud, enterprise, and supercomputing customers. Looking at our financial results, fourth quarter revenue increased 24% year-over-year to a record $7.7 billion, led by record quarterly data center and client segment revenue, both of which grew by a significant double-digit percentage. On a full-year basis, annual revenue grew 14% to $25.8 billion, as data center revenue nearly doubled and client segment revenue grew 52%, more than offsetting declines in our gaming and embedded segments. Turning to the segments, data center segment revenue increased 69% year-over-year to a record $3.9 billion. 2024 marked another major inflection point for our server business as share gains accelerated, driven by the ramp of fifth-gen EPYC Turin and strong double-digit percentage year-over-year growth in fourth-gen EPYC sales. In cloud, we exited 2024 with well over 50% share at the majority of our largest hyperscale customers. Hyperscaler demand for EPYC CPUs was very strong, driven by expanded deployments powering both their internal compute infrastructure and online services. Public cloud demand was also very strong, with the number of EPYC instances increasing 27% in 2024 to more than 1,000. AWS, Alibaba, Google, Microsoft, and Tencent launched more than 100 AMD general purpose and AI instances in the fourth quarter alone. This includes new Azure instances powered by a custom-built EPYC processor with HBM memory that delivers leadership HPC performance, offering 8x higher memory bandwidth compared to competitive offerings. We also built significant momentum with Forbes 2000 global businesses using EPYC in the cloud, as enterprise customers activated more than double the number of EPYC cloud instances from the prior quarter. This capped off a strong year of growth as enterprise consumption of EPYC in the cloud nearly tripled from 2023. Turning to enterprise on-prem adoption, EPYC CPU sales grew by a strong double-digit percentage year over year as sales increased and we closed high-volume deployments with Akamai, Hitachi, LG, ServiceNow, Verizon, Visa, and others. We are seeing growing enterprise pull based on the expanding number of EPYC platforms available and our increased go-to-market investments. Exiting 2024, there are more than 450 EPYC platforms available from the leading server OEMs and ODMs, including more than 120 Turin platforms that went into production in the fourth quarter from Cisco, Dell, HPE, Lenovo, Supermicro, and others. Looking forward, Turin is clearly the best server processor in the world with more than 540 performance records across a broad range of industry standard benchmarks. At the same time, we are seeing sustained demand for both fourth and third gen EPYC processors as our consistent roadmap execution has made AMD the dependable and safe choice.
As a result, we see clear growth opportunities in 2025 across both cloud and enterprise based on our full portfolio of EPYC processors optimized for leadership performance across the entire range of data center workloads and system price points. Turning to our data center AI business, 2024 was an outstanding year as we accelerated our AI hardware roadmap to deliver an annual cadence of new Instinct accelerators, expanded our ROCm software suite with significant uplifts in inferencing and training performance, built strong customer relationships with key industry leaders, and delivered greater than $5 billion of data center AI revenue for the year. Looking at the fourth quarter, MI300X production deployments expanded with our largest cloud partners. Meta exclusively used MI300X to serve their Llama 405B frontier model on Meta.ai and added Instinct GPUs to its OCP-compliant Grand Teton platform designed for deep learning recommendation models and large-scale AI inferencing workloads. Microsoft is using MI300X to power multiple GPT-4-based Copilot services and launched flagship instances that scale up to thousands of GPUs for AI training and inference and HPC workloads. IBM, DigitalOcean, Vultr, and several other AI-focused CSPs have begun deploying AMD Instinct accelerators for new instances. IBM also announced plans to enable MI300X on their watsonx AI and data platform for training and deploying enterprise-ready generative AI applications. Instinct platforms are currently being deployed across more than a dozen CSPs globally, and we expect this number to grow in 2025. For enterprise customers, more than 25 MI300 series platforms are in production with the largest OEMs and ODMs. To simplify and accelerate enterprise adoption of AMD Instinct platforms, Dell began offering MI300X as a part of their AI Factory solution suite and is providing multiple ready-to-deploy containers via the Dell Enterprise Hub on Hugging Face. HPC adoption also grew in the quarter. AMD now powers five of the 10 fastest and 15 of the 25 most energy-efficient systems in the world on the latest Top500 supercomputer list. Notably, the El Capitan system at Lawrence Livermore National Labs debuted as the world's fastest supercomputer, using over 44,000 MI300A APUs to deliver more than 1.7 exaflops of compute performance. Earlier this month, the High-Performance Computing Center at the University of Stuttgart launched the Hunter supercomputer that also uses MI300A. Like El Capitan, Hunter will be used for both foundational scientific research and advanced AI projects, including training LLMs in 24 different European languages. On the AI software front, we made significant progress across all layers of the ROCm stack in 2024. Our strategy is to establish AMD ROCm as the industry's leading open software stack for AI, providing developers with greater choice and accelerating the pace of industry innovation. More than 1 million models on Hugging Face now run out of the box on AMD, and our platforms are supported in the leading frameworks like PyTorch and JAX, serving solutions like vLLM, and compilers like OpenAI Triton. We have also successfully ramped large-scale production deployments with numerous customers using ROCm, including our lead hyperscale partners.
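(For context on the out-of-the-box claim: ROCm builds of PyTorch expose AMD Instinct GPUs through the same torch.cuda device API that CUDA uses, so typical Hugging Face code generally runs without modification. Below is a minimal sketch assuming a ROCm-enabled PyTorch install, the transformers library, and an illustrative model name.)

```python
# Minimal sketch: running a Hugging Face model on an AMD GPU with a ROCm build of PyTorch.
# The model name is illustrative only; on ROCm, AMD GPUs appear under the torch.cuda API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm maps Instinct GPUs to "cuda"

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # example; most causal LMs on Hugging Face work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to(device)

inputs = tokenizer("AMD Instinct accelerators are", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```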
We ended the year with the release of ROCm 6.3 that included multiple performance optimizations, including support for the latest FlashAttention algorithm that runs up to three times faster than prior versions, and the SGLang runtime that enabled day-zero support for state-of-the-art models like DeepSeek-V3. As a result of these latest enhancements, MI300X inferencing performance has increased 2.7 times since launch. Looking forward, we're continuing to accelerate our software investments to improve the out-of-the-box experience for a growing number of customers adopting Instinct to power their diverse AI workloads. For example, in January, we began delivering biweekly container releases that provide more frequent performance and feature updates in ready-to-deploy packages, and we continue adding resources dedicated to the open source community that enable us to build, test, and launch new software enhancements at a faster pace. On the product front, we began volume production of MI325X in the fourth quarter. The production ramp is progressing very well to support new customer wins. MI325X is well positioned in the market, delivering significant performance and TCO advantages compared to competitive offerings. We have also made significant progress with the number of customers adopting AMD Instinct. For example, we recently closed several large wins with MI300 and MI325 at lighthouse AI customers that are deploying Instinct at scale across both their inferencing and training production environments for the first time. Looking ahead, our next generation MI350 series featuring our CDNA 4 architecture is looking very strong. CDNA 4 will deliver the biggest generational leap in AI performance in our history, with a 35x increase in AI compute performance compared to CDNA 3. The silicon has come up really well. We were running large-scale LLMs within 24 hours of receiving first silicon, and validation work is progressing ahead of schedule. The customer feedback on the MI350 series has been strong, driving deeper and broader customer engagements with both existing and net new hyperscale customers in preparation for at-scale MI350 deployments. Based on early silicon progress and the strong customer interest in the MI350 series, we now plan to sample lead customers this quarter and are on track to accelerate production shipments to mid-year. As we look forward into our multi-year AMD Instinct roadmap, I'm excited to share that MI400 series development is also progressing very well. The CDNA Next architecture takes another major leap, enabling powerful rack-scale solutions that tightly integrate networking, CPU, and GPU capabilities at the silicon level to support Instinct solutions at data center scale. We designed CDNA Next to deliver leadership AI and HPC flops while expanding our memory capacity and bandwidth advantages and supporting an open ecosystem of scale-up and scale-out networking products. We are seeing strong customer interest in the MI400 series for large-scale training and inference deployments and remain on track to launch in 2026. Turning to our acquisition of ZT Systems, we passed key milestones in the quarter and received unconditional regulatory approvals in multiple jurisdictions, including Japan, Singapore, and Taiwan. Cloud and OEM customer response to the acquisition has been very positive, as ZT Systems' expertise can accelerate time to market for future Instinct accelerator platforms. We have also received significant interest in ZT's manufacturing business.
We expect to successfully divest ZT's industry-leading US-based data center infrastructure production capabilities shortly after we close the acquisition, which remains on track for the first half of the year. Turning to our client segment, revenue increased 58% year-over-year to a record $2.3 billion. We gained client revenue share for the fourth straight quarter, driven by significantly higher demand for both Ryzen desktop and mobile processors. We had record desktop channel sellout in the fourth quarter in multiple regions as Ryzen dominated the best-selling CPU lists at many retailers globally, exceeding 70% share at Amazon, Newegg, Mindfactory, and numerous others over the holiday period. In mobile, we believe we had record OEM PC sell-through share in the fourth quarter as Ryzen AI 300 series notebooks ramped. In addition to growing share with our existing PC partners, we were very excited to announce the new strategic collaboration with Dell that marks the first time they will offer a full portfolio of commercial PCs powered by Ryzen Pro processors. The initial wave of Ryzen-powered Dell commercial notebooks is planned to launch this spring, with the full portfolio ramping in the second half of the year as we focus on growing commercial PC share. At CES, we expanded our Ryzen portfolio with the launch of 22 new mobile processors that deliver leadership compute, graphics, and AI capabilities. Our Ryzen processor portfolio has never been stronger, with leadership compute performance across the stack. For AI PCs, we are the only provider that offers a complete portfolio of CPUs enabling Windows Copilot Plus experiences on premium ultra-thin, commercial, gaming, and mainstream notebooks. Looking into 2025, we're planning for the PC TAM to grow by a mid-single-digit percentage year-on-year. Based on the breadth of our leadership client CPU portfolio and strong design win momentum, we believe we can grow client segment revenue well ahead of the market. Now turning to our gaming segment, revenue declined 59% year-over-year to $562 million. Semi-custom sales declined as expected as Microsoft and Sony focused on reducing channel inventory. Overall, this console generation has been very strong, highlighted by cumulative unit shipments surpassing 100 million in the fourth quarter. Looking forward, we believe channel inventories have now normalized, and semi-custom sales will return to more historical patterns in 2025. In gaming graphics, revenue declined year over year as we accelerated channel sellout in preparation for the launch of our next-gen Radeon 9000 series GPUs. Our focus with this generation is to address the highest volume portion of the enthusiast gaming market with our new RDNA 4 architecture. RDNA 4 delivers significantly better ray tracing performance and adds support for AI-powered upscaling technology that will bring high quality 4K gaming to mainstream players when the first Radeon 9070 series GPUs go on sale in early March. Now turning to our embedded segment, fourth quarter revenue decreased 13% year-over-year to $923 million. The demand environment remains mixed, with the overall market recovering slower than expected as strength in aerospace and defense and test and emulation is offset by softness in the industrial and communications markets. We continued expanding our adaptive computing portfolio in the quarter with differentiated solutions for key markets.
We launched our Versal RF series with industry-leading compute performance for aerospace and defense markets, introduced our Versal Premium Series Gen 2 as the industry's first adaptive compute devices supporting CXL 3.1 and PCIe Gen 6, and began shipping our next-gen Alveo card with leadership performance for ultra-low latency trading. We believe we gained adaptive computing share in 2024 and are well-positioned for ongoing share gains based on our design win momentum. We closed a record $14 billion of design wins in 2024, up more than 25% year over year, as customer adoption of our industry-leading adaptive computing platforms expands and we won large new embedded processor designs. In summary, we ended 2024 with significant momentum, delivering record quarterly and full-year revenue. EPYC and Ryzen processor share gains grew throughout the year, and we are well positioned to continue outgrowing the market based on having the strongest CPU portfolio in our history. We established our multi-billion dollar data center AI business and accelerated both our Instinct hardware and ROCm software roadmaps. For 2025, we expect the demand environment to strengthen across all of our businesses, driving strong growth in our data center and client businesses and modest increases in our gaming and embedded businesses. Against this backdrop, we believe we can deliver strong double-digit percentage revenue and EPS growth year over year. Looking further ahead, the recent announcements of significant AI infrastructure investments like Stargate and the latest model breakthroughs from DeepSeek and the Allen Institute highlight the incredibly rapid pace of AI innovation across every layer of the stack, from silicon to algorithms to models, systems, and applications. These are exactly the types of advances we want to see as the industry invests in increased compute capacity while pushing the envelope on software innovation to make AI more accessible and enable breakthrough generative and agentic AI experiences that can run on virtually every digital device. All of these initiatives require massive amounts of new compute and create unprecedented growth opportunities for AMD across our businesses. AMD is the only provider with the breadth of products and software expertise needed to power AI from end to end across data center, edge, and client devices. We have made outstanding progress building the foundational product, technology, and customer relationships needed to capture a meaningful portion of this market. And we believe this places AMD on a steep long-term growth trajectory, led by the rapid scaling of our data center AI franchise from more than $5 billion of revenue in 2024 to tens of billions of dollars of annual revenue over the coming years. Now I'd like to turn the call over to Jean to provide some additional color on our fourth quarter and full year results. Jean?

speaker
Jean Hu
Executive Vice President and Chief Financial Officer and Treasurer

Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results and then provide our current outlook for the first quarter of fiscal 2025. AMD executed very well in 2024, delivering record revenue of $25.8 billion, up 14%, driven by 94% growth in our data center segment and 52% growth in our client segment, which more than offset headwinds in our gaming and embedded segments. We expanded gross margin by 300 basis points and achieved earnings per share growth of 25% while investing aggressively in AI to fuel our future growth. For the fourth quarter of 2024, revenue was a record $7.7 billion, growing 24% year-over-year, as strong revenue growth in the data center and client segments was partially offset by lower revenue in our gaming and embedded segments. Revenue was up 12% sequentially, primarily driven by the growth of the client, data center, and gaming segments. Gross margin was 54%, up 330 basis points year over year, driven by a favorable shift in revenue mix toward higher data center and client revenue and lower gaming revenue, partially offset by the impact of lower embedded revenue. Operating expenses were $2.1 billion, an increase of 23% year-over-year, as we invested in R&D and marketing activities to address our significant growth opportunities. Operating income was a record $2 billion, representing a 26% operating margin. Taxes, interest, and other was a $249 million net expense. Diluted earnings per share was $1.09, an increase of 42% year-over-year, reflecting the significant operating leverage of our business model. Now turning to our reportable segments. Starting with the data center segment, revenue was a record $3.9 billion, up 69% year-over-year, driven by strong growth of both AMD Instinct GPU and fourth and fifth generation AMD EPYC CPU sales. Data center segment operating income was $1.2 billion, or 30% of revenue, compared to $666 million, or 29% of revenue, a year ago. Client segment revenue was a record $2.3 billion, up 58% year-over-year, driven by strong demand for AMD Ryzen processors. Client segment operating income was $446 million, or 19% of revenue, compared to operating income of $55 million, or 4% of revenue, a year ago, driven primarily by operating leverage from higher revenue. Gaming segment revenue was $563 million, down 59% year-over-year, primarily due to a decrease in semi-custom revenue. Gaming segment operating income was $50 million, or 9% of revenue, compared to $224 million, or 16% of revenue, a year ago. Embedded segment revenue was $923 million, down 13% year-over-year as end-market demand continues to be mixed. Embedded segment operating income was $362 million, or 39% of revenue, compared to $461 million, or 44% of revenue, a year ago. Turning to the balance sheet and cash flow, during the quarter we generated $1.3 billion in cash from operations and a record $1.1 billion of free cash flow. Inventory increased sequentially by $360 million to $5.7 billion. At the end of the quarter, cash, cash equivalents, and short-term investments were $5.1 billion. In the fourth quarter, we repurchased 1.8 million shares and returned $256 million to shareholders. For the year, we repurchased 5.9 million shares and returned $862 million to shareholders. We have $4.7 billion remaining in our share repurchase authorization. Before I turn to our financial outlook, let me cover our financial segment reporting, beginning with our first quarter fiscal year 2025 financial statement disclosures.
We plan to combine the client and gaming segments into a single reportable segment to align with how we manage the business. We will therefore report three segments: data center, client and gaming, and embedded. We'll continue to provide distinct revenue disclosures for our data center, client, gaming, and embedded businesses, consistent with our current reporting. Now turning to our first quarter of 2025 outlook, we expect revenue to be approximately $7.1 billion, plus or minus $300 million, up 30% year-over-year, driven by strong growth in our data center and client businesses, more than offsetting a significant decline in our gaming business and a modest decline in our embedded business. We expect revenue to be down sequentially approximately 7%, driven primarily by seasonality across our businesses. In addition, we expect first quarter non-GAAP gross margin to be approximately 54%, non-GAAP operating expenses to be approximately $2.1 billion, non-GAAP other net income to be $24 million, and the non-GAAP effective tax rate to be 13%, and the diluted share count is expected to be approximately 1.64 billion shares. In closing, 2024 was a strong year for AMD, demonstrating our disciplined execution to deliver revenue growth and expand earnings at a faster rate than revenue, all while investing in AI and innovation to fuel long-term growth. Looking ahead, we will build on this momentum to drive double-digit percentage revenue growth and further accelerate earnings in 2025 and beyond. With that, I'll turn it back to Matt for the Q&A session.
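(A quick arithmetic check on the guidance, using only the figures stated on this call: the $7.1 billion midpoint against the $7.7 billion fourth-quarter base works out to roughly a 7-8% sequential decline, and roughly 30% year-over-year growth implies a first quarter 2024 base of about $5.5 billion. A minimal sketch of that arithmetic:)

```python
# Sanity check on the Q1 2025 guidance figures stated on the call (midpoint basis).
q4_2024_revenue = 7.7   # billions of dollars, record Q4 2024 revenue
q1_2025_guide = 7.1     # billions of dollars, Q1 2025 guidance midpoint
yoy_growth = 0.30       # approximately 30% year-over-year growth per the guidance

sequential_change = q1_2025_guide / q4_2024_revenue - 1
implied_q1_2024 = q1_2025_guide / (1 + yoy_growth)

print(f"Sequential change: {sequential_change:.1%}")        # about -7.8%, consistent with "down ~7%"
print(f"Implied Q1 2024 revenue: ${implied_q1_2024:.2f}B")  # about $5.46B
```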

speaker
Matt Ramsey
Vice President of Investor Relations

Thank you very much, Jean. We're now ready to start the Q&A session. As the operator polls for questions, we remind each participant to please ask one question and a brief follow-up. Operator, please poll for questions. Thanks.

speaker
Operator
Operator

Thank you, Matt. We will now be conducting a question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad. A confirmation tone will indicate that your line is in the queue. You may press star 2 to remove yourself from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. And as a reminder, like Matt said, please limit yourself to one question and one follow-up. Thank you. One moment while we poll for questions. And the first question comes from the line of Aaron Rakers with Wells Fargo. Please proceed.

speaker
Aaron Rakers
Analyst at Wells Fargo

Yeah, thanks for taking this question. I guess I'll just ask it right out of the gate. You know, as we think about the GPU business, and I appreciate you talked about delivering north of $5 billion of revenue, which is extremely impressive in 2024, I'm curious how we should think about framing the GPU, the Instinct business as we think about 2025, and any kind of color you can provide us as far as the progression of revenue, the pace of revenue, first half versus second half, as we think about some of the product cycle dynamics. Thank you.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Sure, Aaron. Thanks for the question. So first of all, look, we were very pleased with how we finished 2024 in terms of the data center GPU business. I think the ramp was steep as we went throughout the year and the team executed well. You know, going into 2025, as I mentioned in the prepared remarks, we're actually very happy with the progress that we're making on both the hardware roadmaps and the software roadmaps. So on the hardware side, we launched 325 at the end of the fourth quarter and started shipments then. We have new designs that have come on both 300 and 325 that will deploy in the first half of the year. And then the big news is on the 350 series. So we had previously stated that we thought we would launch that in the second half of the year. And frankly, that bring-up has come along better than we expected, and there's very strong customer demand for it. So we are actually going to pull that production ramp into the middle of the year, which improves our relative competitiveness. So as it relates to data center, the overall data center business will grow strong double digits. Certainly both the server product line as well as the data center GPU product line will grow strong double digits. And from the shape of the revenue, you would expect that the second half would be stronger than the first half, just given 350 will be a catalyst for the data center GPU business. But overall, I think we're very pleased with the trajectory of the data center business in both 2024 and then going into full year 2025.

speaker
Aaron Rakers
Analyst at Wells Fargo

Yeah, thank you very much. And as a quick follow-up, just thinking about the guidance overall relative to that down 7% sequential, I know you mentioned seasonality across the business segments. Are you assuming that you're down sequentially in data center in total in Q1? And how do I frame that relative to seasonality? Thank you.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, sure, Aaron. So let me give you some more color on the Q1 guide. So the Q1 guide was down 7% sequentially, as Jean mentioned. And the way that breaks out in each of the segments, assume that data center would be down just about that average, so the corporate average. We would expect the client business and the embedded business to be down more than that, just given where seasonality is for those businesses. And then we would expect the gaming business to be down a little less than that. And that's a little atypical from a seasonality standpoint, but we are coming off of a year when there was a lot of, let's call it, inventory normalization. And now that inventory is normalized, we would expect that it would be down a little bit less than the corporate average.

speaker
Operator
Operator

And the next question comes from the line of Timothy Arcuri with UBS. Please proceed with your question.

speaker
Timothy Arcuri
Analyst at UBS

Thanks a lot. I wanted to ask about the server CPU business. Jean, I think you have said in the past that core count is going to grow mid to high teens, and as long as your competitor is not super aggressive on pricing, that your business should grow roughly that much as well. Are you expecting, or are you already seeing, them become a little more aggressive on pricing as they attempt to shore up their share? It sounds like they're getting a bit more aggressive on pricing. So wondering if you still think that the server CPU business can grow in line with that core count, kind of mid to high teens.

speaker
Jean Hu
Executive Vice President and Chief Financial Officer and Treasurer

Yeah, first, we always assume the server CPU market is very competitive. But we currently have the best lineup of products for customers, not only the Turin generation, but also Genoa and even Milan. We provide the best TCO for our customers based on the product portfolio. So overall, we are actually quite confident about continuing to drive the server CPU business, growing not only from a unit perspective but also an ASP perspective, and continuing to gain share.

speaker
Timothy Arcuri
Analyst at UBS

Thanks a lot. And then, Jean, can you just give us a sense of where data center GPUs came in for December, I'm thinking it's probably in the $2 billion range. And then is it assumed to be down, flat, or up? Would you be willing to give a number for March?

speaker
Jean Hu
Executive Vice President and Chief Financial Officer and Treasurer

Thanks. Yeah, I think the way to look at our Q4 performance is our data center business overall did really well. It actually is consistent with our expectations. Of course, when we look at the server and the data center GPU, server did better than data center GPU. But overall, it's very consistent with our performance.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, maybe I'll just add, Tim, on your question as to what you would expect as we go into 2025. I think you should assume that the first half of 2025 data center segment will be consistent with the second half of 24. And that's true for both businesses on the server side as well as the data center GPU side.

speaker
Operator
Operator

And the next question comes from the line of Vivek Arya with Bank of America Securities. Please proceed with your question.

speaker
Vivek Arya
Analyst at Bank of America Securities

Thanks for taking my question. I have a few questions on the data center GPU business. I think last year, AMD was very explicit about setting and then beating or meeting expectations. This year, you have not set a specific forecast, and I'm curious what has changed. And then if I go back to your analyst day in December, I think at that time you had sort of a long-term 60% CAGR. Is it fair to assume that you can grow at that for '25 versus the $5 billion plus that you did last year? So just contrast the two years and then whether AMD can grow at that 60% trend line.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Sure. So Vivek, thanks for the question. I think what we look at is certainly for the first year of the data center GPU business, we wanted to give some clear progression as it was going. The business is now at scale, actually now at, you know, over 5 billion. And as we go into 2025, I think our guidance will be more at the segment level with some color as to, you know, some qualitative color as to what's going on between the two businesses. And relative to, you know, your question about, you know, long-term growth rates, you're absolutely right. I mean, I believe that, you know, the demand for AI compute is strong. And, you know, we've talked about a data center accelerator TAM upwards of $500 billion by the time we get out to 2028. I think all of the recent data points would suggest that there is strong demand out there. Without guiding for a specific number in 2025, one of the comments that we made is we see this business growing to tens of billions as we go through the next couple of years. And that gives you a view of the confidence that we have in the business and particularly you know, our roadmap is getting stronger with each generation, right? So, MI300 was a great start. 350 series is, you know, stronger and addresses a broader set of workloads, including, you know, both inference as well as training. And then as we get into MI400 series, we see significant, you know, traction and excitement around what we can do there with, you know, rack scale designs and, you know, just the innovation that's going on there. So, yeah, we're bullish on the long term. And we'll certainly give you progress as we go through each quarter in 2025. Thank you, Lisa.
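(Illustrative arithmetic only, not guidance from the call: compounding the roughly $5 billion 2024 data center AI base at the 60% rate raised in the question shows how quickly that trajectory reaches the "tens of billions" Lisa describes.)

```python
# Illustrative compounding from the ~$5B 2024 data center AI revenue base.
# The 60% CAGR is the long-term rate raised in the question, not company guidance.
base_2024 = 5.0   # billions of dollars
cagr = 0.60

for years_out in range(1, 5):
    projected = base_2024 * (1 + cagr) ** years_out
    print(f"{2024 + years_out}: ~${projected:.1f}B")
# Prints roughly $8.0B, $12.8B, $20.5B, $32.8B for 2025-2028,
# i.e., compounding at that rate reaches tens of billions within a few years.
```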

speaker
Vivek Arya
Analyst at Bank of America Securities

And for my follow-up, I would love your perspective on the news from DeepSeek recently. There are kind of two parts to that. One is, once you heard the news, do you think that should make us more confident or more conservative about the semiconductor opportunity going forward? Like, is there something so disruptive in what they have done that reduces the overall market opportunity? And then within that, have your views about GPU versus ASIC, you know, how that share develops over the next few years, have those evolved in any way at all? Thank you.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, great. Thanks for the question, Vivek. I think, you know, it's been a pretty exciting first few weeks of the year. I think the DeepSeek announcements, the Allen Institute, as well as some of the Stargate announcements speak to the rate and pace of innovation that's happening in the AI world. So specifically relative to DeepSeek, look, we think that innovation on the models and the algorithms is good for AI adoption. The fact that there are new ways to bring about training and inference capabilities with less infrastructure is actually a good thing, because it allows us to continue to deploy AI compute across a broader application space and drive more adoption. From our standpoint, we're also big believers in open source, and the rate and pace of adoption of these open-source models is pretty amazing. And that is how we expect things to go. So to the overall question of how we should feel about it, we feel bullish about the overall cycle, and similarly on some of the infrastructure investments that were announced with OpenAI and Stargate, building out, let's call it, massive infrastructure for next generation AI. I think all of those say that AI is certainly on the very steep part of the curve, and as a result we should expect a lot more innovation. And then on the ASIC point, let me address that, because I think that is also a place where there's a good amount of discussion. I have always been a believer in you need the right compute for the right workload. And so with AI, given the diversity of workloads, large models, medium models, small models, training, inference, whether you're talking about broad foundational models or very specific models, you're going to need all types of compute, and that includes CPUs, GPUs, ASICs, and FPGAs. Relative to our $500 billion plus TAM going out in time, we've always had ASICs as a piece of that. But my belief is, given how much change there is still going on in AI algorithms, that ASICs will still be the smaller part of that TAM, because they are optimized for more specific workloads, whereas GPUs will enable significant programmability and adjustments to all of these algorithm changes. But when I look at the AMD portfolio, it really is across all of those pieces, so CPUs, GPUs, and we are also involved in a number of ASIC conversations as well, as customers want to really have an overall compute partner.

speaker
Operator
Operator

And the next question comes from the line of Joshua Bookalter with TD Cowen. Please proceed with your question.

speaker
Joshua Bookalter
Analyst at TD Cowen

Hey, guys. Thanks for taking my question. Obviously, it was good to see MI355X pulled into mid-year. But I wanted to clarify, you said first half '25 data center GPU is likely consistent with second half '24, and I was wondering if you could speak to whether or not the shape of the first half changed over the last few months and is potentially related to this pulled-in timeline. Could there be a potential air pocket ahead of that launch, or is this sort of consistent with how you saw things playing out as MI350 and 325X ramp more fully? Thank you.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, thanks for the question, Joshua. No, I would say, you know, from our standpoint, we've gotten incrementally, you know, more positive on the 2025 data center GPU launch. I think 350 series was in second half always, but pulling it into mid-year is an incremental positive. And from a first half, second half statement, as I mentioned, we have some new important AI design wins that are going to be deployed with 300 and 325 in the first half of the year. But with 350 series, we end up with more content. I mean, it's a more powerful GPU. ASPs go up, and you would expect you know, larger deployments that include training and inference in that timeframe. So, you know, the shape is, you know, similar to what we would have expected before.

speaker
Joshua Bookalter
Analyst at TD Cowen

Thank you. And believe it or not, I'm going to ask a question on client. Obviously, the growth number in the fourth quarter was certainly higher than our model. Could you clarify the drivers of the strength across desktop, notebook, and enterprise and how we should think about Q1? And in particular, to put it bluntly, are you worried at all about inventory buildup given how much your client revenue has outperformed the broader PC market in the second half of the year? Thank you.

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, thanks for the question. Our client business performed really well throughout 2024, and Q4 was a very strong quarter. There are a couple of reasons for that, so let me go through them. We don't believe there is some substantial inventory buildup. We actually think that what we are seeing is very strong adoption of our new products. So on the desktop side, we saw our highest sellout in many years as we went through the holiday season launching our new gaming CPUs. Frankly, they have been constrained in the market, and we have continued shipping very strongly through the month of January as we're catching up with some demand there. So the desktop business was very strong. And on the notebook side, we also saw a number of our OEM partners launching new AI PCs with the slew of new mobile part numbers that we announced at CES. We have our strongest PC portfolio on the mobile side, with top-to-bottom Copilot Plus PC compatible products, and those are playing very well into the market. So I think Q4 was strong. I know that there was some commentary about whether there were pull-ins relative to tariffs. We didn't see that in the fourth quarter. I think, as I said, we saw strong sell-out. Going into the first quarter, we do expect seasonality in there, but the part of our business that is performing better than seasonality is the desktop portion of the business, and the mobile portion of the business is, let's call it, more typical seasonality. But overall, I think we're very bullish on our prospects to grow client in 2025, just given all of the drivers from product portfolio to some of the market dynamics, as well as our new commercial PC portfolio.

speaker
Operator
Operator

And the next question comes from the line of Harlan Sur with JP Morgan. Please proceed with your question.

speaker
Harlan Sur
Analyst at JP Morgan

Good afternoon. Thanks for taking my question. For the fourth quarter, did your overall server CPU business grow double digits sequentially? And maybe as a follow-on to that, I think Q4 was the sixth consecutive quarter of double-digit year-over-year increases for your on-prem server solutions. On a sequential basis, I know you guys did start to see a recovery in enterprise in the second quarter of last year; I think it was strongly up sequentially in the third quarter, pretty broad-based. Did enterprise servers grow sequentially in Q4? And Lisa, how do you see the share prospects in this segment as you step into 2025?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, Harlan, thanks for the question. So I think, as Jean mentioned earlier, in the fourth quarter we did see sequential double-digit growth in our server business. We saw that in both cloud and enterprise. I think the server business has been performing extremely well. We're continuing to grow our cloud footprint with more workloads as we have the strength of the Turin portfolio in addition to Genoa and Milan. And then to your question on enterprise, I do believe we're seeing some strong traction in the enterprise. I think what's helping us there is, frankly, we've invested a lot more in go-to-market, and the go-to-market investments are paying off. The enterprise sales cycle is often a six- to nine-month sales cycle, but as we've invested more resources in it throughout 2024, we've seen that convert into a significant number of new POCs that are now converting into volume deployments. And as we go into 2025, from a competitiveness standpoint, we have a very strong portfolio across every price point, every core count, every workload. So I think we see a strong 2025 for server CPUs.

speaker
Harlan Sur
Analyst at JP Morgan

I appreciate that. You know, networking is a very critical part of the AI infrastructure, becoming even more important. There seems to be this misconception that AMD is behind the curve here, yet you're keeping pace, kind of leveraging the incumbent Ethernet technology and strong installed ecosystem. You guys are spearheading the Ultra Ethernet Consortium. You've got your Infinity Fabric technology for scale-up connectivity. As you continue to drive customer adoption of your overall AI platforms, what's the feedback been like on your AI networking architectures, and are there any networking-related innovations the team's going to be bringing to the market this year?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, thanks, Harlan, for that question. No question, networking is an extremely important part of the AI solution, and it's an area that we have been investing in and spending quite a bit of effort on with our customers and our partners jointly. The way to think about it is that our networking proof points are actually increasing as we're going from MI300 to MI325 to MI350 to MI400. So in each of those generations, we're increasing the number of proof points. I think people want to see more clusters of ours. Certainly on inference we've shown great performance and total cost of ownership. We now also have a number of training systems that we are bringing on board, and the important part there is the networking. We have worked very closely with our partners on Ethernet. We believe that this is the right technology for the future. In addition to third-party networking solutions, we're also, with our Pensando team, developing our own in-house AI NIC that Forrest mentioned at our Q4 Advancing AI event. And as we look forward, working with our customers, we are actually standing up full rack solutions at both the 350 level as well as in the MI400 series. So I think the net of it is we believe that, yes, it is absolutely very important. And in addition to all of the hardware and software work, the system-level scaling is super important. And we are on track to deliver that with our roadmap.

speaker
Operator
Operator

And the next question comes from the line of Blaine Curtis with Jefferies. Please proceed with your question.

speaker
Blaine Curtis
Analyst at Jefferies

Thanks for taking my question. Lisa, I just want to follow up on the data center GPU business. Obviously you're very positive in your view, but it seems from your commentary that the sequential growth kind of slows for the next three quarters. So I just want to understand the why. Obviously, you have some new products coming, so maybe it's just the shift to the new products. I also want to just pick your brain on, in terms of when you look at the ASIC storylines, there seems to be a kind of a shift to focus on training versus inference. So just your perspective, I know a lot of your workloads initially were inference. Are you seeing any shift in terms of the demand from your customers between training and inference as well?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, sure, Blaine. The way I would say it is we saw a tremendous growth as we built up the data center GPU business throughout 2024. I think what we are seeing is we are continuing to do new deployments. We are continuing to bring on new customers. Clearly we are going through a little bit of a product transition timeframe in the first half of the year, but the key is bringing in the MI350 series was very, very important for us and for the customer set. So the fact that that hardware has come on clean and we've learned a lot from the initial deployments of MI300 I think is very positive. And this is as we might expect given the overall landscape of deployments. And then to the second part of your question as it relates to ASICs, I really haven't seen a big shift at all in the conversation. I will say that the conversation as it relates to AMD is kind of the following. People like the work that we've done in inference, but certainly our customers want to see us as a strong training solution. And that's consistent with what we've said. We've said that we have a stepwise roadmap to really show each one of those solutions. On the software side, we've invested significantly more in some of those sort of the training libraries. We talked about Harlan's question earlier about networking. And then this is about just getting into data centers and ramping up tens of thousands of GPUs. So from my standpoint, I think we are making very good progress there. I just want to reiterate on the ASIC side, I think ASICs are a part of the solution, but I want to remind everyone they are also a very strong part of the AMD sort of toolbox. So we've done semi-custom solutions for a long time. We are very involved in a number of ASIC discussions with our customers as well. And what they like to do is they like to take our baseline IP and really innovate on top of that. And that's what I think differentiates our capability is that we do have all of the building blocks of you know, CPUs, GPUs, as well as, you know, all of the networking technologies that you would need to put the solutions together.

speaker
Matt Ramsey
Vice President of Investor Relations

Thank you. Operator, I think we have time for two more callers, please.

speaker
Operator
Operator

Okay. The next question comes from the line of Stacy Rasgon with Bernstein Research. Please proceed.

speaker
Stacy Rasgon
Analyst at Bernstein Research

Hi, guys. Thanks for taking my questions. I want to ask this a little more explicitly. So you said your server business was up strong double digits sequentially in Q4. My math suggests that could have even meant that the GPU business was down sequentially. And given your guidance for, I guess, flattish GPUs in the first half of 25 versus second half of 24, again, does the math not suggest that you'd be down sequentially both in Q1 and in Q2? Am I doing something wrong with my math? Or what am I missing here?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Yeah, perhaps, Stacy, let me help give you a little bit of color there. I don't think we said strong double digits. I think we said double digits. So that perhaps is the difference. So the data center segment was up 9% sequentially. Server was a bit more than that. Data center GPU was a little less than that. I think, you know, for some of the models that are out there, you might be a little bit light in the Q3 data center GPU number. So there might be some adjustments that need to be done there. But I think your suggestion would be incorrect. If you just take the halves, second half '24 to first half '25, let's call it roughly flattish, plus or minus. I mean, we'll have to see exactly how it goes. It is going to be a little bit dependent on just when deployments happen. But that's currently what we see.

speaker
Stacy Rasgon
Analyst at Bernstein Research

Got it. Thanks. And I guess for my follow-up, maybe to follow on there, do you think your exit rate on GPUs in 25 is higher than your exit rate in 24? Are you willing to commit to that?

speaker
Unknown
Unknown

Absolutely. Yes, of course. It would be hard to grow strong double digits otherwise, right?

speaker
Operator
Operator

And the final question comes from the line of Toshiya Hari with Goldman Sachs. Please proceed with your question.

speaker
Toshiya Hari
Analyst at Goldman Sachs

Hi. Thank you so much for squeezing me in. Lisa, I had a question on the server CPU business. I'm curious how you're thinking about the market this year, and if you can delineate between cloud and enterprise, that would be really helpful. And then kind of part B to that question, in your prepared remarks you talked about you all having more than 50% share across the major hyperscalers. How would you characterize the competitive intensity at those customers vis-a-vis some of the internal custom silicon that's expected to ramp over the coming quarters and years?

speaker
Dr. Lisa Su
Chair and Chief Executive Officer

Sure, Toshiya. So let me say, as we look into 2025, I think we see a good server market between cloud and enterprise. I think as we went into sort of the early part of '24, there was a little bit of, let's call it, less investment on the CPU side as people were optimizing investments for AI. We saw that sort of pick up in the second half of '24, and we would expect that to continue into '25. So the enterprise refresh cycles are coming in again, and certainly there are a number of cloud vendors that are now, let's call it, updating some of their data centers. And then your second question was as it relates to the competitive landscape with custom silicon. Yeah, look, I think it's about the same. What I would say, Toshiya, is it's less about custom silicon versus x86. It's much more about do you have the right product for the right workload. And look, the server market is always a competitive market. What we've done, and you've seen it in our Zen 4 product line as well as in our Zen 5 product line, is we've expanded the design points for each of the core generations so that we have cloud native, and then we have enterprise optimized, low core count, high core count, highest performance, best perf per dollar. And as we do those things, I think we are continuing to grow share across both cloud and enterprise. And look, it's always very competitive. We take every design win very seriously. But we're winning our fair share, and I think that's the strength of the product portfolio. And also, I think there's a good amount of trust in our delivery capability as we've built up our franchise over the last number of years.

speaker
Toshiya Hari
Analyst at Goldman Sachs

That's great. Thank you. And then as a quick follow-up, maybe one for Jean. So you're guiding gross margin to 54% in the first quarter. I'm curious what some of the major puts and takes are and the things that we should be cognizant of going into Q2 and, more importantly, the second half. You know, given your data center commentary skewed more to the second half, I would expect margins to improve in the second half. But, yeah, if you can kind of run through the pluses and minuses, that would be really helpful. Thank you.

speaker
Jean Hu
Executive Vice President and Chief Financial Officer and Treasurer

Yeah, thanks for the question. You're right. Our gross margin is primarily driven by our revenue mix. I think when you look into 2025 with the Q1 guide, not only does data center continue to grow significantly year over year, at the same time the client business is also growing year over year. So overall, the revenue mix is quite consistent with Q4, so the gross margin guide is 54%. I think for the first half, if the revenue mix is at this level, we do feel the gross margin will be consistent at 54%. But going into the second half, we do believe data center is the faster growth driver for the company, and that will drive the gross margin to step up in the second half.

speaker
Matt Ramsey
Vice President of Investor Relations

All right. With that, I think we are ready to close the call now. Operator, I just wanted to say thank you to everybody that listened in and participated today and for your interest in AMD. Thank you very much.

speaker
Operator
Operator

Thank you, and ladies and gentlemen, that does conclude today's teleconference. We thank you for your participation. You may disconnect your lines at this time.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
