Broadcom Inc.

Q1 2024 Earnings Conference Call

3/7/2024

spk04: Hello, and welcome to Broadcom Inc.'s First Quarter Fiscal Year 2024 Financial Results Conference Call. At this time, for opening remarks and introductions, I will turn the call over to G.U., Head of Investor Relations of Broadcom Inc. You may begin.
spk35: Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hawk Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2024. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hawk and Kirsten will be providing details of our first quarter fiscal year 2024 results, guidance for our fiscal year 2024, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hawk.
spk16: Thank you, G. And thank you, everyone, for joining us today. In our fiscal Q1 2024, consolidated net revenue was $12 billion, up 34% year on year, as revenue included 10 and a half weeks of contribution from VMware. Excluding VMware, consolidated revenue was up 11% year-on-year. Semiconductor solutions revenue increased 4% year-on-year to $7.4 billion. And infrastructure software revenue grew 153% year-on-year to $4.6 billion. With respect to infrastructure software, revenue contribution from consolidating VMware drove a sequential jump in revenue of 132%. We expect continued strong bookings at VMware will accelerate revenue growth through the rest of fiscal 2024. In semiconductors, AI revenue quadrupled year-on-year to $2.3 billion during the quarter, more than offsetting the current cyclical slowdown in enterprise and telcos. Now let me give you more color on our two reporting segments. Starting with software, Q1 software segment revenue of $4.6 billion was up 156% year-on-year and included $2.1 billion in revenue contribution from VMware. Consolidated bookings in software grew sequentially from less than $600 million to $1.8 billion in Q1 and are expected to grow to over $3 billion in Q2. Revenue from VMware will grow double-digit percentages sequentially, quarter over quarter, through the rest of the fiscal year. This is simply a result of our strategy with VMware. We are focused on upselling customers, particularly those who are already running their compute workloads with vSphere virtualization tools, to upgrade to VMware Cloud Foundation, otherwise branded as VCF. VCF is the complete software stack integrating compute, storage, and networking that virtualizes and modernizes our customers' data centers. This on-prem self-service cloud platform provides our customers a complement and an alternative to public cloud. And in fact, at VMware Explore last August, VMware and NVIDIA entered into a partnership called VMware Private AI Foundation, which enables VCF to run GPUs. This allows customers to deploy their AI models on-prem and wherever they do business without having to compromise on privacy and control of their data. And we're seeing this capability drive strong demand for VCF from enterprises seeking to run their growing AI workloads on-prem. Reflecting all these factors, for the full year, we reiterate our fiscal 2024 guidance for software revenue of $20 billion. Turning to semiconductors, before I give you an overall assessment of this segment, let me provide more color by end markets. Q1 networking revenue of $3.3 billion grew 46% year-on-year, representing 45% of our semiconductor revenue. This was largely driven by strong demand for our custom AI accelerators at our two hyperscale customers. This trend extends beyond AI accelerators. Our latest-generation Tomahawk 5 800G switches, Thor 2 Ethernet NICs, retimers, DSPs, and optical components are experiencing strong demand at hyperscale customers, as well as large-scale enterprises deploying AI data centers. For fiscal 2024, given the continued strength of AI networking demand, we now expect networking revenue to grow over 35% year on year, compared to our prior guidance for 30% annual growth. Moving on to wireless, Q1 wireless revenue of $2 billion decreased 1% sequentially and declined 4% year on year, representing 27% of semiconductor revenue. As you all may know, the engagement with our North American customer continues to be very deep, strategic, and, of course, multi-year.
And in fiscal 2024, helped by content increases, we reiterate our previous guidance for wireless revenue to be flat year on year. Next, our Q1 server storage connectivity revenue was $887 million, or 12% of semiconductor revenue, down 29% year on year. We are seeing weaker demand in the first half but expect recovery in the second half. Accordingly, we are revising our outlook for fiscal 24 server storage revenue to decline in the mid-20s percentage range year on year, compared to prior guidance for a high-teens percent decline year on year. On broadband, Q1 revenue declined 23% year on year to $940 million and represented 13% of semiconductor revenue. We are seeing a cyclical trough this year for broadband as telco spending continues to weaken and do not expect improvement until late in the year. And accordingly, we're revising our outlook for fiscal 24 broadband revenue to be down 30% year-on-year from our prior guidance of down mid-teens year-on-year. And finally, Q1 industrial resales of $215 million declined 6% year on year. In fiscal 24, we continue to expect industrial resales to be down high single digits year on year. And in summary, with stronger-than-expected growth from AI more than offsetting the cyclical weakness in broadband and server storage, Q1 semiconductor revenue grew 4% year over year to $7.4 billion. Turning to fiscal 24, we reiterate our guidance for semiconductor solutions revenue to be up mid- to high-single-digit percentage year on year. I know we told you in December our revenue from AI would be 25% of our full-year semiconductor revenue. We now expect revenue from AI to be much stronger, representing some 35% of semiconductor revenue at over $10 billion. And this more than offsets weaker-than-expected demand in broadband and server storage. So for fiscal 2024, in summary, we reiterate our guidance for consolidated revenue to be $50 billion, which represents 40% year-on-year growth. And we reiterate our full-year adjusted EBITDA guidance of 60% of revenue. Before I turn this call over to Kirsten, who will provide more details of our financial performance this quarter, let me just highlight that Broadcom recently published its fourth annual ESG report, available on our corporate citizenship site, which discusses the company's sustainability initiatives. As a global technology leader, we recognize Broadcom's responsibility to connect our customers, employees, and communities. Through our product and technology innovation and operational excellence, we remain committed to this mission. Kirsten?
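For readers modeling the guidance, the figures above hang together arithmetically. The quick Python sketch below checks the implied semiconductor total, treating "some 35%" and "over $10 billion" as point estimates; it is an approximation, since the exact segment split is not disclosed.

```python
# Rough consistency check of the fiscal 2024 guidance cited above.
# Assumption: "some 35%" and "over $10 billion" are treated as point estimates.
ai_revenue = 10.0          # $B, AI revenue ("over $10 billion")
ai_share = 0.35            # AI as a share of semiconductor revenue ("some 35%")
software_revenue = 20.0    # $B, reiterated software guidance

implied_semis = ai_revenue / ai_share             # ~28.6 $B semiconductor revenue
implied_total = implied_semis + software_revenue  # ~48.6 $B vs. the $50B guide

ebitda_margin = 0.60
implied_ebitda = 50.0 * ebitda_margin             # ~$30B adjusted EBITDA at guidance

print(f"Implied semiconductor revenue: ~${implied_semis:.1f}B")
print(f"Implied consolidated revenue:  ~${implied_total:.1f}B (guide: $50B)")
print(f"Implied adjusted EBITDA:       ~${implied_ebitda:.0f}B")
```

The implied consolidated total of roughly $48.6 billion is consistent with the reiterated $50 billion guide once the "over" and "some" qualifiers are taken into account.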
spk01: Thank you, Hawk. Let me now provide additional detail on our Q1 financial performance, which was a 14-week quarter and included 10.5 weeks of contribution from VMware. Consolidated revenue was $12 billion for the quarter, up 34% from a year ago. Excluding the contribution from VMware, Q1 revenue increased 11% year-on-year. Gross margins were 75.4% of revenue in the quarter. Operating expenses were $2.2 billion and R&D was $1.4 billion, both up year-on-year, primarily due to the contribution from VMware. Q1 operating income, including VMware, was $6.8 billion and was up 26% from a year ago, with operating margin at 57% of revenue. Excluding transition costs of $226 million in Q1, operating profit of $7.1 billion was up 30% from a year ago, with operating margin of 59% of revenue. Adjusted EBITDA was $7.2 billion, or 60% of revenue. This figure excludes $139 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductors. Revenue for our semiconductor solutions segment was $7.4 billion and represented 62% of total revenue in the quarter. This was up 4% year-on-year. Gross margins for our semiconductor solutions segment were approximately 67%, down 190 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses increased 8% year-on-year to $865 million, reflecting a 14-week quarter, resulting in semiconductor operating margins of 56%. Now moving on to our infrastructure software segment. Revenue for infrastructure software was $4.6 billion, up 153% year-on-year, primarily due to the contribution of VMware, and represented 38% of revenue. Gross margins for infrastructure software were 88% in the quarter, and operating expenses were $1.3 billion in the quarter, resulting in infrastructure software operating margin of 59%. Excluding transition costs, operating margin was 64%. Moving on to cash flow. Free cash flow in the quarter was $4.7 billion and represented 39% of revenues, off a higher revenue base. Excluding restructuring and integration spend of $658 million, free cash flows were 45% of revenue. We spent $122 million on capital expenditures. Days sales outstanding were 41 days in the first quarter, compared to 31 days in the fourth quarter, on higher accounts receivable due to the VMware acquisition. The accounts receivable we brought on from VMware have payment terms of 60 days, unlike Broadcom's standard 30 days. We ended the first quarter with inventory of $1.9 billion, up 1% sequentially. We continue to remain disciplined on how we manage inventory across the ecosystem. We ended the first quarter with $11.9 billion of cash and $75.9 billion of gross debt. The weighted average coupon rate and years to maturity of our $48 billion in fixed-rate debt are 3.5% and 8.4 years, respectively. The weighted average coupon rate and years to maturity of our $30 billion in floating-rate debt are 6.6% and 3 years, respectively. During the quarter, we repaid $934 million of fixed-rate debt that came due. This week, we repaid $2 billion of our floating-rate debt, and we intend to maintain this quarterly repayment of debt throughout fiscal 2024. Turning to capital allocation. In the quarter, we paid stockholders $2.4 billion of cash dividends, based on a quarterly common stock cash dividend of $5.25 per share. We executed on our plan to complete our remaining share buyback authorization.
We repurchased $7.2 billion of our common stock and eliminated $1.1 billion of common stock for taxes due on vesting of employee equity, resulting in the repurchase and elimination of approximately 7.7 million AVGO shares. To help you with modeling share count, the weighted effect of the 54 million shares issued for the VMware acquisition resulted in a sequential increase in the Q1 non-GAAP diluted share count to 478 million shares, with the Q2 non-GAAP diluted share count expected to increase to approximately 492 million shares as the shares issued are fully weighted in the second quarter. Now on to guidance. Notwithstanding the updated dynamics of our semiconductor and software segments Hawk discussed, we choose to reiterate our guidance for fiscal year 2024 consolidated revenue of $50 billion and adjusted EBITDA of 60% of revenue. With regard to VMware, in February we signed a definitive agreement to divest the end-user computing division, with the transaction expected to close in 2024, subject to customary closing conditions, including regulatory approvals. The EUC division has been classified as discontinued operations in our Q1 financials. We have decided to retain the Carbon Black business and merge Carbon Black with Symantec to form the Enterprise Security Group. The impact on revenue and profitability is not significant. That concludes my prepared remarks. Operator, please open up the call for questions.
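One way to reproduce the share-count step-up described above is to weight the 54 million VMware shares for 10.5 of the 14 weeks in Q1 and then fully in Q2. The sketch below backs the pre-deal base count out of the reported 478 million rather than using a disclosed figure, so treat it as illustrative.

```python
# Reproduce the ~478M Q1 and ~492M Q2 non-GAAP diluted share counts from the
# 54M shares issued for VMware, weighted for 10.5 of 14 weeks in Q1.
shares_issued = 54.0                  # millions, issued for the VMware acquisition
weeks_in_q1, weeks_weighted = 14.0, 10.5

q1_weighted_count = 478.0             # millions (reported Q1 diluted count)
base_count = q1_weighted_count - shares_issued * (weeks_weighted / weeks_in_q1)
q2_full_count = base_count + shares_issued   # shares fully weighted in Q2

print(f"Implied pre-deal base count: ~{base_count:.0f}M")     # ~438M (backed out, not disclosed)
print(f"Implied Q2 diluted count:    ~{q2_full_count:.0f}M")  # ~492M, matching the guidance
```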
spk04: Thank you. Ladies and gentlemen, to ask the question, please press star 11 on your telephone and then wait to hear your name announced. To withdraw your question, please press star 11 again. We ask that you limit yourself to one question only. Please stand by while we compile the Q&A roster. Our first question comes from the line of Harsh Kumar with Piper Sandler. Your line is open.
spk22: Yeah, hey, thank you, Hawk. Once again, tremendous results and tremendous activity that you guys are benefiting from in AI. But my question was on software. I think if I heard you correctly, Hawk, you mentioned that your software bookings will rise quite dramatically to $3 billion in 2Q. I was hoping that you could explain to us why it would rise almost 100%, if my math is correct, in 2Q over 1Q. Is it something simple, or is it something that you guys are doing from a strategy angle that's making this happen?
spk16: As I indicated, with the acquisition of VMware, we're very focused on selling, upselling, and helping customers not just buy but deploy this private cloud, what we call a virtual private cloud solution or platform, on their on-prem data centers. It has been very successful so far, and I agree it's still early innings at this point. We just closed on the deal in late November, and we are now in early March. So we have the benefit of at least three months. But we had been very prepared to launch and focus on this push initiative on private cloud VCF. And the results have been very much what we expected them to be, which is very, very successful.
spk24: Thank you.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Harlan Sur with JP Morgan. Your line is open.
spk21: Yeah, good afternoon. Thanks for taking my question. Hawk, on the AI outlook being revised from greater than $7.5 billion, I think, last quarter to $10 billion plus this quarter, as you mentioned, AI compute pulls your ASICs, but it also pulls your networking, optical, and PCIe connectivity solutions as well. So can you just help us understand, of that two-and-a-half-billion increase in outlook, is it stronger AI ASIC demand, stronger networking, stronger optical, et cetera? But more importantly, are you also seeing a similar acceleration in your forward ASIC design win pipeline as well?
spk16: That's a lot of questions, a lot of information you want me to disgorge. Let's take them one at a time, shall we? Yeah, the increase. As we have said before, as we have shown before, it's roughly two-thirds, one-third, or 70-30, which is AI accelerators, which are custom ASIC AI accelerators with a couple of hyperscalers, compared to the other components, which I collectively consider as networking components. It's about a 70-30 mix, and that increase of almost $3 billion that you mentioned is a similar combination.
spk21: And then are you seeing a similar acceleration on the forward design win pipeline in customer engagements?
spk16: I have indicated I only have two. Really, only have two, seriously. I don't count anybody who does not go into production as a real customer at this point.
spk20: Okay. Thanks, Hawk.
spk16: Thanks.
spk04: Please stand by for our next question. Our next question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.
spk37: Thank you for taking my question. Hawk, again, on the over $10 billion for AI, is this still a supply-constrained number, or do you think that this is kind of a very project-driven number, so it's not really supply that gates it? So if you were to get, let's say, increased supply, could there be upside? And then kind of part B of that is on the switching side. Have you already started to see benefits from the 51-terabit-per-second switches? Is that something that comes along later? Like, what is the contribution of 51T to the switching upside that you mentioned for this year?
spk16: Yeah, no. Our Tomahawk 5 is going great guns. Now, it's not driven, unlike in the past with Tomahawk 3 and Tomahawk 4, by traditional scale-out in hyperscalers in the cloud environment. This is all largely coming from the scaling out of AI data centers, the building of larger and larger clusters to enable generative AI computing functionality. And you're going for bigger and bigger pipes, and the Tomahawk 5, at 51 terabits, is a perfect solution. And we're seeing a lot of demand. And in many cases, we are basically surpassing the rate of adoption that we previously thought. So it is a very good solution in connecting GPUs. And with respect to AI accelerators, which I think you are focusing on, is that constrained by the supply chain? We do get enough lead time from our hyperscale customers that we do not have a supply chain constraint. Thank you.
spk05: Thank you. Please stand by for our next question.
spk04: Our next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.
spk31: Hi, guys. Thanks for taking my question. I had a question on the core software business. You said VMware for the two months that it was in there was $2.1 billion. That would put the rest of the software, CA, Symantec, and Brocade, at like two and a half almost, which would be up like 25% sequentially and almost 40% year over year. I guess the question is, do I have my math right? And if so, how can that be? What's going on in the core business? And how should we be thinking about the growth of the core business and VMware as we go through the year? Is VMware still $12 billion?
spk16: Yeah, don't get too excited over that. Don't get too excited over that. I think it's certain products, contracts we obtained, but it was very strong. Contract renewals on old Broadcom contracts, especially mainframes, were very strong, as were some of our other distributed software platforms. So that has also accelerated. But that's not the star of this show, Stacy. The star of this show is the accelerating bookings and backlog we're accumulating on VMware.
spk31: Okay, so VMware is still running at like an $11 or $12 billion run rate. So that sounds like that should accelerate, so the overall for VMware should be more than the $12 billion that you talked about. So the core business, the strength of the core, that was kind of a one-time. We should model that kind of like falling off, because we've still got the overall software at $20 billion. Correct. Got it. Okay. Thank you. Thanks.
spk04: Please stand by for our next question. Our next question comes from the line of Aaron Rakers with Wells Fargo. The line is open.
spk30: Yeah. Thanks for taking the question. I wanted to ask kind of continuing on the VMware discussion a little bit, you know, Hawk, now that you've had the asset, you know, for a little while, I'm curious of how you, how the go-to-market strategy looks with VMware relative to the prior software acquisitions that you've done. What I'm really getting at is kind of like, You know, how have you kind of thought about the segmentation of the customer base of VMware? I know there's been some discussion around your channel engagement, you know, legacy VMware channel in the past. So I'm just kind of curious of how you've been managing that go-to-market.
spk16: Oh, I think, no, we haven't had it for that long, to be honest. It's been about three months. But yes, I know there are things to be worked out, but things seem to be progressing very well, as much as we had hoped it would. Where we are focusing our go-to-market, and more than go-to-market, where we are focusing our resources, is not just on go-to-market but on engineering a very improved VCF stack, which we have, selling it out there, being able to then support it, and, in the process, helping customers deploy it and really make it stand up in their data centers. All that focus is on the largest, I would say, 2,000 strategic customers. These are guys who still want to have significant distributed data centers on-prem. Many of our customers are looking at a hybrid situation, and I'm not trying to use the word too loosely. Basically, a lot of these customers have some very legacy but critical mainframes. That's an old platform, not growing, except it's still vital. Then, for modernizing workloads today and in the future, they really have a choice, and they are taking both angles: running a lot of applications in distributed data centers on-prem, which can handle these modernized workloads, while at the same time, because of elastic demand, being able to also put some of these applications into the public cloud. In today's environment, most of these customers do not have an on-prem data center that resembles what's in the cloud, which is very high availability, very low latency, highly resilient, which is what we are offering with VMware Cloud Foundation, or VCF. It exactly replicates what they get in a public cloud. And they love it. Now, it's been three months, but we are seeing it in the level of bookings we are generating over the last three months.
spk06: Thank you.
spk05: Thank you. Please stand by for our next question.
spk04: Our next question comes from the line of Chris Danely with Citi. Your line is open.
spk25: Hey, thanks, gang, for letting me ask a question. Hey, Hawk, just a question on the AI upside in terms of a customer perspective. How much of the upside is coming from new versus existing customers, and how do you see the customer base going forward? I think it's going to broaden, and we know how you like to, uh, you know, price. So if you do get a bunch of new customers for these products, could there be some better pricing and better margins as well? Hopefully they're not listening to the call.
spk16: Oh, great. Thanks for this question. Love it. Because perhaps let me try to give you a sense of how we think of the AI market, the new generative AI market, so to speak, using the term very loosely and generically as well. We see it as two broad segments. One segment is hyperscalers, especially very large hyperscalers with a huge consumer subscriber base. You probably can guess who these few players are. Very large subscriber base, and an almost infinite amount of data, and their model is getting subscribers to keep using the platform they have, and through that being able to generate a better experience for not only the subscribers but also a better advertising opportunity for their advertising clients. It's a great ROI, as we are seeing. It's an ROI that comes very quickly. And the investment continues vigorously with that segment, comprising very few players, but with a huge subscriber base and the scale to invest a lot. And here, ASICs, custom silicon, custom AI accelerators make plenty of sense, and that's where we focus that attention. They also buy, as they scale up those AI accelerators through clusters, increasingly large clusters, because of the way the models run, the foundation models run, and large language models need to generate those parameters, a lot of networking together with it. But in comparison, obviously, to the value of AI accelerators we sell, the networking, while growing, is a small percentage compared to the value of the accelerators. That's one big segment we have. The other segment we have, which is smaller, is what I broadly call the enterprise segment in AI. Here you're talking about companies, large and not so large, but mostly large, who have AI initiatives going on. You know, all this big news and hype about AI being the savior to productivity and all that gets all these companies going on their own initiatives. And here, short of going to the public cloud, they try to run it on-prem. If they try to run it on-prem, they take standard silicon AI accelerators as much as possible, and here, in terms of the AI accelerator, we don't have a market. That's the merchant silicon market. But on the networking side, as they tie it together with their data centers, they do buy. All those are our networking components, beginning with switches, routers even, through people like the Arista 7800, but switches for sure, and the various other components I mentioned. And that's a different segment of the market that we have. So it's an interesting mix, and we see both.
spk25: Thanks a lot.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Karl Ackerman with BNP Paribas. Your line is open.
spk32: Yes, thank you, Hawk.
spk32: Weakness in broadband, server, and storage customers is understandable given what your peers have said this earnings season, but perhaps you could speak to the backlog visibility you have with your customers in those markets that would indicate those markets could begin to order again and see sequential growth in the second half of your calendar year. Thank you.
spk16: You're correct. As I say, we are almost, like, near the trough. This year, '24, first half for sure will be the trough. Second half of '24, we don't know yet. But we have 52-week lead times, as you know. We are very disciplined in sticking to them. And based on that, we are seeing bookings lately significantly up from bookings a year ago.
spk02: Thank you.
spk05: Please stand by for our next question.
spk04: Our next question comes from the line of Christopher Rowland with Susquehanna. Your line is open.
spk27: Thanks for the question. So, Hawk, this one's for you, on optical. Our checks suggest that you're vertically integrating there. You're now putting in your own drivers and TIAs, you're starting to get traction in PAM4 DSPs, and I think you kind of had an early lead in 100-gig data center lasers as well. And a lot of this should be on the back of AI networking, which appears to be exploding here. So I was wondering if you could help us size the market and then also talk about how fast this is growing for you. I think there may have been some clues in that one-third number of the AI you gave us, but perhaps if you could kind of double-click or square that for us, it'd be great.
spk16: Thanks. Okay. Before you get carried away, please, in the other categories outside AI accelerators, all those things like PAM4 DSPs, optical components, and retimers are small compared to the Tomahawk switches and Jericho routers used in AI networks. And also, we're in an environment where, as you all know, traditional enterprise networking is also in a bit of a slowdown. So all we're seeing is demand driven very much by AI. And that tends to push us in a line of thinking that could be very biased, because what it is showing is that the mix and the content of networking relative to compute is very different in an AI data center compared to a traditional CPU-based data center. So I don't want to lead you guys the wrong way, but you're right, in the AI data center there's quite a bit of content in DSPs, PAM4s, optical components and retimers, and PCI Express switches. But they're still not that big in the overall scheme of things compared to what we sell in switches and routers. And compared to AI accelerators, they're even smaller. Thinking in that ratio, as I said, of AI revenue of $10 billion plus this year, 70% will be AI accelerators and 30% everything else. And within that everything else, that 30% or so, I would say more than half of it, more like 20%, is the switches and routers, and the rest is the various retimers and DSP components. Because, unlike what you said, we're not vertically integrated in the sense that we do not do the entire transceiver, the optical transceiver. We don't do that. Those are manufactured typically by OEMs, contract manufacturers like InnoLight and Eoptolink in China. Those guys are much more competitive, and we provide those key components we talked about. So when you look at it that way, you can understand the weighting of the various values.
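Translating the mix commentary above into rough dollars, using the approximate splits given and applied to the "$10 billion plus" AI figure (illustrative only; these are not disclosed line items):

```python
# Approximate decomposition of the >$10B fiscal 2024 AI revenue, per the
# roughly 70/30 accelerator/networking split and the ~20% switch/router share.
ai_total = 10.0  # $B, "AI revenue of $10 billion plus this year"

accelerators = 0.70 * ai_total        # custom AI accelerators, ~$7B
switches_routers = 0.20 * ai_total    # Tomahawk switches / Jericho routers, ~$2B
other_components = ai_total - accelerators - switches_routers  # DSPs, optics, retimers, PCIe, ~$1B

for label, value in [("AI accelerators", accelerators),
                     ("Switches and routers", switches_routers),
                     ("DSPs, optics, retimers, PCIe", other_components)]:
    print(f"{label:30s} ~${value:.0f}B")
```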
spk26: Super helpful. Thank you, Hawk.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk12: Hi, thank you for taking the question. Hawk, I think we all appreciate the capabilities you have in terms of custom compute. I asked this question last quarter on the group callback, but there is one competitor based in Asia who continues to be pretty vocal and adamant that on one of the future designs at your largest customer, they may have some share, and we're picking up conflicting evidence, and we're getting a bunch of investor questions. I was hoping you could address that and your confidence level in sort of maintaining, if not extending, your position there. Thank you.
spk16: You know, I can't stop somebody from trash-talking, okay? That's the best way to describe it. Let the numbers speak for themselves, please. Leave it that way. And I'll add to it: like most things we do in terms of large, critical technology products, we tend to always have, as we do here, a very deep, strategic, and multi-year relationship with our customer. Enough said.
spk23: Understood. Thank you.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Vijay Rakesh with Mizuho. Your line is open.
spk10: Yeah, hi, Hawk. Just on the custom silicon side, obviously you guys dominate that space. But you mentioned two customers, your only two major customers. I'm just wondering what's really holding back other hyperscalers from ramping up their custom silicon side? And on the flip side, you're hearing some peers talk about custom silicon roadmaps as well, so if you could hit both. Thanks.
spk16: Well, number one, we don't dominate this market. We only have two. How can I be dominating it? And number two, the second point is, it takes years, it takes a lot of heavy lifting to create that custom silicon. Because you need to do more than just hardware or silicon to really have a solution for generative AI, or even AI, in trying to create those AI capabilities in your data centers. It's more than just silicon. You have to invest a lot in creating software models that work on your custom silicon. You've got to match your business model in the first place, which leads to creating foundation models, which then need to work and be optimized on the custom silicon you're developing. So it's an iterative process, and it's a constantly evolving process, even for the same customer we deal with. I mentioned that on the last call. So it takes years to really understand, or to be able to basically reach a point where you can say that, hey, I'm finally delivering something production-worthy. And it's not because the silicon is bad. It's because it doesn't work well with the foundation models that the customer put in place and the software layer that works with it, the firmware, the software layer that translates into it. All that has to work. You're almost creating an entire ecosystem on a limited basis, which we recognize very well in x86 CPUs, but in GPUs and those kinds of AI accelerators, it's something still at a very early stage. So it takes years. And for our two customers, we have engaged for years. With one of them, we have engaged for eight years to get to this point. So it's something where you have to be very patient, persevere, and hope that everything lines up, because ultimate success, if you are just a silicon developer, is not just dependent on you, but dependent as much, if not more, on your partner or customer doing it. So just got to be patient, guys. I've got the two only so far.
spk10: And on the peers getting into that market?
spk16: Who is getting into the market? Please repeat.
spk10: Talk about some of your peers. I think NVIDIA has been talking about entering the custom silicon market. Oh, custom silicon market. Yeah.
spk16: I have no comment to be made on it. All I do say is I have no interest in going into a market where, you know, we have a philosophy in running our business, Broadcom, and maybe other people have a different philosophy. Let me tell you my simple philosophy, which I've articulated over time every now and then, but which is very clear to my management team and to the whole Broadcom. You do what you're good at. And you keep doubling down on things you know you are better than anybody else. And you just keep doubling down because nobody else will catch up to you if you keep running ahead of the pack. But do not do something that you think you can do, but somebody else is doing a much better job than you are. That's my philosophy. Great. Thanks, Hawk. Great.
spk04: Thank you. Will you stand by for our next question? Our next question comes from the line of Matt Ramsey with TD Cohen. Your line is open.
spk28: Thank you very much for squeezing me in, guys. Just kind of a two-part thing on the custom silicon stuff. I guess, Hawk, if some of the merchant leaders in AI were interested in some custom networking stuff from you, either in switching or routing, would you consider it? And the second question is for Kirsten. The business model around custom silicon for most folks is to take NRE payments up front and sell the end product at a lower gross margin but a higher operating margin. And you guys have ramped this massive custom business with no real impact to gross margin. So maybe you could just unpack the philosophy and the accounting around the way that you guys approach the custom silicon opportunities, just from a margin perspective. Thanks, guys.
spk18: I'll take that.
spk16: Because you're asking about the business model, you're not really asking for number crunching. So let me try to answer it this way. There's no particular reason, short of what constitutes an AI accelerator. An AI accelerator, the way it's configured now, whether it's merchant or it's custom, first needs, to run foundation models very well, not just loads of floating point multipliers to do matrix multiplication, matrix analysis, that's the logic part, the compute part; it has to come with access to a lot of memory, literally almost cache memory, tied to it. The chip is not just a simple multiplier. It comes with memory attached to it. It's almost like a layered, three-dimensional chip, which it is, with memory. And memory is not something that we, or any of us in AI accelerators, are super good at designing or building. So we buy the memory, very specialized high-bandwidth memory, you all know about that, from key memory suppliers. Every one of us does that. So you pile the two, combine the two together, and that's what an AI accelerator is. So even if I get very good, normal corporate silicon gross margin on my compute logic chip, on the multipliers, there's no way I can apply that kind of add-on margin to the high-bandwidth memory, which is a big part of the cost of the total chip. And so naturally, by simple math, that whole consolidated AI accelerator brings a gross margin below what a traditional silicon product of ours carries. No getting away from that. Because you are adding on memory, and even though we have to create the access, the I/Os that attach to it, we do not and could not justify adding that kind of margin to memory. Nobody could. So it brings a naturally lower margin. That's really the simple basis of it. But on the logic part of it, sure, with the kind of content, with the kind of IP that we developed, cutting edge, to make those high-density floating point multipliers on 800 square millimeters of advanced silicon, we can command a margin similar to our corporate gross margin.
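A stylized sketch of the margin mechanics described above, where pass-through high-bandwidth memory dilutes an otherwise corporate-level gross margin on the logic die. Every input below is a hypothetical assumption chosen only to show the blending arithmetic; none of these splits are disclosed.

```python
# Hypothetical blended gross margin for an AI accelerator that bundles a logic
# die (sold near corporate gross margin) with pass-through HBM (minimal markup).
# Every number here is an illustrative assumption, not a disclosed figure.
logic_revenue = 100.0      # arbitrary units of logic-die revenue
logic_gross_margin = 0.75  # assume roughly corporate-level margin on the logic

hbm_cost = 60.0            # assumed HBM cost bundled into the accelerator
hbm_markup = 0.05          # assume near pass-through pricing on the memory
hbm_revenue = hbm_cost * (1 + hbm_markup)

total_revenue = logic_revenue + hbm_revenue
total_cost = logic_revenue * (1 - logic_gross_margin) + hbm_cost
blended_margin = (total_revenue - total_cost) / total_revenue

print(f"Blended accelerator gross margin: {blended_margin:.0%}")  # well below the logic-only 75%
```

Even with a generous margin assumed on the logic, the blended figure lands well below it, which is the naturally lower margin referred to above.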
spk05: Thank you.
spk04: Please stand by for our next question. Our next question comes from the line of Edward Snyder with Charter Equity Research. Your line is open.
spk19: Thanks a lot. First, a housekeeping one, if I could, Hawk. You mentioned the second custom silicon customer, but you also mentioned that it takes years of work, iteratively. I mean, anybody who's looked at the TPU history, I guess, understands that. And you've said before that it takes time to ramp it up. But maybe you could give us a little bit of color. You said phenomenal growth in your custom silicon products. Is much or a material part of that coming from your second customer, and, taking into account the lower revenue number, is the growth rate, generally speaking, fairly comparable? And then I had a question about VMware.
spk16: You'd better go on to your VMware question, because on the first one, I don't talk about my customers individually. Sorry.
spk19: Okay. Well, okay, never mind. That's a waste of time. So closing VMware marked kind of a significant shift in your software strategy, from focusing on the largest thousand or so customers to, well, hundreds of thousands now. Why should we expect, once you get through, I don't want to say the low-hanging fruit, of selling into, like you mentioned, the first thousand customers with the VCF product, that your OpEx as a share of sales, especially in sales and marketing, wouldn't start to increase? Because that's the big leverage Broadcom has had over almost all your acquisitions in software, and that seems to be changing now.
spk16: Ah, we have a shift here. And it's interesting, you're right in all regards. We are spending more on go-to-market and support because we have a lot of customers in VMware. There are 300,000 customers. But we stratify. So we have the strategic guys we sell to and upsell VCF, private cloud. Very good. But for the long tail of what we call smaller commercial customers, we continue to support and sell improved versions of just vSphere, compute virtualization, to improve productivity on their servers. We don't attempt to say, go build out your whole VCF. They don't have the skills nor the scale to do it. But what it all adds up to is, you're right, my cost, my OpEx spend, be it support, services, you know, go-to-market, will increase. But the difference between that and, say, CA, an acquisition we did, is we're growing this business very fast. And we don't have to keep increasing that spend to grow this business. So we have operating leverage through revenue growth over the next three years.
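The operating-leverage point made above can be illustrated with a purely hypothetical example: if revenue compounds faster than the go-to-market spend supporting it, operating margin expands even though absolute spend rises. The numbers below are arbitrary, not Broadcom figures.

```python
# Illustrative only: how operating margin expands when revenue grows faster than
# the supporting spend. All numbers are hypothetical, not Broadcom figures.
revenue, opex = 100.0, 40.0               # arbitrary starting units
revenue_growth, opex_growth = 0.20, 0.05  # assumed annual growth rates

for year in range(4):
    margin = (revenue - opex) / revenue
    print(f"Year {year}: revenue={revenue:6.1f}, opex={opex:5.1f}, operating margin={margin:.0%}")
    revenue *= 1 + revenue_growth
    opex *= 1 + opex_growth
```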
spk19: Great. If I could squeeze one more in. You mentioned several times, actually, in the last quarter that there were two divisions you were going to divest, including Carbon Black, and that's changed. What has changed? Has the market outlook kind of softened and you said wait and see, or did you change your strategy in how you integrate? I'm just curious why last quarter you said you'd probably get rid of it in three months, and now you're keeping it.
spk16: Well, we find now that we could generate more value for you, the shareholders. I'm just kidding, but we would generate more value for our shareholders by taking Carbon Black, which is not that big, and integrating it into Symantec, so that by doing it, we would generate much better value for our shareholders than by taking a one-shot divestiture of this asset, which is not particularly large to begin with.
spk07: Great. Thank you.
spk04: Thank you. Ladies and gentlemen, due to the interest of time, I would now like to turn the call back over to GU for closing remarks.
spk35: Thank you, Operator. In closing, we would like to highlight our Broadcom Enabling AI in Infrastructure Investor Meeting on Wednesday, March 20th, 2024, at 9 a.m. Pacific, 12 p.m. Eastern Time. Charlie Kawwas, President of Broadcom's Semiconductor Solutions Group, and several general managers will present on Broadcom's merchant silicon portfolio. The live webcast and replay of the investor meeting will be available at investors.broadcom.com. Broadcom currently plans to report its earnings for the second quarter of fiscal 24 after the close of market on Wednesday, June 12, 2024. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific Time. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
spk04: Thank you. Ladies and gentlemen, this concludes today's conference call. Thank you for your participation. You may now disconnect.
spk35: Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hawk Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kowals, President Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2024. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hawk and Kirsten will be providing details of our first quarter fiscal year 2024 results, guidance for our fiscal year 2024, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hawk.
spk16: Thank you, G. And thank you everyone for joining us today. In our fiscal Q1 2024, consolidated net revenue was $12 billion, up 34% year on year, as revenue included 10 and a half weeks of contribution from VMware. Excluding VMware, consolidated revenue was up 11% year-on-year. Semiconductor solutions revenue increased 4% year-on-year to $7.4 billion. And infrastructure software revenue grew 153% year-on-year to $4.6 billion. With respect to infrastructure software, revenue contribution from consolidating VMware drove a sequential jump in revenue by 132%. We expect continued strong bookings at VMware will accelerate revenue growth through the rest of fiscal 2024. In semiconductors, AI revenue quadruple year-on-year to $2.3 billion during the quarter, more than offsetting the current cyclical slowdown in enterprise and telcos. Now let me give you more color on our two reporting segments. Starting with software, Q1, Software segment revenue of $4.6 billion was up 156% year-on-year and included $2.1 billion in revenue contribution from VMware. Consolidated bookings in software grew sequentially from less than $600 million to $1.8 billion in Q1 and is expected to grow to over $3 billion in Q2. Revenue from VMware will grow double-digit percentage, sequentially quarter over quarter through the rest of the fiscal year. This is simply a result of our strategy with VMware. We are focused on upselling customers, particularly those who are already running their compute workloads with vSphere virtualization tools to upgrade to VMware Cloud Foundation, otherwise branded as VCF. VCF is the complete software stack integrating compute, storage, and networking that virtualizes and modernizes our customers' data centers. This on-prem self-service cloud platform provides our customers a complement and an alternative to public cloud. And in fact, at VM Explore last August, VMware and NVIDIA entered into a partnership called VMware Private AI Foundation. which enables VCF to run GPUs. This allows customers to deploy their AI models on-prem and wherever they do business without having to compromise on privacy and control of their data. And we're seeing this capability drive strong demand for VCF. from enterprises seeking to run their growing AI workloads on-prem. Reflecting all these factors, for the full year, we reiterate our fiscal 2024 guidance for software revenue of $20 billion. Turning to semiconductors, before I give you an overall assessment of this segment, let me provide more color by end markets. Q1 networking revenue of $3.3 billion grew 46% year-on-year, representing 45% of our semiconductor revenue. This was largely driven by strong demand for our custom AI accelerators at our two hyperscale customers. This trend extends beyond AI accelerators. Our latest generation Tomahawk 5 800G switches, Thor 2 Ethernet NICs, read-timers, DSPs, and optical components are experiencing strong demand at hyperscale customers, as well as large-scale enterprises deploying AI data centers. For fiscal 2024, given continuous strength of AI networking demand, we now expect networking revenue to grow over 35% year-on-year compared to our prior guidance for 30% annual growth. Moving on to wireless, Q1 wireless revenue of $2 billion decreased 1% sequentially and declined 4% year on year, representing 27% of semiconductor revenue. As you all may know, the engagement with our North American customer continues to be very deep, strategic, and, of course, multi-year. 
And in fiscal 2024, helped by content increases, we reiterate our previous guidance for wireless revenue to be flat year on year. Next, our Q1 server storage connectivity revenue was $887 million, or 12% of semiconductor revenue, down 29% year on year. We are seeing weaker demand in the first half, but expect recovery in the second half. Accordingly, We are revising our outlook for fiscal 24 server storage revenue to decline in the mid 20 percentage range year on year compared to prior guidance for high teens percent decline year on year. On broadband, Q1 revenue declined 23% year on year to $940 million and represented 13% of semiconductor revenue. We are seeing a cyclical trough this year for broadband as telco spending continues to weaken and do not expect improvement until late in the year. And accordingly, we're revising our outlook for fiscal 24 broadband revenue to be down 30% year-on-year from our prior guidance of downed meetings year-on-year. And finally, Q1 industrial resales of $215 million declined 6% year on year. In fiscal 24, we continue to expect industrial resales to be down high single digits year upon year. And in summary, with stronger than expected growth from AI, more than offsetting the cyclical weakness in broadband and server storage, Q1 semiconductor revenue grew 4% year over year to $7.4 billion. Turning to fiscal 24, we reiterate our guidance for semiconductor solution revenue to be up mid to high single-digit percentage year on year. I know we told you in December our revenue from AI would be 25% of our full-year semiconductor revenue. We now expect revenue from AI to be much stronger, representing some 35% of semiconductor revenue at over $10 billion. And this more than offsets weaker than expected demands in broadband and service storage. So for fiscal 2024, in summary, we reiterate our guidance for consolidated revenue to be $50 billion, which represents 40% year-on-year growth. And we reiterate our full-year adjusted EBITDA guidance of 60%. Before I turn this call over to Kirsten, who will provide more details of our financial performance this quarter, Let me just highlight that Broadcom recently published its fourth annual ESG report available on a corporate citizenship site, which discusses the company's sustainability initiatives. As a global technology leader, we recognize Broadcom's responsibility to connect our customers, employees, and communities. Through our product and technology innovation and operational excellence, We remain committed to this mission. Kirsten?
spk01: Thank you, Hawk. Let me now provide additional detail on our Q1 financial performance, which was a 14-week quarter and included 10.5 weeks of contribution from VMware. Consolidated revenue was $12 billion for the quarter, up 34% from a year ago. Excluding the contribution from VMware, Q1 revenue increased 11% year-on-year. Gross margins were 75.4% of revenue in the quarter. Operating expenses were $2.2 billion and R&D was $1.4 billion, both up year-on-year primarily due to the contribution from VMware. Q1 operating income, including VMware, was $6.8 billion and was up 26% from a year ago. with operating margin at 57% of revenue. Excluding transition costs of $226 million in Q1, operating profit of $7.1 billion was up 30% from a year ago with operating margin of 59% of revenue. Adjusted EBITDA was $7.2 billion or 60% of revenue. This figure excludes $139 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductor. Revenue for our semiconductor solution segment was $7.4 billion and represented 62% of total revenue in the quarter. This was up 4% year-on-year. Gross margins for our semiconductor solution segment were approximately 67%, down 190 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses increased 8% year-on-year to $865 million, reflecting a 14-week quarter, resulting in semiconductor operating margins of 56%. Now moving on to our infrastructure software segment. Revenue for infrastructure software was $4.6 billion, up 153% year-on-year, primarily due to the contribution of VMware, and represented 38% of revenue. Gross margins for infrastructure software were 88% in the quarter and operating expenses were $1.3 billion in the quarter, resulting in infrastructure software operating margin of 59%. Excluding transition costs, operating margin was 64%. Moving on to cash flow. Free cash flow in the quarter was $4.7 billion and represented 39% of revenues off a higher revenue base. Excluding restructuring and integration spend of $658 million, free cash flows were 45% of revenue. We spent $122 million on capital expenditures. Day sales outstanding were 41 days in the first quarter compared to 31 days in the fourth quarter on higher accounts receivable due to the VMware acquisition. The accounts receivable we brought on from VMware has payment terms of 60 days, Unlike Broadcom's standard 30 days, we ended the first quarter with inventory of $1.9 billion, up 1% sequentially. We continue to remain disciplined on how we manage inventory across the ecosystem. We ended the first quarter with $11.9 billion of cash and $75.9 billion of gross debt. The weighted average coupon rate and years to maturity of our $48 billion in fixed rate debt is 3.5% and 8.4 years respectively. The weighted average coupon rate and years to maturity of our $30 billion in floating rate debt is 6.6% and 3 years respectively. During the quarter, we repaid $934 million of fixed rate debt that came due. This week, we repaid $2 billion of our floating rate debt, and we intend to maintain this quarterly repayment of debt throughout fiscal 2024. Turning to capital allocation. In the quarter, we paid stockholders $2.4 billion of cash dividends based on a quarterly common stock cash dividend of $5.25 per share. We executed on our plan to complete our remaining share buyback authorization. 
We repurchased $7.2 billion of our common stock and eliminated $1.1 billion of common stock for taxes due on vesting of employee equity, resulting in the repurchase and elimination of approximately 7.7 million AVGO shares. To help you with modeling share count, the weighted effect of the 54 million shares issued for the VMware acquisition resulted in a sequential increase in Q1 to $478 million, with the Q2 non-GAAP diluted share count expected to increase to approximately $492 million as the shares issued are fully weighted in the second quarter. Now on to guidance. Regardless of the updated dynamics of our semiconductor and software segments Hawk discussed, We choose to reiterate our guidance for fiscal year 2024 consolidated revenue of $50 billion and adjusted EBITDA of 60%. With regard to VMware, in February, we signed a definitive agreement to divest the end-user computing division with the transaction expected to close in 2024 subject to customary closing conditions, including regulatory approvals. The EUC division has been classified as discontinued operations in our Q1 financials. We have decided to retain the carbon black business and merge carbon black with Symantec to form the enterprise security group. The impact on revenue and profitability is not significant. That concludes my prepared remarks. Operator, please open up the call for questions.
spk04: Thank you. Ladies and gentlemen, to ask the question, please press star 11 on your telephone and then wait to hear your name announced. To withdraw your question, please press star 11 again. We ask that you limit yourself to one question only. Please stand by while we compile the Q&A roster. Our first question comes from the line of Harsh Kumar with Piper Sandler. Your line is open.
spk22: Yeah, hey, thank you, Hawk. Once again, tremendous results and tremendous activity that you guys are benefiting from in AI. But my question was on software. I think if I heard you correctly, Hawk, you mentioned that your software bookings will rise quite dramatically to $3 billion in 2Q. I was hoping that you could explain to us why it would rise almost 100% up, if my math is correct. in 2Q over 1Q? Is it something simple or is it something that you guys are doing from a strategy angle that's making this happen?
spk16: As I indicated, with the acquisition of VMware, we're very focused on selling, upselling and helping customers not just buy but deploy this private cloud. what we call virtual private cloud solution or platform on their on-prem data centers. It has been very successful so far, and I agree, it's early innings still at this point. We just have closed on the deal, we closed on the deal late November, and we are now March, early March. So we have the benefit of at least three months. But we have been very prepared to launch and focus on this push initiative on private cloud VCF. And the results has been very much what we expect it to be, which is very, very successful.
spk24: Thank you.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Harlan Serb with JP Morgan. Your line is open.
spk21: Yeah, good afternoon. Thanks for taking my question. Hawk, on the AI outlook being revised, you know, from greater than $7.5 billion, I think, last quarter to $10 billion plus this quarter, you know, as you mentioned, AI compute pulls your ASICs, but it also pulls your networking, optical, PCIe connectivity solutions as well. So can you just help us understand, like, of that solution, two and a half billion increase in outlook. Is it stronger AI ASIC demand, stronger networking, stronger optical, et cetera? But more importantly, are you also seeing a similar acceleration in your forward ASIC design wind pipeline as well?
spk16: That's a lot of questions, a lot of information you want me to disgorge. Let's take them one at a time, shall we? Yeah, they increase. As we have said before, as we have shown before, It's roughly two-thirds, one-third, or 70-30, which is AI accelerators, which are custom ASIC AI accelerators with a couple of hyperscalers compared to the other components, which I collectively consider as networking components. And it's about 70-30% makes, and that increase of Almost $3 billion that you mentioned is a similar combination.
spk21: And then are you seeing a similar acceleration on the forward design wind pipeline in customer engagements?
spk16: I have indicated I only have two. Really only have two, seriously. I don't count anybody. I do not go into production as a real customer at this point.
spk20: Okay. Thanks, Hawk.
spk16: Thanks.
spk04: Please stand by for our next question. Our next question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.
spk37: Thank you for taking my question. Hawk, again on the over $10 billion for AI: is this still a supply-constrained number, or do you think that this is a very project-driven number, so it's not really supply that gates it? So if you were to get, let's say, increased supply, could there be upside? And then part B of that is on the switching side. Have you already started to see benefits from the 51-terabit-per-second switches, or is that something that comes along later? What is the contribution of 51T to the switching upside that you mentioned for this year?
spk16: Yeah, no. Our Tomahawk 5 is going great guns. Now, unlike Tomahawk 3 and Tomahawk 4 in the past, it's not driven by traditional scale-out at hyperscalers in the cloud environment. This is largely coming from the scaling out of AI data centers, the building of larger and larger clusters to enable generative AI computing functionality. And you're going for bigger and bigger pipes, and the Tomahawk 5, at 51 terabits, is a perfect solution. We're seeing a lot of demand, and in many cases they are surpassing the rate of adoption that we previously expected. So it is a very good solution for connecting GPUs. And with respect to AI accelerators, where I think you are focusing, is that constrained by the supply chain? We get enough lead time from our hyperscale customers that we do not have a supply chain constraint. Thank you.
spk05: Thank you. Please stand by for our next question.
spk04: Our next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.
spk31: Hi, guys. Thanks for taking my question. I have a question on the core software business. You said VMware for the two months it was in there was $2.1 billion. That would put the rest of the software, CA, Symantec, and Brocade, at almost two and a half billion, which would be up like 25% sequentially and almost 40% year over year. I guess the question is, do I have my math right? And if so, how can that be? What's going on in the core business, and how should we be thinking about the growth of the core business and VMware as we go through the year? Is VMware still $12 billion?
spk16: Yeah, don't get too excited over that. Don't get too excited over that. I think it's certain products, contracts we obtained, but it's very strong contract renewals in the older contracts from old Broadcom. Mainframes especially were very strong, as were some of our other distributed software platforms, so that has also accelerated. But that's not the star of this show, Stacy. The star of this show is the accelerating bookings and backlog we're accumulating on VMware.
spk31: Okay, so VMware is still running at like an $11 or $12 billion run rate. So that sounds like it should accelerate, and the overall for VMware should be more than the $12 billion that you talked about. And the core business, the strength of the core, that was kind of a one-time thing; we should model that kind of falling off, because we've still got the overall software at $20 billion.
spk14: Correct.
spk31: Got it. Okay. Thank you. Thanks.
spk04: Please stand by for our next question. Our next question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
spk30: Yeah, thanks for taking the question. I wanted to ask, kind of continuing on the VMware discussion a little bit: Hawk, now that you've had the asset for a little while, I'm curious how the go-to-market strategy looks with VMware relative to the prior software acquisitions that you've done. What I'm really getting at is, how have you thought about the segmentation of the VMware customer base? I know there's been some discussion around your channel engagement, the legacy VMware channel, in the past. So I'm just curious how you've been managing that go-to-market.
spk16: Oh, we haven't had it for that long, to be honest. It's been about three months. There are kinks to be worked out, but things seem to be progressing very well, as much as we had hoped they would. We are focusing our resources not just on go-to-market but on engineering a much improved VCF stack, which we have, selling it out there, being able to support it, and in the process helping customers deploy it and really stand it up in their data centers. All that focus is on the largest, I would say, 2,000 strategic customers. These are guys who still want to have significant distributed data centers on-prem. Many of our customers are looking at a hybrid situation, and I'm not trying to use the word too loosely. A lot of these customers have some very legacy but critical mainframes; that's an old platform, not growing, except it's still vital. Then, for modernizing workloads today and in the future, they really have a choice, and they are taking both angles: running a lot of applications in distributed data centers on-prem, which can handle these modernized workloads, while at the same time, because of elastic demand, being able to also put some of these applications into public cloud. In today's environment, most of these customers do not have an on-prem data center that resembles what's in the cloud, which is very high availability, very low latency, and highly resilient. That is what we are offering with VMware Cloud Foundation, or VCF: it exactly replicates what they get in a public cloud, and they love it. Now, it's only been three months, but we are seeing it in the level of bookings we have generated over the last three months.
spk06: Thank you.
spk05: Thank you. Please stand by for our next question.
spk04: Our next question comes from the line of Chris Danely with Citi. Your line is open.
spk25: Hey, thanks, gang, for letting me ask a question. Hey, Hawk, just a question on the AI upside in terms of a customer perspective. How much of the upside is coming from new versus existing customers, and then how do you see the customer base going forward? I think it's going to broaden, and we know how you like to, you know, price. So if you do get a bunch of new customers for these products, could there be some better pricing and better margins as well? Hopefully they're not listening to the call.
spk16: Chris, thanks for this question. Love it. Let me try to give you a sense of how we think of the AI market, the new generative AI market, so to speak, using that term loosely and generically as well. We see it as two broad segments. One segment is hyperscalers, especially the very large hyperscalers with huge consumer subscriber bases. You can probably guess who those few players are. They have very large subscriber bases and an almost infinite amount of data, and their model is getting subscribers to keep using the platform they have, and through that to generate a better experience for the subscribers and a better advertising opportunity for their advertising clients. It's a great ROI, as we are seeing, and an ROI that comes very quickly, and the investment continues vigorously in that segment, which comprises very few players, each with a huge subscriber base and the scale to invest a lot. Here, ASICs, custom silicon, custom AI accelerators make plenty of sense, and that's where we focus our attention. They also scale up those AI accelerators into increasingly large clusters, because of the way the foundation models and large language models run and generate those parameters. They buy a lot of networking together with it, but in comparison to the value of the AI accelerators we sell, the networking piece, while growing, is a small percentage of the value of the accelerators. That's one big segment we have. The other segment, which is smaller, is what I broadly call the enterprise segment in AI. Here you're talking about companies, large and not so large, who have AI initiatives going on. All the big news and hype about AI being the savior of productivity has these companies pursuing their own initiatives, and here, short of going to public cloud, they try to run it on-prem. If they run it on-prem, they take standard silicon for AI accelerators as much as possible, and there, in terms of the AI accelerator, we don't have a market; that's the merchant silicon market. But on the networking side, as they tie it together in their data centers, they do buy. All those are our networking components, beginning with switches, and even routers, through people like the Arista 7800, but switches for sure, and the various other components I mentioned. And that's a different segment of the market for us. So it's an interesting mix, and we see both.
spk25: Thanks a lot.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Karl Ackerman with BNP Paribas. Your line is open.
spk32: Yes, thank you. Hawk, weakness in broadband, server, and storage customers is understandable given what your peers have said this earnings season, but perhaps you could speak to the backlog visibility you have with your customers in those markets that would indicate those markets could begin to order again and see sequential growth in the second half of your calendar year. Thank you.
spk16: You're correct. As I say, we are almost near the trough. This year, '24, the first half for sure will be the trough; the second half of '24, we don't know yet. But we have 52-week lead times, as you know, and we are very disciplined in sticking to them. And based on that, we are seeing bookings lately significantly up from bookings a year ago.
spk02: Thank you.
spk05: Please stand by for our next question.
spk04: Our next question comes from the line of Christopher Rolland with Susquehanna. Your line is open.
spk27: Thanks for the question. So, Hawk, this one's for you, on optical. Our checks suggest that you're vertically integrating there. You're now putting in your own drivers and TIAs, you're starting to get traction in PAM4 DSPs, and I think you kind of had an early lead in 100-gig data center lasers as well. A lot of this should be on the back of AI networking, which appears to be exploding here. So I was wondering if you could help us size the market and also talk about how fast this is growing for you. I think there may have been some clues in that one-third number for AI you gave us, but perhaps if you could double-click or square that for us, it'd be great. Thanks.
spk16: Okay. Before you get carried away, please: in the other categories outside AI accelerators, all those things like PAM4 DSPs, optical components, and retimers are small compared to the Tomahawk switches and Jericho routers used in AI networks. And we're also in an environment where, as you all know, traditional enterprise networking is in a bit of a slowdown, so all the demand we're seeing is driven very much by AI. That tends to push us into a line of thinking that could be very biased, because what it is showing is that the mix and the content of networking relative to compute is very different in an AI data center compared to a traditional CPU-based data center. So I don't want to lead you guys the wrong way. But you're right: in an AI data center, there's quite a bit of content in PAM4 DSPs, optical components, retimers, and PCI Express switches. They're still not that big in the overall scheme of things compared to what we sell in switches and routers, and compared to AI accelerators, they're even smaller. In that ratio, as I said, of AI revenue of $10 billion plus this year, 70% will be AI accelerators and 30% everything else. And within that everything else, that 30% or so, I would say more than half, more like 20 points, are the switches and routers, and the rest are the various retimers and DSP components. Because, unlike what you said, we're not vertically integrated in the sense that we do not build the entire optical transceiver. We don't do that. Those are typically manufactured by OEMs, contract manufacturers like the InnoLight and Eoptolink guys in China, where those guys are much more competitive. But we provide the key components we talked about. So when you look at it that way, you can understand the weighting of the various pieces.
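To make the percentages above concrete, here is a minimal sketch decomposing the $10 billion-plus AI revenue outlook using the approximate shares given in the answer: roughly 70% accelerators, about 20 points switches and routers, and the remaining roughly 10 points in the other components. The round numbers are illustrative, not precise guidance.

```python
# Hedged sketch: decompose the AI revenue outlook using the approximate shares above.
# Round, illustrative numbers only.
AI_REVENUE_B = 10.0   # $B, "AI revenue of $10 billion plus this year"
splits = {
    "custom AI accelerators": 0.70,                  # ~70% of AI revenue
    "switches and routers": 0.20,                    # more than half of the other ~30%
    "DSPs, optics, retimers, PCIe switches": 0.10,   # remainder of the other ~30%
}

for component, share in splits.items():
    print(f"{component}: ~${AI_REVENUE_B * share:.1f}B ({share:.0%})")

assert abs(sum(splits.values()) - 1.0) < 1e-9  # the shares should sum to 100%
```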
spk26: Super helpful. Thank you, Hawk.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk12: Hi, thank you for taking the question. Hawk, I think we all appreciate the capabilities you have in terms of custom compute. I asked this question on the call last quarter, but there is one competitor based in Asia who continues to be pretty vocal and adamant that on one of the future designs at your largest customer, they may have some share. We're picking up conflicting evidence, and we're getting a bunch of investor questions. I was hoping you could address that and your confidence level in maintaining, if not extending, your position there. Thank you.
spk16: You know, I can't stop somebody from trash-talking, okay? That's the best way to describe it. Let the numbers speak for themselves, please. Leave it that way. And I'd add to it: like most things we do in terms of large, critical technology products, we tend to always have, as we do here, a very deep, strategic, and multi-year relationship with our customer.
spk23: Understood. Thank you.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Vijay Rakesh with Mizuho. Your line is open.
spk10: Yeah, hi, Hawk. Just on the custom silicon side: obviously you guys dominate that space, but you mentioned only two major customers. I'm just wondering what's really holding back other hyperscalers from ramping up on the custom silicon side. And on the flip side, you're hearing some peers talk about custom silicon roadmaps as well, so if you could hit both. Thanks.
spk16: Well, number one, we don't dominate this market. I only have two customers; I can hardly be dominating it with two. Number two, it takes years and a lot of heavy lifting to create that custom silicon, because you need to do more than just hardware or silicon to really have a solution for generative AI, or even AI, in trying to create those AI capabilities in your data centers. It's more than just silicon. You have to invest a lot in creating software models that work on your custom silicon. You've got to match your business model in the first place, which leads you to create foundation models that then need to work and be optimized on the custom silicon you're developing. So it's an iterative process, and a constantly evolving process, even for the same customer we deal with. I mentioned that on the last call. So it takes years to reach a point where you can say, hey, I'm finally delivering production-worthy silicon. And when it's not, it's not because the silicon is bad; it's because it doesn't work well with the foundation models the customer has put in place and the software layer that works with it, the firmware, the software layer that translates into it. All of that has to work. You're almost creating an entire ecosystem on a limited basis, something we recognize very well in x86 CPUs, but in GPUs and those kinds of AI accelerators it's still at a very early stage. So it takes years, and for our two customers, we have engaged for years. With one of them, we have been engaged for eight years to get to this point. So it's something where you have to be very patient, persevere, and hope that everything lines up, because ultimate success, if you are just a silicon developer, is not just dependent on you, but as much, if not more, on your partner or customer. So you've just got to be patient, guys. I've got only the two so far.
spk10: And on the peers getting into that market?
spk16: Who is getting into the market? Please repeat.
spk10: Talking about some of your peers. I think NVIDIA has been talking about entering the custom silicon market.
spk16: Oh, the custom silicon market. Yeah, I have no comment to make on that. All I will say is I have no interest in going into a market where, you know, we have a philosophy in running our business at Broadcom, and maybe other people have a different philosophy. Let me tell you my simple philosophy, which I've articulated every now and then over time, and which is very clear to my management team and to all of Broadcom: you do what you're good at, and you keep doubling down on the things you know you are better at than anybody else. You just keep doubling down, because nobody else will catch up to you if you keep running ahead of the pack. But do not do something that you think you can do but somebody else is doing a much better job at. That's my philosophy.
spk10: Great. Thanks, Hawk.
spk04: Thank you. Please stand by for our next question. Our next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.
spk28: Thank you very much for squeezing me in, guys. Just kind of a two-part question on the custom silicon stuff. I guess, Hawk, if some of the merchant leaders in AI were interested in some custom networking from you, either in switching or routing, would you consider it? And the second question is for Kirsten. The business model around custom silicon for most folks is to take NRE payments up front and sell the end product at a lower gross margin but a higher operating margin. And you guys have massively ramped this custom business with no real impact to gross margin. So maybe you could just unpack the philosophy and the accounting behind the way you guys approach the custom silicon opportunities, just from a margin perspective. Thanks, guys.
spk16: I'll take that, because you're asking about the business model; you're not really asking for number crunching. So let me try to answer it this way. An AI accelerator, the way it's configured now, whether it's merchant or custom, needs not just loads of floating-point multipliers to do the matrix multiplication; that's the logic part, the compute part. It has to come with access to a lot of memory, literally almost cache memory, tied to it. The chip is not just a simple multiplier; it comes with memory attached. It's almost like a layered, three-dimensional chip, which it is. Memory is not something any of us in AI accelerators are super good at designing or building, so we buy the memory, very specialized high-bandwidth memory, you all know about that, from key memory suppliers. Every one of us does that. So you combine the two together, and that's what an AI accelerator is. So even if I get a very good, normal corporate silicon gross margin on my compute logic chip, on the multipliers, there's no way I can apply that kind of add-on margin to the high-bandwidth memory, which is a big part of the cost of the total chip. And so, by simple math, the whole consolidated AI accelerator carries a gross margin below that of a traditional silicon product we have out there. There's no getting away from that, because you are adding on memory, and even though we have to create the access, the I/Os that attach it, we do not and could not justify adding that kind of margin to memory; nobody could. So it brings a naturally lower margin. That's really the simple basis for it. But on the logic part, sure, with the kind of content, with the kind of IP that we developed at the cutting edge to make those high-density floating-point multipliers on 800 square millimeters of advanced silicon, we can command a margin similar to our corporate gross margin.
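To illustrate the blending effect described in this answer, here is a minimal sketch of how attaching memory sold at little or no markup to a high-margin logic die pulls the consolidated accelerator gross margin below the logic-only margin. The cost and margin figures are hypothetical, not Broadcom numbers.

```python
# Hedged sketch: blended gross margin of an AI accelerator built from a high-margin
# compute/logic die plus high-bandwidth memory priced near pass-through.
# All figures are hypothetical, for illustration only.
def gross_margin(price, cost):
    return (price - cost) / price

logic_cost, logic_margin = 100.0, 0.60   # hypothetical logic die cost and target margin
hbm_cost, hbm_margin = 80.0, 0.05        # hypothetical HBM cost, near pass-through pricing

logic_price = logic_cost / (1.0 - logic_margin)  # price implied by the target margin
hbm_price = hbm_cost / (1.0 - hbm_margin)

blended = gross_margin(logic_price + hbm_price, logic_cost + hbm_cost)
print(f"Logic-only gross margin:          {logic_margin:.0%}")
print(f"Blended accelerator gross margin: {blended:.0%}")  # falls below the logic-only margin
```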
spk05: Thank you.
spk04: Please stand by for our next question. Our next question comes from the line of Edward Snyder with Charter Equity Research. Your line is open.
spk19: Thanks a lot. First, a housekeeping one, if I could, Hawk. You mentioned the second custom silicon customer, but you also mentioned that it takes years of iterative work; anybody who has looked at the TPU history, I guess, understands that. And you said before that it takes time to ramp it up. But maybe you could give us a little bit of color. You cited phenomenal growth in your custom silicon products. Is much, or a material part, of that coming from your second customer? And taking into account the lower revenue number, is the growth rate, generally speaking, fairly comparable? And then I had a question about VMware.
spk16: You'd better go on to your VMware question, because on the first one, I don't talk about my customers individually. Sorry.
spk19: Okay, never mind; that was a waste of time. So closing VMware marked kind of a significant shift in your software strategy, from focusing on the largest thousand or so customers to hundreds of thousands now. Why shouldn't we expect, once you get through, I don't want to say the low-hanging fruit, of selling into, like you mentioned, the first thousand customers with the VCF product, that your OpEx as a share of sales, especially in sales and marketing, would start to increase? Because that's the big leverage Broadcom has had over almost all your acquisitions in software, and that seems to be changing now.
spk16: We have a shift here, and it's interesting. You're right in all regards. We are spending more on go-to-market and support because we have a lot of customers in VMware; there are 300,000 customers. But we stratify. We have the strategic guys, and we sell and upsell VCF, the private cloud. Very good. But for the long tail of what we call smaller commercial customers, we continue to support them and sell improved versions of just vSphere, the compute virtualization, to improve productivity on their servers. We don't attempt to tell them to go build out a whole VCF; they don't have the skills or the scale to do it. What it adds up to is that, you're right, my OpEx spend, be it support, services, or go-to-market, will increase. But the difference between this and, say, the CA acquisition we did is that we're growing this business very fast, and you don't have to keep increasing your spend to grow this business. So we have operating leverage through revenue growth over the next three years.
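A minimal numerical sketch of the operating-leverage point made in this answer: if revenue compounds faster than spend, OpEx as a share of sales declines even while absolute spend rises. The starting values and growth rates below are hypothetical, not guidance.

```python
# Hedged sketch: operating leverage when revenue grows faster than OpEx.
# Hypothetical starting values and growth rates, not company guidance.
revenue, opex = 100.0, 40.0                 # arbitrary starting units
REVENUE_GROWTH, OPEX_GROWTH = 0.25, 0.10    # hypothetical annual growth rates

for year in range(1, 4):                    # "over the next three years"
    revenue *= 1 + REVENUE_GROWTH
    opex *= 1 + OPEX_GROWTH
    print(f"Year {year}: OpEx as % of revenue = {opex / revenue:.1%}")  # declines each year
```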
spk19: Great. If I could squeeze one more in: you mentioned several times last quarter that there were two divisions you were going to divest, including Carbon Black, and that's changed. What has changed? Has the market outlook kind of softened and you said wait and see, or did you change your strategy in how you integrate? I'm just curious why. Last quarter you said you'd probably get rid of it in three months, and now you're keeping it.
spk16: Well, we find now that we could generate more value for you, the shareholders... I'm just kidding. But we would generate more value for our shareholders by taking Carbon Black, which is not that big, and integrating it into Symantec. By doing that, we would generate much better value for our shareholders than by taking a one-shot divestiture of this asset, which is not particularly large to begin with.
spk07: Great. Thank you.
spk04: Thank you. Ladies and gentlemen, due to the interest of time, I would now like to turn the call back over to GU for closing remarks.
spk35: Thank you, Operator. In closing, we would like to highlight our Broadcom Enabling AI in Infrastructure investor meeting on Wednesday, March 20th, 2024, at 9 a.m. Pacific, 12 p.m. Eastern time. Charlie Kawwas, President of the Broadcom Semiconductor Solutions Group, and several general managers will present on Broadcom's merchant silicon portfolio. The live webcast and replay of the investor meeting will be available at investors.broadcom.com. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2024 after the close of market on Wednesday, June 12, 2024. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific time. That concludes our earnings call today. Thank you all for joining. Operator, you may end the call.
spk04: Thank you. Ladies and gentlemen, this concludes today's conference call. Thank you for your participation. You may now disconnect.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for informational purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
