Broadcom Inc.

Q2 2024 Earnings Conference Call

6/12/2024

spk27: Welcome to Broadcom Inc.'s second quarter fiscal year 2024 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
spk33: Thank you, operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the second quarter of fiscal year 2024. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2024 results, guidance for our fiscal year 2024, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.
spk05: Thank you, Ji, and thank you, everyone, for joining today. In our fiscal Q2 2024, consolidated net revenue was $12.5 billion, up 43% year-on-year, as revenue included a full quarter of contribution from VMware. But if we exclude VMware, consolidated revenue was up 12% year-on-year, and this 12% organic growth in revenue was largely driven by AI revenue, which stepped up 280% year-on-year to $3.1 billion, more than offsetting continued cyclical weakness in semiconductor revenue from enterprises and telcos. Let me now give you more color on our two reporting segments, beginning with software. In Q2, infrastructure software segment revenue of $5.3 billion was up 175% year-on-year and included $2.7 billion in revenue contribution from VMware, up from $2.1 billion in the prior quarter. The integration of VMware is going very well. Since we acquired VMware, we have modernized the product SKUs from over 8,000 disparate SKUs to four core product offerings and simplified the go-to-market flow, eliminating a huge amount of channel conflicts. We are making good progress in transitioning all VMware products to a subscription licensing model. And since closing the deal, we have actually signed up close to 3,000 of our largest 10,000 customers to enable them to build a self-service virtual private cloud on-prem. Each of these customers typically signs up to a multi-year contract, which we normalize into an annual measure known as annualized booking value, or ABV. This metric, ABV, for VMware products accelerated from $1.2 billion in Q1 to $1.9 billion in Q2. For reference, for the consolidated Broadcom software portfolio, ABV grew from $1.9 billion in Q1 to $2.8 billion over the same period in Q2. Meanwhile, we have integrated SG&A across the entire platform and eliminated redundant functions. Year-to-date, we have incurred about $2 billion of restructuring and integration costs and drove our spending run rate at VMware to $1.6 billion this quarter from what used to be $2.3 billion per quarter pre-acquisition. We expect spending will continue to decline towards a $1.3 billion run rate exiting Q4, better than our previous $1.4 billion plan, and will likely stabilize at $1.2 billion post-integration. VMware revenue in Q1 was $2.1 billion. It grew to $2.7 billion in Q2 and will accelerate towards a $4 billion per quarter run rate. We therefore expect operating margins for VMware to begin to converge towards that of classic Broadcom software by fiscal 2025. Turning to semiconductors. Let me give you more color by end markets. Networking: Q2 revenue of $3.8 billion grew 44% year-on-year, representing 53% of semiconductor revenue. This was again driven by strong demand from hyperscalers for both AI networking and custom accelerators. It's interesting to note that as AI data center clusters continue to deploy, our revenue mix has been shifting towards an increasing proportion of networking. We doubled the number of switches we sold year-on-year, particularly Tomahawk 5 and Jericho 3, which we deploy successfully in close collaboration with partners like Arista Networks, Dell, Juniper, and Supermicro. Additionally, we also doubled our shipments of PCI Express switches and NICs in the AI backend fabric. We're leading the rapid transition of optical interconnects in AI data centers to 800-gigabit bandwidth, which is driving accelerated growth for our DSPs, optical lasers, and pin diodes. And we are not standing still. 
Together with these same partners, we are developing the next generation of switches, DSPs, and optics that will drive the ecosystem towards 1.6-terabit connectivity to scale up larger AI-accelerated clusters. Talking of AI accelerators, you may know our hyperscale customers are accelerating their investments to scale up the performance of these clusters. And to that end, we have just been awarded the next generation custom AI accelerators for these hyperscale customers of ours. Networking these AI accelerators is very challenging, but the technology does exist today. In Broadcom, we have the deepest and broadest understanding of what it takes for complex, large workloads to be scaled out in an AI fabric. Proof point: seven of the largest eight AI clusters in deployment today use Broadcom Ethernet solutions. Next year, we expect all mega-scale GPU deployments to be on Ethernet. We expect the strength in AI to continue, and because of that, we now expect networking revenue to grow 40% year-on-year compared to our prior guidance of over 35% growth. Moving to wireless. Q2 wireless revenue of $1.6 billion grew 2% year-on-year, was seasonally down 19% quarter-on-quarter, and represents 22% of semiconductor revenue. And in fiscal 24, helped by content increases, we reiterate our previous guidance for wireless revenue to be essentially flat year-on-year. This trend is wholly consistent with our continued engagement with our North American customer, which is deep, strategic, and multi-year, and represents all of our wireless business. Next, our Q2 server storage connectivity revenue was $824 million, or 11% of semiconductor revenue, down 27% year-on-year. We believe, though, Q2 was the bottom in server storage, and based on updated demand forecasts and bookings, we expect a modest recovery in the second half of the year. And accordingly, we forecast fiscal 24 server storage revenue to decline around the 20% range year-on-year. Moving on to broadband. Q2 revenue declined 39% year-on-year to $730 million and represented 10% of semiconductor revenue. Broadband remains weak on a continued pause in telco and service provider spending. We expect broadband to bottom in the second half of the year with a recovery in 2025. Accordingly, we are revising our outlook for fiscal 24 broadband revenue to be down high 30s year-on-year from our prior guidance for a decline of just over 30% year-on-year. Finally, Q2 industrial resale of $234 million declined 10% year-on-year. And for fiscal 24, we now expect industrial resales to be down double-digit percentage year-on-year compared to our prior guidance for a high single-digit decline. So to sum it all up, here's what we are seeing. For fiscal 24, we expect revenue from AI to be much stronger at over $11 billion. Non-AI semiconductor revenue has bottomed in Q2 and is likely to recover modestly for the second half of fiscal 24. On infrastructure software, we're making very strong progress in integrating VMware and accelerating its growth. Pulling all these three key factors together, we're raising our fiscal 24 revenue guidance to $51 billion. And with that, let me turn the call over to Kirsten.
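As a quick back-of-envelope check of the end-market figures quoted above (a sketch using the rounded dollar amounts as stated on the call, not company-provided detail), the five pieces sum to roughly the $7.2 billion semiconductor segment revenue Kirsten reports below, and the quoted revenue shares follow from that total:

```python
# Rough consistency check of the Q2 semiconductor end-market figures quoted above
# (dollar amounts in billions, rounded as stated on the call).
end_markets = {
    "networking": 3.8,
    "wireless": 1.6,
    "server storage": 0.824,
    "broadband": 0.730,
    "industrial": 0.234,
}

total = sum(end_markets.values())  # ~7.2B, matching the reported segment revenue
print(f"Implied semiconductor segment revenue: ~${total:.1f}B")

for name, revenue in end_markets.items():
    # Shares land near the quoted 53% / 22% / 11% / 10% figures.
    print(f"{name:>15}: {revenue / total:5.1%} of segment revenue")
```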
spk28: Thank you, Hock. Let me now provide additional detail on our Q2 financial performance, which included a full quarter of contribution from VMware. Consolidated revenue was $12.5 billion for the quarter, up 43% from a year ago. Excluding the contribution from VMware, Q2 revenue increased 12% year-on-year. Gross margins were 76.2% of revenue in the quarter. Operating expenses were $2.4 billion and R&D was $1.5 billion, both up year-on-year, primarily due to the consolidation of VMware. Q2 operating income was $7.1 billion and was up 32% from a year ago, with operating margin at 57% of revenue. Excluding transition costs, operating profit of $7.4 billion was up 36% from a year ago, with operating margin of 59% of revenue. Adjusted EBITDA was $7.4 billion, or 60% of revenue. This figure excludes $149 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductors. Revenue for our semiconductor solutions segment was $7.2 billion and represented 58% of total revenue in the quarter. This was up 6% year-on-year. Gross margins for our semiconductor solutions segment were approximately 67%, down 370 basis points year-on-year, driven primarily by a higher mix of custom AI accelerators. Operating expenses increased 4% year-on-year to $868 million on increased investment in R&D, resulting in semiconductor operating margins of 55%. Now moving on to infrastructure software. Revenue for infrastructure software was $5.3 billion, up 170% year-on-year, primarily due to the contribution of VMware, and represented 42% of revenue. Gross margins for infrastructure software were 88% in the quarter, and operating expenses were $1.5 billion in the quarter, resulting in infrastructure software operating margin of 60%. Excluding transition costs, operating margin was 64%. Now moving on to cash flow. Free cash flow in the quarter was $4.4 billion and represented 36% of revenue. Excluding cash used for restructuring and integration of $830 million, free cash flow of $5.3 billion was up 18% year-on-year and represented 42% of revenue. Free cash flow as a percentage of revenue has declined from 2023 due to higher cash interest expense from debt related to the VMware acquisition and higher cash taxes due to a higher mix of U.S. income and the delay in the re-enactment of Section 174. We spent $132 million on capital expenditures. Days sales outstanding were 40 days in the second quarter, consistent with 41 days in the first quarter. We ended the second quarter with inventory of $1.8 billion, down 4% sequentially. We continue to remain disciplined on how we manage inventory across our ecosystem. We ended the second quarter with $9.8 billion of cash and $74 billion of gross debt. The weighted average coupon rate and years to maturity of our $48 billion in fixed rate debt are 3.5% and 8.2 years, respectively. The weighted average coupon rate and years to maturity of our $28 billion in floating rate debt are 6.6% and 2.8 years, respectively. During the quarter, we repaid $2 billion of our floating rate debt, and we intend to maintain this quarterly repayment of debt throughout fiscal 2024. Turning to capital allocation. In the quarter, we paid stockholders $2.4 billion of cash dividends based on a quarterly common stock cash dividend of $5.25 per share. In Q2, non-GAAP diluted share count was 492 million shares, as the 54 million shares issued for the VMware acquisition were fully weighted in the second quarter. 
We paid $1.5 billion in withholding taxes due on vesting of employee equity, resulting in the elimination of 1.2 million AVGO shares. Today, we are announcing a 10-for-1 forward stock split of Broadcom's common stock to make ownership of Broadcom stock more accessible to investors and to employees. Stockholders of record after the close of market on July 11, 2024, will receive an additional nine shares of common stock after the close of market on July 12, with trading on a split-adjusted basis expected to commence at market open on July 15, 2024. In Q3, reflecting a post-split basis, we expect share count to be approximately 4.92 billion shares. Now on to guidance. We are raising our guidance for fiscal year 2024 consolidated revenue to $51 billion and adjusted EBITDA to 61% of revenue. For modeling purposes, please keep in mind that GAAP net income and cash flows in fiscal year 2024 are impacted by restructuring and integration-related cash costs due to the VMware acquisition. That concludes my prepared remarks. Operator, please open up the call for questions.
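As a quick illustration of the split arithmetic described above (a sketch, not guidance): a 10-for-1 forward split multiplies the share count by ten, which is how the roughly 492 million diluted shares in Q2 map to the approximately 4.92 billion post-split shares expected for Q3. The per-share dividend line is purely illustrative of how per-share amounts scale, not a declared dividend.

```python
# Sketch of the 10-for-1 forward split arithmetic described above.
pre_split_diluted_shares = 492e6   # approximate Q2 non-GAAP diluted share count
split_ratio = 10

post_split_shares = pre_split_diluted_shares * split_ratio
print(f"Post-split share count: ~{post_split_shares / 1e9:.2f}B shares")   # ~4.92B

# Per-share amounts scale down by the same factor; e.g. a $5.25 quarterly dividend
# per pre-split share corresponds to $0.525 per post-split share
# (illustrative only, not a statement of future declared dividends).
print(f"Equivalent post-split dividend: ${5.25 / split_ratio:.3f} per share")
```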
spk27: Thank you. As a reminder, to ask a question, you will need to press star one-one on your telephone. To withdraw your question, press star one-one again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And our first question will come from the line of Vivek Arya with Bank of America. Your line is open.
spk01: Thanks for taking my question. Hock, I would appreciate your perspective on the emerging competition between Broadcom and NVIDIA across both accelerators and Ethernet switching. So on the accelerator side, they are going to launch their Blackwell product at many of the same customers where you have a very large position in custom compute. So I'm curious how you think customers are going to make that allocation decision, and just broadly what the visibility is. And then I think part B of that is, as they launch their Spectrum-X Ethernet switch, do you think that poses increasing competition for Broadcom on the Ethernet switching side in AI for next year? Thank you.
spk05: Very interesting question, Vivek. On AI accelerators, I think we're operating on a different, to start with, scale, a much different model. The GPUs, which are the AI accelerator of choice in a merchant environment, are something that is extremely powerful as a model and something that NVIDIA operates in a very, very effective manner. We don't even think about competing against them in that space. That's what they're very good at, and we know where we stand with respect to that. What we do for very selected, or selective, hyperscalers is, if they have the scale and the skills to try to create silicon solutions, which are AI accelerators, to do particular, very complex AI workloads, we're happy to use our IP portfolio to create those custom ASIC AI accelerators. So I do not see them as truly competing against each other, and far be it from me to say I'm trying to position myself to be a competitor on basically GPUs in this market. We're not. We are not a competitor to them. We don't try to be, either. Now, on networking, maybe that's different. But again, people may be approaching it, and they may be approaching it, from a different angle. We are, as I indicated all along, very deep in Ethernet. We've been doing Ethernet for over 25 years, Ethernet networking, and we've gone through a lot of market transitions, and we have captured a lot of market transitions, from cloud-scale networking to routing and now AI. So it's a natural extension for us to go into AI. We also recognize that, being the AI compute engine of choice in the ecosystem, which is GPUs, they are trying to create a platform that is probably end-to-end, very integrated. We take the approach that we don't do those GPUs, but we enable the GPUs to work very well. So, if anything, we supplement and hopefully complement those GPUs with customers who are building bigger and bigger GPU clusters.
spk01: Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Ross Seymour with Deutsche Bank. Your line is open.
spk09: Hi, guys. Thanks for letting me ask a question. I want to stick on the AI theme. Hock, the strong growth that you had in the quarter, the 280% year-over-year, could you delineate a little bit between the compute offload side versus the connectivity side? And then as you think about the growth for the full year, how are those split in that realm as well? Are they kind of going hand in hand, or is one side growing significantly faster than the other, especially with, I guess you said, the next generation accelerators now going to be Broadcom as well?
spk05: Well, to answer your question on the mix, you're right. It's something we don't really predict very well, nor understand completely, except in hindsight, because it's tied to some extent to the cadence of deployment, of when they put in the AI accelerators versus when they put in the infrastructure that puts it together, the networking. And we don't really quite understand it 100%. All we know is it used to be 80% accelerators, 20% networking. It's now running closer to two-thirds accelerators, one-third networking, and we'll probably head towards 60-40 by the close of the year. Thank you.
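To put rough dollars on the mix described above, here is an illustrative sketch that applies Hock's round-number splits to the roughly $3.1 billion of Q2 AI revenue; these are approximations for illustration, not disclosed figures:

```python
# Illustrative dollar split of ~$3.1B of quarterly AI revenue at the mixes
# described above (round-number approximations, not disclosed figures).
ai_revenue_b = 3.1
mixes = {
    "earlier (80/20)": 0.80,
    "current (~2/3 accelerators)": 2 / 3,
    "exiting the year (60/40)": 0.60,
}

for label, accelerator_share in mixes.items():
    accelerators = ai_revenue_b * accelerator_share
    networking = ai_revenue_b - accelerators
    print(f"{label:>28}: accelerators ~${accelerators:.1f}B, networking ~${networking:.1f}B")
```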
spk27: Thank you. One moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein. Your line is open.
spk14: Hi, guys. Thanks for taking my question.
spk16: I wanted to ask about the $11 billion AI guide. You'd be at $11.6 billion even if you didn't grow AI from the current level in the second half, and it feels to me like you're not suggesting that. It feels to me like you think you'd be growing. So why wouldn't that AI number be a lot more than 11.6? It feels like it ought to be. Am I missing something?
spk05: Because I guided just over $11 billion. Stacy, it could be what you think it is. You know, quarterly shipments sometimes get very lumpy, and it depends on the rate of deployment, depends on a lot of things. So you may be right. You may estimate it better than I do. But the general trajectory is getting better.
spk16: Okay, so I guess, again, are you just suggesting that more than $11 billion is sort of the worst it could be, because that would just be flat at the current levels? But you're also suggesting that things are getting better into the back half. All right. Okay. So I guess we just take it that, if I'm reading it wrong, that's just a very conservative number.
spk05: That's the best forecast I have at this point, Stacy.
spk15: All right. Okay, Hock. Thank you. I appreciate it.
spk04: Thank you.
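For reference, the $11.6 billion figure in Stacy's question falls out of simple run-rate arithmetic: hold the $3.1 billion of Q2 AI revenue flat through the second half and add the first half. The roughly $2.3 billion Q1 AI figure used below is an assumption for illustration; it is not stated on this call.

```python
# Run-rate arithmetic behind the ~$11.6B figure discussed above.
q1_ai_revenue_b = 2.3   # assumed for illustration; not stated on this call
q2_ai_revenue_b = 3.1   # as reported

flat_second_half = 2 * q2_ai_revenue_b
full_year_if_flat = q1_ai_revenue_b + q2_ai_revenue_b + flat_second_half
print(f"Full-year AI revenue if H2 stays flat at Q2: ~${full_year_if_flat:.1f}B")  # ~11.6B
```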
spk27: Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JP Morgan. Your line is open.
spk06: Yeah, good afternoon. Thanks for taking my question. Hock, on cloud and AI networking silicon, you know, good to see that the networking mix is steadily increasing. You know, like clockwork, the Broadcom team has been driving a consistent two-year cadence, right, of new product introductions, the Trident, Tomahawk, Jericho family of switching and routing products, for the past seven generations. You layer on top of that that your GPU and TPU customers are accelerating their cadence of new product introductions and deployments of their products. So is this also driving a faster adoption curve for your latest Tomahawk and Jericho products? And then maybe just as importantly, like clockwork, it's been two years since your Tomahawk 5 product introduction, right? Which, if I look back historically, means you have silicon and are getting ready to introduce your next generation 3-nanometer Tomahawk 6 products, which would, I think, put you two to three years ahead of your competitors. Can you just give us an update there?
spk05: Harlan, you're pretty insightful there. Yes, we launched Tomahawk 5 in '23. So you're right. By late '25 is the time we should be coming out with Tomahawk 6, which is the 100-terabit switch.
spk06: Yes. And is this acceleration of cadence by your GPU and TPU partners, is that also what's kind of driving the strong growth in the networking products?
spk05: Well, you know what? Sometimes you have to let things take their time. But it's a two-year cadence, so we're right on. You know, '23 was when we shipped out Tomahawk 5. And adoption, you're correct, with AI has been tremendous, because it ties in with the need for very large bandwidth in the networking, in the fabric, for AI clusters, AI data centers. But regardless, we've always targeted Tomahawk 6 to be out two years after that, which puts it into late '25.
spk31: Okay. Thank you, Hock.
spk27: Thank you. One moment for our next question. And that will come from the line of Ben Reitzes with Melius. Your line is open.
spk22: Hey, thanks a lot, and congrats on the quarter and guide. Hock, I wanted to talk a little bit more about VMware. I just wanted to clarify if it is indeed going better than expectations, and how would you characterize the customer willingness to move to subscription, and also just a little more color on Cloud Foundation. You've cut the price there, and are you seeing that beat expectations? Thanks a lot.
spk05: Thanks, and thanks for your kind regards on the quarter. But as far as VMware is concerned, we're making good progress. The journey is not over by any means, but it's pretty much, very much to expectation. Moving to subscription, well, VMware, in VMware, we're very slow compared to, I mean, a lot of other guys, Microsoft, Salesforce, Oracle, who have already been pretty much in subscription. So VMware is late in that process, but we're trying to make up for it by offering it, and offering it in a very, very compelling manner, because subscription is the right thing to do, right? It's a situation where you put out your product offering and you update it, patch it, but also update it feature-wise, everything else, capabilities, on a continual basis, almost like getting your news on an ongoing basis, subscription online, versus getting it in a printed manner once a week. That's how I compare perpetual to subscription. So it's very interesting for a lot of people to want to get on. And so, to no surprise, they are getting on very well. The big selling point we have, as I indicated, is the fact that we're not just trying to keep customers kind of stuck on just server or compute virtualization. That's a great product, great technology, but it's been out for 20 years. What we are offering now, at a very compelling price point, compelling being a very attractive price point, is the whole stack, the software stack, that uses vSphere and its basic fundamental technology to virtualize networking, storage, operations and management, the entire data center, and create this self-service private cloud. And thanks for saying it, you're right, we have priced it down to the point where it's comparable with just compute virtualization. So yes, that's getting a lot of interest, a lot of attention from the customers who have signed up, who would like the ability to deploy their own private cloud on-prem as a nice complement, maybe even an alternative or hybrid to public clouds. That's the selling point, and we're getting a lot of interest from our customers in doing that.
spk22: Great.
spk21: And it's on track for $4 billion by the fourth quarter still, which is reiterated?
spk05: Well, I didn't give a specific time frame, did I? But it's on track, as we see this process growing, towards a $4 billion a quarter run rate.
spk24: Okay. Thanks a lot, Hock.
spk05: Thanks.
spk27: Thank you. One moment for our next question. And that will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk30: Hi. Thank you so much for taking the question. I guess kind of a follow-up to the previous question on your software business. Hock, you seem to have pretty good visibility into hitting that $4 billion run rate over the medium term, perhaps. You also talked about your operating margins in that business converging to classic Broadcom levels. I know the integration is not done and you're still kind of in debt pay-down mode, but how should we think about your growth strategy beyond VMware? Do you think you have enough drivers, both on the semiconductor side and the software side, to continue to drive growth, or is M&A still an option beyond VMware? Thank you.
spk05: Interesting question. You're right. As I indicated in my remarks, even without the contribution from VMware this past quarter, where we have AI helping us, but we have non-AI semiconductors sort of bottoming out, we're able to show 12% organic growth year-on-year. So I almost have to say, do we need to rush to buy another company? The answer is no, but all options are always open, because we're trying to create the best value for our shareholders who have entrusted us with the capital to do that. So I would not discount that alternative, because our strategy, our long-term model, has always been to grow through a combination of acquisition, but also, on the assets we acquired, to really improve, invest in, and operate them better to show organic growth as well. But again, organic growth often enough is determined very much by how fast your market would grow. So we do look towards acquisitions now and then.
spk19: Great. Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Blaine Curtis with Jefferies. Your line is open.
spk26: Hey, thanks for taking my question. I wanted to ask you, Hock, on the networking business, kind of ex-AI. Obviously, you know, I think there's an inventory correction the whole industry is seeing, which is kind of curious. I don't think you mentioned that it was at a bottom, so just for perspective, I think it's down about 60% year-over-year. Is that business finding a bottom? I know you said overall the whole semi business, non-AI, should see recovery. Are you expecting any there, and any perspective on just customer inventory levels in that segment?
spk05: We see it behaving similarly. I didn't particularly call it out, obviously, because more than anything else, I kind of link it very much to server storage, non-AI, that is. And we call server storage as at the bottom in Q2, and we call it to recover modestly in the second half of the year. We see the same thing in networking, which is a combination of enterprise networking as well as the hyperscalers who run their traditional workloads on those, though it's hard to figure out sometimes, but it is. So we see the same trajectory as we are calling out on server storage.
spk25: Okay, thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
spk00: Mr. Arcuri, your line is open.
spk17: Hi. Hi. Sorry. Thanks. Hock, is there a way to sort of map GPU demand back to your AI networking opportunity? I think I've heard you say in the past that if you spend $10 billion on GPU compute, you need to spend another $10 billion on other infrastructure, most of which is networking. So I'm just kind of wondering, when you see these big GPU numbers, is there sort of a rule of thumb that you use to map it back to what the opportunity will be for you? Thanks.
spk05: There is, but it's so complex I stopped creating such a model. I'm serious. But there is, because one would want to say, for every billion dollars you spend on GPUs, how much you probably would spend on networking, and whether you include the optical interconnects as part of it, though we're not totally in that market except for the components like DSPs, lasers, and pin diodes that go into those high-bandwidth optical interconnects. But if you just take optical interconnects in totality, switching, all the networking components that go into, that attach themselves to, clustering a bunch of GPUs, you probably would say that about 25% of the value of the GPUs goes to networking, the rest of the networking. Now, not entirely all of it is my available market. I don't do the optical interconnects, but I do the few components I talked about in them. But roughly, the simple way to look at it is that probably about 25%, maybe 30%, of all these infrastructure components is kind of attached to the GPU value point itself. But having said that, it's never that precise, and deployment is the same way. So you may see the deployment of GPUs, or the purchase of GPUs, much earlier, and the networking comes later, or sometimes it's the other way around, which is why you're seeing the mix move around within my AI revenue. But typically you run towards that range over time.
spk13: Perfect, Hock. Thank you so much.
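Here is a rough sketch of the 25% to 30% rule of thumb Hock describes above, using a hypothetical $10 billion GPU spend figure; how much of that infrastructure value is actually Broadcom-addressable is not quantified on the call:

```python
# Illustration of the ~25-30% networking/interconnect "attach" rule of thumb
# described above. The $10B GPU spend is hypothetical.
gpu_spend_b = 10.0

for attach_rate in (0.25, 0.30):
    infra_value = gpu_spend_b * attach_rate
    print(f"${gpu_spend_b:.0f}B of GPUs at {attach_rate:.0%} attach -> "
          f"~${infra_value:.1f}B of networking and optical interconnect content")
```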
spk27: Thank you. One moment for our next question. And that will come from the line of Thomas O'Malley with Barclays. Your line is open.
spk10: Hey, guys, thanks for taking my question, and nice results. But my question regards the custom ASIC AI business, Hock. You've had a long run here of a very successful business, particularly with one customer. If you look in the market today, you have a new entrant who's playing with different customers. And I know that you said historically that's not really a direct customer of yours. But could you talk about what differentiates you from the new entrant in the market as of late? And then there have been profitability questions around the sustainability of gross margins longer term. Can you talk about whether you see any increased competition, and if there are really areas that you would deem more or less defensible in your profile today, and if you would see kind of that additional entrant maybe attack any of those in the future?
spk05: Let me take the second part first, which is our custom AI accelerator business. It is a very profitable business, and let me examine it from a model point of view. I mean, you know, each of these AI accelerators, no different from a GPU, is how these large language models get run, get computed. No one single accelerator, as you know, can run these big large language models; you need multiple of them, no matter how powerful those accelerators are. But also, the way the models are run, there's a lot of memory access, a lot of memory requirements. So each of these accelerators comes with a large amount of cache memory, as you call it, what you guys probably now know as HBM, high-bandwidth memory, specialized for AI accelerators or GPUs. So we supply both in our custom business. And on the logic side of it, you know, where the compute function is done on the chips, hey, the margins there are no different than the margins in most any of our semiconductor silicon chip business. But when you attach to it a huge amount of memory, that memory comes from a third party. There are a few memory makers who make this specialized thing. We don't do margin stacking on that part. So almost by basic math, it will dilute the margin of these AI accelerators when you sell them with memory, which we do. It does push up revenue somewhat higher, but it dilutes the margin. But regardless, the spend, the R&D, the OPEX that goes to support this, as a percent of the revenue, which is higher revenue, is so much less. So on an operating margin level, this is easily as profitable, if not more profitable, given the scale that each of those custom AI accelerators can go up to. It's even better than our normal operating margin scale. So that's the return on investment that attracts us and keeps us going at this game. And this is more than a game. It's a very difficult business. And to answer your first question, there's only one Broadcom, period.
spk27: Thank you. One moment for our next question. And that will come from the line of Carl Ackerman with BNP. Your line is open.
spk07: Yes, thank you. Good afternoon. Hock, your networking switch portfolio with Tomahawk and Jericho chipsets allows hyperscalers to build AI clusters using either a switch-scheduled or endpoint-scheduled network. And that, of course, is unique among competitors. But as hyperscalers seek to deploy their own unique AI clusters, are you seeing a growing mix of white box networking switch deployments? I ask because, while your custom silicon business continues to broaden, it would be helpful to better understand the growing mix of your $11 billion AI networking portfolio combined this year. Thank you.
spk05: Let me have Charlie address this question.
spk32: He's the expert. Yeah. Thank you, Hock. So two quick things on this. One is, you're exactly right that the portfolio we have is quite unique in providing that flexibility. And by the way, this is exactly why Hock, in his statements earlier on, mentioned that seven out of the top eight hyperscalers use our portfolio. And they use it specifically because it provides that flexibility. So whether you have an architecture that's based on an endpoint and you want to actually build your platform that way, or you want that switching to happen in the fabric itself, that's why we have the full end-to-end portfolio. So that actually has been a proven differentiator for us. And then, on top of that, we've been working, as you know, to provide a complete network operating system that's open on top of that, using SONiC and SAI, which has been deployed in many of the hyperscalers. And so the combination of the portfolio plus the stack really differentiates the solution that we can offer to these hyperscalers. And if they decide to build their own NICs, their own accelerators that are custom, or use standard products, whether it's from Broadcom or others, that platform, that portfolio of infrastructure switching, gives you that full flexibility.
spk12: Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.
spk11: Yeah, good afternoon. Thank you for taking the question. I was hoping to ask a two-part software question. Excluding VMware, your Brocade, CA, and Symantec business is now running $500 million higher for the last two quarters. So curious, is that the new sustainable run rate, or were there one-time events in both January and April that we should be considering? And then the second question is, as you think about VMware Cloud Foundation adoption, are you seeing any sort of crowding out of spending, like other software guys are seeing as they repurpose their IT budgets toward AI, or is that business so much less discretionary that it's just not an impact to you? Thanks so much.
spk05: Well, on the second one, I don't know about any crowding out, to be honest. It's not. What we're offering, obviously, is something that they would like to be able to do themselves, which is, they are already spending on building their own on-prem data centers. And the typical approach people take, a lot of enterprises take, historically and continuing today, that most people take, is best of breed. What I mean is, they create a data center where compute is a separate category, the best compute there is, and they often enough use vSphere for compute virtualization due to improved productivity, but best of breed there. Then there's best of breed on networking and best of breed on storage, with a common management and operations layer, which very often is also VMware vRealize. And what we're trying to say, and what they see, is that this best-of-breed data center is very heterogeneous. It's not a highly resilient data center. I mean, you have a mixed bag, so when it goes down, where do you quickly find the root cause? Everybody's pointing fingers at the other. So we've got a problem: not very resilient, and not necessarily secure, between bare metal on one side and software on the other side. So it's natural thinking on the part of many CIOs we talk to to say, hey, I want to create one common platform as opposed to just best of breed of each. So that gets us into that. So if it's a greenfield, that's not bad; they start from scratch. If it's a brownfield, that means they have existing data centers they're trying to upgrade. Sometimes that's more challenging for us to get that adopted. So I'm not sure there's a crowding out here. There's some competition, obviously, on greenfield, where they can spend a budget on an entire platform versus best of breed, but on the existing data center, where you're trying to upgrade, that's a trickier thing to do. And it cuts the other way as well for us. But that's how I see it. So in that sense, the best answer is, I don't think we're seeing a level of crowding out that is significant enough for me to mention. In terms of the revenue mix, no, Brocade is having a great, great year so far and is still chugging along. But will that sustain? Hell no. You know that. Brocade goes through cycles like most enterprise purchases. So we're enjoying it while it lasts.
spk20: Thank you. Thanks.
spk27: Thank you. And we do have time for one final question. And that will come from the line of William Stein with Truist Securities. Your line is open.
spk08: Great. Thanks for squeezing me in. Hock, congrats on yet another great quarter and strong outlook in AI. I also want to ask about something you mentioned with VMware. In your prepared remarks, you highlighted that you've eliminated a tremendous amount of channel conflict. I'm hoping you can linger on this a little bit and clarify maybe what you did, and specifically also what you did in the heritage Broadcom software business, where I think historically you'd shied away from the channel, and there was an idea that perhaps you'd reintroduce those products to the channel through a more unified approach using VMware's channel partners or resources. So any sort of clarification here I think would be helpful. Thank you.
spk05: Yes, thank you. That's a great question. Yeah, VMware taught me a few things. They have 300,000 customers. That's pretty interesting, amazing. And we looked at it. I know under CA, we took a position of, let's pick a list of strategic guys and focus on them. I can't do that in VMware. I have to approach it differently, and I started to learn the value of the very strong bunch of partners they have, which is a network of distributors and something like 15,000 VARs, value-added resellers, supported by these distributors. So we have doubled down and invested in this reseller network in a big way for VMware. It's a great move. We're, I think, about six months into the game, but we're seeing a lot more velocity out of it. Now, these resellers, having said that, tend to be very focused on the very long tail of those 300,000 customers. The largest 10,000 customers of VMware are large enterprises, and they tend to be, you know, very large enterprises, the largest banks, the largest healthcare companies, and their view is, I want very bespoke service, support, and engineering solutions from us. So we've created a direct approach, supplemented with their VAR of choice where they need to. But on the long tail of 300,000 customers, they get a lot of services from the resellers, the value-added resellers. And so we have now strengthened that whole network of resellers so that they can go direct, managed and supported financially with distributors, and we don't try to challenge those guys, unless the customers... it all boils down, at the end of the day, to where the customer chose to be supported. So we kind of simplified this, together with the number of SKUs there are. In the past, unlike what we're trying to do here, everybody is a partner, I mean, you're talking a full range of partners, and the partner that makes the biggest deal gets the biggest discount, the lowest price, and they are out there basically kind of creating a lot of channel chaos and conflict in the marketplace. Here we don't. The customers are aware. They can take it direct from VMware through the direct sales force, or they can easily move to a reseller and get it that way. And as a third alternative which we offer, if they want to run their applications on VMware and they want to run it efficiently on the full stack, they have a choice now of going to a hosted environment managed by a network of managed service providers, which we set up globally, that will run the infrastructure, invest in and operate the infrastructure, and these enterprise customers just run their workloads in it and get it as a service, basically VMware as a service, as an alternative. And we are clear to make it very distinct and differentiated for our end-use customers. All three are available to them; it's how they choose to consume our technology.
spk18: Great, thank you.
spk27: Thank you. I would now like to hand the call over to Ji Yoo, Head of Investor Relations, for any closing remarks.
spk33: Thank you, Cherie. Broadcom currently plans to report its earnings for the third quarter of fiscal 24 after close of market on Thursday, September 5th, 2024. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific time. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
spk27: Thank you all for participating. This concludes today's program. You may now disconnect.
spk23: you
spk27: Welcome to Broadcom, Inc.' 's second quarter fiscal year 2024 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to GU, Head of Investor Relations of Broadcom, Inc.
spk33: Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hawk Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawaz, President Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter of fiscal year 2024. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live. and then audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hawk and Kirsten will be providing details of our second quarter fiscal year 2024 results, guidance for our fiscal year 2024, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hawk.
spk05: Thank you, Chi, and thank you everyone for joining today. In our fiscal Q2 2024 consult, consolidated net revenue was $12.5 billion, up 43% year-on-year, as revenue included a full quarter of contribution from VMware. But if we exclude VMware, consolidated revenue was up 12% year-on-year, and this 12% organic growth in revenue was largely driven by AI revenue, which stepped up 280% year-on-year to $3.1 billion, more than offsetting continued cyclical weakness in semiconductor revenue from enterprises and telcos. Let me now give you more color on our two reporting segments, beginning with software. In Q2, infrastructure software segment revenue of $5.3 billion was up 175% year-on-year and included $2.7 billion in revenue contribution from VMware, up from $2.1 billion in the prior quarter. The integration of VMware is going very well. Since we acquired VMware, We have modernized the product SKUs from over 8,000 disparate SKUs to four core product offerings and simplified the go-to-market flow, eliminating a huge amount of channel conflicts. We are making good progress in transitioning all VMware products to a subscription licensing model. And since closing the deal, we have actually signed up close to 3,000 of our largest 10,000 customers to enable them to build a self-service virtual private cloud on-prem. Each of these customers typically sign up to a multi-year contract, which we normalize into an annual measure known as annualized booking value, or ABV. This metric ABV for VMware products accelerated from $1.2 billion in Q1 to $1.9 billion in Q2. For reference, for the consolidated Broadcom software portfolio, ABV grew from $1.9 billion in Q1 to $2.8 billion over the same period in Q2. Meanwhile, we have integrated SG&A across the entire platform and eliminated redundant functions. Year-to-date, we have incurred about $2 billion of restructuring and integration costs and drove our spending run rate at VMware to $1.6 billion this quarter, from what used to be $2.3 billion per quarter pre-acquisition. We expect spending will continue to decline towards a $1.3 billion run rate exiting Q4, better than our previous $1.4 billion plan, and will likely stabilize at $1.2 billion post-integration. VMware revenue in Q1 was $2.1 billion. It grew to $2.7 billion in Q2 and will accelerate towards a $4 billion per quarter run rate. We therefore expect operating margins for VMware to begin to converge towards that of classic Broadcom software by fiscal 2025. Turning to semiconductors. Let me give you more color by end markets. Networking. Q2 revenue of $3.8 billion grew 44% year-on-year, representing 53% of semiconductor revenue. This was again driven by strong demand from hyperscalers for both AI networking and custom accelerators. It's interesting to note that as AI data center clusters continue to deploy, our revenue mix has been shifting towards an increasing proportion of networking. We doubled the number of switches we sold year on year, particularly Tomahawk 5 and Jericho 3, which we deploy successfully in close collaboration with partners like Arista Networks, Dell, Juniper, and Supermicro. Additionally, we also double our shipments of PCI Express switches and NICs in the AI backend fabric. We're leading the rapid transition of optical interconnects in AI data centers to 800 gigabit bandwidth, which is driving accelerated growth for our DSPs, optical lasers, and pin diodes. And we are not standing still. 
Together with these same partners, we are developing the next generation switches, DSP, and optics that will drive the ecosystem towards 1.6 terabit connectivity to scale up larger AI accelerated clusters. Talking of AI accelerators, you may know our hyperscale customers are accelerating their investments to scale up the performance of these clusters, and to that end, We have just been awarded the next generation custom AI accelerators for these hyperscale customers of ours. Networking these AI accelerators is very challenging, but the technology does exist today. In Broadcom, we have the deepest and broadest understanding of what it takes for complex large workloads to be scaled out in an AI fabric. Proven point, seven of the largest eight AI clusters in deployment today use Broadcom Ethernet solutions. Next year, we expect all mega-scale GPU deployments to be on Ethernet. We expect the strength in AI to continue, and because of that, we now expect net working revenue to grow 40% year-on-year compared to our prior guidance of over 35% growth. Moving to wireless, Q2 wireless revenue of $1.6 billion grew 2% year-on-year, was seasonally down 19% quarter-on-quarter, and represents 22% of semiconductor revenue. And in fiscal 24, helped by content increases, we reiterate our previous guidance for wireless revenue to be essentially flat year on year. This trend is wholly consistent with our continued engagement with our North American customer, which is deep, strategic, and multi-year, and represents all of our wireless business. Next, our Q2 server storage connectivity revenue was $824 million, or 11% of semiconductor revenue, down 27% year-on-year. We believe the old Q2 was the bottom in server storage, and based on updated demand forecasts and bookings, we expect a modest recovery in the second half of the year. And accordingly, we forecast fiscal 24 server storage revenue to decline around the 20% range year-on-year. Moving on to broadband. Q2 revenue declined 39% year-on-year to $730 million and represented 10% of semiconductor revenue. Broadband remains weak on a continued pause in telco and service provider spending. We expect Broadcom to bottom in the second half of the year with a recovery in 2025. Accordingly, we are revising our outlook for fiscal 24 broadband revenue to be down high 30s year-on-year from our prior guidance for decline of just over 30% year-on-year. Finally, Q2 industrial resale of $234 million declined 10% year-on-year. And for fiscal 24, we now expect industrial resale to be down double-digit percentage year-on-year compared to our prior guidance for high single-digit decline. So to sum it all up, here's what we are seeing. For fiscal 24, we expect revenue from AI to be much stronger at over $11 billion. Non-AI semiconductor revenue has bottomed in Q2. and is likely to recover modestly for the second half of fiscal 24. On infrastructure software, we're making very strong progress in integrating VMware and accelerating its growth. Pulling all these three key factors together, we're raising our fiscal 24 revenue guidance to $51 billion. And with that, let me turn the call over to Kirsten.
spk28: Thank you, Hawk. Let me now provide additional detail on our Q2 financial performance, which included a full quarter of contribution from VMware. Consolidated revenue was $12.5 billion for the quarter, up 43% from a year ago. Excluding the contribution from VMware, Q2 revenue increased 12% year on year. Gross margins were 76.2% of revenue in the quarter. Operating expenses were 2.4 billion and R&D was 1.5 billion, both up year on year, primarily due to the consolidation of VMware. Q2 operating income was 7.1 billion and was up 32% from a year ago, with operating margin at 57% of revenue. Excluding transition costs, Operating profit of $7.4 billion was up 36% from a year ago with operating margin of 59% of revenue. Adjusted EBITDA was $7.4 billion or 60% of revenue. This figure excludes $149 million of depreciation. Now a review of the P&L for our two segments, starting with semiconductors. Revenue for our semiconductor solution segment was $7.2 billion and represented 58% of total revenue in the quarter. This was up 6% year-on-year. Gross margins for our semiconductor solution segment were approximately 67%, down 370 basis points year-on-year, driven primarily by a higher mix of custom AI accelerators. Operating expenses increased 4% year-on-year to $868 million on increased investment in R&D, resulting in semiconductor operating margins of 55%. Now moving on to infrastructure software. Revenue for infrastructure software was $5.3 billion, up 170% year-on-year, primarily due to the contribution of VMware, and represented 42% of revenue. Gross margin for infrastructure software were 88% in the quarter and operating expenses were 1.5 billion in the quarter, resulting in infrastructure software operating margin of 60%. Excluding transition costs, operating margin was 64%. Now moving on to cash flow. Free cash flow in the quarter was 4.4 billion and represented 36% of revenues. Excluding cash used for restructuring and integration of $830 million, free cash flows of $5.3 billion were up 18% year on year and represented 42% of revenue. Free cash flow as a percentage of revenue has declined from 2023 due to higher cash interest expense from debt related to the VMware acquisition and higher cash taxes due to a higher mix of U.S. income and the delay in the reenactment of Section 174. We spent $132 million on capital expenditures. Day sales outstanding were 40 days in the second quarter, consistent with 41 days in the first quarter. We ended the second quarter with inventory of $1.8 billion, down 4% sequentially. We continue to remain disciplined on how we manage inventory across our ecosystem. We ended the second quarter with $9.8 billion of cash and $74 billion of gross debt. The weighted average coupon rate and years to maturity of our $48 billion in fixed rate debt is 3.5% and 8.2 years, respectively. The weighted average coupon rate and years to maturity of our $28 billion in floating rate debt is 6.6% and 2.8 years, respectively. During the quarter, we repaid $2 billion of our floating rate debt, and we intend to maintain this quarterly repayment of debt throughout fiscal 2024. Turning to capital allocation, in the quarter we paid stockholders $2.4 billion of cash dividends based on a quarterly common stock cash dividend of $5.25 per share. In Q2, non-GAAP diluted share count was $492 million, as the 54 million shares issued for the VMware acquisition were fully weighted in the second quarter. 
We paid $1.5 billion in withholding taxes due on vesting of employee equity, resulting in the elimination of 1.2 million ABGO shares. Today, we are announcing a 10 for 1 forward stock split of Broadcom's common stock to make ownership of Broadcom stock more accessible to investors and to employees. Our stockholders of record after the close of market on July 11th, 2024 will receive an additional nine shares of common stock after the close of market on July 12th with trading on a split adjusted basis expected to commence at market open on July 15th, 2024. In Q3, reflecting a post split basis, we expect share count to be approximately 4.92 billion shares. Now on to guidance. We are raising our guidance for fiscal year 2024 consolidated revenue to $51 billion and adjusted EBITDA to 61%. For modeling purposes, please keep in mind that GAAP net income and cash flows in fiscal year 2024 are impacted by restructuring and integration-related cash costs due to the VMware acquisition. That concludes my prepared remarks. Operator, please open up the call for questions.
spk27: Thank you. As a reminder, to ask a question, you will need to press star 1 1 on your telephone. To withdraw your question, press star 1 1 again. Due to time restraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And our first question will come from the line of Vivek Arya with Bank of America. Your line is open.
spk01: Thanks for taking my question. Hawk, I would appreciate your perspective on the emerging competition between Broadcom and NVIDIA across both accelerators and Ethernet switching. So on the accelerator side, they are going to launch their Black Belt product at many of the same customers that you have a very large position in the custom compute. So I'm curious. how you think customers are going to do that allocation decision, just broadly what the visibility is. And then I think part B of that is, as they launch their SpectrumX Ethernet switch, do you think that poses increasing competition for Broadcom and the Ethernet switching side in AI for next year? Thank you.
spk05: Very interesting question, Vivek. On AI accelerators, I think we're operating on a different to start with scale, much as different model. The GPUs, which are the AI accelerator of choice in a merchant environment, is something that is extremely powerful as a model. It's something that NVIDIA operates in a very, very effective manner. We don't even think about competing against them in that space. Not in the least. That's where they're very good at and we know where we stand with respect to that. What we do for very selected or selective hyperscalers is if they have the skill and the skills to try to create silicon solutions, which are AI accelerators, to do particular AI, very complex AI workloads. We're happy to use our IP portfolio to create those custom ASIC AI accelerator. So I do not see them as truly competing against each other and far for me to say I'm trying to position myself to be a competitor on on basically GPUs in this market. We're not. We are not competitors to them. We don't try to be either. Now, on networking, maybe that's different. But again, people may be approaching, and they may be approaching it from a different angle. We are, as I indicated all along, very deep in Ethernet. We've been doing Ethernet for over 25 years, Ethernet networking. and we've gone through a lot of market transitions and we have captured a lot of market transitions from cloud scale networking to routing and now AI so it's a natural extension for us to go into AI we also recognize that being the AI compute engine of choice in the ecosystem which is GPUs that they are trying to create a platform that is probably end-to-end very integrated. We take the approach that we don't do those GPUs, but we enable the GPUs to work very well. So if anything else, we supplement and hopefully complement those GPUs with customers who are building bigger and bigger GPU clusters.
spk27: Thank you. Thank you. One moment for our next question. And that will come from the line of Ross Seymour with Deutsche Bank. Your line is open.
spk09: Hi, guys. Thanks for taking the question. I want to stick on the AI theme. Hawk, on the strong growth that you had in the quarter, the 280% year over year, could you delineate a little bit between the compute offload side versus the connectivity side? And then as you think about the growth for the full year, how are those split in that realm as well? Are they kind of going hand in hand, or is one side growing significantly faster than the other, especially with, I guess you said, the next-generation accelerators now going to be Broadcom as well?
spk05: Well, to answer your question on the mix, you're right. It's something we don't really predict very well, nor understand completely, except in hindsight, because it's tied to some extent to the cadence of deployment: when they put in the AI accelerators versus when they put in the infrastructure that ties it together, the networking. And we don't really quite understand it 100%. All we know is it used to be 80% accelerators, 20% networking. It's now running closer to two-thirds accelerators, one-third networking, and we'll probably head towards 60-40 by the close of the year. Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein. Your line is open.
spk14: Hi, guys. Thanks for taking my question.
spk16: I wanted to ask about the $11 billion AI guide. You'd be at 11.6 even if you didn't grow AI from the current level in the second half. And it feels to me like you're not suggesting that. It feels to me like you think you'd be growing. So why wouldn't that AI number be a lot more than 11.6? It feels like it ought to be. Or am I missing something?
spk05: Because I guided just over $11 billion. It could be what you think it is. You know, quarterly shipments sometimes get very lumpy, and it depends on the rate of deployment, depends on a lot of things. So you may be right, you may estimate it better than I do, but the general trend, the trajectory, is that it's getting better.
spk16: Okay, so I guess, again, are you just suggesting that more than $11 billion is sort of the worst it could be, because that would just be flat at the current levels? But you're also suggesting that things are getting better into the back half. Correct. Okay. So I guess we just take that. If I'm reading it wrong, that's just a very conservative number.
spk05: That's the best forecast I have at this point, Stacey.
spk15: All right. Okay, Hock. Thank you. I appreciate it.
spk04: Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JP Morgan. Your line is open.
spk06: Yeah, good afternoon. Thanks for taking my question. On cloud and AI networking silicon, you know, good to see that the networking mix is steadily increasing. Like clockwork, the Broadcom team has been driving a consistent two-year cadence of new product introductions across the Trident, Tomahawk, and Jericho families of switching and routing products for the past seven generations. Layer on top of that that your GPU and TPU customers are accelerating their cadence of new product introductions and deployments. So is this also driving a faster adoption curve for your latest Tomahawk and Jericho products? And then maybe just as importantly, like clockwork, it's been two years since your Tomahawk 5 product introduction, which, if I look back historically, means you have silicon and are getting ready to introduce your next-generation three-nanometer Tomahawk 6 products, which would, I think, put you two to three years ahead of your competitors. Can you just give us an update there?
spk05: Harlan, you're pretty insightful there. Yes, we launched Tomahawk 5 in '23. So you're right. Late '25 is when we should be coming out with Tomahawk 6, which is the 100-terabit switch.
spk06: Yes. And is this acceleration of cadence by your GPU and TPU partners, is that also what's kind of driving the strong growth in the networking products?
spk05: Well, you know what? Sometimes you have to let things take their time. But it's a two-year cadence, so we're right on. You know, '23 was when we shipped Tomahawk 5. And adoption, you're correct, with AI has been tremendous, because it ties in with the need for very large bandwidth in the networking, in the fabric, for AI clusters and AI data centers. But regardless, we've always targeted Tomahawk 6 to be out two years after that, which would put it into late '25.
spk31: Okay. Thank you, Hock.
spk27: Thank you. One moment for our next question. And that will come from the line of Ben Reitzes with Melius. Your line is open.
spk22: Hey, thanks a lot, and congrats on the quarter and guide. Hock, I wanted to talk a little bit more about VMware. Just wanted to clarify if it is indeed going better than expectations, and how you would characterize customer willingness to move to subscription, and also just a little more color on Cloud Foundation. You've cut the price there, and are you seeing that beat expectations? Thanks a lot.
spk05: Thanks, and thanks for your kind regards on the quarter. As far as VMware is concerned, we're making good progress. The journey is not over by any means, but it's very much to expectation. Moving to subscription, well, VMware was very slow compared to a lot of other guys, Microsoft, Salesforce, Oracle, who have already been pretty much in subscription. So VMware is late in that process, but we're trying to make up for it by offering it, and offering it in a very compelling manner, because subscription is the right thing to do. It's a situation where you put out your product offering and you update it, patch it, update it feature-wise, capabilities, everything else, on a continual basis, almost like getting your news online on an ongoing basis versus getting it in print once a week. That's how I compare perpetual to subscription. So it's very attractive for a lot of people to get on, and to no surprise, they are getting on very well. The big selling point we have, as I indicated, is the fact that we're not just trying to keep customers stuck on server or compute virtualization. That's a great product, great technology, but it's been out for 20 years. What we are offering now, at a very compelling, very attractive price point, is the whole software stack: using vSphere and its fundamental technology to virtualize networking, storage, and operations and management across the entire data center, and create this self-service private cloud. And thanks for saying it, you're right, we have priced it down to the point where it's comparable with just compute virtualization. So yes, that's getting a lot of interest, a lot of attention from the customers who have signed up and would like the ability to deploy their own private cloud on-prem as a nice complement, maybe even an alternative or hybrid, to public clouds. That's the selling point, and we're getting a lot of interest from our customers in doing that.
spk22: Great.
spk21: And it's on track for $4 billion by the fourth quarter still, which you reiterated?
spk05: Well, I didn't give a specific time frame, did I? But it's on track, as we see this process growing, towards a $4 billion per quarter run rate.
spk24: Okay. Thanks a lot, Hock.
spk05: Thanks.
spk27: Thank you. One moment for our next question. And that will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk30: Hi. Thank you so much for taking the question. I guess kind of a follow-up to the previous question on your software business. Hock, you seem to have pretty good visibility into hitting that $4 billion run rate over the medium term, perhaps. You also talked about operating margins in that business converging to classic Broadcom levels. I know the integration is not done and you're still in debt pay-down mode, but how should we think about your growth strategy beyond VMware? Do you think you have enough drivers, both on the semiconductor side and the software side, to continue to drive growth, or is M&A still an option beyond VMware? Thank you.
spk05: Interesting question. You're right. As I indicated in my remarks, even without the contribution from VMware this past quarter, where we have AI helping us but non-AI semiconductors sort of bottoming out, we're able to show 12% organic growth year on year. So I almost have to ask: do we need to rush to buy another company? The answer is no, but all options are always open, because we're trying to create the best value for our shareholders who have entrusted us with the capital to do that. So I would not discount that alternative, because our strategy, our long-term model, has always been to grow through a combination of acquisitions, but also, on the assets we acquire, to really improve them, invest in them, and operate them better, to show organic growth as well. But again, organic growth often enough is determined very much by how fast your market grows. So we do look towards acquisitions now and then.
spk19: Great. Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Blaine Curtis with Jefferies. Your line is open.
spk26: Hey, thanks for taking my question. I wanted to ask you, Hock, about the networking business ex-AI. Obviously, I think there's an inventory correction the whole industry is seeing. But I'm just kind of curious, I don't think you mentioned that it was at a bottom. Just for perspective, I think it's down about 60% year-over-year. Is that business finding a bottom? I know you said the overall non-AI semi business should see recovery. Are you expecting any there, and any perspective on customer inventory levels in that segment?
spk05: We see it behaving the same way. I didn't particularly call it out, obviously, because more than anything else I link it very much to server storage, the non-AI part, that is. And we called server storage as at the bottom in Q2, and we call for it to recover modestly in the second half of the year. We see the same thing in networking, which is a combination of enterprise networking as well as the hyperscalers who run their traditional workloads on it. It's hard to figure out sometimes, but it is. So we see the same trajectory as we are calling out on server storage.
spk25: Okay, thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
spk00: Mr. Arcuri, your line is open.
spk17: Hi. Hi. Sorry. Thanks. Hawk, is there a way to sort of map GPU demand back to your AI networking opportunity? I think I've heard you say in the past that if you spend $10 billion on GPU compute, you need to spend another $10 billion on other infrastructure, most of which is networking. So I'm just kind of wondering if when you see these big GPU numbers, is there sort of a rule of thumb that you use to map it back to what the opportunity will be for you? Thanks.
spk05: There is, but it's so complex that I stopped creating such a model. I'm serious. But there is, because one would say that for every billion dollars you spend on GPUs, you probably would spend a certain amount on networking. And if you include the optical interconnects as part of it, though we're not totally in that market except for components like DSPs, lasers, and PIN diodes that go into those high-bandwidth optical interconnects, but if you take optical interconnects in totality, switching, all the networking components that attach to clustering a bunch of GPUs, you probably would say that about 25% of the value of the GPUs goes to networking. Now, not all of that is my available market. I don't do the optical interconnects, but I do the few components I talked about within them. But roughly, the simple way to look at it is that probably about 25%, maybe 30%, of all these infrastructure components is attached to the GPU value itself. Having said that, it's never that precise, and deployment works the same way. You may see the deployment or purchase of GPUs much earlier and the networking come later, or sometimes it's the other way around, which is why you're seeing the mix move around within my AI revenue, but typically you run towards that range over time.
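As a rough illustration of the rule of thumb described above, here is a minimal sketch assuming the 25% to 30% attach rate Hock cites; the $10 billion GPU spend is the hypothetical figure from the question, not a forecast, and only part of the implied spend is Broadcom-addressable:

```python
# A minimal sketch of the ~25%-30% networking attach rate described above.
# The $10B GPU spend is a hypothetical figure from the question, not a forecast,
# and only part of the implied spend is Broadcom-addressable.

def networking_attach(gpu_spend_b: float, low: float = 0.25, high: float = 0.30):
    """Return the implied networking/interconnect spend range, in billions."""
    return gpu_spend_b * low, gpu_spend_b * high

low_b, high_b = networking_attach(10.0)  # e.g., $10B of GPU purchases
print(f"Implied networking/interconnect spend: ${low_b:.1f}B to ${high_b:.1f}B")
```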
spk13: Perfect, Hock. Thank you so much.
spk27: Thank you. One moment for our next question. And that will come from the line of Thomas O'Malley with Barclays. Your line is open.
spk10: Hey guys, thanks for taking my question, and nice results. My question is on the custom ASIC AI business, Hock. You've had a long run here of a very successful business, particularly with one customer. If you look at the market today, you have a new entrant who's playing with different customers. And I know you've said historically those aren't really direct customers of yours. But could you talk about what differentiates you from the new entrant in the market as of late? And then there have been profitability questions around the sustainability of gross margins longer term. Can you talk about whether you see any increased competition, whether there are areas you would deem more or less defensible in your profile today, and whether you see that additional entrant maybe attacking any of those in the future?
spk05: Let me take the second part first, which is our custom AI accelerator business. It is a very profitable business, and let me examine it from a model point of view. Each of these AI accelerators is no different from a GPU in the way these large language models get run on it. No single accelerator, as you know, can run these big large language models; you need multiple of them, no matter how powerful those accelerators are. And the way the models are run, there are a lot of memory access requirements, so each of these accelerators comes with a large amount of what you guys probably now know as HBM, high-bandwidth memory, specialized for AI accelerators or GPUs. We supply both in our custom business. On the logic side, where the compute function is done on the chips, the margins are no different from the margins in most of our semiconductor silicon chip business. But when you attach to it a huge amount of memory, and the memory comes from a third party, there are a few memory makers who make this specialized memory, we don't do margin stacking on that part. So almost by basic math, it dilutes the margin of these AI accelerators when you sell them with memory, which we do. It pushes revenue somewhat higher, but it dilutes the gross margin. But regardless, the spend, the R&D, the opex that goes to support this, as a percent of that higher revenue, is much less. So on an operating margin level, this is easily as profitable, if not more profitable, given the scale that each of these custom AI accelerators can get to. It's even better than our normal operating margin scale. So that's the return on investment that attracts us and keeps us going at this game. And this is more than a game; it's a very difficult business. And to answer your first question, there's only one Broadcom, period.
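To make the margin math above concrete, here is a minimal sketch with purely hypothetical per-unit numbers (the logic content, pass-through HBM cost, and opex are all invented for illustration). It shows gross margin diluting while operating profit dollars and opex leverage hold up when third-party memory is passed through without margin stacking:

```python
# Hypothetical illustration of selling a custom accelerator with and without
# pass-through HBM. All per-unit dollar figures are invented for illustration.

logic_revenue = 100.0   # custom logic (compute) content, $ per unit
logic_cost = 40.0       # cost of the logic content (60% gross margin, assumed)
hbm_cost = 60.0         # third-party HBM passed through at cost (no margin stacking)
opex = 25.0             # R&D/SG&A supporting the product, $ per unit

# Logic only: no memory attached.
rev_logic = logic_revenue
gm_logic = (rev_logic - logic_cost) / rev_logic
op_profit_logic = rev_logic - logic_cost - opex

# Logic plus HBM: revenue is higher, but the memory carries no markup.
rev_hbm = logic_revenue + hbm_cost
gm_hbm = (rev_hbm - logic_cost - hbm_cost) / rev_hbm
op_profit_hbm = rev_hbm - logic_cost - hbm_cost - opex

print(f"Logic only: gross margin {gm_logic:.0%}, opex/revenue {opex / rev_logic:.0%}, "
      f"operating profit ${op_profit_logic:.0f}")
print(f"With HBM:   gross margin {gm_hbm:.0%}, opex/revenue {opex / rev_hbm:.0%}, "
      f"operating profit ${op_profit_hbm:.0f}")
```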
spk27: Thank you. One moment for our next question. And that will come from the line of Carl Ackerman with BNP. Your line is open.
spk07: Yes, thank you. Good afternoon. Hawk, your networking switch portfolio with Tomahawk and Jericho chipsets allows hyperscalers to build AI clusters using either a switch-scheduled or endpoint-scheduled network, and that, of course, is unique among competitors. But as hyperscalers seek to deploy their own unique AI clusters, are you seeing a growing mix of white-box networking switch deployments? I ask because, while your custom silicon business continues to broaden, it would be helpful to better understand the growing mix within your combined $11 billion AI networking and compute portfolio this year. Thank you.
spk05: Let me have Charlie address this question.
spk32: He's the expert. Yeah. Thank you, Hawk. So two quick things on this. One is, you're exactly right that the portfolio we have is quite unique in providing that flexibility. And by the way, this is exactly why Hawk, in his statements earlier on, mentioned that seven out of the top eight hyperscalers use our portfolio. And they use it specifically because it provides that flexibility. So whether you have an architecture that's based on an endpoint and you want to build your platform that way, or you want that switching to happen in the fabric itself, that's why we have the full end-to-end portfolio. That has been a proven differentiator for us. And then on top of that, we've been working, as you know, to provide a complete network operating system that's open on top of that, using SONiC and SAI, which has been deployed in many of the hyperscalers. So the combination of the portfolio plus the stack really differentiates the solution that we can offer to these hyperscalers. And if they decide to build their own NICs or their own accelerators, whether custom or standard products, whether from Broadcom or others, that platform, that portfolio of infrastructure switching, gives them full flexibility.
spk12: Thank you.
spk27: Thank you. One moment for our next question. And that will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.
spk11: Yeah, good afternoon. Thank you for taking the question. I was hoping to ask a two-part software question. Excluding VMware, your Brocade, CA, and Symantec business is now running $500 million higher for the last two quarters. So curious, is that the new sustainable run rate, or were there one-time events in both January and April that we should be considering? And then the second question is, as you think about VMware Cloud Foundation adoption, are you seeing any sort of crowding out of spending, like other software companies are seeing as customers repurpose their budgets toward AI, or is that business less discretionary, so it's just not an impact to you? Thanks so much.
spk05: Well, on the second one, I don't know about any crowding out, to be honest. What we're offering, obviously, is not something new they have to carve out budget for; it's something they are already spending on, which is building their own on-prem data centers. And the typical approach a lot of enterprises have taken historically, and continue to take today, is best of breed. What I mean is they create a data center with compute as a separate category, the best compute there is, and often enough they use vSphere for compute virtualization because of improved productivity, but best of breed there. Then there's best of breed on networking and best of breed on storage, with a common management and operations layer, which very often is also VMware vRealize. And what we're trying to say, and what they see, is that this best-of-breed data center is very heterogeneous. It's not that it doesn't work, but it's not a highly resilient data center. You have a mixed bag, so when it goes down, where do you quickly find the root cause? Everybody points fingers at everybody else, so we've got a problem. Not very resilient, and not necessarily secure, with bare metal on one side and software on the other. So it's natural thinking on the part of many CIOs we talk to to say, hey, I want to create one common platform as opposed to just best of breed of each. So that gets us in. If it's a greenfield, that's not bad; they start from scratch. If it's a brownfield, meaning they have existing data centers they're trying to upgrade, sometimes that's more challenging for us to get adopted. So I'm not sure there's a crowding out here. There's some competition, obviously, on greenfield, where they can spend a budget on an entire platform versus best of breed, but on existing data centers, where you're trying to upgrade, that's a trickier thing to do. And it cuts the other way as well for us. So that's how I see it. In that sense, the best answer is I don't think we're seeing a level of crowding out that is significant enough for me to mention. In terms of the revenue mix, no, Brocade is having a great field year so far and is still chugging along. But will that sustain? Hell no. You know that. Brocade goes through cycles like most enterprise purchases. So we're enjoying it while it lasts.
spk20: Thank you. Thanks.
spk27: Thank you. And we do have time for one final question. And that will come from the line of William Stein with Truist Securities. Your line is open.
spk08: Great. Thanks for squeezing me in. Hawk, congrats on yet another great quarter and strong outlook in AI. I also want to ask about something you mentioned with VMware. In your prepared remarks, you highlighted that you've eliminated a tremendous amount of channel conflict. I'm hoping you can linger on this a little bit and clarify maybe what you did and specifically also what you did in the heritage Broadcom software business where I think historically you'd shied away from the channel, and there was an idea that perhaps you'd reintroduce those products to the channel through a more unified approach using VMware's channel partners or resources. So any sort of clarification here I think would be helpful. Thank you.
spk05: Yes, thank you. That's a great question. Yeah, VMware taught me a few things. They have 300,000 customers. That's pretty amazing. And we looked at it. I know under CA we took the position of, let's pick an A-list of strategic guys and focus on them. I can't do that with VMware. I have to approach it differently, and I've started to learn the value of the very strong bunch of partners they have, which is a network of distributors and something like 15,000 value-added resellers supported by those distributors. So we have doubled down and invested in this reseller network in a big way for VMware. It's a great move, I think, about six months into the game, and we're seeing a lot more velocity out of it. Now, these resellers, having said that, tend to be very focused on the very long tail of those 300,000 customers. The largest 10,000 customers of VMware are large enterprises, the largest banks, the largest healthcare companies, and their view is, I want very bespoke service, support, and engineering solutions from you. So we've created a direct approach for them, supplemented with their VAR of choice where they need it. But on the long tail of 300,000 customers, they get a lot of services from the value-added resellers. So we have now strengthened that whole network of resellers so that they can go direct, managed and supported financially by the distributors, and we don't try to challenge those guys; at the end of the day, it all boils down to the customer choosing where they'd like to be supported. So we have simplified this, together with the number of SKUs. In the past, unlike what we're trying to do here, everybody was a partner. I mean, you're talking a full range of partners, and whichever partner makes the biggest deal gets the biggest discount, the lowest price, and they are out there basically creating a lot of channel chaos and conflict in the marketplace. Here we don't. The customers are aware. They can take it direct from VMware through the direct sales force, or they can easily move to a reseller and get it that way. And as a third alternative which we offer, if they want to run their applications on VMware and they want to run them efficiently on the full stack, they have a choice now of going to a hosted environment managed by a network of managed service providers which we set up globally. Those providers invest in and operate the infrastructure, and these enterprise customers just run their workloads in it and get it as a service, basically VMware as a service, as an alternative. And we are clear to make it very distinct and differentiated for our end-use customers: all three are available to them, and it's how they choose to consume our technology.
spk18: Great, thank you.
spk27: Thank you. I would now like to hand the call over to G.U., head of investor relations, for any closing remarks.
spk33: Thank you, Cherie. Broadcom currently plans to report its earnings for the third quarter of fiscal year 2024 after close of market on Thursday, September 5th, 2024. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific Time. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
spk27: Thank you all for participating. This concludes today's program. You may now disconnect.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
