This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
spk27: demanding and costly AI resources. These switches, based on an open, distributed, disaggregated architecture, will support 32,000-GPU clusters running at 800 gigabit per second bandwidth. Ethernet fabric as we know it already supports multi-tenancy capability and end-to-end congestion management. This lossless connectivity with high QoS performance has been well proven over the last 10 years of network deployment in the public cloud and telcos. In other words, the technology is not new, and we at Broadcom are very well positioned to simply extend our best-in-class networking technology into generative AI infrastructure, while supporting standard connectivity, which enables vendor interoperability. In Q3, we expect networking revenue to maintain its year-on-year growth of around 20%. Our server storage connectivity revenue was $1.1 billion, or 17% of semiconductor revenue, up 20% year-on-year. And as we noted last quarter, with the transition to next-generation MegaRAID largely completed and enterprise demand moderating, we expect server storage connectivity revenue in Q3 to be up low single digits year-on-year. Moving on to broadband, revenue grew 10% year-on-year to $1.2 billion and represented 18% of semiconductor revenue. Growth in broadband was driven by continued deployments by telcos of next-generation 10G PON and by cable operators of DOCSIS 3.1, with high attach rates of Wi-Fi 6 and 6E. And in Q3, we expect our broadband growth to moderate to low single-digit percent year-on-year. And finally, Q2 industrial resales of $260 million increased 2% year-on-year, as the softness in China was offset by strength globally in renewable energy and robotics. And in Q3, we forecast industrial resales to be flattish year-on-year on continuing softness in Asia offset by strength in Europe. So in summary, Q2 semiconductor solutions revenue was up 9% year-on-year, and in Q3 we expect semiconductor revenue growth of mid-single-digit percent year-on-year.
Turning to software. In Q2, infrastructure software revenue of $1.9 billion grew 3% year-on-year and represented 22% of total revenue. As expected, continued softness in Brocade was offset by the continuing stable growth in core software. Relating to core software, consolidated renewal rates averaged 114% over expiring contracts, and in our strategic accounts, we averaged 120%. Within the strategic accounts, annualized bookings of $564 million included $133 million, or 23%, of cross-selling of other portfolio products to these same core customers. Over 90% of the renewal value represented recurring subscription and maintenance. And over the last 12 months, consolidated renewal rates averaged 117% over expiring contracts, and among our strategic accounts, we averaged 128%. Because of this, our ARR, the indicator of forward revenue, at the end of Q2 was $5.3 billion, up 2% from a year ago. And in Q3, we expect our infrastructure software segment revenue to be up low single digits percent year-on-year, as the core software growth continues to be offset by weakness in Brocade. On a consolidated basis, we're guiding Q3 revenue of $8.85 billion, up 5% year-on-year. Before Kirsten tells you more about our financial performance for the quarter, let me provide a brief update on our pending acquisition of VMware. We're making good progress with our various regulatory filings around the world, having received legal merger clearance in Australia, Brazil, Canada, South Africa, and Taiwan, and foreign investment control clearance in all necessary jurisdictions. We still expect the transaction will close in Broadcom's fiscal 2023. The combination of Broadcom and VMware is about enabling enterprises to accelerate innovation and expand choice by addressing their most complex technology challenges in this multi-cloud era, and we are confident that regulators will see this when they conclude their review. With that, let me turn the call over to Kirsten.
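As a rough illustration of the renewal math described above, here is a sketch using the figures quoted on the call; the helper functions are hypothetical illustrations, not Broadcom's actual methodology:

```python
# Sketch of the renewal and cross-sell metrics described on the call.
# Inputs are the quoted figures; the functions themselves are
# illustrative helpers, not Broadcom's actual reporting method.

def renewal_rate(renewed_value: float, expiring_value: float) -> float:
    """Renewal value as a percentage of expiring contract value.
    A result above 100 means customers renewed at more than 1x
    the value of their expiring contracts."""
    return 100.0 * renewed_value / expiring_value

def cross_sell_share(cross_sell_bookings: float, total_bookings: float) -> float:
    """Cross-sold portfolio products as a percentage of total bookings."""
    return 100.0 * cross_sell_bookings / total_bookings

# A 114% consolidated renewal rate: every $100 of expiring contracts
# renewed into $114 of new contract value.
print(renewal_rate(114, 100))                # 114.0

# Strategic accounts: $133M of $564M annualized bookings was cross-sell.
print(round(cross_sell_share(133, 564), 1))  # 23.6
```

The 23.6% result matches the roughly 23% cross-sell share cited in the remarks.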
spk01: Thank you, Hock. Let me now provide additional detail on our financial performance. Consolidated revenue was $8.7 billion for the quarter, up 8% from a year ago. Gross margins were 75.6% of revenue in the quarter, about 30 basis points higher than we expected, on product mix. Operating expenses were $1.2 billion, down 4% year-on-year. R&D of $958 million was also down 4% year-on-year on lower variable spending. Operating income for the quarter was $5.4 billion and was up 10% from a year ago. Operating margin was 62% of revenue, up approximately 100 basis points year-on-year. Adjusted EBITDA was $5.7 billion, or 65% of revenue. This figure excludes $129 million of depreciation. Now a review of the P&L for our two segments. Revenue for our semiconductor solutions segment was $6.8 billion and represented 78% of total revenue in the quarter. This was up 9% year-on-year. Gross margins for our semiconductor solutions segment were approximately 71%, down approximately 120 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses were $833 million in Q2, down 5% year-on-year. R&D was $739 million in the quarter, down 4% year-on-year. Q2 semiconductor operating margins were 59%. So while semiconductor revenue was up 9%, operating profit grew 10% year-on-year. Moving to the P&L for our infrastructure software segment. Revenue for infrastructure software was $1.9 billion, up 3% year-on-year, and represented 22% of revenue. Gross margins for infrastructure software were 92% in the quarter, and operating expenses were $361 million in the quarter, down 3% year-over-year. Infrastructure software operating margin was 73% in Q2, and operating profit grew 8% year-on-year. Moving to cash flow. Free cash flow in the quarter was $4.4 billion and represented 50% of revenue in Q2. We spent $122 million on capital expenditures.
Days sales outstanding were 32 days in the second quarter, compared to 33 days in the first quarter. We ended the second quarter with inventory of $1.9 billion, down 1% from the end of the prior quarter. We ended the second quarter with $11.6 billion of cash and $39.3 billion of gross debt, of which $1.1 billion is short-term. The weighted average coupon rate and years to maturity of our fixed-rate debt are 3.61% and 9.9 years, respectively. Turning to capital allocation. In the quarter, we paid stockholders $1.9 billion of cash dividends. Consistent with our commitment to return excess cash to shareholders, we repurchased $2.8 billion of our common stock and eliminated $614 million of common stock for taxes due on the vesting of employee equity, resulting in the repurchase and elimination of approximately 5.6 million AVGO shares. The non-GAAP diluted share count in Q2 was 435 million. As of the end of Q2, $9 billion was remaining under the share repurchase authorizations. Excluding the potential impact of any share repurchases in Q3, we expect the non-GAAP diluted share count to be 438 million. Based on current business trends and conditions, our guidance for the third quarter of fiscal 2023 is for consolidated revenues of $8.85 billion and adjusted EBITDA of approximately 65% of projected revenue. In Q3, we expect gross margins to be down approximately 60 basis points sequentially on product mix. That concludes my prepared remarks. Operator, please open up the call for questions.
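As a quick back-of-the-envelope check on the buyback figures above; the implied average price is a derived estimate, not a number quoted on the call:

```python
# Back-of-the-envelope check on the capital-return figures in the
# prepared remarks. The implied average price is derived, not quoted.

repurchases = 2.8e9       # open-market repurchases of common stock ($)
tax_settlements = 614e6   # stock eliminated for employee vesting taxes ($)
shares_retired = 5.6e6    # approximate AVGO shares repurchased/eliminated

implied_avg_price = (repurchases + tax_settlements) / shares_retired
print(f"Implied average price: ${implied_avg_price:,.0f} per share")
# Implied average price: $610 per share
```

Note that 5.6 million shares is an approximation from the remarks, so the roughly $610 result is only indicative.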
spk17: Thank you. To ask a question, you will need to press star 1 1 on your telephone. To withdraw your question, please press star 1 1 again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And today's first question will come from the line of Ross Seymour with Deutsche Bank. Your line is open.
spk23: Thanks for letting me ask a question. Hock, I might as well start off with the topic that you started: AI these days is everywhere. Thanks for the color that you gave and the percentages of sales that it was potentially going to represent into the future. I wanted to just get a little bit more color on two aspects of that. How have you seen the demand evolve during the course of your quarter, and has it accelerated, in what areas, et cetera? And are there any competitive implications for it? We've heard from some of the compute folks that they want to do more on the networking side, and then obviously you want to do more on the compute side. So I just wondered how the competitive intensity is changing given the AI workload increases these days.
spk27: Okay. Well, on the first part of your question, yeah, I mean, last earnings call, we indicated there was a strong sense of demand, and we have seen that continue unabated in terms of that strong demand surge that's coming in. Now, of course, we all realize lead times, manufacturing lead times, on most of these cutting-edge products are fairly extended. I mean, you don't manufacture these products, under our process, in anything less than six months or thereabouts. And while there is strong demand and a strong urgency of demand, the ability to ramp up will be more measured, addressing the demands that are most urgent. On the second part, no, we have always seen competition. Really, even in traditional workloads in enterprise data centers and hyperscale data centers, our business, our markets in networking, switching, routing, continue to face competition. So really nothing new here. The competition continues to exist, and each of us does the best we can in the areas we are best at. Thank you.
spk17: Thank you. One moment for our next question. That will come from the line of Vivek Arya with Bank of America Securities. Your line is open.
spk05: Thanks for taking my questions. Hock, I just wanted to first clarify, I think you might have mentioned it, but I think last quarter you gave very specific numerical targets of $3 billion in ASICs and $800 million in switches for fiscal 23. I just wanted to make sure if there is any specific update to those numbers. Is it more than $4 billion in total now, et cetera? And then my question is, you know, longer term, what do you think the share is going to be between kind of general-purpose GPU-type solutions versus ASICs? Do you think that share shifts towards ASICs, or do you think it shifts towards general-purpose solutions? Because if I look outside of the compute offload opportunity, you have generally favored the general-purpose market more. So I'm curious, how do you see this share between general purpose versus ASICs play out in this AI processing opportunity longer term?
spk27: On the first part of your question, you guys love your two-part questions. Let's do the first part first. You know, we guided, or we indicated, that for fiscal 23, the revenue we're looking at in this space is $3.8 billion. There's no reason, nor are we trying, to change that forecast now in the middle of the year. So we still keep to that forecast we've given you for fiscal 23. We're obviously giving you a sense of the trajectory in my remarks on what we see 24 to look like. And that, again, is a broad trajectory of guidance, nothing more than that, just to give you a sense for the accelerated move from 22 to 23 and headed into 24, nothing more than that. But in terms of the specific number that you indicated we gave, you know, we stand by our forecast for fiscal 23 of $3.8 billion, frankly, because in my view it's a bit early to give you any revised forecast. Then beyond that, on your broader question of ASICs versus merchant, you know I always favor merchant, whether it's in compute or whether it's in networking. In my mind, long term, merchant will eventually, in my view, have a better shot at prevailing. But what we're talking about today is obviously a shorter-term issue versus a very long-term issue. And the shorter-term issue is, yeah, compute offload exists, but again, the number of players in compute offload ASICs is very, very limited, and that's what we continue to see.
spk16: Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of Harlan Sir with J.P. Morgan. Your line is open.
spk11: Hi, good afternoon. Thanks for taking my question. Great to see the strong and growing ramp of your AI compute offload and networking products. Hock, on your next-generation AI and compute offload programs that are in the design phase now, you've got your next-gen switching and routing platforms that are being qualified. Are your customers continuing to push the team to accelerate the design funnel and pull in program ramp timing? And then, I think you might have addressed this, but I just want to clarify: all of these solutions use the same type of very advanced packaging, right? Stacked die, HBM memory, co-packaging. And not surprisingly, this is the same architecture used by your AI GPU peers, which are driving the same strong trends, right? So is the Broadcom team facing, or expected to face, advanced packaging and advanced substrate supply constraints, and how is the operations team going to manage through all of this?
spk27: Well, you're right in that these kinds of AI products, these generative AI products, next generation and current generation, are all using very leading-edge technologies in silicon wafers, substrates, and packaging, including memory stacking. But, you know, in terms of consumption, there's still product out there, there's still capacity out there, as I say. And this is not something you're able to ship or deploy right away. It takes time. And we see it as a measured ramp that has started in fiscal 23 and will continue its pace through 24.
spk11: And on the design funnel, are you seeing customers still trying to pull in all of their designs?
spk27: You know, our basic opportunity still lies in the networking of AI networks. And we have the products out there, and we are working with many, many customers, obviously, to put in place this distributed, disaggregated architecture of Ethernet fabric on AI networks. And yeah, there's a lot of obvious interest, and lots of design activity exists out there.
spk20: Thank you, Hock.
spk17: Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
spk13: Thanks a lot. Hock, I was wondering if you can help shed some light on the general perception that all this AI spending is sort of boxing out traditional compute. Can you talk about that? Or is it just that CapEx budgets are going to have to grow to support all this extra AI CapEx? I mean, the truth is probably somewhere in between, but I'm wondering if you can help shed some light on the general perception that all of this is coming at the expense of the traditional compute and the traditional infrastructure. Thanks.
spk27: You know, your guess is as good as mine, actually. I can tell you this: I mean, there are these AI networks, and budgets are now allocated more and more by the hyperscalers toward these AI networks, not necessarily, particularly in enterprise, at the expense of traditional workloads and traditional data centers. I think there's definitely coexistence, and a lot of the large amount of spending on AI today that we see, for us that is, is very much at the hyperscalers. And so enterprises are still focusing a lot of their budget, as they have, on their traditional data centers and traditional workloads supporting x86. But it's maybe just too early for us to figure out whether there is that cannibalization.
spk21: Thank you, Hock.
spk17: Thank you. One moment for our next question. And that will come from the line of Ambrish Srivastava with BMO Capital Markets. Your line is open.
spk09: Hi, thank you very much, Hock. I have a less sexy topic to talk about, but obviously very important in how you manage the business. You talk about lead times, and especially in light of demand moderating, manufacturing cycle times are coming down, not to mention the six months that you highlighted for the cutting edge. Are you still staying with the 52-week kind of lead-time quoting to customers, or has that changed? Thank you.
spk27: By the way, it's 50. Yes, my standard lead time for our products is 50 weeks, and we are still staying with it, because it's not so much about the lead time to manufacture the products as it is our interest, and frankly, the mutual interest between our customers and ourselves, to take a hard look at providing visibility for us, ensuring we can supply, and supply in the right amount at the right time, the requirements. So, yes, we're still sticking to 50 weeks. Got it.
spk09: Thank you all.
spk17: Thank you. One moment for our next question. And that will come from the line from Harsh Kumar with Piper Sandler.
spk02: Yeah. Hey, Hock. I was hoping you could clarify something for us. I think earlier, in the beginning of the call, when you gave your AI commentary, you said that gen AI revenues are 15% of revenue today and they'll go to 25% by the end of 2024. That's practically all your growth; that's the $3 billion to $4 billion that you'll grow. So looking at your commentary, I know your core business is doing really well, so I know that I'm probably misinterpreting it, but I was hoping that maybe there's no cannibalization going on in your business. Maybe you could clarify for us; you answered that in an earlier question from a peer.
spk27: Obviously, we do not see cannibalization, but these are early innings, relatively speaking, and budgets don't change that rapidly. If there's cannibalization, obviously it comes from where the spending goes in terms of priority. It's not obvious to us; there isn't that clarity to be able to tell you there's cannibalization, not in the least. And by the way, on the numbers, if all the growth were coming from it, perhaps you'd be right. But as we sit here in 23, we still show some level of growth, I would say we still show growth, in the rest of our business, in the rest of our products. Perhaps that growth is augmented with the growth in AI revenue, in delivering AI products, but it's not entirely all our growth. I would say at least half the growth is still from our traditional business. The other half may be out of generative AI. Thank you so much.
spk17: Thank you. One moment for our next question. And that will come from the line of Carl Ackerman with BNP Paribas. Your line is open.
spk06: Yes, thank you for taking my question. Hock, you rightly pointed to the custom silicon opportunity that supports your cloud AI initiatives. However, your AI revenue that's not tied to custom silicon appears to be doubling in fiscal 23, and the outlook for fiscal 24 implies that it will double again. Obviously, Broadcom has multiple areas of exposure to AI, really across PCIe switches, Tomahawk, Jericho, and Ramon ASICs, and electro-optics. I guess, what sort of opportunity do you see your electro-optics portfolio playing in high-performance networking environments for inferencing and training AI applications?
spk27: What you said is very, very insightful. A big part of our growth now in AI comes from the networking components that we're supplying into creating this Ethernet fabric for AI clusters. In fact, a big part of it you hit on. And the rate of growth there is probably faster than our offload computing can grow. And that's where we're focused on, as I say. Our networking products are merchant standard products supporting the very rapid growth of generative AI clusters out there in the compute side. And for us, this growth in the networking side is really the faster part of the growth.
spk16: Thank you. Thank you.
spk17: One moment for our next question. That will come from the line of Joseph Moore with Morgan Stanley. Your line is open.
spk03: Great. Thank you. I wanted to ask about the renewal of the wireless contract. Can you give us a sense for how much sort of concrete visibility you have into content over the duration of that? You mentioned it's both RF and wireless connectivity. Just any additional color you can give us would be great.
spk27: Okay, well, I don't want to wordsmith you or be nitpicky. It's an extension, I would call it, of our existing long-term agreement, and it's an extension in the form of a collaboration and strategic arrangement, is the best way to describe it. It's not really a renewal, but the characteristics are similar, which is, we supply technology, we supply products, in a bunch of very specific products related to 5G components and wireless connectivity, which is our strength, which is the technology we keep leading in the marketplace. And it's multi-year. And beyond that, I truly would refer you to our 8-K, and not provide any more specifics, simply because of sensitivities all around. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of Christopher Rowland with Susquehanna. Your line is open.
spk16: Mr. Rowland, your line is open.
spk17: Okay, we'll move on to the next question. And that will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk08: Hi. Thank you so much for taking the question. Hock, I'm curious how you're thinking about your semiconductor business long term. You've discussed AI pretty extensively throughout this call. Could this be something that drives higher growth for your semiconductor business on a sustained basis? I think historically you've given relatively subdued or muted growth rates for your business vis-à-vis many of your competitors. Is this something that can drive sustained growth acceleration for your business? And if so, how should we think about the rate of R&D growth going forward as well? Because I think your peers are growing R&D faster than what you guys are doing today. Thank you.
spk27: Well, very, very good question, Toshiya. Well, we are still a very broadly diversified semiconductor company, as I pointed out. We're still in multiple end markets beyond just AI, and most of that AI revenue happens to sit in the networking segment of the business, as you all noted and you see. So we still have plenty of others. And even as I mentioned, for fiscal 24, our view is that it could hit over 25% of our semiconductor revenue. We still have a large number of underpinnings for the rest of our semiconductor business. I mean, our wireless business, for instance, has a very strong lease of life for multi-years, and that's a big chunk of business. It's just that the AI business appears to be trying to catch up to it in terms of size. But our broadband, server storage, and enterprise business continues to be very, very sustainable. And when you mix it all up, I don't know, we haven't updated our forecast long term. And so, Toshiya, I really have nothing more to add than what we already told you in the past. Would it make a difference in our long-term growth rate? Don't know. We haven't thought about it. I leave it to you to probably speculate before I put anything on paper.
spk08: Appreciate it. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of William Stein with Truist Securities. Your line is open.
spk22: Great. Thank you. Hock, I'm wondering if you can talk about your foundry relationships. I know you've got a very strong relationship with TSM and, of course, Intel's been very vocal about winning new customers potentially. I wonder if you can talk about your flexibility and openness in considering new partners, and then maybe also talk about pricing from the foundries and whether that's influencing, you know, any changes quarter to quarter. There have certainly been a lot of price increases that we've heard about recently, and I'd love to hear your comments. Thank you.
spk27: Thank you. You know, we tend to be very loyal to our suppliers, the same way we look at customers, in that same manner. It cuts both ways for us. So there's deep, abiding loyalty to all our key suppliers. Having said that, we also have to be very realistic about the geopolitical environment we have today. And so we are also very open to looking, in certain specific technologies, at broadening our supply base. And we have taken steps to constantly look at it, much as we still continue to want to be very loyal and fair to our existing base. So we continue that way. And because of that partnership and loyalty, for us, a price increase is something that is a very long-term thing. It's part of the overall relationship. And to put it simply, we don't move just because of prices. We stay put because of support, service, and a very abiding sense of mutual commitment. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of Edward Snyder with Charter Equity Research. Your line is open.
spk15: Thank you very much. Basically, a housekeeping question. It sounded like your comments in the press release on the wireless deal did not include mixed-signal, which is part of your past agreement, and everything you seem to have said today suggests it may not be in the next one. You've mentioned wireless and RF, but you're also doing a lot of mixed-signal stuff too. So if you could provide some clarity on that. And also, why shouldn't we expect the increased interest in AI to increase the prospects, if not orders immediately, for the electro-optic products that are coming out? I would think there would be much greater demand given the clusters and the size of these arrays that people are trying to put together. It would provide enormous benefits, I think, in power, wouldn't it? So maybe give us some color on that. Thanks.
spk27: All right. You have two questions here, don't you?
spk14: Well, it was a two-part question. I was going to do a three, but I cut off one.
spk27: Oh, thank you. I love you guys with your multi-part questions. Let's do the first one. You're right. Our long-term collaboration agreement that we recently announced includes, as it indicated, wireless connectivity and 5G components. It does not include the high-performance analog components, the mixed-signal components, that we also sell to that North American OEM customer. That doesn't make it any less strategic, I would add, nor any less deeply engaged with each other, I would definitely hasten to add. And on the second part, if you could indulge me, could you repeat that question?
spk15: Yeah, so you talk about generative AI and the increase in demand that you're seeing from the hyperscale guys, and we've already seen how big these clusters can get. And it's really putting, I don't want to say stress on your networking assets, but I would think, given the size of the arrays we're facing, that the electro-optic products in Tomahawk 5, which you're releasing next year, that put photonics right on the chip, would become more attractive because they significantly reduce the power requirements. And I know no one's used it as yet in deployment, but I would think that interest in that should increase. Am I wrong?
spk27: You're not wrong. All this, as I indicated up front in my remarks, yeah, we see our next generation coming up, Tomahawk 5, which will have silicon photonics, which is co-packaging, as a key part of the offering, not to mention that it's going up to 51.2 terabits per second of switching bandwidth. That is exactly what you want to put in place for very demanding AI networks, especially if those AI networks start running to 32,000-GPU clusters at 800 gigabits per second; then you really need a big amount of switching, because those kinds of networks, as I mentioned, have to be very low latency, virtually lossless. Lossless Ethernet calls for some interesting science and technology, because by definition, Ethernet traditionally tends to be lossy, but the technology is there to make it lossless. So all this fits in with our new generation of products, not to mention our Jericho 3-AI, which, as you know, is a router with a unique technology, a differentiated technology, that allows for very, very low tail latency in terms of how it transmits and reorders packets so that there's no loss and very little latency. And that exists in network routing in telcos, which we now apply to AI networks in a very effective manner. And that's our whole new generation of products. So yes, we're leaning into this opportunity with our networking technology and next-generation products, very much so. So you hit it right on, and that's what makes it very exciting for us in AI. It's in the networking area, the networking space, that we see the most interesting opportunities.
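To put the cluster numbers above in perspective, here is a minimal sketch of the scale involved, under simplifying assumptions that are mine, not the call's: every GPU drives a full 800 Gb/s into the fabric, with a single non-blocking switching tier and no oversubscription or optics overhead:

```python
# Rough scale estimate for the AI cluster described above:
# 32,000 GPUs at 800 Gb/s each, versus a 51.2 Tb/s switch ASIC.
# Simplifying assumptions: full line rate per GPU, one non-blocking
# tier, no oversubscription or optics overhead.

gpus = 32_000
gbps_per_gpu = 800
switch_tbps = 51.2  # Tomahawk 5-class switching bandwidth

total_pbps = gpus * gbps_per_gpu / 1_000_000  # Gb/s -> Pb/s
ports_per_switch = int(switch_tbps * 1_000 / gbps_per_gpu)

print(total_pbps)        # 25.6 Pb/s of GPU-facing bandwidth
print(ports_per_switch)  # 64 ports of 800G per switch ASIC
```

Real deployments use multi-tier fabrics, so the actual switch count is several times higher than the 25.6 Pb/s / 51.2 Tb/s ratio alone would suggest; the sketch only conveys the order of magnitude driving the switching demand Hock describes.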
spk17: Thank you. One moment for our next question. And that will come from the line of Antoine Chkaiban with New Street Research. Your line is open.
spk10: Hi, thank you very much for the question. I'll stick to a single part question. Can you maybe double click on your compute offload business? What can you maybe tell us about how growth could split between revenues from existing customers? or potential diversification of that business going forward? Thank you.
spk27: Thank you. Good question. And I will reiterate the answer, in some other ways, that I've given to certain other audiences who have asked me this question. You know, we really have only one real customer, one customer. And in my forecast, in my remarks so far on offload computing, it's pretty much very, very largely around one customer. It's not very diversified. It's very focused.
spk26: That's our compute offload business.
spk17: Thank you. One moment for our next question. And that will come from the line of CJ Muse with Evercore ISI.
spk07: Hey, thank you. This is Kurt Swartz on for CJ. Wanted to touch on software gross margins, which continue to tick higher alongside softness in Brocade. Curious what sort of visibility you may have into Brocade stabilization, and how we should think about software gross margins as mix normalizes. Thank you.
spk27: Okay. Well, you know, our software segment comprises, you hit it correctly, two parts. There are our core software products, revenues sold directly to enterprises. And these are your typical infrastructure software products. They are multi-year contracts, and we have a lot of backlog, something like $17 billion of backlog, averaging over almost two and a half to three years. And every quarter, a part of that renews, and we give you the data on it. It's very stable. And given our historical pattern of renewing on expanding consumption of our core group of customers, we tend to drive that in a very stable manner, and the growth rate is very, very predictable. And we're happy with that. Then we overlay on it a business that is software but also very appliance-driven: the Fibre Channel SAN business, or Brocade. And that's very enterprise-driven, very, very much so. It's only used by enterprises, obviously, and by lots of enterprises at that. And it is a fairly cyclical business. Last year was a very strong upcycle, and this year, not surprisingly, the cycle is not as strong, especially compared year-on-year to the very strong numbers last year. So this is the phenomenon. The outcome of combining the two is what we're seeing today. But give it another year, in my view; next year, the cycle could turn around and Brocade would grow. And then, instead of a 3% year-on-year growth in this whole segment, we could end up with a high single-digit year-on-year growth rate, because for the core software revenue, as I've always indicated to you guys, you want to plan long term on a mid-single-digit year-on-year growth rate. And that's the very predictable part of our numbers.
spk17: Thank you. And today's final question will come from the line of Vijay Rakesh with Mizuho. Your line is open.
spk04: Hi, Hock. Just a quick, I'll keep it a two-part question for you to wrap up. So just wondering what the content uplift for Broadcom is on an AI server versus a general compute server. And if you look at generative AI, what percent of servers today are being outfitted for generative AI, as you look, you know, you have the dominant share there. And where do you see that uptake ratio for generative AI a year out, if you look at fiscal 24, 25?
spk27: You know, I'm sorry to disappoint you on your two parts, but it's too early for me to be able to give you a good answer or a very definitive answer today on that. Because, you know, by far the majority of servers today are your traditional servers driving x86 CPUs, and your networking today is very much still running Ethernet traditional data center networking, because most enterprises, if not virtually all enterprises today, are very much still running their own traditional servers on x86. Generative AI is something so new, and in a way the limits of it are so extended, that what we largely see today is at the hyperscale guys, in terms of deploying at scale this generative AI infrastructure. Enterprises continue to deploy and operate standard x86 servers and Ethernet networking in the traditional data centers. And so what we're seeing today may be the early part of the whole cycle. That's the question, which is why I cannot give you any definitive view or opinion of what the attach rate, what the ratio, will be, or if there's any stability that could be achieved anywhere in the near term. We see both running and coexisting very much together. All right. Thank you.
spk17: Thank you. I would now like to turn the call over to Ji Yoo for any closing remarks.
spk18: Thank you, operator. In closing, we would like to highlight that Broadcom will be attending the B of A Global Technology Conference on Tuesday, June 6th. Broadcom currently plans to report its earnings for the third quarter of fiscal 23 after close of market on Thursday, August 31st, 2023. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
spk17: Thank you all for participating. This concludes today's program.
spk16: You may now disconnect.
spk17: Welcome to Broadcom Inc.'s second quarter fiscal year 2023 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
spk18: Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter of fiscal year 2023. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2023 results, guidance for our third quarter, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments. Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.
spk27: Thank you, Ji. And thank you, everyone, for joining us today. So in our fiscal Q2 2023, consolidated net revenue was $8.7 billion, up 8% year-on-year. Semiconductor solutions revenue increased 9% year-on-year to $6.8 billion, and infrastructure software grew 3% year-on-year to $1.9 billion, as the stable growth in core software more than offset softness in the Brocade business. As I start this call, I know you all want to hear about how we are benefiting from this strong deployment of generative AI by our customers. To put this in perspective, our revenue today from this opportunity represents about 15% of our semiconductor business. Having said this, it was only 10% in fiscal 22, and we believe it could be over 25% of our semiconductor revenue in fiscal 24. In fact, over the course of fiscal 23 that we're in, we are seeing a trajectory where our quarterly revenue entering the year doubles by the time we exit 23. And in fiscal third quarter 23, we expect this revenue to exceed $1 billion in the quarter. But as you well know, we are also a broadly diversified semiconductor and infrastructure software company. And in our fiscal Q2, demand for IT infrastructure was driven by hyperscale, while service providers and enterprise continued to hold up. Following the 30% year-on-year increases we have experienced over the past five quarters, overall IT infrastructure demand in Q2 moderated to mid-teens percentage growth year-on-year. As we have always told you, we continue to ship only to end-user demand. We remain very disciplined on how we manage inventory across our ecosystem. We exited the quarter with less than 86 days on hand, a level of inventory consistent with what we have maintained over the past eight quarters. Now let me give you more color on our end markets. Let me begin with wireless.
As you saw in our recent 8-K filing, we entered into a multi-year collaboration with our North American wireless OEM on cutting-edge wireless connectivity and 5G components. Our engagement in technology and supply remains deep, strategic, and long-term. Q2 wireless revenue of $1.6 billion represented 23% of semiconductor revenue. Wireless revenue declined seasonally, down 24% quarter-on-quarter and down 9% year-on-year. In Q3, as we just begin the seasonal ramp of the next-generation phone platform, we expect wireless revenue to be up low single digits sequentially. We expect, however, it will remain around flattish year-on-year. Moving on to networking. Networking revenue was $2.6 billion and was up 20% year-on-year, in line with guidance, representing 39% of our semiconductor revenue. There are two growth drivers here. One, continued strength in deployment of our merchant Tomahawk switching for traditional enterprise workloads, as well as Jericho routing platforms for telcos. And two, strong growth in AI infrastructure at hyperscalers from compute offload and networking. And speaking of AI networks, Broadcom's next-generation Ethernet switching portfolio, consisting of Tomahawk 5 and Jericho3-AI, offers the industry's highest-performance fabric for large-scale AI clusters by optimizing the demanding and costly AI resources. These switches, based on an open, distributed, disaggregated architecture, can support 32,000 GPU clusters running at 800 gigabit per second bandwidth. The Ethernet fabric as we know it already supports multi-tenancy capability and end-to-end congestion management. This lossless connectivity with high QoS performance has been well proven over the last 10 years of network deployment in the public cloud and telcos. In other words, the technology is not new, and we are, as Broadcom, very well positioned to simply extend our best-in-class networking technology into generative AI infrastructure,
while supporting standard connectivity, which enables vendor interoperability. In Q3, we expect networking revenue to maintain its year-on-year growth of around 20%. Next, our server storage connectivity revenue was $1.1 billion, or 17% of semiconductor revenue, and up 20% year-on-year. And as we noted last quarter, with the transition to next-generation MegaRAID largely completed and enterprise demand moderating, we expect server storage connectivity revenue in Q3 to be up low single digits year-on-year. Moving on to broadband. Revenue grew 10% year-on-year to $1.2 billion and represented 18% of semiconductor revenue. Growth in broadband was driven by continued deployments by telcos of next-generation 10G PON and cable operators of DOCSIS 3.1, with high attach rates of Wi-Fi 6 and 6E. And in Q3, we expect our broadband growth to moderate to low single-digit percent year-on-year. And finally, Q2 industrial resales of $260 million increased 2% year-on-year, as the softness in China was offset by strength globally in renewable energy and robotics. And in Q3, we forecast industrial resales to be flattish year-on-year on continuing softness in Asia, offset by strength in Europe. So in summary, Q2 semiconductor solutions revenue was up 9% year-on-year, and in Q3, we expect semiconductor revenue growth of mid-single digits year-on-year. Turning to software. In Q2, infrastructure software revenue of $1.9 billion grew 3% year-on-year and represented 22% of total revenue. As expected, continued softness in Brocade was offset by the continuing stable growth in core software. Relating to core software, consolidated renewal rates averaged 114% over expiring contracts, and in our strategic accounts, we averaged 120%. Within the strategic accounts, annualized bookings of $564 million included $133 million, or 23%, of cross-selling of other portfolio products to these same core customers. Over 90% of the renewal value represented recurring subscription and maintenance.
And over the last 12 months, consolidated renewal rates averaged 117% over expiring contracts, and among our strategic accounts, we averaged 128%. Because of this, our ARR, the indicator of forward revenue, at the end of Q2 was $5.3 billion, up 2% from a year ago. And in Q3, we expect our infrastructure software segment revenue to be up low single-digit percentage year-on-year, as the core software growth continues to be offset by weakness in Brocade. On a consolidated basis, we're guiding Q3 revenue of $8.85 billion, up 5% year-on-year. Before Kirsten tells you more about our financial performance for the quarter, let me provide a brief update on our pending acquisition of VMware. We're making good progress with our various regulatory filings around the world, having received legal merger clearance in Australia, Brazil, Canada, South Africa, and Taiwan, and foreign investment control clearance in all necessary jurisdictions. We still expect the transaction will close in Broadcom's fiscal 2023. The combination of Broadcom and VMware is about enabling enterprises to accelerate innovation and expand choice by addressing their most complex technology challenges in this multi-cloud era. And we are confident that regulators will see this when they conclude their review. With that, let me turn the call over to Kirsten.
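The relationship between a 117% renewal rate and only 2% ARR growth can be illustrated with a simple model. The one-third renewing fraction below is a hypothetical assumption (contracts averaging roughly three years, per the backlog commentary); churn and contract timing, which are not disclosed, account for the gap to the reported figure.

```python
# Illustrative model: why a >100% renewal rate yields only modest ARR growth.
# Only a slice of the multi-year contract book renews in any period.
# ASSUMPTION: ~1/3 of contracts renew per year (avg ~3-year terms, per the
# "$17 billion of backlog averaging almost two and a half, three years").

arr = 5.3e9              # ARR at end of Q2, from the call
renewal_uplift = 1.17    # trailing-12-month consolidated renewal rate

renewing_fraction = 1 / 3                       # hypothetical
gross_growth = renewing_fraction * (renewal_uplift - 1)

print(f"gross ARR growth from renewals: {gross_growth:.1%}")        # ~5.7%
print(f"implied gross ARR added: ${arr * gross_growth / 1e9:.2f}B per year")
# Churn and contract timing (not disclosed) pull this toward the
# reported ~2% year-on-year ARR growth.
```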
spk01: Thank you, Hock. Let me now provide additional detail on our financial performance. Consolidated revenue was $8.7 billion for the quarter, up 8% from a year ago. Gross margins were 75.6% of revenue in the quarter, about 30 basis points higher than we expected on product mix. Operating expenses were $1.2 billion, down 4% year-on-year. R&D of $958 million was also down 4% year-on-year on lower variable spending. Operating income for the quarter was $5.4 billion and was up 10% from a year ago. Operating margin was 62% of revenue, up approximately 100 basis points year-on-year. Adjusted EBITDA was $5.7 billion, or 65% of revenue. This figure excludes $129 million of depreciation. Now a review of the P&L for our two segments. Revenue for our semiconductor solutions segment was $6.8 billion and represented 78% of total revenue in the quarter. This was up 9% year-on-year. Gross margins for our semiconductor solutions segment were approximately 71%, down approximately 120 basis points year-on-year, driven primarily by product mix within our semiconductor end markets. Operating expenses were $833 million in Q2, down 5% year-on-year. R&D was $739 million in the quarter, down 4% year-on-year. Q2 semiconductor operating margins were 59%. So while semiconductor revenue was up 9%, operating profit grew 10% year-on-year. Moving on to the P&L for our infrastructure software segment. Revenue for infrastructure software was $1.9 billion, up 3% year-on-year, and represented 22% of revenue. Gross margins for infrastructure software were 92% in the quarter, and operating expenses were $361 million in the quarter, down 3% year-over-year. Infrastructure software operating margin was 73% in Q2, and operating profit grew 8% year-on-year. Moving on to cash flow. Free cash flow in the quarter was $4.4 billion and represented 50% of revenues in Q2. We spent $122 million on capital expenditures. Days sales outstanding were 32 days in the second quarter, compared to 33 days in the first quarter.
We ended the second quarter with inventory of $1.9 billion, down 1% from the end of the prior quarter. We ended the second quarter with $11.6 billion of cash and $39.3 billion of gross debt, of which $1.1 billion is short-term. The weighted average coupon rate and years to maturity of our fixed-rate debt are 3.61% and 9.9 years, respectively. Turning to capital allocation. In the quarter, we paid stockholders $1.9 billion of cash dividends. Consistent with our commitment to return excess cash to shareholders, we repurchased $2.8 billion of our common stock and eliminated $614 million of common stock for taxes due on the vesting of employee equity, resulting in the repurchase and elimination of approximately 5.6 million AVGO shares. The non-GAAP diluted share count in Q2 was 435 million. As of the end of Q2, $9 billion was remaining under the share repurchase authorization. Excluding the potential impact of any share repurchases in Q3, we expect the non-GAAP diluted share count to be 438 million. Based on current business trends and conditions, our guidance for the third quarter of fiscal 2023 is for consolidated revenues of $8.85 billion and adjusted EBITDA of approximately 65% of projected revenue. In Q3, we expect gross margins to be down approximately 60 basis points sequentially on product mix. That concludes my prepared remarks. Operator, please open up the call for questions.
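The percentages in the CFO remarks can be reproduced directly from the absolute figures quoted; everything below comes from the transcript, with small rounding differences noted in comments.

```python
# Cross-check of the ratios in the CFO remarks against the absolute figures.
# All inputs are from the transcript; differences are rounding only.

revenue = 8.7e9        # consolidated Q2 revenue
op_income = 5.4e9      # operating income
adj_ebitda = 5.7e9     # adjusted EBITDA
fcf = 4.4e9            # free cash flow

print(f"operating margin: {op_income / revenue:.0%}")    # 62%, as stated
print(f"adj. EBITDA margin: {adj_ebitda / revenue:.0%}") # rounds to 66% vs "65%" stated
print(f"FCF margin: {fcf / revenue:.0%}")                # 51% vs "50% of revenues" stated

# Buyback math: $2.8B repurchased plus $614M eliminated for employee taxes,
# across ~5.6M shares, implies an average price near $610 per AVGO share.
avg_price = (2.8e9 + 614e6) / 5.6e6
print(f"implied avg share price: ${avg_price:,.0f}")
```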
spk17: Thank you. To ask a question, you will need to press star 11 on your telephone. To withdraw your question, please press star 11 again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And today's first question will come from the line of Ross Seymour with Deutsche Bank. Your line is open.
spk23: Thanks for letting me ask a question. Hock, I might as well start off with the topic that you started; AI these days is everywhere. Thanks for the color that you gave on the percentages of sales that it was potentially going to represent into the future. I wanted to just get a little bit more color on two aspects of that: how you've seen the demand evolve during the course of your quarter, and has it accelerated, and in what areas, et cetera? And is there any competitive implication to it? We've heard from some of the compute folks that they want to do more on the networking side, and then obviously you want to do more on the compute side. So I just wondered how the competitive intensity is changing given the AI workload increases these days.
spk27: Okay. Well, on the first part of your question, yeah, I mean, last earnings call we indicated there was a strong sense of demand, and we have seen that continue unabated in terms of that strong demand surge that's coming in. Now, of course, we all realize lead times, manufacturing lead times, on most of these cutting-edge products are fairly extended. I mean, you don't manufacture this product, under our process, in anything less than six months or thereabouts. And while there is strong demand and a strong urgency of demand, the ability to ramp up will be more measured in addressing the demands that are most urgent. On the second part, no, we have always seen competition. Really, even in traditional workloads in enterprise data centers and hyperscale data centers, our business, our markets in networking, switching, routing, continue to face competition. So really nothing new here. The competition continues to exist. And we all, each of us, do the best we can in the areas we are best at doing. Thank you.
spk17: Thank you. One moment for our next question. That will come from the line of Vivek Arya with Bank of America Securities. Your line is open.
spk05: Thanks for taking my questions. Hock, I just wanted to first clarify, I think you might have mentioned it, but I think last quarter you gave very specific numerical targets of $3 billion in ASICs and $800 million in switches for fiscal 23. I just wanted to make sure if there is any specific update to those numbers. Is it more than $4 billion in total now, et cetera? And then my question is, you know, longer term, what do you think the share is going to be between kind of general-purpose GPU-type solutions versus ASICs? Do you think that share shifts towards ASICs? Do you think it shifts towards general-purpose solutions? Because if I look outside of the compute offload opportunity, you have generally favored more the general-purpose market. So I'm curious, how do you see this share between general purpose versus ASICs playing out in this AI processing opportunity longer term?
spk27: On the first part of your question, you guys love your questions in two parts. Let's do the first part first. You know, we guided, or we indicated, that for fiscal 23, the revenue we're looking at in this space is $3.8 billion. There's no reason, nor are we trying, now in the middle of the year, to change that forecast at this point. So we still keep to that forecast we've given you for fiscal 23. We're obviously giving you a sense of trajectory in my remarks on what we see 24 to look like. And that, again, is a broad trajectory of the guidance, nothing more than that, just to give you a sense for the accelerated move from 22, 23, and headed into 24, nothing more than that. But in terms of the specific number that you indicated we gave, you know, we stay by our forecast for fiscal 23 of $3.8 billion, frankly because in my view it's a bit early to give you any revised forecast. Then beyond that, on your broader question, ASIC versus merchant, you know I always favor merchant. Whether it's in compute, whether it's in networking, in my mind, long-term, merchant will eventually, in my view, have a better shot at prevailing. But what we're talking about today is obviously a shorter-term issue versus a very long-term issue. And the shorter-term issue is, yeah, compute offload exists, but again, the number of players in compute offload ASICs is very, very limited, and that's what we continue to see. Thank you.
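The $3.8 billion fiscal 23 figure reiterated here lines up with the "about 15%" share and the Q3 "$1 billion" comments from the prepared remarks. The run-rate annualization below is an illustrative simplification, not company math; the dollar inputs are from the transcript.

```python
# Illustrative consistency check of the AI revenue figures on the call.
# ASSUMPTION: annualizing one quarter's revenue (4x) is a simplification;
# all dollar inputs are from the transcript.

fy23_ai_rev = 3.8e9    # reiterated FY23 AI revenue forecast
q2_semi_rev = 6.8e9    # Q2 FY23 semiconductor solutions revenue

ai_share = fy23_ai_rev / (4 * q2_semi_rev)
print(f"AI share of annualized semis: {ai_share:.0%}")   # ~14%, near "about 15%"

# "About 15%" of a $6.8B quarter lands just above $1B, matching the
# expectation that Q3 AI revenue exceeds $1 billion.
print(f"15% of a Q2-sized quarter: ${0.15 * q2_semi_rev / 1e9:.2f}B")
```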
spk17: Thank you. One moment for our next question. And that will come from the line of Harlan Sir with JP Morgan. Your line is open.
spk11: Hi, good afternoon. Thanks for taking my question. Great to see the strong and growing ramp of your AI compute offload and networking products. On your next-generation, Hock, on your next-generation AI and compute offload programs that are in the design phase now, and your next-gen switching and routing platforms that are being qualified: are your customers continuing to push the team to accelerate the design funnel and pull in program ramp timing? And then, I think you might have addressed this, but I just want to clarify. You know, all of these solutions use the same type of very advanced packaging, right? Stacked die, HBM memory, co-packaging. And not surprisingly, this is the same architecture used by your AI GPU peers, which are driving the same strong trends, right? So is the Broadcom team facing, or expected to face, advanced packaging and advanced substrate supply constraints, and how is the operations team going to sort of manage through all of this?
spk27: Well, you're right in that this kind of AI product, these generative AI products, next generation, current generation, are all using very leading-edge technologies in silicon wafers and substrates and packaging, including memory stacking. But, you know, in terms of consumption, there's still product out there, there's still capacity out there, as I say. And this is not something you want to, or are able to, ship or deploy right away. It takes time. And we see it as a measured ramp that has started in fiscal 23 and will continue its pace through to 24.
spk11: And on the design funnel, are you seeing customers still trying to pull in all of their designs?
spk27: Well, it's... You know, our basic opportunity still lies in the networking of AI networks. And we have the products out there, and we are working with many, many customers, obviously, to put in place this distributed, disaggregated architecture of Ethernet fabric on AI. And yeah, there's a lot of obvious interest, and lots of designs exist out there.
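The "32,000 GPU clusters running at 800 gigabit per second" figure from the prepared remarks can be turned into a rough fabric-sizing sketch. The 51.2 Tb/s, 64x800G switch capacity is the publicly stated Tomahawk 5 spec; the fat-tree arithmetic is a generic non-blocking estimate, not a Broadcom reference design.

```python
# Rough sizing of an Ethernet fabric for 32,000 GPUs at 800 Gb/s each.
# Switch capacity (51.2 Tb/s = 64 ports of 800G) is the public Tomahawk 5
# spec; the topology math is a generic non-blocking fat-tree estimate.

GPUS = 32_000
PORT_GBPS = 800
SWITCH_TBPS = 51.2
RADIX = int(SWITCH_TBPS * 1000 // PORT_GBPS)    # 64 x 800G ports per chip

# Aggregate access bandwidth the fabric terminates:
access_pbps = GPUS * PORT_GBPS / 1e6
print(f"access bandwidth: {access_pbps} Pb/s")          # 25.6 Pb/s

# Leaf tier, non-blocking: half the ports face GPUs, half face spines.
leaf_chips = GPUS // (RADIX // 2)
print(f"leaf switches needed: {leaf_chips}")            # 1000

# A full three-tier fat-tree of radix-64 switches scales to radix^3 / 4
# endpoints, so 32K GPUs fits with headroom:
print(f"3-tier fat-tree capacity: {RADIX ** 3 // 4}")   # 65536 endpoints
```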
spk20: Thank you, Hawk.
spk17: Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
spk13: Thanks a lot. Hawk, I was wondering if you can sort of help shed some light on the general perception that all this AI spending is sort of boxing out traditional compute. Can you talk about that, or is it that just CapEx budgets are going to have to grow to support all this extra AI CapEx? I mean, the truth is probably somewhere in between, but I'm wondering if you can help shed some light on just the general perception that All of this is coming at the expense of the traditional compute and the traditional infrastructure. Thanks.
spk27: You know, your guess is as good as mine, actually. I can't tell either. I mean, there are these AI networks, and there are budgets that are now allocated more and more by the hyperscalers towards these AI networks. But not necessarily, particularly in enterprise, at the expense of traditional workloads and traditional data centers. I think there's definitely coexistence, and a lot of the large amount of spending on AI today that we see, for us that is, is very much at the hyperscalers. And so enterprises are still focusing a lot of their budget, as they have, on their traditional data centers and traditional workloads supporting x86. But it's maybe just too early for us to really figure out whether there is that cannibalization.
spk21: Thank you, Hock.
spk17: Thank you. One moment for our next question. And that will come from the line of Ambrish Srivastava with BMO Capital Markets. Your line is open.
spk09: Hi, thank you very much, Hock. I have a less sexy topic to talk about, but obviously very important in how you manage the business. You talk about lead times, especially in light of demand moderating and manufacturing cycle times coming down, not to mention the six months that you highlighted for the cutting edge. Are you still staying with the 52-week kind of lead time quoting to customers, or has that changed? Thank you.
spk27: By the way, it's 50. Yes, my standard lead time for our products is 50 weeks, and we are still staying with it, because it's not so much about the lead time to manufacture the products as it is about our interest, and frankly, the mutual interest between our customers and ourselves, to take a hard look at providing visibility for us in ensuring we can supply, and supply the right amount at the right time to the requirements. So, yes, we're still sticking to 50 weeks. Got it.
spk09: Thank you all.
spk17: Thank you. One moment for our next question. And that will come from the line of Harsh Kumar with Piper Sandler.
spk02: Yeah. Hey, Hock. I was hoping you could clarify something for us. I think earlier in the beginning of the call, when you gave your AI commentary, you said that gen AI revenues are 15% today and they'll go to 25% by the end of 2024. That's practically all your growth, that's the three, four billion dollars that you'll grow. So looking at your commentary, I know your core business is doing really well, so I know that I'm probably misinterpreting it, but I was hoping that maybe there's no cannibalization going on in your business. Maybe you could clarify for us, and answer that earlier question from a peer of yours.
spk27: Obviously, we do not see cannibalization, but these are early innings, relatively speaking, and budgets don't change that rapidly. If there's cannibalization, obviously, it comes from where the spending goes in terms of priority. It's not obvious to us that there's that clarity to be able to tell you there's cannibalization, not in the least. And by the way, if you look at the numbers and say all the growth is coming from it, perhaps you're right. But as we sit here in 23 and we still show some level of growth, I would say we still show growth in the rest of our business, in the rest of our products. Perhaps that growth is augmented with the growth in AI revenue, in delivering AI products, but it's not entirely all our growth. I would say at least half the growth is still in our traditional business. The other half may be out of generative AI. Thank you so much.
spk17: Thank you. One moment for our next question. And that will come from the line of Carl Ackerman with BNP Paribas. Your line is open.
spk06: Yes, thank you for taking my question. Hock, you rightly pointed to the custom silicon opportunity that supports your cloud AI initiatives. However, your AI revenue that's not tied to custom silicon appears to be doubling in fiscal 23, and the outlook for fiscal 24 implies that it will double again. Obviously, Broadcom has multiple areas of exposure to AI, really across PCIe switches, Tomahawk, Jericho, and Raman ASICs, and electro-optics. I guess, what sort of role do you see your electro-optics portfolio playing in high-performance networking environments for inferencing and training AI applications?
spk27: What you said is very, very insightful. A big part of our growth now in AI comes from the networking components that we're supplying into creating this Ethernet fabric for AI clusters. In fact, a big part of it you hit on. And the rate of growth there is probably faster than our offload computing can grow. And that's where we're focused, as I say. Our networking products are merchant, standard products supporting the very rapid growth of generative AI clusters out there on the compute side. And for us, this growth on the networking side is really the faster part of the growth.
spk00: Thank you.
spk16: Thank you. One moment for our next question.
spk17: That will come from the line of Joseph Moore with Morgan Stanley. Your line is open.
spk03: Great. Thank you. I wanted to ask about the renewal of the wireless contract. Can you give us a sense for how much sort of concrete visibility you have into content over the duration of that? You mentioned it's both RF and wireless connectivity. Just any additional color you can give us would be great.
spk27: Okay, well, I don't want to wordsmith you or be nitpicky. It's an extension, I would call it, of our existing long-term agreement, and it's an extension in the form of a collaboration and strategic arrangement, is the best way to describe it. It's not really a renewal, but the characteristics are similar, which is, with supply and technology, we supply a bunch of very specific products related to 5G components and wireless connectivity, which is our strength, which is the technology we keep leading in the marketplace. And it's multi-year. And beyond that, I truly would refer you to our 8-K and not provide any more specifics, simply because of sensitivities all around. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of Christopher Rowland with Susquehanna. Your line is open.
spk16: Mr. Rowland, your line is open.
spk17: Okay, we'll move on to the next question. And that will come from the line of Toshiya Hari with Goldman Sachs. Your line is open.
spk08: Hi. Thank you so much for taking the question. Hock, I'm curious how you're thinking about your semiconductor business long-term. You've discussed AI pretty extensively throughout this call. Could this be something that drives higher growth for your semiconductor business on a sustained basis? I think historically you've given relatively subdued or muted growth rates for your business vis-a-vis many of your competitors. Is this something that can drive sustained growth acceleration for your business? And if so, how should we think about the rate of R&D growth going forward as well? Because I think your peers are growing R&D faster than... what you guys are doing today. Thank you.
spk27: Well, very, very good question, Toshiya. Well, we are still a very broadly diversified semiconductor company, as I pointed out. We're still in multiple end markets beyond just AI, and most of that AI revenue happens to sit in the networking segment of the business, as you all noted and you see. So we still have plenty of others. And even as I mentioned, for fiscal 24, our view is that it could hit over 25% of our semiconductor revenue, we still have a large number of underpinnings for the rest of our semiconductor business. I mean, our wireless business, for instance, has a very strong lease of life for multi-years, and that's a big chunk of business. It's just that the... You know, the AI business appears to be trying to catch up to it in terms of size. But our broadband, server storage, and enterprise businesses continue to be very, very sustainable. And when you mix it all up, I don't know, we haven't updated our forecast long-term. And so, Toshiya, I really have nothing more to add than what we already told you in the past. Would it make a difference in our long-term growth rate? Don't know. We haven't thought about it. I leave it to you to probably speculate before I put anything on paper.
spk08: Appreciate it. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of William Stein with Truist Securities. Your line is open.
spk22: Great. Thank you. Hock, I'm wondering if you can talk about your foundry relationships. I know you've got a very strong relationship with TSMC and, of course, Intel's been very vocal about winning new customers potentially. I wonder if you can talk about your flexibility and openness in considering new partners, and then maybe also talk about pricing from the foundries and whether that's influencing, you know, any changes quarter to quarter. There have been certainly a lot of price increases that we've heard about recently, and I'd love to hear your comments. Thank you.
spk27: Thank you. You know, we tend to be very loyal to our suppliers. The same reason we look at customers in that same manner. It cuts both ways for us. So that's deep abiding loyalty to all our key suppliers. Having said that, we also have to be very realistic of the geopolitical environment we have today. And so we are also very open to looking at, in certain specific technologies, to broaden our supply base. And we have taken steps to constantly look at it, much as we still continue to want to be very loyal and fair to our existing base. So we continue that way. And because of that partnership and loyalty, for us, price increase is something that is a very long-term thing. It's part of the overall relationship. And put it simply, we don't move just because of prices. We stay put because of support, service, and a very abiding sense of commitment mutually. Thank you.
spk17: Thank you. One moment for our next question. And that will come from the line of Edward Snyder with Charter Equity Research. Your line is open.
spk15: Thank you very much. Basically, a housekeeping question. It sounded like your comments in the press release on the wireless deal did not include mixed signal, which is part of your past agreement. And everything you seem to have said today doesn't suggest that will change. You've mentioned wireless and RF, but you're also doing a lot of mixed-signal stuff too. So if you could provide some clarity on that. And also, why shouldn't we expect the increased interest in AI to increase the prospects, if not orders immediately, for the electro-optic products that are coming out? I would think there would be much greater demand, given the clusters and the size of these arrays that people are trying to put together. It would provide enormous benefits, I think, in power, wouldn't it? So maybe give us some color on that. Thanks.
spk27: All right. You have two questions here, don't you?
spk14: Well, it was a two-part question. I was going to do a three, but I cut off one.
spk27: Oh, thank you. I love you guys with your multi-part questions. Let's do the first one. You're right. Our long-term collaboration agreement that we recently announced includes, as it indicated, wireless connectivity and 5G components. It does not include the high-performance analog components, the mixed-signal components, that we also sell to that North American OEM customer. All right. That doesn't make it any less strategic, I would add, nor any less deeply engaged with each other, I would definitely hasten to add. And on the second part, if you could indulge me, could you repeat that question?
spk15: Yeah, so you talk about generative AI and the increase in demand that you're seeing from the hyperscale guys, and we've already seen how big these clusters can get, and it's really putting, I don't want to say stress on your networking assets, but I would think, given the size of the clusters we're seeing, that the electro-optic products like Tomahawk 5 that you're releasing next year, which put photonics right on the chip, would become more attractive because they significantly reduce the power requirements. And I know no one's used it yet, but I would think that interest in that should increase. Am I wrong?
spk27: You're not wrong. All this, as I indicated up front in my remarks, yeah, we see our next generation coming up, Tomahawk 5, which will have co-packaged silicon photonics as a key part of the offering, not to mention that it's going up to 51 terabits per second of switching bandwidth, which is exactly what you want to put in place for very demanding AI networks, especially if those AI networks start running to 32,000-GPU clusters running at 800 gigabits per second. Then you really need a big amount of switching, because those kinds of networks, as I mentioned, have to be very low latency, virtually lossless. Lossless Ethernet calls for some interesting science and technology, because by definition, Ethernet traditionally tends to be lossy, but the technology is there to make it lossless. So all this fits in with our new generation of products, not to mention our Jericho 3 AI, which, as you know, is a router with a unique, differentiated technology that allows for very, very low tail latency in terms of how it transmits and reorders packets, so that there's no loss and very little latency. And that exists in network routing in telcos, which we now apply to AI networks in a very effective manner. And that's our whole new generation of products. So yes, we're leaning into this opportunity with our networking technology and next-generation products, very much so. You hit it right on, and that is what makes it very exciting for us in AI. It's in the networking area, the networking space, that we see the most interesting opportunities.
spk17: Thank you. One moment for our next question. And that will come from the line of Antoine Chkaiban with New Street Research. Your line is open.
spk10: Hi, thank you very much for the question. I'll stick to a single-part question. Can you maybe double-click on your compute offload business? What can you tell us about how growth could split between revenues from existing customers and potential diversification of that business going forward? Thank you.
spk27: Thank you. Good question. And I will reiterate, in some other way, the answer I've given to certain other audiences who have asked me this question. You know, we really have only one real customer, one customer. And in my forecast, in my remarks so far on offload computing, it's pretty much very, very largely around one customer. It's not very diversified. It's very focused.
spk26: That's our compute offload business.
spk17: Thank you. One moment for our next question. And that will come from the line of CJ Muse with Evercore ISI.
spk07: Hey, thank you. This is Kurt Swartz on for CJ. Wanted to touch on software gross margins, which continue to tick higher alongside softness in Brocade. Curious what sort of visibility you may have into Brocade stabilization, and how we should think about software gross margins as mix normalizes. Thank you.
spk27: Okay. Well, you know, our software segment comprises, you hit it correctly, two parts. There are our core software products, revenues sold directly to enterprises. And these are your typical infrastructure software products. These are multi-year contracts, and we have a lot of backlog, something like $17 billion of backlog, averaging over almost two and a half to three years, and every quarter a part of that renews, and we give you the data on it. It's very stable, and given our historical pattern of renewing on expanding consumption of our core group of customers, we tend to drive that in a very stable manner, and the growth rate is very, very predictable, and we're happy with that. Then we overlay on it a business that is software, but also very appliance-driven: the Fibre Channel SAN business, or Brocade. And that's very enterprise-driven, very, very much so. It's only used by enterprises, obviously, and lots of enterprises at that. And it is a fairly cyclical business. And last year was a very strong upcycle. And this year, not surprisingly, the cycle is not as strong, especially compared year-on-year to the very strong numbers last year. So this is the phenomenon. The outcome of combining the two is what we're seeing today. But in my view, next year the cycle could turn around, and Brocade would go up. And then instead of a 3% year-on-year growth in this whole segment, we could end up with a high single-digit year-on-year growth rate, because for the core software revenue, as I've always indicated to you guys, you want to plan long-term on a mid-single-digit year-on-year growth rate.
spk26: And that's the very predictable part of our numbers.
spk17: Thank you. And today's final question will come from the line of Vijay Rakesh with Mizuho. Your line is open.
spk04: Hi, Hock. Just a quick, I'll keep it to a two-part question for you to wrap up. So just wondering what the content uplift for Broadcom is on an AI server versus a general compute server. And if you look at generative AI, what percent of servers today are being outfitted for generative AI? As you look, you know, you have the dominant share there. And where do you see that uptake ratio for generative AI a year out, if you look at fiscal 24, 25?
spk27: You know, I'm sorry to disappoint you on your two parts, but it's too early for me to be able to give you a good answer, or a very definitive answer, on that, because, you know, by far the majority of servers today are your traditional servers running x86 CPUs, and your networking today is still very much traditional Ethernet data center networking, because most enterprises, if not virtually all enterprises today, are very much still running their own traditional servers on x86. Generative AI is something so new, and in a way the limits of it are so extended, that what we largely see today is the hyperscale guys deploying that generative AI infrastructure at scale. Enterprises continue to deploy and operate standard x86 servers and Ethernet networking in their traditional data centers. And so what we're seeing today may be the early part of the whole cycle. That's in question, which is why I cannot give you any definitive view or opinion of what the attach rate, what the ratio, will be, or whether there's any stability that could be achieved anywhere in the near term. We see both running and coexisting very much together. All right. Thank you. Thanks.
spk17: Thank you. I would now like to turn the call over to Ji Yoo for any closing remarks.
spk18: Thank you, operator. In closing, we would like to highlight that Broadcom will be attending the B of A Global Technology Conference on Tuesday, June 6th. Broadcom currently plans to report its earnings for the third quarter of fiscal 23 after close of market on Thursday, August 31st, 2023. A public webcast of Broadcom's earnings conference call will follow at 2 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
spk17: Thank you all for participating. This concludes today's program. You may now disconnect.