This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Marvell Technology, Inc.
5/29/2025
Good afternoon and welcome to Marvell Technology Inc. first quarter of fiscal year 2026 earnings conference call. Note that all participants will be in listen-only mode. Should you need assistance, please signal a conference specialist by pressing the star key followed by zero. After today's presentation, there will be an opportunity to ask questions. Please note that this event is being recorded. I would now like to turn the conference over to Mr. Ashish Saran, Senior Vice President of Investor Relations. Please go ahead, sir.
Thank you and good afternoon, everyone. Welcome to Marvell's first quarter fiscal year 2026 earnings call. Joining me today are Matt Murphy, Marvell's Chairman and CEO, and Willem Meintjes, our CFO. Let me remind everyone that certain comments made today include forward-looking statements which are subject to significant risks and uncertainties that could cause our actual results to differ materially from management's current expectations. Please review the cautionary statements and risk factors contained in our earnings press release, which we filed with the SEC today and posted on our website, as well as our most recent 10-K and 10-Q filings. We do not intend to update our forward-looking statements. During our call today, we will refer to certain non-GAAP financial measures. A reconciliation between our GAAP and non-GAAP financial measures is also available in our earnings press release. Let me now turn the call over to Matt for his comments on the quarter. Matt?
Thanks, Ashish, and good afternoon, everyone. For the first quarter of fiscal 2026, Marvell delivered record revenue of $1.895 billion, above the midpoint of guidance, reflecting a 4% sequential increase and strong 63% year-over-year growth. The data center end market continued to deliver strong growth, driven by robust AI demand. In addition, we were pleased to see ongoing revenue recovery in our carrier infrastructure and enterprise networking end markets. Higher revenue versus our forecast also resulted in record non-GAAP earnings per share, which came in above the midpoint of guidance. In addition, we significantly increased our stock repurchases in the first quarter, buying back $340 million, a substantial step up from the $200 million repurchased in the prior quarter. During the first quarter, we announced the sale of our automotive Ethernet business to Infineon in an all-cash transaction valued at $2.5 billion. We are extremely proud of the progress we have made in organically building and expanding this business, which we expect will deliver a compelling financial outcome for Marvell stockholders. The closing of this transaction, which we expect within calendar 2025, will provide us with additional flexibility in our capital allocation strategy. We started our new fiscal year on a strong note with Q1 results ahead of plan and are forecasting second quarter revenue of $2 billion at the midpoint of guidance. This represents 57% year-over-year growth and would set another record revenue level for Marvell. Let me now discuss our results and expectations for each of our end markets. In our data center end market, we achieved record revenue of $1.44 billion in the first quarter, growing 5% sequentially and 76% year-over-year. Looking ahead to the second quarter, we expect this momentum to continue, with data center revenue projected to grow sequentially in the mid-single-digit range on a percentage basis, while maintaining strong year-over-year growth.
These strong results, along with our second quarter guidance, are being driven by the rapid scaling of our custom AI silicon programs to high-volume production, along with robust shipments of our electro-optics products for AI and cloud applications. We continue to expand the capabilities of our advanced technology platform to enable our customers to build full rack-level custom infrastructure, including innovative technologies such as custom high-bandwidth memory and co-packaged optics. Marvell's custom HBM compute architecture enhances XPUs by optimizing the I/O interfaces between accelerator silicon and the memory embedded in the package. This enables AI custom compute accelerators with much more efficient integration of main memory, which can increase performance and reduce run times, resulting in increased utilization. We continue to see strong interest from multiple customers who are targeting the highest performance and optimized TCO for their upcoming AI solutions that leverage the most advanced HBMs. Marvell's breakthrough co-packaged optics platform enables customers to integrate our silicon photonics light engine into future custom AI accelerators. Co-packaged optics can drive a transition from copper interconnects to optical fiber for scale-up AI clusters. This technology will enable larger AI servers with significantly higher total system memory capacity and processing capability, which we expect will provide the scaling headroom required for the next wave of AI models. We expect this transition from copper, which does not contain any active silicon, to optical interconnects to significantly expand Marvell's interconnect revenue and market opportunities. Earlier this month, we announced two new additions to our custom platform. First, we announced a partnership with NVIDIA, adding their NVLink Fusion technology to our expanding custom platform.
Marvell custom silicon with NVLink Fusion provides our customers with an accelerated path to custom scale-up solutions, offering greater flexibility and choice in developing next-generation AI infrastructure. This announcement further validates the proliferation of custom XPUs as a strong complement to merchant solutions. In addition, earlier today we announced Marvell's new multi-die packaging platform, the first of many advanced packaging innovations we are bringing to the market. The solution is already qualified and has entered production in support of a customer-specific XPU program. This platform enables customers to realize multi-die architectures utilizing differentiated Marvell-designed interposer technology. This approach can enable more efficient die-to-die interconnect, lower power consumption, higher yields, and lower product costs. Marvell's solution offers a compelling alternative to traditional silicon interposers for custom cloud applications. We expect to continue to proliferate this technology more broadly in next-generation designs. Moving on to the progress we are making in our custom business, we are benefiting from revenue contributions across multiple programs, including XPUs and other accelerators. Our lead XPU program for a large US hyperscale data center customer is doing extremely well and has become a key revenue driver for our custom business. I'm incredibly proud of our team and the close collaboration with this customer to drive this program to volume production with A0 silicon and meet the customer's steep ramp. As I mentioned last quarter, we are fully engaged with this customer on the follow-on generation, and I'm pleased to report that we have now secured three nanometer wafer and advanced packaging capacity and expect to start production in calendar 2026. At the same time, our architecture team is working with the customer to support the definition of the generation after that.
This is all consistent with the multi-generational nature of these engagements, reflecting the benefit of working with Marvell over the long term. As a result, we anticipate that our revenue from custom AI XPUs for this customer will continue to grow next year, fiscal 2027, and beyond. At our AI Day last year, we also announced a significant design win for a custom AI XPU with another U.S. hyperscaler. Joint development on this program continues to progress well, and we are already engaged with this same customer on the architecture for the follow-on generation of this AI XPU program. I'm very pleased with the strong progress we are making across both our current and upcoming custom programs. This momentum reinforces our confidence in achieving our long-term goals for custom revenues. We will be highlighting our broad and expanding range of opportunities at our custom AI investor event on June 17th. I will share more details about this event in a moment. Turning to our interconnect portfolio, our PAM and DCI franchises continue to lead the industry in enabling the build-out of AI and cloud infrastructure. At this year's Optical Fiber Conference, we showcased a broad range of products, technologies, and ecosystem initiatives designed to power the next generation of scale-up and scale-out AI deployments, including the industry's first 400 gig per lane PAM technology, a critical step towards 3.2T optical interconnects, enabling pluggable transceiver modules to remain the dominant solution for scale-out connectivity for the foreseeable future. Co-packaged optics and co-packaged copper technologies, providing greater interconnect densities and longer reach for scale-up networking. Silicon photonics light engines, scaling up to 6.4T, consolidating hundreds of components into compact modules optimized for both pluggable and CPO applications. 1.6T linear drive pluggable optical modules using Marvell's silicon photonics light engine.
The industry's first three nanometer 1.6T PAM-4 DSP featuring 200 gig per lane electrical and optical interfaces, enabling customers to reduce module power consumption by more than 20% compared to its predecessor. Next-generation 800-gig DCI modules supporting data transmission over distances up to 1,000 kilometers. Production-ready 1.6T AEC DSPs for emerging 200-gig per lane accelerated infrastructure. Coherent-lite DSPs enabling power- and performance-optimized solutions for the emerging market of distributed campus data center interconnects, spanning distances up to 20 kilometers. And finally, PCIe Gen 6 and Gen 7 SerDes for end-to-end connectivity over optics. We received highly positive feedback from customers, partners, and industry analysts during the event, making OFC 2025 another home run for the Marvell team. Now, let me turn to Marvell's enterprise networking and carrier infrastructure end markets. In the first quarter, enterprise networking revenue was $178 million, while carrier infrastructure revenue totaled $138 million. Collectively, revenue grew by 14% sequentially, exceeding the midpoint of our forecast and reflecting the ongoing recovery in both end markets. Looking ahead to the second quarter of fiscal 2026, we expect aggregate revenue from enterprise networking and carrier infrastructure to grow sequentially in the mid-single-digit range on a percentage basis. In the consumer end market, first quarter revenue was $63 million, representing a 29% sequential decline. Looking ahead to the second quarter, we expect consumer revenue to grow by approximately 50% sequentially. These sequential changes are largely driven by seasonality in gaming demand, which continues to be the primary factor driving our consumer business. Turning to our automotive and industrial end market, first quarter revenue was $76 million, declining by 12% sequentially.
While we saw sequential growth in our automotive end market, this was more than offset by a decline in our industrial end market, which can be lumpy in any given quarter. Looking ahead to the second quarter of fiscal 2026, we anticipate overall revenue from the auto and industrial end market to be flat on a sequential basis. In summary, in the first quarter of fiscal 2026, we continued to deliver operating margin expansion, earnings per share growth, and new revenue records. These results reflect strong contributions from our AI-driven data center end market and ongoing recovery in our enterprise networking and carrier infrastructure end markets. While there are ongoing macroeconomic uncertainties, we are guiding for continued growth in the second fiscal quarter. We continue to closely monitor the broader environment to assess potential long-term impacts. As I mentioned during our March earnings call, AI now represents the majority of our data center revenue, and we expect the relative proportion of AI-related revenue to grow further in the coming years, driven in large part by our custom silicon business. We've made tremendous progress since our AI Era event last year, and in light of this momentum, we are hosting a dedicated forum on June 17th for investors to gain deeper insight into Marvell's unique position in the custom silicon market. This event will be broadcast live and provide an ideal setting to showcase our differentiated technology platform and provide an opportunity to hear directly from me and a broad cross-section of Marvell's engineering leadership and members of my direct staff. Presentations will be followed by a Q&A session, providing investors and analysts an opportunity to ask questions. During the event, we will highlight our expertise in system and semiconductor design, our advanced process and package roadmap, and our comprehensive portfolio of semiconductor platform solutions and IP.
Complementing this technical deep dive, the agenda will include a market-focused section with updates on the expanding market opportunity for custom silicon, a robust design win pipeline, and the growing role of custom silicon in AI infrastructure. We will also share our progress towards the market share goals we set at our AI Era event last year, along with our vision for significant growth in the years ahead. We are excited to share more details about this incredible custom silicon opportunity on June 17th. Looking more broadly, we continue to see strong tailwinds in AI, including robust capital expenditure plans from hyperscalers, an increasing number of sovereign data center announcements, and an emerging group of hyperscalers further expanding the market. These fast-growing markets present us with a diverse set of opportunities, increasing our confidence in the long-term potential of our data center business. In addition, we are seeing an encouraging recovery in carrier infrastructure and enterprise networking, with our second quarter forecast representing the fifth straight quarter of sequential revenue growth for these combined end markets. With that, I'll turn the call over to Willem for more detail on our recent results and outlook.
Thanks, Matt, and good afternoon, everyone. Let me start with a summary of financial results for the first quarter of fiscal 2026. Revenue in the first quarter was $1.895 billion, exceeding the midpoint of our guidance, growing 63% year over year and 4% sequentially. Data center was our largest end market, contributing 76% of total revenue. GAAP gross margin was 50.3%. Non-GAAP gross margin was 59.8%. Moving on to operating expenses. GAAP operating expenses were $682 million, including stock-based compensation, amortization of acquired intangible assets, restructuring costs, and acquisition-related costs. Non-GAAP operating expenses came in at $486 million, slightly below guidance. Our GAAP operating margin was 14.3%, while non-GAAP operating margin was 34.2%. For the first quarter, GAAP earnings per diluted share was $0.20. Non-GAAP earnings per diluted share was $0.62, reflecting year-over-year growth of 158%, which is more than double the pace of revenue growth, demonstrating the significant operating leverage in our model. Now turning to our cash flow and balance sheet. Cash flow from operations in the first quarter was $333 million. Our inventory at the end of the first quarter was $1.07 billion, an increase of $42 million from the prior quarter to support the growth in our business. We returned $52 million to shareholders through cash dividends. In addition, reflecting our strong belief in Marvell's future prospects, we significantly increased repurchases in the first quarter, buying back $340 million of our stock, stepping up from the $200 million we repurchased in the prior quarter. Our total debt was $4.2 billion, with a gross debt to EBITDA ratio of 1.8 times and a net debt to EBITDA ratio of 1.42 times. Our debt ratios have continued to improve as we have driven an increase in our EBITDA. As of the end of the first fiscal quarter, our cash and cash equivalents were $886 million.
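[Editor's note: the operating-leverage arithmetic quoted above can be sanity-checked as follows. This is a sketch; the year-ago EPS figure is back-solved from the stated 158% growth rate rather than quoted on the call.]

```python
# Back-check the Q1 FY2026 operating-leverage claims from the call.
q1_eps = 0.62            # non-GAAP diluted EPS, Q1 FY2026 (stated)
eps_growth = 1.58        # 158% year-over-year EPS growth (stated)
revenue_growth = 0.63    # 63% year-over-year revenue growth (stated)

# Growth of 158% means this year's EPS is 2.58x the year-ago figure.
implied_prior_eps = q1_eps / (1 + eps_growth)
print(f"Implied year-ago EPS: ${implied_prior_eps:.2f}")  # ~ $0.24

# "More than double the pace of revenue growth":
print(eps_growth > 2 * revenue_growth)  # True
```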
Turning to our guidance for the second quarter of fiscal 2026, we are forecasting revenue to be in the range of $2 billion, plus or minus 5%. We expect our GAAP gross margin to be between 50% and 51%. We expect our non-GAAP gross margin to be between 59% and 60%. Looking forward, we anticipate that the overall level of revenue and product mix will remain key determinants of our gross margin in any given quarter. For the second quarter, we project our GAAP operating expenses to be approximately $735 million. We anticipate our non-GAAP operating expenses to be approximately $495 million. For the second quarter, we expect other income and expense, including interest on our debt, to be approximately $49 million. We expect a non-GAAP tax rate of 10% for the second quarter. We expect our basic weighted average shares outstanding to be 864 million and our diluted weighted average shares outstanding to be 874 million. We anticipate GAAP earnings per diluted share in the range of 16 cents to 26 cents. We expect non-GAAP earnings per diluted share in the range of $0.62 to $0.72. In summary, we continue to drive strong revenue growth, expand our operating margins, generate strong cash flow, and return increasing amounts of capital to our stockholders. We are looking forward to completing the sale of our automotive Ethernet business to Infineon, which will provide us with even more flexibility to execute on our capital allocation strategy. With that, we are ready to start our Q&A session. Operator, please open the line and announce Q&A instructions. Thank you.
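[Editor's note: the non-GAAP EPS guidance above roughly ties out from the individual guidance items. This is a simplified sketch using the midpoint of each range and treating the entire other income/expense line as a net expense; Marvell's actual model will differ in detail.]

```python
# Rough tie-out of Q2 FY2026 non-GAAP EPS guidance from the stated items.
revenue = 2000.0          # $M, revenue guidance midpoint
gross_margin = 0.595      # midpoint of the 59%-60% non-GAAP range
opex = 495.0              # $M, non-GAAP operating expenses
other_expense = 49.0      # $M, other income/expense incl. interest (assumed net expense)
tax_rate = 0.10           # non-GAAP tax rate
diluted_shares = 874.0    # millions, diluted weighted average

pretax = revenue * gross_margin - opex - other_expense
eps = pretax * (1 - tax_rate) / diluted_shares
print(f"Implied non-GAAP EPS: ${eps:.2f}")  # ~ $0.67, the $0.62-$0.72 midpoint
```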
Thank you, sir. We will now begin the question and answer session. To ask a question, you may press star then one on your touchtone phone. If you are using a speakerphone, please pick up your handset before pressing the keys. And to withdraw your question, please press star then two. And in the interest of time, please restrict yourself to one question only. And if you have additional questions, please rejoin the queue. At this time, we will pause momentarily to assemble our roster. And our first question comes from Vivek Arya at Bank of America. Please go ahead.
Thank you. And thanks, Matt, for clarifying that you have that three nanometer program at your large XPU customer base and have the follow-on program at the other large customer, just given all the Asia supply chain noise. I guess I have two related questions. One is, what is the direction of content in these next-generation programs? You know, is it up, down, or flat? And number two, are you exclusive on these three-nanometer XPUs, or do you expect to share that program, given that, you know, your competition also seems to be convinced that they have the program? So just, you know, comments on content and exclusivity. Thank you.
Yeah, hey Vivek, thanks for the question. So first, yeah, I'm glad you said it. I can certainly appreciate the challenge for investors given the news flow that is coming out on seemingly a weekly basis here. The fact is that these Asia supply chain sources have an incomplete view. We run a very tight ship here at Marvell. We don't share customer confidential information. We treat that with the highest priority. So they simply have no idea what we're doing for our customer. So let me state the facts as they exist today. We're the incumbent shipping the current generation of this AI XPU. And as I've detailed, we've had a very successful and rapid ramp on this program from A0, which is first-time success, to high volume production. I've talked about this for a couple of quarters. We've been engaged with this customer on the follow-on generation of this XPU. This next generation program has continued to move forward. In this quarter, we have secured three nanometer wafer and advanced packaging capacity. And that's for 2026, where we expect to start production. And that would be subject to the typical completion of our customer's qualification cycles, which we now have some experience with. And so assuming AI CapEx continues to grow, we expect our custom silicon revenue to continue to grow next year through program transitions at this customer. So look, our customers have relationships with many partners, and they do that to build out their whole silicon portfolio. The key point is that irrespective of any of these relationships, we expect our revenue to continue to grow on a multi-year, multi-generational basis with this customer. And given the volumes that have materialized for XPUs, it's certainly possible and likely that customers, and our customer, may be pursuing multiple paths to meet their requirements. So thank you for asking this important question, Vivek. I think my answer is as clear as it could be. Thanks.
Thank you, Vivek. Thank you. Next question will be from Ross Seymour at Deutsche Bank. Please go ahead.
Hi, Matt. Thanks for that color. Following on Vivek's question, one of your primary XPU competitors has talked about engagements at additional customers, and not necessarily to front-run anything you may or may not say on June 17th, but how do you see Marvell's ability to support a broader customer base beyond the initial, I guess, 3-4 that you have, depending upon if we're talking XPUs or connectivity?
Yeah, thanks, Ross. Yeah, we'll cover this certainly at the AI investor event we'll be holding. But just to give you some color, we absolutely have the capacity engineering-wise to expand our portfolio and expand our engagements. If you look at Marvell's R&D spend over the last few years through cycle, we've made increases there. And then within that, as part of our capital allocation strategy, we've aggressively repurposed and reallocated our talent and our teams to this massive data center and AI opportunity. So if you look at just the raw R&D spending, the investments we're making in next generation technologies, and again, we're going to cover all this at the AI day, we are extremely well positioned to support, and are supporting, multiple increased engagements across a number of programs that we'll detail and we'll talk about and we'll frame at the AI investor event. But I think we're very well positioned to support designs that can really grow the company to even a much larger scale and level. And I would say also, by the way, it's not just, you know, the top sort of four. I think there's been a really interesting emergence over the last year or so of the next wave of hyperscale-class customers that can benefit from our technology relative to going custom. So we're going to talk about all that just a couple weeks out here. Look forward to talking to you then.
Thank you.
Next question will be from Tore Svanberg at Stifel. Please go ahead.
Yes, thank you, and congrats on the results. Matt, maybe in the spirit of clarifying things, there's a lot of debate out there about your SerDes technology, especially for 200 gig SerDes. So maybe you could talk a little bit about Marvell's positioning there. And maybe related to that, how this relationship with NVIDIA is going to work out. I'm sure there's a lot you can't share with us, but I tried anyway.
Yeah, sure. Yeah, on the first topic, and we'll actually be showcasing this at our AI event, our SerDes technology remains best in class, both from a performance standpoint as well as a time-to-market standpoint. If you recall, we've had now leadership announcements, and production, by the way, on 200 gig, and it's performing extremely well. And we just showed off at OFC the first and only 400 gig per lane demonstrations, with a very aggressive roadmap there. So that is in great shape, our technology across the board, both electrical and optical SerDes. And remind me of your second question?
Yeah, with the NVLink Fusion and how that's going to work out.
Yeah, so we're excited about that partnership. I think it's a sign that kind of demonstrates how, well, one, we're deeply engaged in the ecosystem, Tore, with key partners. And the second, I think it's some validation as well that, from a market standpoint, there is a complementary role for custom, even as acknowledged by NVIDIA, given the R&D investment they've already put into their rack-scale solutions, but with people that want to do their own XPUs. And so we're part of that discussion and we're going to help enable that if that's the direction our customers want to go. And we just view it as another key piece to the whole portfolio that we offer to customers kind of end to end. And I detailed some of those in my prepared remarks as well around things like custom HBM, co-packaged optics, advanced packaging and others. These are really part of a suite of technologies to give the broadest offerings to our customers with the best-in-class technology.
Thanks for all that color, Matt. Yep.
Next question will be from Timothy Arcuri at UBS. Please go ahead.
Thanks a lot. Matt, I was wondering if you can break down data center revenue a little bit. I think last quarter you said that AI was about 55% of data center. I wonder if it was in the same range. I assume maybe custom silicon grew a little faster because the gross margin was a little bit lower. So I guess that's the first part of the question. And then obviously you're running way above this $2.5 billion number this year for AI revenue. Can you give us just a sense of where it's tracking for the year? Are we talking like $3.5 to $4 billion? Maybe you'll talk about this at the day here in a few weeks, but I'm wondering if you can sort of add some color to this for us. Thanks.
Yeah, thanks, Tim. Yeah, we didn't give an exact number when we talked about it relative to our Q4, which was the data point we gave, but we did say at that time that within data center, AI had crossed and was now the majority of our data center revenue. And I think the way to think about it is, because we're not going to break it out on a sort of quarter-by-quarter basis, but as I've prepared for the call and been looking at the trajectory of the business, and I say this with, also, you've got to factor in that we anticipate the mass market and the core business to continue to recover. You know, we see a path here in the not too distant future where AI is not only the majority of the data center end market, but it becomes the majority of the whole company. And so where that line is exactly, you know, we're not calling it, but that's the trajectory. So I think the mental shift is like not sort of figuring out within data center what that exact number is, but, you know, you can do your own models. You guys are very, very capable there, drawing a line between here and when it can actually cross as a 50% contributor to the whole company. But that's coming.
But I guess then is that why, so did the custom ASIC business grow a little faster, Matt? That's why the gross margin was a little lower?
Oh, yeah, on that side of things. Yeah, there's mix within the quarter, and clearly the custom business we have does run at a fundamentally lower gross margin. So that is going to modulate our gross margins on a quarterly basis, and you've seen the impact. But we've also had, I think, strong recovery in some of the other businesses as well. But the custom business has always fundamentally run at a lower-than-company-average gross margin.
Maybe I can add quickly. When you look at our Q2 guide, certainly custom continues to grow really strongly, and that's what you're seeing reflected in that. When we look forward to the second half, clearly, we're optimistic on custom demand. And that interplay between that and the rest of the business is really going to drive what gross margin will be in the second half. I mean, right now, it's probably expected to be in a similar range as what we guided for Q2.
Okay. Thank you, Willem. Yeah. Thanks, Tim.
Next question will be from Chris Caso at Wolfe Research. Please go ahead.
Yes. Thank you. Good evening. I guess the question is regarding the second half of the year. And I'm sure you're not ready to provide that guidance right now, but could you give us some puts and takes on, you know, what your expectations may be? Do you expect the custom business to continue to grow in the second half of the year? You know, what do you expect with regard to, you know, the carrier and enterprise businesses? Is this kind of a, you know, kind of slow, steady ramp in those businesses? And any kind of color on the second half, please?
Yeah, thanks, Chris. Yeah, maybe just at a higher level, you know, and I said this in some of my prepared remarks, but we are monitoring, you know, the macro and the various dynamics going on out there. So that's always a factor. But at the moment, you know, we see a couple things happening. The first is, and even as recently as yesterday, you see strong numbers out of NVIDIA and other announcements leading up to that. But we see the AI demand, you know, continuing, and the data center demand continuing. And then on top of that, we have this nice recovery, strong recovery, actually, we've been driving in our core business, in particular, enterprise networking and carrier. So we expect that to continue to recover and grow throughout the rest of the year as well. So we're not calling it on a quarter-by-quarter basis, but right now we expect growth kind of across the board through the year with, I think, what can be a really nice setup for fiscal 27 with some of the growth drivers we've articulated before. So that's where we're at right now, and I feel good about it.
Thanks. Thank you.
Next question will be from Harsh Kumar at Piper Sandler. Please go ahead.
Yeah. Hey, Matt. Congratulations on solid results. When we talk to investors, Matt, we find that there's a desire on the part of investors for the data center business to grow faster than, let's say, what you've been putting up, mid-single digits in the April quarter and then also the guidance. I know that in the last quarter, Matt, you had some moving parts with on-prem, et cetera. I was hoping that you could clarify the growth of your AI business versus some of the non-AI pieces like on-prem or some other things that might be moving around, and just help us understand how fast maybe your AI business is growing.
Yeah, thanks, Harsh. Yeah, and I think the way to think about it, which is helpful, is to contextualize. You know, it's not a one-quarter snapshot. So, you know, if you look at the last few quarters, you know, we had a very dramatic step up from Q3 to Q4, you know, 20% plus in the data center business. It grew 4% in Q1. We're guiding it up mid-single digits. I just kind of gave some indication that it'll keep growing. And so if you look at the year, that's one way to think about it: we did have a significant step up. And Q2, as signaled, is looking like it's going to grow slightly faster than Q1. So that's a good sign. But also I'd point you to the fact that, you know, look, a year ago, we were in a very strong position relative to our data center and how it was growing. So the comparables are also telling, right? When you're talking about data center year-over-year revenues for a company at our scale growing 70% plus on a year-over-year basis, we feel good about that. So I think with that setup, if you look at just the last, you know, two, three quarters, and you look at year over year, we feel really good about the trajectory of the business. And, you know, AI continues to be the fastest-growing portion of that in terms of our data center business. And as I said earlier, to answer Tim's question, you know, at some point that'll just flip over and become, you know, half of the whole company. So I think that's the context I can give you at the moment. Thanks.
So Matt, hopefully you can talk about the on-prem piece if possible, or the other parts that are moving.
Well, yeah, the on-prem is pretty small at this point. So it has some effect, and it continues to be a little bit of a drag. But the overall data center revenue is driven primarily by AI, and the other stuff's performing pretty much in line. I don't think it's a drag going forward. If you're worried about where it lands in the next few quarters, it's not as big a contributor as it used to be, and it's probably where it is. Maybe it comes back a little bit, but the spending has shifted just so dramatically in the data center. The enterprise on-prem piece is just not getting the spend anymore. So I think it was down a little bit in Q1, but it's relatively stable. Yeah.
Thank you. Thank you so much.
Yep.
Next question will be from Tom O'Malley at Barclays. Please go ahead.
Hey, guys. Thanks for taking my questions. I have two quick ones. The first one's for Willem. You're guiding data center at mid-single digits into the out-quarter, and you're saying custom silicon is growing quite strongly. Is there something going on with optical? Is that flattening out a bit? Can you give us an update on the health of optical and what's embedded in your guide? And then the second is for Matt. In your announcement with Amazon, you talked about a bunch of other products beyond custom silicon that you guys were engaging on over the next couple of years: PCIe cables, PCIe retimers, AECs. Could you maybe try to size what that impact is right now? Some of your competitors in that space are putting up some bigger numbers. We'd love to see how significant that is for you guys. Thank you.
Yeah, thanks, Tom. Maybe I'll take the second part first. I think those engagements are going very well. You're right that in our broader agreement there were actually a lot of different exciting opportunities on the networking connectivity side. Those are progressing, but no major updates there. And then with respect to how the optics business is doing, it's done quite well, and in fact we see that business growing throughout this year. So we see growth from custom as well as optics, and again, the Amazon agreement is progressing well. I would say, just at a high level, in some of these new emerging categories we do have AECs ramping this year, I wanted to note that, as well as continued revenue growth in switching. So a lot of good things happening there, Tom.
I'll just add quickly, in terms of the puts and takes on gross margin, the other one to take into account there is the strong snapback in consumer. That's seasonal and expected, but that's another area that's typically slightly below average when you look at the overall mix.
Can we have the next question, please? Harlan Sur at JPMorgan.
Hey, good afternoon. Nice job on the quarterly execution, guys. The lead AI GPU vendor, I think, is getting ready to commence shipments of their next-generation platform. And in parallel, they're rolling out their next-gen X800 InfiniBand and Ethernet scale-out solutions that support 1.6T optical connectivity. And typically, the optics start to ship ahead of the GPUs and rack-scale platforms. Do you anticipate a strong ramp of your 1.6T solution starting now? And more importantly, your 5 nanometer solution, I think, is already qualified with this customer. Are you guys going to be ramping your 3 nanometer DSP solution into this next-generation networking platform?
Yeah, thanks, Harlan. And you're right about the sequencing. So we have commenced shipments of 1.6T at 5 nanometer. We have very strong demand for 3 nanometer, and we've executed very well on that product development so far. From a 1.6T lifespan perspective, I think that's where the volume is going to be, and I think that's going to be a much stronger ramp next year. And some of that, I'd say, is because 800 gig has continued to be so strong across the board that, as a percentage of the total, this entire year is still largely dominated by 800 gig. But yeah, we're shipping now. The part works well, and there's just a lot of demand. The initial ramps and customers will be on 5 nanometer, but there's a big push in the ecosystem to drive it to 3 nanometer, given the compelling power savings and the performance of our product. So all that's going well. We're well positioned for the 1.6T ramp and ecosystem expansion.
Thanks for the update, Matt.
Yeah, thanks, Harlan.
Next question will be from Blaine Curtis at Jefferies. Please go ahead.
Hey, good afternoon. Thanks for taking the question, and thanks for clarifying all that ASIC stuff. I just kind of wanted to ask, going back to the gross margin. I think there's been a lot of discussion about what an ASIC vendor can and can't charge for, whether they can charge for memory. I think you guys kind of put a line in the sand at a certain gross margin for ASICs. I'm curious if that's still the right level. And when you look at these next generations, and I'm glad you're involved with all those, how should we think about sustainable AI ASIC gross margins? Is it still in the original range you were looking at?
Yeah, thanks, Blaine. When we look at our overall custom business as a whole, we've been able to manage the gross margins, even with these ramps, in the range we've talked about. But within custom there is a range, and it's like the old adage in semiconductors: the higher the volume, the lower the gross margins, but you're going to see a lot more operating income. And that's what we see. I wouldn't say that the percentages are really any different relative to how it stacks up on the opportunity set. Things with a lot more Marvell contribution, as an example, where we leverage a lot of the IP and maybe sit more outside the XPU, those tend to be a little higher. And of course they don't grab the limelight, but we're going to talk about those at the AI day, because in aggregate they add up to a very nice revenue pipeline for us, an opportunity set with that same multi-generational aspect to it. But from a percentage standpoint, we're managing the business fine, and the AI portion, the XPU, is always going to be lower. It's just that, depending on the magnitude of the XPU, that may move the mix around. But there's a certain range for these programs and projects, and we haven't really seen that change; the magnitude of the volume has changed. And I'd say that's across the board, by the way. Not only have the XPUs we have in flight or in production increased a lot from when we won them, but these other types of accelerators or custom networking or custom NICs, those have also, quite frankly, come in at a lot more volume than we had anticipated, and those carry a little bit higher margins. So I hope that context is helpful, but we're still managing the business in the same way.
Thanks, Matt. Maybe one additional add, just on operating margins. I think the one thing that's consistent across all these programs is that we're driving accretive operating margin to our model. So clearly there is a range, but as you scale those programs, it's a finite investment, and you've seen that leverage in our model over the last couple of quarters here.
Yeah, I mean, I think you're seeing it in operating margin dollars flowing in, Blaine, and also in EPS, right, which still continues to grow at like twice the rate of revenue at this point. And we don't see that changing relative to the leverage we get on these big programs, which are very operating-margin rich in terms of just the dollars. Thanks, Blaine.
Yeah, thanks. Next question will be from Quinn Bolton at Needham & Company. Please go ahead.
Hey, thanks for taking my question, Matt. I wanted to follow up on the last question, or maybe just get a clarification on whether these ASIC programs could potentially be dual sourced. I think towards the end of your answer, you said that there could be multiple paths, and I'm not sure if you're suggesting there might be different versions of an XPU or whether they could actually be dual sourced. To the extent that they're dual sourced, you spent a couple of years designing these programs. Do you guys have guarantees that you don't get blanked at the end of the day if they take the other path?
Yeah, no, Quinn, I appreciate the question. I was quite lengthy in my prepared remarks, and even in my answer to Vivek, I think, in terms of the perspective I can offer you on that. So I think I've said it, and if I wasn't clear, I did say that, given the volumes, it's certainly possible that there are multiple paths pursued. For us, in our business, what I said in the prepared remarks as well is that we feel very confident and very good about our revenue continuity on the initial program and the revenue ramp on our additional program. We're planning production. We're tracking. We're doing that. So I see the noise coming in, but I also just need to focus on what I'm hearing from my customer and what we need to go do to execute and be a good partner.
Yeah, the second question is just a follow-up on NVLink Fusion. Is NVIDIA sort of opening up their NVLink, and you design your SerDes to be compatible with that and offer that out as an IP block to your ASIC customers? Or is there an IP transfer from NVIDIA to yourselves involved in that NVLink Fusion offering?
Yeah, the initial intent is that it's planned as a chiplet solution. So we could, for example, work with the customer, do the XPU design with them, and then, through IO chiplets, take a die-to-die interface between the XPU and the NVLink chiplet, and that then interfaces to the scale-up network. There are probably other models that can exist. I think the higher-level notion that Jensen and the team are driving is really trying to open up and make sure that customers can take advantage of the investment they've made in their infrastructure, even if they have unique needs down at a more granular level, like at the XPU. So we'll see how that all evolves, but we're very happy to be asked to participate and be a partner. We've already received a whole bunch of interest from customers on what this engagement might look like. So we'll see where it goes, but I think the high-level concept is what is important. Got it. Thank you.
Next question will be from Srini Pajjuri at Raymond James. Please go ahead.
Thank you, Matt. My question is on the optical side. I know you said your expectation for this year is for the optical business to grow. Obviously, you have 1.6T ahead of you, and you also said 800 gig demand is quite strong. But a couple of questions we get from investors. One, can you speak to your market share as we go from 800 gig to 1.6T, and also the potential impact from any technology transition such as LPO? And then the second question is, can you speak to the extent you have visibility into the current inventory at your customers? Because that question keeps coming up, given how strong this cycle has been so far. Thank you.
Yeah, thanks. So we've maintained very strong market share through the 800 gig wave. We led it; a few others have come in, but we still have a very, very strong, commanding position there. We were first to market at 800 gig, first to market at 1.6T on 5 nanometer, and first to market at 1.6T on 3 nanometer. We're driving 400 gig IO, which is going to enable 3.2T based on PAM. So I don't see any fundamental shift at all in the market share. And quite frankly, the faster we execute and the faster these products ramp, the better, because of the lead we have. So we feel good about that. On any adoption, I should say, not impact, of new technologies like LPO, I view that as very incremental. Those would be for things like scale-up, and we will participate in those. We have solutions based on our broadband analog technology, and we'll participate in LPO when it happens. Our view is still that it's going to be a smaller portion of the market, but we will be there, and we're engaged right now. And then I think your last question was about the broader inventory at our customers. That looks pretty good. We service this business direct and through a number of channels, but key data points like distribution inventory are very low. We had strong sell-through last quarter. We're looking at the whole year, and it still looks like a good year. We've got to wait and see; we're only guiding one quarter at a time here. But at a high level, this year and next year, especially with the 1.6T transition, we see continued strong growth in our optics business.
Got it. Thank you.
We will take our last question from Josh Buchalter at TD Cowen. Please go ahead.
Hey, guys. Thank you for taking my question. I wanted to follow up on a couple of previous ones. Can you maybe elaborate on what you would expect to see from a potential customer who's exploring multiple tracks? Is that generally a performance SKU and an efficiency SKU, or could you see instances where your hyperscale customers are engaged on completely different ASIC programs for completely different workloads? Thank you.
Yeah, thanks, and nice to hear from you. I think I've covered about as much as I'm going to on the topics from today. But I do think there'll be some discussion, probably at the AI day, around some of the different business models that are out there and the different things customers are looking for. At the moment, though, I've given about the maximum color we're going to give on this. But I appreciate the question. Thank you.
Thank you. This concludes our question and answer session. I will now turn the call over to Matt Murphy, Marvell's CEO, for closing remarks.
Great. Thank you, operator. And look, I'd just like to thank everybody for your interest in Marvell and for joining the call. We're off to a great start with our first quarter in fiscal 2026. I'm very excited to be guiding to a $2 billion quarter for the second quarter and to see that all the effort the Marvell team has put in is coming to fruition, relative to ramping some of these key programs and executing to meet our customers' needs. And I'm looking forward to connecting with everybody at the Custom Silicon event for investors in a couple of weeks and giving a deeper dive into this quite large opportunity in front of us. So thanks, everybody, and we'll talk soon. Bye.
Thank you, sir. Ladies and gentlemen, this does indeed conclude our conference call for today. Once again, thank you for attending. And at this time, we do ask that you please disconnect.