This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
3/4/2025
I am joined by Bill Brennan, Credo's Chief Executive Officer, and Dan Fleming, our Chief Financial Officer. During this call, we will make certain forward-looking statements. These forward-looking statements are subject to risks and uncertainties discussed in detail in our documents filed with the SEC, which can be found in the investor relations section of the company's website. It is not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. Given these risks, uncertainties, and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to the company's expectations, except as required by law. Also, during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for, or superior to, financial measures prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures are available in the earnings release we issued today, which can be accessed in the investor relations portion of our website. With that, I will now turn the call over to our CEO. Bill?
Thanks, Dan, and thank you for joining our earnings call for the third quarter of fiscal 25. I'll start with an overview of our third quarter results and then discuss our outlook. Following my comments, our CFO, Dan Fleming, will provide Q3 financial details and our guidance for the fiscal fourth quarter. For the third quarter, Credo reported revenue of $135 million, up 87% sequentially and up 154% year over year. Credo's non-GAAP gross margin was 63.8%. Credo achieved record revenue in Q3, as we saw the expected inflection point in our business. This ramp was led by our largest hyperscale customer as they scaled production of AI platforms. Additionally, we received solidified forecasts and saw increased design activity for our products with additional hyperscalers and other customers as the performance, reliability, and power benefits of our connectivity solutions have become increasingly clear throughout the industry. In a world where data drives everything, the demand for faster, more reliable, and energy-efficient connectivity continues to expand rapidly. Key to Credo's competitive advantage is our multi-tiered innovation, which enables us to deliver a broad set of optimized solutions that are tuned at every level. The first tier of innovation is in our SerDes technology. Our SerDes technology is purpose-built to tackle the toughest bandwidth challenges, balancing speeds up to 200 gig per lane with exceptional performance and power efficiency. By leveraging advanced signal processing and a programmable design, we've created a flexible architecture that adapts to the unique needs of AI workloads, whether it's long-reach data center links or ultra-short connections in dense compute environments. The second tier of innovation is in our integrated circuit design. Our IC designs, including retimers, DSPs, and chiplets, are engineered to deliver the best combination of performance, power, and cost for a given application. Our LRO DSP is a great example of innovation on the customer's behalf to deliver a compelling IC solution optimized for power efficiency in optical links. The third tier of innovation is our system-level approach. We don't stop at chips. Our best example of system-level innovation is Credo pioneering the active electrical cable market. By taking accountability for the system-level solution, we raised the bar to deliver end-to-end connectivity solutions that go beyond industry standards to deliver unique functionality and best-in-class reliability. We see this system-level approach creating a larger opportunity for Credo with the emergence of AI clusters due to the intense demand for reliability and power efficiency. Finally, wrapped around each tier of innovation is our development and diagnostics software and firmware platform to deliver predictive signal integrity, link optimization, and tuning. From the SerDes to the system level, this software platform helps our customers navigate system development to achieve the best performance, yields, and reliability. As we look forward, we'll expand our solutions to the PCIe protocol with the same ingenuity that creates our differentiated Ethernet solutions. With the introduction of a full suite of PCIe products on the near-term horizon, Credo will be addressing a larger connectivity opportunity with AI scale-out and scale-up networks, substantially expanding our overall TAM. Now I'll discuss our business in more detail.
Regarding our AEC product line, as expected, our revenue surged in the third quarter, driven by our largest hyperscale customer. Compared to alternatives, the benefits of AECs have become clearer. More than ever, data centers are highly focused on back-end network reliability. With billions of hours operating in the field, AECs have become the de facto standard for in-rack connections for NIC-to-ToR and switch-to-switch applications. However, we are now seeing a new expansion of AEC usage. Our zero-flap AECs deliver more than 100 times better reliability than laser-based optical solutions. And as a result, we're seeing AECs replacing optics for rack-to-rack solutions for lengths up to 7 meters. We continue to make significant progress with additional hyperscalers for our Ethernet AEC solutions. We have achieved volume production with three hyperscalers, and we're in qualification with two additional hyperscalers, expecting production in fiscal 26. With broad traction, we feel confident we'll continue to see increasing diversification of our revenue base across more customers in the coming quarters and years. Additionally, Credo continues to make progress with our PCIe AEC solutions. Our Gen 6 64-gig PAM4 AECs will deliver the same compelling benefits for AI scale-up networks as deployments move to rack-scale architectures. Credo will demonstrate our PCIe AECs at NVIDIA's GTC show later this month. We expect customer design engagements and qualifications for our PCIe AECs in the upcoming quarters, with a significant revenue opportunity in the upcoming years. For our AEC business in total, we expect continued revenue growth based on customer forecasts, new qualifications, new design engagements, and TAM expansion. Now I'll turn to our optical business. Our optical DSP business is on track to achieve the growth objectives we set out at the beginning of fiscal 25. We have opportunities across the global customer base, with revenue currently driven by 50 gig and 100 gig per lane designs for AOC and transceiver applications at port speeds up to 800 gig. Credo is actively engaged in opportunities with more than 10 transceiver vendors for multiple hyperscale end users. We work with our optical transceiver partners to provide full DSP and LRO options to meet a wide range of networking architectures. We see a large and growing market for these offerings, as well as for 1.6T port deployments in the future. With our recent 3-nanometer tape-out, Credo is well positioned for these leading-edge opportunities with our 200 gig per lane DSPs, where we expect to again have a compelling combination of performance, power, and features. We see the market opportunity for optical connectivity continuing to be very dynamic as reliability and energy efficiency become more important. As a result, we see an increasing opportunity for Credo to deliver system-level advantages to our partners, activating Credo's third tier of innovation I outlined earlier. Next month at the OFC conference in San Francisco, we'll demonstrate a full suite of optical solutions, including 200 gig per lane, in conjunction with our optical module partners. Based on all of our progress, the breadth of customer engagements, and the expanding market opportunity, we remain excited about the increasing revenue prospects given our role as an innovator in the optical connectivity market. Now, regarding our retimer business, Credo continued to gain momentum in the third quarter.
Over the past several years, Credo has established leadership in the Ethernet retimer market, delivering advanced capabilities such as MACsec encryption, gearboxing, and other software-enabled functionality. Existing customer wins and future opportunities here include 100 gig and 200 gig per lane applications for both traditional switching and, increasingly, for AI servers requiring retimers for scale-out networks. This year, Credo has entered the market for PCIe retimers used in scale-out networks. Our strategy is in alignment with our three-tier innovation approach. We believe Credo's PCIe SerDes IP will establish new benchmarks for the combination of latency, reach, performance, and power, and that our implementation of the Toucan PCIe retimer will deliver compelling advantages to our customers. In February, Credo participated in the PCI-SIG compliance workshop in Taipei, and we are pleased that our Toucan retimer achieved full PCIe compliance. It is notable that Credo is only the second vendor to achieve this level of compliance certification for PCIe Gen 5. This significant milestone demonstrates our capability to bring best-in-class PCIe products to market. Credo will be added to the PCI-SIG integrators list in the coming weeks. During Q3, we engaged with key customers who evaluated our PCIe silicon. I'm pleased to say that the feedback was very encouraging, and we received our first platform commitment from a large AI server ODM. We are on track for production revenue in calendar year 2026. Market forecasters believe the TAM for PCIe retimers will exceed $1 billion by 2027, and Credo is very well positioned to compete for material market share. In summary, I'd like to first comment on our team's incredible execution over the past quarter. Successfully navigating a ramp of this magnitude requires extremely tight operational control, supply chain coordination, and customer communication. Just as Credo works tirelessly with customers to innovate on solving their pressing connectivity needs, the Credo team is clearly rising to the occasion to deliver on the significant demand ramp we're experiencing. As we more broadly ramp customers across our products, we will continue to closely manage our execution. I remain enthusiastic about the expanding market opportunity for high-speed connectivity driven by the promise of AI and the investment it's spurring. Credo's tiered approach to innovation has been and will continue to be an advantage as we continue to serve our customers. Based on our progress with customers and the increasing demand for leading-edge connectivity solutions, Credo remains on track for continued scaling of revenue and profit. I'll now turn the call over to our CFO, Dan Fleming, who will provide more detail.
Thank you, Bill, and good afternoon. I'll first review our Q3 results and then discuss our outlook for Q4 of fiscal year 25. In Q3, we reported revenue of $135 million, up 87% sequentially, up 154% year over year, and well above the high end of our guidance range. Our product business generated $132 million of revenue in Q3, up 91% sequentially and up 155% year over year. Notably, our AEC product line grew strong triple digits sequentially to achieve new record revenue levels. Our product business, excluding product engineering services, generated another record at $129.4 million of revenue in Q3, 101% higher than our previous product record in the prior quarter. Our IP business generated $3 million of revenue in Q3. As demonstrated by our product revenue ramp, we are seeing substantial opportunities with customer programs on the product side, which we are prioritizing. This prioritization does not impact our long-term model for company-wide non-GAAP gross margin of 63 to 65%. Our largest end customer was 86% of revenue in Q3. As a reminder, customer mix will vary from quarter to quarter, and we continue to make progress in diversifying our customer base. As we shared last quarter, we had seven customers that contributed more than 5% of revenue. And going forward, we expect that three to four customers will be greater than 10% of revenue in the coming quarters and fiscal year, as additional hyperscalers ramp to more significant volumes, as Bill described. Our team delivered Q3 non-GAAP gross margin of 63.8%, above the high end of our guidance range and up 17 basis points sequentially. Our product non-GAAP gross margin was 63% in the quarter, up 85 basis points sequentially and up 152 basis points year over year. Our product non-GAAP gross margin, excluding product engineering services, was 62.4% in the quarter, up 229 basis points sequentially and up 934 basis points year over year, primarily due to increasing scale. Total non-GAAP operating expenses in the third quarter were $43.8 million, within our guidance range and up 16% sequentially, due primarily to higher headcount. Our non-GAAP operating income was $42.4 million in Q3, compared to non-GAAP operating income of $8.3 million in Q2, up dramatically due to the leverage attained by achieving 87% sequential top-line growth. Our non-GAAP operating margin was 31.4% in the quarter, compared to a non-GAAP operating margin of 11.5% in the prior quarter, a sequential increase of nearly 20 percentage points. Our non-GAAP net income was $45.4 million in Q3, compared to non-GAAP net income of $12.3 million in Q2. And our non-GAAP net margin was 33.6% in the quarter, above the high end of our long-term net margin model of 28 to 33%. Cash flow from operations in the third quarter was $4.2 million, down sequentially due to working capital increases driven by the significant sequential product ramp. CapEx was $4.6 million in the quarter, driven largely by purchases of production equipment. And free cash flow was negative $0.4 million, an improvement of $11.3 million from the second quarter. We ended the quarter with cash and equivalents of $379.2 million, a decrease of $3.7 million from the second quarter. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our Q3 ending inventory was $53.2 million, up $16.9 million sequentially. Now, turning to our guidance, we currently expect revenue in Q4 of fiscal 25 to be between $155 million and $165 million, up 19% sequentially at the midpoint.
We expect Q4 non-GAAP gross margin to be within a range of 63 to 65%. We expect Q4 non-GAAP operating expenses to be between $50 million and $52 million. We expect Q4 diluted weighted average share count to be approximately 188 million shares. As we approach the start of fiscal year 26, we expect revenue growth from fiscal year 25 to fiscal year 26 to be greater than 50%. And we expect non-GAAP operating expenses to grow at half the rate of revenue from fiscal year 25 to fiscal year 26. As a result, we look forward to continuing to drive operating leverage while expanding our net margin throughout the year. And with that, I will open it up for questions.
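To make the operating-leverage framing above concrete, here is a minimal sketch of the implied arithmetic. Only the growth relationships (revenue up more than 50%, operating expenses growing at half the rate of revenue) come from the guidance on the call; the fiscal year 25 revenue and opex totals below are hypothetical placeholders, not figures disclosed here.

```python
# Minimal sketch of the fiscal year 26 operating-leverage framing described above.
# FY25 totals are hypothetical placeholders; only the growth relationships
# (revenue up >50%, opex growing at half the rate of revenue) come from the call.

fy25_revenue = 400.0               # hypothetical FY25 revenue, $M (placeholder)
fy25_opex = 170.0                  # hypothetical FY25 non-GAAP opex, $M (placeholder)

revenue_growth = 0.50              # "greater than 50%" year over year (lower bound)
opex_growth = revenue_growth / 2   # opex grows at half the rate of revenue

fy26_revenue = fy25_revenue * (1 + revenue_growth)
fy26_opex = fy25_opex * (1 + opex_growth)

# Opex falls as a share of revenue, which is what "driving operating leverage" means here.
print(f"FY25 opex as % of revenue: {fy25_opex / fy25_revenue:.1%}")
print(f"FY26 opex as % of revenue: {fy26_opex / fy26_revenue:.1%}")
```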
Thank you. At this time, I would like to remind everyone in order to ask a question, press star, then number one on your telephone keypad. We ask that you please limit yourself to one question and one follow-up. And we will pause for just a moment to compile the Q&A roster. And your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.
Thanks for taking my question. Bill, for the first one, if you could give us a sense for how large was the largest customer. I think Dan mentioned some number. I didn't catch it. But I guess I have two parts to the question. One is where are you in the adoption of AEC at that customer? And if you exclude that customer, how are you looking at the growth of your business among other customers? Because depending on that customer concentration, we get a slightly different trend outside of that large customer.
Yeah, so Vivek, so what I mentioned in the prepared remarks is our largest customer was 86 percent of revenue. So let me just give you some historical context there. As we entered our fiscal 25, we described a second half inflection point really to be driven by our largest hyperscaler, which is exactly what we saw play out with that 87 percent sequential growth into Q3. So as far as customer concentration goes, Q3 was a bit of an outlier for us, especially considering the customer diversity that we had in Q2 and what we expect in the coming quarters. And again, as I mentioned in the prepared remarks, we expect three to four, 10 percent plus end customers in the coming quarters and fiscal year, which is simply based on the forecast that we receive from our customers. So just broadly speaking, there's clearly a broad-based need for innovative solutions as lane speeds are increasing at these hyperscale deployments. And our AEC opportunities in particular are expanding in the backend network from originally the scale-out network to now the scale-up network as well, which represents significant expansion of our TAM. So I'll let Bill add some additional color to those comments on customer diversification.
Yeah, thanks. You know, the first thing I would say is that my takeaway is that this is really great confirmation of the AEC product category and market and the overall TAM. I think it's also a great confirmation of Credo's leadership in the space. And so, as we've talked about on past calls, we see each one of the hyperscalers capable of driving a very sizable TAM as we take our engagements from first deployment to a more populated Credo AEC deployment longer term. So I think, if we look at our customer base, it's broadening. I mentioned that we've now taken three customers, three hyperscalers, to volume production. And we've got two additional hyperscalers in qualification and expecting ramp in our fiscal 26. So we expect to see really solid customer diversity long term within our AEC business. But also, I'll add that the expanding opportunity we see for optical as well as PCIe will drive further diversity across products and customers as we look towards the future.
All right. So my follow-up, Bill: if, let's say, more of the market turns toward inference rather than training, right, where there's more consensus, you know, on the state of the compute build-out, what does that do to these AI clusters, and what does that do to connectivity requirements and the AEC potential? Is that good, bad, or neutral for AEC? Thank you.
Yeah, I think as the inference market really takes off, I think, if anything, we would see a larger opportunity, you know, from an AEC standpoint. And that's just, you know, that's just based on, you know, sheer number of deployments related to inference. You know, so I think that as we look at the customer activity going forward, there is a lot more focus now on inference and we're seeing really an uptick in the amount of activity.
And your next question comes from the line of Tom O'Malley with Barclays. Your line is open.
Hey, guys, thanks for taking my question. Appreciate it. So I think a lot of the tone of the call here has been about a broadening of the portfolio. You're talking about PCIe retimers. You're talking about PCIe cables. To push you maybe one step further, if you're looking in the PCIe realm, an area of a lot of value is in the switching ecosystem as well. Do you guys have plans to move into the switching ecosystem? Is that a natural progression from the road you're traveling down already? And maybe talk about the challenges of doing retimers versus moving to switching and, you know, the timeline it would take to transition from product in the market today to maybe some new products on the switching side. Thank you.
Sure. Appreciate the question. So we've made a lot of progress with PCIe over the past quarters. The comment that I made earlier was regarding Credo passing compliance with the PCI Special Interest Group, the PCI-SIG, most recently in February. That's great confirmation of the technology, great confirmation of interoperability for Gen 5. So as we move towards Gen 6, I think we're increasingly bullish on our ability to compete in the market. We see that, for us, the first step is really going after the retimer market as well as the PCIe AEC market as scale-up architectures move to rack scale. I think it's a really natural progression to think about moving to PCIe switching. There are different challenges associated with that, but it is really not such a huge step to transition from retimers to the type of switches that are being built and talked about being deployed in the future. So I look at that as a possible spot for us to grow into over time. Right now, we're very much focused on getting things right related to the retimer as well as the AECs that we're building.
I appreciate you answering despite having just announced the prior product, so I appreciate it. But just in terms of the market outlook: you're talking about the scale-up network and also the scale-out network. You have PCIe cables potentially more aligned with the scale-up network, and you've got some of the AEC product looking more at the scale-out network. Can you talk about the opportunity size? Because I think we're going to see in a couple of weeks here maybe a redefinition of what scale-up and scale-out is. Is it one rack? Is it multiple racks? Maybe your thoughts on scale-up versus scale-out. Where does the opportunity set lie for you? Are you indifferent? Just want to kind of get your temperature on that transition.
Sure. So we've talked about the opportunity in the past. Scale-out, we really see being an Ethernet protocol, and so we see that continuing on the path that we've been on as the market moves to 100 gig per lane and as the market subsequently moves to 200 gig per lane solutions. Scale-up is really probably more interesting in the sense of talking about where we are today and where we're going. And we do see, from the innovations previously shown by NVIDIA and what we expect coming up at GTC, that there's a lot of activity. It's really a dynamic space from their perspective. Now, they're a little bit different from the market broadly. But what we see within the customer base that we're talking to is that this scale-up network, which has really been a network that exists within an AI appliance, is really going rack scale. And then, of course, there's an opportunity for it to go row scale long term. We've talked about the volumes being larger than the scale-out network opportunities. So we really see this as a big new TAM. As the market moves from Gen 5 to Gen 6, we're talking about moving from 32 gig NRZ, which is really very old technology, and it is really not competitive if you compare it to the market leader from a bandwidth-per-lane standpoint. Gen 6 represents the move to 64 gig PAM4 modulation. So that's a step up in complexity and difficulty for the existing suppliers in the market moving in that direction. For Credo, we've been there for many years. So we feel like we're going to bring the same compelling advantages that we've brought to Ethernet. The interesting part of the future is that when you look at that scale-up network and the opportunity for improving performance with an increase in bandwidth, we've talked about the Ultra Accelerator Link, the UAL conversation in the market, taking the scale-up network from 64 gig all the way to 224 gig. And so we're going to be in a unique position to take advantage of that transition when that happens with the other market leaders that are outside of the NVIDIA ecosystem. So we're pretty excited, absolutely, about both opportunities. But I would say the scale-up opportunity is probably larger if we look out over the two to three year time frame.
And your next question comes from the line of Carl Ackerman with BNP Paribas. Your line is open.
Thank you, gentlemen. Two questions, if I may. The first one, I guess: what's driving the uptick in gross margins in the April quarter? Is it IP licensing revenue related, or is there some other thing that we should be thinking about?
Yes, so to sum it up, we're really seeing a huge benefit from scale, which we've talked about, or we've foreseen, for quite some time. And as you know, our overall gross margin in Q3 was almost 64%, up a little bit from last quarter. But if you dig into the numbers, the thing that's more indicative of what's happening underneath is if you look at our product gross margin excluding product engineering services: that gross margin was 62.4%. So that was up over 200 basis points sequentially and up over 900 basis points year over year, principally driven by scale. It's really as simple as that. The other, lesser factor that I'll mention: we have a warrant with Amazon. The contra revenue associated with that kind of rolled off during the quarter as we attained, or as we fulfilled, the $200 million of total gross revenue shipments to them in the quarter. So that was also accretive to margin, and it will continue to be accretive next quarter as well.
Got it. Thanks for that. Just to follow up on your prepared comment, you spoke about how you may have three or four customers that would be 10% plus in the coming quarters or in calendar 26. Obviously the first three today are AEC related. Is the fourth one also AEC related, or would that suggest perhaps a broadening into other aspects and product portfolios of your business? Thank you.
Yes, of the ones we've referred to in those comments, they are AEC related. That by far is the largest driver in absolute dollars of our revenue growth. Having said that, if I just look simply year over year from fiscal 24 to 25, for instance, our three main product categories, AEC, our Ethernet retimer business, and optical, are all experiencing sequential growth, but AEC is growing from a bigger base into an even bigger base as we go into the future. Thank you.
Your next question comes from the line of Quinn Bolton with Needham & Company. Your line is open.
Hey guys, if I've got my numbers right, revenue outside of your largest customer went from about 48 million in October to about 19 million in January. You talked about revenue diversifying again over the next few quarters into fiscal 26 with three to four customers that could be over 10%. Can you just give us some sense, what do you expect your largest customer to do? Does it stay 80 plus percent of revenue? Are you anticipating that that pulls back and that you see ramps at some of the other customers in the April quarter? Just any sense of revenue, rediversifying would be helpful.
Yeah, we've talked in the past about, actually, Amazon is a great example. Our largest hyperscaler, if you look at their Q1 revenue, it was 30 million, then it went down a bit in Q2, and it obviously surged in our Q3. Our internal expectation is they'll probably be in the same zip code in absolute dollar terms in Q4 as where they were in Q3. So if you take that for what it is, and knowing that we guided up 19% sequentially quarter over quarter into Q4 at the midpoint, that math would imply that maybe two-thirds of our revenue in Q4 would come from that customer.
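As a back-of-envelope illustration of the math Dan walks through, here is a minimal sketch using only the figures cited on the call; treating the largest customer as roughly flat in dollar terms is his stated assumption, and the result lands in the same rough neighborhood as the concentration he describes.

```python
# Back-of-envelope sketch of the concentration math described above, using only
# figures cited on the call. "Roughly flat in dollar terms" for the largest
# customer is the stated assumption; everything here is an approximation.

q3_revenue = 135.0                          # Q3 FY25 revenue, $M
largest_share_q3 = 0.86                     # largest end customer's share of Q3 revenue
q4_guide_low, q4_guide_high = 155.0, 165.0  # Q4 FY25 revenue guidance, $M

q4_midpoint = (q4_guide_low + q4_guide_high) / 2
sequential_growth = q4_midpoint / q3_revenue - 1     # ~19% at the midpoint

largest_q3_dollars = q3_revenue * largest_share_q3   # ~$116M
implied_q4_share = largest_q3_dollars / q4_midpoint  # roughly 0.7 if flat in dollars

print(f"Q4 midpoint: ${q4_midpoint:.0f}M ({sequential_growth:.0%} sequential growth)")
print(f"Implied largest-customer share of Q4 revenue: ~{implied_q4_share:.0%}")
```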
Got it. I guess just looking at that customer, obviously it's a significant percentage of revenue in the January and April quarters. I think in the past you've talked about having 12-month rolling forecasts from some of your larger hyperscale customers. Just generally speaking, can you give us any sense of what gives you the confidence that the surge you've seen here in January and April isn't just somewhat related to an inventory build ahead of server deployments, such that at some point in fiscal 26 you go into a depletion mode, or just a deployment mode, where you could see, I guess for lack of a better term, an air pocket at that customer at some point next year? Just any hand-holding there would be appreciated.
Yeah, sure. So I guess when we talk about visibility, I think it's very normal at this point to get 12-month forecasts from customers. I think this is one of the positives that came out of COVID: a top priority for our customers is to secure their supply across all materials, including the things that Credo supplies. So when there is a sizable ramp, it's really imperative to work together with our customers as well as our supply chain to ensure that we can deliver flawlessly. And so I would say across the board, from a customer perspective, we're getting good visibility. In the case where there is a new ramp, this is where things become more high definition as you get closer to the actual ramp. There are some things that can happen; we've seen pushouts to a schedule, we've seen pull-ins, we've seen increases, as we saw this quarter, in the level of deployments as customers ramp. But generally, as we look towards the future, we see several customers that are at different stages of deployments. And so if we look at the numbers and compare Q2 to Q3, you saw a pretty big shift, a pretty good move. And we indicated, say for the first customer that we ramped with over the last several years, we talked about a period of time where supply and consumption, those curves, would meet. Well, that happened in Q3. We also talked about that customer returning to historic levels, which we expect to be over a $100 million run rate, say, as we look at fiscal 26. We've got a new customer that we're ramping, and the size of the allocations that they're getting from a GPU standpoint suggests that they're going to be a very large customer in our fiscal 26 as well. I mentioned the two new customers; these would be customers where there's a little less definition on the exact timing and size of the ramp, but both are promising in the sense that the level of deployments they're talking about is sizable. So as you look at the next couple of quarters, I'm sure we're going to have a decreasing level of concentration, but it's still going to be something that you would call a large amount of concentration, and it's going to be balanced out over the second half of fiscal 26.
Understood. I'd rather see you guys have the Amazon revenue than not. So congratulations on the nice results. These are good issues to have.
Your next question comes from the line of Tore Svanberg with Stifel. Your line is open.
Thank you, and congrats on the record results. Bill, in your prepared remarks, you talked about several layers, or multi-tiered innovation. The one that obviously stands out is the system-level approach. That's obviously very apparent on the AEC part of your business, but as you venture into some of these new segments, and especially on PCIe, will you take more of a system business model approach there as well, or should we assume that's going to be primarily selling chips?
Well, I think at GTC in a few weeks we're going to be demonstrating a full rack with multiple AECs that are active, that are live. We're going to be demonstrating our AEC capability, and it might even be the first demo of its kind in the AEC space. Yes, absolutely. From day one, we've been planning on pursuing the same path with PCIe AECs as we have with Ethernet. No change there. And as the scale-up architecture could potentially go row scale, we're also investing in the optical space as well.
Yes, that's very interesting. Then my follow-up: LRO. I haven't heard a whole lot about it since you introduced it. Any update you can give us there? If you do have some stuff that you're going to talk about at OFC, I get it. Any update we can get on LRO would be really helpful.
Right. We continue to offer customers both full DSP solutions and LRO solutions. We're agnostic to their choice. We see customers continuing to purchase traditional full DSP solutions where they can fit within the power ceiling. We do have cases where power is a deciding factor, and I think with all of the customers we're talking to, LRO really becomes that de facto solution compared to, say, LPO. I think we've all seen that the discussion around LPO has decreased pretty substantially. The conversation with LRO, I think, is that it's a real opportunity to get the kind of signal integrity, bit error rates, and interoperability you need while being able to save that kind of power, getting to sub-10 watts for an 800 gig port. But it's ultimately theirs to decide.
And your next question comes from the line of Vijay Rakesh with Mizuho. Your line is open.
Yeah. Hey, guys. Great quarter, and congratulations. Just on the AEC side, I know you mentioned two new customers, two new hyperscalers that you'll be working with. On your visibility, as you look out 12, 18 months, do you see them ramping pretty nicely to significant volumes? How do you see that ramp? And then I have a follow-up.
Yeah, sure. The customer-facing team at Credo is hard at work, and that includes our applications engineering group. And I think it's a function of exactly when we intercept on the ramp, not if we do. With both of these customers, there are design engagements that are absolutely identified and committed to. So I think it's a function of exactly when the ramp starts, and that's why I make the comment that fiscal 26 is what we expect. If we wanted to be somewhat, I guess, on the conservative side, we'd say second half of fiscal 26. But we're getting to the point where our confidence is quite high with the additional two customers that are in qualification.
Got it. And then on the PCIe scale-up side, great to see you guys ramp so fast, so quickly after you introduced the cables. You mentioned one server ODM. How does that pipeline look, and do you have to qualify with the server ODM or the GPU ASIC guys? That's my second question, thanks.
Yeah, so to understand the timing of our ramp for PCIe, it'll still take some time. We're really targeting the Gen 6 market, which is really a calendar 25 design cycle and a calendar 26 production ramp cycle. And so it's really somewhat out in time; even though we're passing interoperability tests and we're absolutely ready to compete, it's going to be some time. Another question would be, are there going to be Gen 5 opportunities that we're able to pursue? And of course, as our solution is brought to market, we're open for business, and we'll bring advantages to Gen 5 and an easy upgrade to Gen 6. And so what we're seeing from a customer perspective is that there is independence in making decisions, and there is a need for a broader supply chain than exists for Gen 5.
Got it, thank you.
And your next question comes from the line of Sean O'Loughlin with TD Cowen. Your line is open.
Hey guys, thanks a lot for taking the question, and congrats on the nice set of results and guide. I'd like to look at the trend a little bit and ask about front-end networking. You know, the original AEC deployment was obviously on a front-end solution. And I wonder whether, or how, you and your customers are thinking about that front-end opportunity that still potentially exists. Is it just that traditional front-end networking is not where innovation is happening in the space, and so customers are focused on the back end? Or is it a matter of time, or some hybrid of the two? And then to squeeze an extra question on top of that, is there really a difference in the product that you ship, or the go-to-market, for the front end versus the back end? Thanks.
So from a front-end network perspective, it applies to general compute and it applies to AI clusters, and the front-end connections are pretty similar between the two. Now, with general compute, that's the only connection to the network, the front-end network. And if we think about the traditional general compute space, it's really a question about the x86 roadmap, and that has typically moved at a slower pace than what's happening within AI clusters. And so it's very common still today to see 25 gig per lane front-end network connections. In fact, the one that we've shipped in high volume is four lanes of 25 gig, so it's a 100 gig port connecting to that front-end network. The unique feature that we brought was that our lead customer implemented a dual-ToR design to achieve levels of availability, the SLAs of five nines. So it was a way for them to really achieve a goal that had been in place for five to 10 years, even. That unique functionality is what turned on that business for the front-end network connection at 100 gig ports. As we look at that general compute, there's a natural move to 50 gig per lane and 100 gig per lane. But in contrast, if we look at AI, just the nature of the application is causing customers to move to the fastest connection, so 100 gig per lane today, and really putting pressure on delivering 200 gig per lane solutions in the future. And within AI clusters, those front-end connections exist, and in the best case they'll be at the same line rates as we see in the back end, but most likely they'll be at lower line rates. And so our largest customer is using our Credo AEC solutions for both back-end and front-end connections.
Great, super helpful. And then just as a quick follow-up, you mentioned the three hyperscalers that are achieving volume shipments today. I think last quarter you talked about the two hyperscalers, obviously, that we all know about, and then the third, an emerging hyperscaler. I just want to clarify that that is not the third hyperscaler that you talked about today but is in fact a different customer. Thanks guys.
Yeah, so as it relates to this concept of emerging hyperscalers, it really boils down to the amount of spend. And if you look at the allocation that is being given to this kind of elite, quote unquote, emerging hyperscaler, we really are classifying them now as a hyperscaler. So we're not drawing the line between the two.
And your next question comes from the line of Christopher Rolland with Susquehanna. Your line is open.
Hey, thanks for that. Thanks for the question. I guess, yeah, as it concerns your largest customer here and maybe any of these large ramps going forward, you know, I think a lot of these things are project-based. We have very large deployments, you know, sometimes, you know, 200,000 GPUs in just a couple of months. So as we think about these kind of large lumpy deployments, should we be expecting these customers, you know, for example, in this case, a pretty strong handoff in the next few quarters to other customers? Like, Bill, maybe you could talk to how project-based these revenue ramps are. That'd be great.
Yeah, I guess when I think about your question and we talk about project-based, of course, all of these programs that we're involved with are projects that have been planned for quite some time. And going through the development internally within hyperscaler customers, it's a long process to architect and ultimately bring all of the gear up to a point where they're ready for qualification. Qualification takes a significant amount of time, and then planning the ramp takes time as well. And so we're at the point where we've kind of gotten through that process, or we're nearly through that process, with many of the customers. I don't really see a handoff so much as I see a diversification in the shipments that we'll be seeing throughout fiscal 26. This is just a function of where we are with each customer, but I don't think we'll see a customer get to a point where they are done with a project. There will be a transition from, say, 50 gig per lane technology to 100 gig per lane technology, and there, as we've talked about in the past, we are absolutely queued up and ready to ramp. We're the short pole in the tent, so to speak, in that all of the technology that needed to be developed internally at Credo is done, and we're ready to ramp as soon as our customers are. And so I don't see us getting to the point where we say, hey, that project is over and now there's some sort of an air pocket. The key is making sure that we're first: making sure that we're first to deliver next-generation samples, we're first to get qualified, and we're first to production, which really is our strength at this point given the fact that we're taking accountability for the entire system solution.
Great. Thank you. And maybe as a second question, analog copper cable solutions seem to have missed perhaps the B300 cycle. You know, is that an opportunity for you guys? Do you think you're gaining share and design wins just as that technology did not ramp? Does that increase your TAM and your ramp expectation?
Yeah, I think that TAM really existed within the Nvidia ecosystem. You know, the TAM that we're looking at, we've not seen really any customer think seriously about using amplified solutions. So this, you know, referring to the analog amplified solution, we haven't really been competing head to head with any of those solutions really, you know, at any other customer. We really see it within the Nvidia ecosystem. And what role do we play within that ecosystem? You know, that is one that we're, you know, pretty conservative in talking about. When there is an opportunity for us to add value, we're absolutely ready, but it's not something that's built into any kind of forecast that we've got.
Thanks so much, Bill.
And your next question comes from the line of Richard Shannon with Craig-Hallum. Your line is open.
Well, hi, thanks, Bill and Dan. Let's see here, maybe I'll just ask a simple question that I think you get most quarters, Bill, but I just want to get your latest thoughts on competition in the AEC space. Obviously, you've got a very high share, you're doing exceptionally well here. You know, it certainly makes sense for some of your large customers to attempt to do dual sourcing. Have you seen any evidence of that, either attempts or something you see coming anytime soon? Just kind of your latest thoughts on that, please.
Yeah, I think with the fact that the AEC product category is now really de facto for in-rack connectivity, naturally there's a desire for multiple sources in the market. But the way that we think about competition, our objective is always to be the best possible partner to our customers. That means delivering on the innovations first, delivering the most reliable solutions first, passing qualification first, and getting our customers into production predictably. And so our focus with our customers is really on creating a relationship with them where they're counting on us to help them ramp, and having the second source be something that's kind of a trailing-edge effort. So that's really the way we think about competition. Now, I will mention again that being accountable for the system solution allows us to deliver in a faster, more predictable way. Owning every level from the SerDes to the ICs and ultimately to the cable system gives us a big advantage from our perspective. That's what we're seeing across the board with customers. With that said, we have a lot of respect for competition, but we don't see any significant changes on the competitive front.
Okay, great. Thanks for that update, Bill. My second question, really just looking at the large AEC opportunities here, and clearly the largest customer is really driving a lot of content here, as you said, both in the back end and the front end. Just thinking about it from a back-end perspective, are you seeing any other hyperscalers getting close to adopting, and even qualifying, AECs in the back end, and kind of the thought process on timing there? I'm assuming it's somewhere in fiscal 26, if not after, but maybe just kind of your latest thoughts there as well. Thank you.
With our new customers, we see it playing out the same way that it's played out with our existing customers. It always starts with a first project or a first SKU that we're working on, starting in a given rack and then really expanding from there. And so with one of the customers that's new, we started with a switch rack, so a disaggregated chassis, a relatively low hurdle, 50 gig per lane type of project, but it quickly turned into two additional SKUs that are targeted for AI clusters that they'll be building in the future. And those are going to 100 gig per lane speeds. And so we see there's always got to be a first program, but the execution that we have on that first program will always lead to a deeper relationship. And we see that playing out the same way with our fourth and fifth customers as it did with the first three.
And your next question comes from the line of Suji Desilva with Roth Capital. Your line is open.
Hi, Bill. Hi, Dan. Congrats on the progress here. I just want to understand, from the lead customer and the future 10% customers, just to clarify, whether the ramp today is a single program and expected to be a single program across those 10% customers, or whether you've gotten to the point where you have multiple projects across these customers in the pipeline that might diversify you within each customer?
Yeah, I would say that if you look at any of our customers, there are multiple projects. You know, for the first three that we're ramping, there are multiple projects, there are multiple SKUs. And I expect that to be the same with the additional customers that we add, although we're always going to start with the first project.
Okay. That's helpful, Bill. Thanks. And then the opportunity specifically in back-end scale-up, I just want to be clear here: is that an opportunity only on in-house ASICs, or is that an opportunity as well on merchant GPUs in the marketplace?
Oh, it's an opportunity across the board. And so, I mean, we can talk about the NVIDIA ecosystem, and that's probably part of the TAM that we're not really thinking about too aggressively as we start. But even for the deployments that use NVIDIA GPUs and use the kind of open protocols in the market, PCIe will be an opportunity there as well. So hopefully that gives you an answer to your question. That is very helpful, Bill.
Thanks. And there are no further questions at this time. Mr. Brennan, I will turn the call back over to you.
Well, thanks for joining us today. Really appreciate the interest and support. As we look ahead, we're absolutely laser focused on executing our strategy, delivering value to our customers, and driving long-term growth and profitability. We look forward to the callbacks.
Thanks. And ladies and gentlemen, this concludes today's call and we thank you for your participation. You may now disconnect.