speaker
Operator

Ladies and gentlemen, thank you for standing by. At this time, our participants are in a listen-only mode. Later, we will conduct a question-and-answer session. At that time, if you have a question, you will need to press star 11 on your push-button phone. I would now like to hand the conference over to Dan O'Neill. Please go ahead, sir.

speaker
Dan O'Neill

Good afternoon, and thank you all for joining us today for our fiscal 2024 fourth quarter and year-end earnings call. I'm joined today by Bill Brennan, Credo's Chief Executive Officer, and Dan Fleming, Credo's Chief Financial Officer. I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. Given these risks, uncertainties, and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial measures prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures are available in the earnings release we issued today, which can be accessed in the investor relations portion of our website. With that, I'll now turn the call over to our CEO.

speaker
Bill Brennan

Thank you for joining our fourth quarter fiscal 24 earnings call. I'll start by reviewing our results, and then I'll provide highlights of what we see for fiscal 25. Our CFO, Dan Fleming, will then follow with a detailed discussion of our Q4 and fiscal year 24 results and provide our outlook for Q1. Credo is a pure-play high-speed connectivity company delivering a range of optimized and innovative connectivity solutions to meet the needs of global data center operators and service providers. We leverage our core SerDes technology and unique customer-focused design approach to deliver a differentiated suite of solutions, including active electrical cables, or AECs, optical DSPs, line card PHYs, SerDes chiplets, and SerDes IP licenses, for Ethernet port speeds ranging from 100 gig up to 1.6 terabits per second. We target the most difficult connectivity challenges facing the market, where our combination of architecture, system-level approach, power, and performance is most differentiated. Credo is in a market environment of steadily increasing demand for optimized solutions with higher bandwidth and improved power efficiency, driven by the accelerating connectivity requirements of leading-edge AI deployments. I'm pleased to say that during both fiscal Q4 and fiscal 24, Credo achieved record revenue. In Q4, we delivered revenue of $60.8 million and non-GAAP gross margin of 66.1%. In fiscal 24, Credo achieved revenue of $193 million and non-GAAP gross margin of 62.5%. The workloads supported by our solutions changed significantly during the fiscal year, and our growth was primarily driven by AI deployments across our entire portfolio. In fiscal Q4, roughly three-quarters of our revenue was driven by AI workloads. The year was also notable as we diversified our revenue across additional customers and products. I'm proud of the team for delivering solid results across a shifting landscape and also for executing a strong quarterly sequential ramp throughout the year. In our AEC product line, we continued our market leadership and delivered our customers a range of solutions for port speeds ranging from 100 gig to 800 gig. Furthermore, our approach of delivering system-level solutions with customized hardware and software features has enabled us to build close, collaborative relationships with our customers. Over many design cycles across numerous customers, we have dramatically improved our speed to market in designing and qualifying our solutions, and this remains a key aspect of our competitive advantage. We believe this positions Credo with a unique value to the market that is difficult to replicate. With this, our AECs have quickly become the leading solution for in-rack cabled connectivity for single-lane speeds of 50 gigabits per second and above. In addition to the advantages of AECs that include signal integrity, power, form factor, and reliability, our customers have embraced the opportunity to innovate with Credo as a design partner to optimize system-level features that make their AI clusters more efficient. From a customer engagement perspective, fiscal 24 was fruitful as we saw the successful ramp at a new hyperscale customer, qualification at another, and expansion of our AEC engagements with hyperscalers, Tier 2 data centers, and service providers. AECs have quickly transitioned from a new concept to a de facto solution across many data center environments.
Based on customer feedback and forecasts, we continue to expect an inflection point in our AEC revenue growth during the second half of fiscal 25. Fiscal 24 was also a strong year for our optical DSP products. During the year, we achieved material production revenues, with significant wins at domestic and international hyperscalers. Additionally, we continue to gain traction with optical module partners and end customers due to our attractive combination of performance, power efficiency, and system costs. AI back-end network deployments are a strong volume driver for the optical transceiver and AOC market, specifically for leading-edge 100 gig per lane solutions. As power efficiency has become a more critical factor, Credo has responded with innovative architectural solutions that drastically reduce DSP power while maintaining interoperability and signal integrity at the optical module level. We've made great progress with our linear receive optics DSPs. In November, we announced our LRO DSP solutions, and by March at OFC, we demonstrated production designs with three 800-gig module partners. In the few months since OFC, we've seen continued market acceptance and design activity. These products enable a significant power reduction versus a traditional 800 gig solution. The LRO architecture is the only way to achieve a sub-10-watt 800 gig module that meets existing industry optical standards and facilitates multi-vendor interoperability. We expect the benefits of LRO solutions to become even more impactful in next-generation 1.6T optical modules. I feel confident saying that the LPO architecture with no DSP has lost nearly all momentum in the market and that the LRO architecture is showing great promise. I'm encouraged by our customer traction in Q4. We were pleased to kick off multiple new optical DSP design engagements with leading optical module manufacturers, both with our new LRO DSP and our traditional full DSP solutions. Given our results to date and our customer engagements, we are on track to achieve our optical DSP revenue goal of 10% of our fiscal 25 revenue, and we are enthusiastic about future growth prospects in this category. Regarding our line card PHY business, our leadership in this market continues as we transition to more advanced process nodes that deliver improved product performance and power efficiency. During the year, we continued to add to our customer base and have multiple 100 gig per lane wins at industry-leading Tier 1 OEMs and ODMs that serve the global data center market. These include 800 gig and 1.6T gearbox, retimer, and MACsec PHY products. As we've discussed in the past, AI deployments are the driving force behind our growth for these leading-edge devices. In the fourth quarter, we had success with both 50 gig and 100 gig per lane line card products. While 50 gig per lane solutions will continue to have lengthy life cycles, our 100 gig per lane solutions will also start adding to our revenue in fiscal 25. We expect the line card PHY business will continue to grow and contribute nicely to our overall business in fiscal 25 and beyond as we continue to invest and innovate in this market. Lastly, I'll discuss our SerDes IP licensing and chiplet businesses. In Q4, our SerDes licensing business delivered solid results, owing to our combination of speed, signal integrity, leading power efficiency, and breadth of offering. During fiscal 24, we won licensing business across a range of applications, process nodes, and lane rates.
Our wins ranged from 28 nanometer down to 4 nanometer at lane rates ranging from 28 gig to 112 gig. Our chiplet business saw significant growth, led by our largest customer, who deploys our SerDes chiplets in a massive AI cluster. This customer also engaged us to develop a next-generation chiplet for future deployments, which is a testament to our leading technology and customer-centric focus. We are entering fiscal 25 with a strong and diverse funnel of SerDes licensing and chiplet opportunities. In summary, the shift towards generative AI accelerated during our fiscal 24, and we see that continuing into the foreseeable future. Industry data and market forecasters point towards continued and growing demand for high-bandwidth, energy-efficient connectivity solutions that are application optimized. Credo benefits from this demand due to our focus on innovative, low-power, customer-centric connectivity solutions for the most demanding applications. Our view into fiscal 25 and beyond has remained consistent for a number of quarters now, and this has only been reinforced by recent wins, production ramps, and customers' forecasts as they continue to formalize their AI deployment plans. And with that, Dan Fleming, our CFO, will now provide additional details.

speaker
Dan Fleming

Thank you, Bill, and good afternoon. I will first provide a financial summary of our fiscal year 24, then review our Q4 results, and finally discuss our outlook for Q1 and provide some color on our expectations for fiscal year 25. Revenue for fiscal year 24 was a record at $193 million, up 5% year over year, driven by product revenue that grew by 8%. Gross margin for the year was 62.5%, up 448 basis points year over year. Our operating margin declined by 208 basis points, as we continued to invest in R&D to support product development focused on numerous opportunities across our hyperscale customers. We reported earnings per share of $0.09 for the year, a $0.04 improvement over the prior year. Moving on to the fourth quarter, in Q4, we reported record revenue of $60.8 million, up 15% sequentially and up 89% year over year. Our IP business generated $16.6 million of revenue in Q4, up 193% year over year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables on preexisting or new contracts. While the mix of IP and product revenue will vary in any given quarter, our revenue mix in Q4 was 27% IP, above our long-term expectation for IP of 10% to 15% of revenue. Our product business generated $44.1 million of revenue in Q4, down 15% sequentially and up 67% year over year. Our product business, excluding product engineering services, generated $40.8 million of revenue in Q4, up 2% sequentially. Our top four end customers were each greater than 10% of our revenue in Q4. Our team delivered Q4 non-GAAP gross margin of 66.1%, above the high end of our guidance range and up 391 basis points sequentially, enabled by a strong IP contribution in the quarter. Our IP non-GAAP gross margin generally hovers near 100% and was 99.2% in Q4. Our product non-GAAP gross margin was 53.7% in the quarter, down 783 basis points sequentially and up 392 basis points year over year. The sequential decline was due to a change in product engineering services revenues. Total non-GAAP operating expenses in the fourth quarter were $32.7 million, below the midpoint of our guidance range and up 7% sequentially. Our OpEx increase was a result of a 17% year-over-year increase in R&D, as we continue to invest in the resources to deliver innovative solutions for our hyperscale customers, and a 26% year-over-year increase in SG&A, as we continue to invest in public company infrastructure. Our non-GAAP operating income was a record $7.5 million in Q4, compared to non-GAAP operating income of $2.4 million last quarter, due to strong gross margin performance coupled with top-line leverage. Our non-GAAP operating margin was also a record at 12.3% in the quarter, compared to a non-GAAP operating margin of 4.6% last quarter, a sequential increase of 771 basis points. We reported non-GAAP net income of $11.8 million in Q4, compared to non-GAAP net income of $6.3 million last quarter. Cash flow from operations in the fourth quarter was $4.2 million. CapEx was $3.2 million in the quarter, driven by R&D equipment spending, and free cash flow was $1 million, an increase of $16.7 million year over year. We ended the quarter with cash and equivalents of $410.0 million, an increase of $0.9 million from the third quarter. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer.
Our accounts receivable balance increased 33% sequentially to $59.7 million, while days sales outstanding increased to 89 days, up from 77 days in Q3. Our Q4 ending inventory was $25.9 million, down $5.6 million sequentially. Now, turning to our guidance. We currently expect revenue in Q1 of fiscal 25 to be between $58 million and $61 million, down 2% sequentially at the midpoint. We expect Q1 non-GAAP gross margin to be within a range of 63% to 65%. We expect Q1 non-GAAP operating expenses to be between $35 million and $37 million. The first quarter of fiscal 25 is a 14-week quarter, so included in this forecast is approximately $2 million in expenses for the extra week. We expect Q1 diluted weighted average share count to be approximately 180 million shares. We were pleased to see fiscal year 24 play out as expected. The rapid shift to AI workloads drove new and broad-based customer engagement, and we executed well to deliver the sequential growth we had forecast throughout the year. Our revenue mix transitioned swiftly through the year. In Q4, we estimate that AI revenue was approximately three-quarters of total revenue, up dramatically from the prior year. As we move forward through fiscal year 25, we expect sequential growth to accelerate in the second half of the year. From Q4 of fiscal 24 to Q4 of fiscal 25, we expect AI revenue to double year over year as programs across a number of customers reach production scale. We expect fiscal year 25 non-GAAP gross margin to be within a range of 61% to 63% as product gross margins expand due to increasing scale. We expect fiscal year 25 non-GAAP operating expenses to grow at half the rate of top-line growth. As a result, we look forward to driving operating leverage throughout the year.
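
A quick arithmetic check of the Q4 margin figures above, using only the numbers as stated on the call (a sketch for readers, not company guidance): the 66.1% blended non-GAAP gross margin is consistent with the stated IP/product mix and segment margins.

```python
# Reconcile the reported Q4 blended non-GAAP gross margin (~66.1%)
# from the IP/product split and segment margins stated on the call.
ip_revenue = 16.6        # $M, Q4 IP revenue
product_revenue = 44.1   # $M, Q4 product revenue (total revenue ~$60.8M)
ip_gm = 0.992            # IP non-GAAP gross margin (99.2%)
product_gm = 0.537       # product non-GAAP gross margin (53.7%)

total = ip_revenue + product_revenue
blended_gm = (ip_revenue * ip_gm + product_revenue * product_gm) / total
print(f"Blended gross margin: {blended_gm:.1%}")  # -> ~66.1%, matching the reported figure
```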

speaker
Bill

And with that, I will open it up for questions. Thank you.

speaker
Operator

At this time, I would like to remind everyone, in order to ask a question, press star, then the number 11 on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Our first question comes from the line of Quinn Bolton from Needham. Your line is open.

speaker
Quinn Bolton

Hey, guys. Congratulations on the results. Nice to see you quantifying the AI revenue. I guess, Bill or Dan, I wanted to start with just sort of a couple of housekeeping questions. Can you give us sort of the percent of revenue from your largest four customers, and were they all across different product lines, or are you starting to see consolidation back to AECs among that top four customer base?

speaker
Dan Fleming

Yeah, Quinn, let me comment on that. This is Dan. Yeah, so as I mentioned in our prepared remarks, we had four 10% end customers in Q4. They were our two AEC hyperscalers that we've discussed previously, plus a large consumer company and our lead chiplet customer. So by that list, you can kind of see a broad diversification of products represented. And I'll add to that by saying when our K is filed in the next few weeks, you'll see that we had three 10% end customers for the full year. And I'll lay those out for you since you'll see them soon enough. Our largest customer was our first AEC hyperscale customer, which we've talked about over the last few years, which was Microsoft at 26%. The second was our second AEC hyperscaler; they came in at 20% for the year. And the third 10% customer was the lead chiplet customer, at 15%. The key takeaway this year, though, is that if you go back to FY23 versus FY24, and we've been saying this for the last few quarters, FY24 was really the year in which revenue diversification materialized for us, both from a customer perspective and a product perspective as well. Hopefully that gives you some additional color, Quinn.

speaker
Quinn Bolton

Yeah, that's great, Dan. Thank you. And then I guess, Bill, I think if I got your prepared script, you talked about ramping a second AEC customer and then qualifying a third hyperscaler. Wondering if you could just give us a little bit of detail on the third hyperscaler. You know, is it a sort of AI application? Is it NIC to ToR? Is it within a switch? Can you give us any sense of the per lane speed or total cable speed on that third engagement?

speaker
Bill Brennan

Sure, sure. So we've talked about in the past that the first program with this hyperscaler is a switch rack. It's a 50 gig per lane design with 400 gig ports. And we've seen this relationship develop in a really similar way to the first two. It started with a single program, and after the first experience with AECs, we're now engaged with additional programs on the roadmap. I mentioned the first program is a switch rack, and now we're working on additional programs for AI appliance racks, and these are at 100 gig per lane. And I'll mention that the plan that we're getting from them at this point is that, you know, we'll see this third customer ramp in the second half fiscal year timeframe. So that'll contribute to the inflection point that Dan has referenced.

speaker
Quinn Bolton

Perfect.

speaker
Bill

Thank you.

speaker
Operator

One moment for our next question. And our next question comes from Tor Svanberg from Stifel. Your line is open.

speaker
Tor Svanberg

Yes, thanks, and congratulations on the record revenue. I had a question on, Dan, your comment about AI revenue for Q4 fiscal 25. So, based on my math, you know, it sounds like AI revenue would be about $90 million. How should we think about the non-AI revenue over the next 12 months? Or, in other words, that $15 million in non-AI revenue for Q4 24, how will that progress over the next 12 months?

speaker
Dan Fleming

Yeah, so based on our comments, we didn't provide specific revenue guidance for the full year, but we wanted to provide you a framework to understand a little bit more definitively how we've been framing our revenue growth throughout the year. And as you say, we expect AI revenue to grow 100%, Q4 to Q4, fiscal 24 to fiscal 25. If you look at the non-AI revenue piece, what I would say is you can assume modest year-over-year growth. That's not what's driving our growth in the year; it's really these AI programs that are ramping largely in the second half of the year. So that's why we frame things as we did. The other comment I'll add is that our overall fiscal year 25 outlook has not changed; we're just giving it a little bit more specificity. So we continue to expect that meaningful growth in the year, and that second half inflection point will be fast upon us here shortly, driven by these AI programs.
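
For readers following the math in the question, a minimal sketch of the framework described above, using only figures given on the call (approximations, not guidance):

```python
# Approximate the AI / non-AI split implied by the call commentary:
# Q4 FY24 revenue was $60.8M, roughly 75% of it tied to AI, and AI revenue
# is expected to double from Q4 FY24 to Q4 FY25. Illustrative only.
q4_fy24_revenue = 60.8                          # $M, reported Q4 FY24 revenue
ai_share = 0.75                                 # ~three-quarters AI in Q4 FY24
ai_q4_fy24 = q4_fy24_revenue * ai_share         # ~$45.6M
ai_q4_fy25 = 2 * ai_q4_fy24                     # doubling -> ~$91M, close to the ~$90M cited
non_ai_q4_fy24 = q4_fy24_revenue - ai_q4_fy24   # ~$15.2M, expected to grow only modestly
print(round(ai_q4_fy24, 1), round(ai_q4_fy25, 1), round(non_ai_q4_fy24, 1))
```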

speaker
Tor Svanberg

Yeah, no, that's great color. And perhaps a question for you, Bill. It looks like your PAM4 DSP business is finally starting to take off. You're targeting 10% of revenue for fiscal 25. First of all, how much was that revenue in fiscal 24? And could you talk a little bit about the diversified customer base that you have for the PAM4 DSP business? You talked about growth internationally and in North America, but in North America, are we talking about several hyperscalers driving that growth?

speaker
Bill Brennan

So first of all, for fiscal 24, we hadn't had that 10% number as an objective, but we came pretty close to it. And so we feel like things are lined up well for fiscal 25 and beyond. I would say there are multiple drivers. Of course, we've got, you know, the first U.S. hyperscaler in production. We've got a second in qualification. We're seeing a return in spending with non-US hyperscalers, and we've commented that we're very well positioned for when that spending turns back on. And I would say that these are the primary contributors in fiscal 25. I will mention that we've got a lot of promising new engagements with optical module partners, and these new partners are really considered leaders in the industry. And of course, that bridges to additional hyperscalers that are interested in looking at solutions with Credo. I will also say that we've spent a lot of time talking about the LRO architecture in really the last six months, and we see growing momentum with that LRO architecture for sure. And that's in addition to the full DSP momentum that we're building. So hopefully that gives you the color that you're looking for.

speaker
Operator

Thank you. One moment for our next question. Our next question comes from Thomas O'Malley from Barclays. Your line is open.

speaker
Thomas O'Malley

Hey, what's going on, guys? And thanks for taking my question. I wanted to follow up on the AI guidance. Obviously, if you take the comment that three-quarters of the revenue was related to AI in Q4 and extrapolate that to next year, it gets to pretty big numbers. But I wanted to just kind of zoom in on this quarter. So you had four 10% customers, one of which was a consumer customer who we know is non-AI related. So that would kind of imply that the rest of your business was entirely AI if you just do that math. So could you just help me walk through, is it kind of just rounding to three-quarters AI revenue, or how should I be thinking about the dollar amount? Because it obviously sounds like that's growing quite nicely, but I just want to understand the base since you're giving some color on what that should grow to for the entire year.

speaker
Dan Fleming

Yeah, we didn't give a precise number because it's hard for us to estimate in some cases how our end customers utilize our products, but we have a fair amount of certainty that three-quarters, or 75%, is where we ended for Q4. So that's just kind of with a caveat. So you would use that, and really, this framework is how we exited fiscal 24 versus how we expect to exit fiscal 25. So as you know, these production ramps at these large customers can take time, and they can pull in or push out a quarter or so. So that's why we're framing it kind of fiscal year end to fiscal year end.

speaker
Thomas O'Malley

Gotcha. And then I just wanted to ask, I know you guys don't guide by product segment, but just a little color on the product and the IP side because it swung pretty drastically over the last couple of quarters. So with the July guidance, with the gross margins being a bit better than expected, you would just assume that maybe the IP side is kind of staying higher sequentially. Could you give us any color on the mix there into July? Is there any product growth or is most of the, well, obviously with the midpoint of revenue a bit down, but is there any IP color that you can give us? Does it stay at these kind of elevated rates after the big fiscal Q4? Thank you.

speaker
Dan Fleming

Yeah, sure. So just to reiterate what we had said for guidance for Q1, for gross margin, it was 63% to 65%, so really just a modest sequential decline from Q4. It's really driven by a few things. One is IP revenue will decline sequentially quarter over quarter, so that will happen. However, if you look at NRE, you should assume we're at historic averages, which we were in Q4, so kind of flat quarter over quarter. So it's really the product gross margin; there's a bit of a revenue mix dynamic there as well. And a lot of this, part of the theme of fiscal 25, will be increasing product margin, you know, exclusive of product engineering services, due to increasing scale, as we return to that roadmap where we really do drive operating margin and gross margin leverage as we increase in scale.

speaker
Operator

Thank you. One moment for our next question. Our next question will come from Vijay Rakesh from Mizuho. Your line is open.

speaker
Vijay Rakesh

Yeah, hi, Bill and Dan. Good quarter here. Just a quick question on the LRO side. You mentioned the 800 gig LRO, the sub-10 watt power consumption, and your engagements with it. I'm just wondering how many CSPs you're working with on shipping that product, and how do you see those revenues ramping into 25, I guess, calendar 25?

speaker
Bill Brennan

So the work is primarily being done right now with optical module manufacturers. We've got more than a handful that are working on designs now. We have delivered first samples to the first hyperscale potential customer, and we see that continuing throughout this quarter. So as far as fiscal 25 goes, there is a possibility. We don't have much really forecasted in what we're looking at yet, but there's a possibility that we'll ramp for significant revenue in this fiscal year. But it's not something that's really built in.

speaker
Vijay Rakesh

Got it. So that should be incremental. And on the chiplet customer, your chiplet customer is obviously increasing capex quite a bit. And, you know, do you see your traction growing proportionately? Is that starting to pick up as well? Thanks.

speaker
Bill Brennan

Yeah, so the first customer that we've got in production, the one we've talked about, we see that business really ongoing today. Now, if they have a big increase in the spend on the cluster that's designed in-house, we'll definitely see a participation with that. But they've got multiple different paths that they're pursuing right now. But generally speaking, we continue to be bullish on chiplets in general. We've got additional customers that will be coming online in the future. Again, not much built in in fiscal 25, but we're bullish on the segment.

speaker
Operator

Thank you. One moment for our next question. Our next question comes from Vivek Arya from Bank of America. Your line is open.

speaker
Vivek Arya

Hi, thank you for taking our question. This is Daksan on behalf of Vivek. I just want to go back to the AEC product line. Obviously, you're ramping your second customer, with the third customer also in qualification for the second half. How are you seeing the competitive dynamic, just because Marvell and Astera are also launching their products here?

speaker
Bill Brennan

At a high level, we have not seen a significant change in the competitive environment. So our objective has always been to be first to deliver and first to qualify. And I think we've done a really good job on this objective with all of our customers. I would say that one big advantage that we have competitively is the way we're organized. We have more than 100 people on our team that are dedicated to the AEC business. And that includes hardware and software development. That includes qualification, production fulfillment, and support. And this really drives success with this objective to be able to deliver first and qualify first. So I would say that as we go deeper with each customer relationship, we really see an increasing number of requests for customized hardware and software. And I think from the standpoint of the number of SKUs that we're working on today, the number is more than 20 that are in active development from a different SKU perspective. So competitively, I would say we're unique in a sense that we're the single point of contact and we take full responsibility for all aspects of the relationships with our customers. And this really drives their satisfaction. When I talk more specifically about competition, we're really competing with groups of companies that need to do the same work that we're doing. But there's really no shortcut to it. And when you've got the complexity of having multiple suppliers involved and responsible for different aspects of one solution, it's really far greater complexity than having one party like Credo being ultimately responsible. And so I guess with that said, the market's growing quickly, and we do expect to see second sourcing in the future. This is natural. And ultimately, our goal is to always be raising the competitive bar and ultimately serving our customers very well and driving their satisfaction. But I don't have specific feedback regarding the two potential competitors that you mentioned.

speaker
Vivek Arya

Of course. And then as a follow-up, Just given NVIDIA is also entering this Ethernet switch market, and that could potentially have some implications on AEC as a standard for connectivity. So I was wondering if you have any color there, or if you've done any interoperability testing with the NVIDIA solutions as well. Thank you.

speaker
Bill Brennan

Sure. We've been really clear when we talk about the US hyperscalers. There is a desire to move to Ethernet long term. And so I think it comes as no surprise that we've seen a lot of discussion around NVIDIA and Ethernet. We view this as a positive for us and our business. We've done testing with everybody that's out there from the standpoint of NICs and switches. And so we feel really quite confident that there will be an opportunity for our AECs for in-rack connectivity. And again, we don't view this as really a surprise that the U.S. hyperscalers are driving in this direction.

speaker
Operator

Thank you. One moment for our next question. Our next question will come from Richard Shannon from Craig-Hallum. Your line is open.

speaker
Richard Shannon

Well, hi, guys. Thanks for taking my question. I apologize if this has been touched on before I got on the call late here, but Bill, just following up on your comments regarding custom cables and the increasing requests there. Maybe you can characterize your business now and kind of what you expect going forward here in terms of its profile of custom versus more commodity or standard. Is there much of any of that going on now, or do you expect that to be a material contributor soon?

speaker
Bill Brennan

Well, I think that what we've seen is that every time we engage deeply with a customer and we open the door for innovation, basically we're open to special requests from a hardware standpoint, from a firmware or software standpoint. And what we're seeing is that customers really respond positively. So we've organized to be able to receive these requests and really deliver on them. So I think that, more and more as we look into the future, a very large percentage of what we ship will be customer specific. There will be a smaller market for standard connectivity solutions, like an 800 gig to 800 gig AEC with just two connectors and really nothing special. But we see the large majority of the volume being somewhat customer specific.

speaker
Richard Shannon

Okay, thanks for that clarification, Bill. My second question follows up on your comments about AI revenues doubling from this past fourth quarter to the next fourth quarter. Maybe you can characterize the degree to which, you know, back-end network revenues are built into this at all, versus, you know, front-end and kind of the NIC-to-ToR and other applications you've been in historically.

speaker
Bill Brennan

Yeah, I would say that you're right on from the standpoint that the back-end networks are really driving the increase in revenue. And that's a general statement about AI. Of course, AI networks are also connected to the front-end network, but the number of connections is small in comparison. So I'll say that we're seeing a continued increase in the density of connections in AI clusters. And it's really driven by the combination of increased GPU performance generally, as those in that market are executing on the roadmap. But there's also a desire to increase GPU utilization. Some out there, like OpenAI, have published a document that said the average utilization of a GPU is roughly 33%. And so there's a big opportunity, you know, going with more parallelism. And really, that drives an increased density in the number of connections, specifically for the back-end scale-up networks. So we talked about scale-out and scale-up. What that means from the standpoint of how many AEC connections are, you know, possible per GPU: some of the back-end clusters that we're ramping in the second half will have two or four AECs per GPU. And we're working on next-generation platforms that will actually increase that number of connections to eight or even higher per GPU. And so I think if you take it to a rack level, say an AI appliance rack level, we're seeing a density today of between 56 and 64 AECs per rack. And we expect this number to likely reach close to 200 AECs per rack in the future. This is something that will fuel the growth as well.

speaker
Operator

Thank you. One moment for our next question. And our next question comes from Carl Ackerman from BNP Paribas. Your line is open.

speaker
Carl Ackerman

Yes, thank you, gentlemen. I have two. I suppose for the first question, Dan, could you put a finer point on IP licensing revenue in the July quarter? Like, is it cut in half? And do you see IP revenue remaining toward the upper end of your long-term 10% to 15% range for fiscal 25?

speaker
Dan Fleming

Yeah, so for fiscal 25, we internally expect it to be near the high end of that long-term model, which, again, is 10% to 15% of overall revenue. And so if I were in your shoes to model this, I would assume that it's kind of near a quarter of that annual amount in Q1. And if you do that, you should kind of solve to within our gross margin range for Q1, if that's helpful.
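
To make that back-of-the-envelope explicit, a minimal sketch of the "solve to within our gross margin range" exercise described above (the segment-margin inputs are our illustrative assumptions, not figures given on the call):

```python
# Back out the Q1 IP revenue implied by the guidance midpoints, assuming
# IP margin stays near 100% and product margin stays near the Q4 level.
q1_revenue = 59.5    # $M, midpoint of the $58M-$61M Q1 revenue guide
q1_gm_mid = 0.64     # midpoint of the 63%-65% gross margin guide
ip_gm = 0.99         # assumption: IP gross margin near 100%
product_gm = 0.54    # assumption: product gross margin near Q4's 53.7%

# Solve q1_gm_mid = (ip*ip_gm + (q1_revenue - ip)*product_gm) / q1_revenue for ip.
implied_ip = q1_revenue * (q1_gm_mid - product_gm) / (ip_gm - product_gm)
print(f"Implied Q1 IP revenue: ~${implied_ip:.1f}M "
      f"({implied_ip / q1_revenue:.0%} of revenue)")  # a step down from Q4's $16.6M
```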

speaker
Carl Ackerman

I see. Thanks for that. Perhaps a question for you, Bill. You know, there has been much discussion and confusion about where half-retimed DSPs can be used in the network. For example, active copper cables are being used for in-rack connectivity, while AOCs and AECs appear to be the primary use case for connecting NICs to ToR and/or middle-of-row switches. My question is, do all AI networks require a fully retimed DSP for either AOC or AEC connections? Thank you.

speaker
Bill Brennan

So this is a much discussed topic in the industry. And I think a year ago at OFC, there was a big discussion about the idea of eliminating the DSP. That really started a lot of effort in pursuing the solution, so there are many optical module companies that pursued designs with no DSP. And I think, generally, the jury has come in, and basically there's really no momentum in the market now, you know, for solutions with no DSP. So in the optical world that we see right now, the LRO solution is really, you know, quite feasible, and we're showing that with multiple partners. We demonstrated three at OFC, Lumentum being the, you know, the largest of the three partners. And what we're seeing is that the solution successfully reduces power. And by the way, that was the entire objective of LPO, to reduce the power of these connections. And so we've shown that we can deliver 800 gig modules with partners that go sub 10 watts, which is really probably a 30% to 40% reduction compared to what's typical in the market for fully retimed solutions. And so the key with LRO is that we're able to maintain industry standards as well as interoperability. And so you can literally use an LRO solution; there's nothing special that you need to do. And so we see that, especially for clusters, these are shorter connections, AOCs and likely also transceivers. But for these shorter connections of, say, 10 to 20 meters, and especially in the cluster, power is so critical that we see that entire market could be addressed by LRO solutions. Now, you know, it's obviously going to be up to a given customer and their strategy. You know, but the idea that there will be a large volume of solutions with no DSP, I think that really no longer exists from the customers that we're talking to.

speaker
Bill

Thank you.

speaker
Operator

One moment for our next question. Our next question comes from Suji De Silva from Roth. Your line is open.

speaker
Suji De Silva

Hi, Bill. Hi, Dan. Congrats on the progress here. Just, Bill, on your comments on the number of AECs per GPU increasing, I'm just curious, you know, in general, is that increasing kind of the forecasting needs of your customers in the AI area versus traditional cloud versus three months ago? Or, you know, is the AI line stable, already anticipated? And is the traditional cloud part coming back?

speaker
Bill

Yeah, I would say there's really no change in the programs that we talked about three months ago.

I'd say the additional information that we're sharing today is that this need for more performance and more bandwidth is really something that we're seeing as we look at next-generation AI cluster designs. And so that's a bit of new information, that the number of connections per GPU is doubling or even more than doubling, and that'll obviously drive growth. Now, as it relates to front-end networks and back-end networks, of course AI clusters are all connected to the front-end network. As that relates to, say, general compute versus AI, it's hard for us to see from a forecasting standpoint how that breaks out, because the same AECs are used for both from a front-end network perspective. But I would say generally that the momentum in the market for AI, there's no question it's still a huge amount of momentum, and we see that really for the foreseeable future. If I talk about what's the tradeoff for us, if there's a real return from a general compute market share perspective, of course we'll benefit from that. We're really used in both. And when we talk about the third swim lane: we have AI appliance racks and general compute server racks, so these are both server racks, and we talk about the third swim lane being switch racks. And we see that growing in popularity as well, especially as the market moves towards 100 gig per lane speeds.

speaker
Suji De Silva

Got it. That's very helpful color, Bill. Thanks. And then, you know, staying on this increase in AECs per GPU, does that introduce a customization opportunity as well? I'm thinking kind of the Y-cable opportunity, things like that. Or are those more standard cables, but just more of them?

speaker
Bill Brennan

Yeah, I would say none of these are standard. You know, so in the AI appliance application, what we're seeing is that there are maybe little or zero standard products being designed right now. So all of them have special features. I will say that we're delivering cables with two connectors, three connectors, four connectors, and even five connectors. And so when you give these really creative rack designers the entire AI appliance rack to work with, you know, it's fun to see what they're going to come up with, and we're very much open to, you know, making their rack design more efficient.

speaker
Operator

Thank you. One moment for our next question. And we have a follow-up from the line of Quinn Bolton from Needham. Your line is open.

speaker
Quinn Bolton

Hey, Bill, wondering if you could just sort of address the AEC versus ACC debate that seems to have kind of popped up after OFC, as NVIDIA is looking to use ACCs in its NVLink fabric. Do you see, perhaps as a result of that, growing adoption of ACCs, or do you think ACCs are going to be really use case limited going forward?

speaker
Bill Brennan

Use case limited. We don't see ACCs anywhere in the market other than what you described at NVIDIA.

speaker
Quinn Bolton

Perfect. Very simple. Thank you.

speaker
Operator

Thank you. One moment for our last question. And our next question will come from Tor Svanberg from Stifel. Your line is open.

speaker
Tor Svanberg

Yes, it's Svanberg. Just two follow-ups. So first of all, Bill, in your prepared remarks, you talked about accelerating the speed to market pretty meaningfully. Is that a result of your engagements with customers, or have you implemented internally any new technologies or anything like that to really get the product to market quicker?

speaker
Bill Brennan

I think it's really due to a number of things in the way that we've organized and also the way that we work with our customers. I think from the standpoint of our ability to collaborate, it's really on a different level. We've got weekly, if not daily, interaction between our engineering teams at Credo and our customer. And so that relates directly to our ability to deliver first samples. And then when we talk about moving something into production, there's many different levels of qualification. And, you know, we've taken complete ownership of that. You know, and when we think about that, what does that mean? That means, you know, that we've got, you know, more than 10 thermal chambers that are in constant use. And what are we doing there? You know, so our customers ship us switches or appliances, you know, with the configuration that they want to be qualified, that they're planning on taking to production. We run the qualification test for them. So it's live traffic, varying temperature, varying voltage. And we're doing a lot of the work for them up front. And so when they go into a final qualification mode, they know that what we're delivering is highly predictable because we've already delivered data based on their prescriptive tests that they give us with the equipment that they ship us. And so from the standpoint of delivering first, it's about being organized to respond quickly. So qualifying first, we're doing a lot of the work for our customers, and that's really taking it up a notch.

speaker
Tor Svanberg

Great. And just my last question is a clarification. I just want to make sure, I mean, I think I got this right, but I just want to make sure that it is clear to everybody. So AI revenue, three-quarters, that includes product and licensing revenue, and that is the number that you expect to double year-over-year at Q4 fiscal 25.

speaker
Dan Fleming

That is correct.

speaker
Tor Svanberg

Great. Thank you.

speaker
Operator

Thank you. And there are no further questions at this time. Mr. Brennan, I turn the call back over to you.

speaker
Bill Brennan

So thanks to everybody for the questions. We really appreciate the participation, and we look forward to the continued conversation on the callbacks. Thank you.

speaker
Operator

And this concludes today's conference call. You may now disconnect. Everyone have a great day.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
