Credo Technology Group Holding Ltd

Q2 2024 Earnings Conference Call

11/29/2023

spk19: Ladies and gentlemen, thank you for standing by. At this time, all participants are in a listen-only mode. Later, we will conduct a question and answer session. At that time, if you have a question, you will need to press star 11 on your push button phone. I would now like to turn the conference over to Dan O'Neill. Please go ahead, sir.
spk08: Good afternoon, and thank you all for joining us on our fiscal 2024 Second Quarter Earnings Call. Today I am joined by Credo's Chief Executive Officer, Bill Brennan, and Chief Financial Officer, Dan Fleming. I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. Given these risks, uncertainties, and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial performance prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures, as well as reconciliations between our GAAP and non-GAAP financial measures, is available in the earnings release we issued today, which can be accessed using the investor relations portion of our website. I will now turn the call over to our CEO, Bill.
spk05: Thank you, Dan. Welcome to everyone joining our Q2 fiscal 24 earnings call. I'll start with an overview of our fiscal Q2 results. I'll then discuss our views on our outlook. After my remarks, our CFO, Dan Fleming, will provide a detailed review of our Q2 financial results and share the outlook for the third fiscal quarter. We will then be happy to take questions. For the second quarter, Credo reported revenue of $44 million and non-GAAP gross margin of 59.9%. Our Q2 results and our future growth expectations are driven by the accelerating market opportunity for high-speed and energy-efficient connectivity solutions. We target port speeds up to 1.6 terabits per second with solutions including active electrical cables, or AECs, optical DSPs, laser drivers and TIAs, line card PHYs, SERDES chiplets, and SERDES IP licensing, enabling us to address a broad spectrum of connectivity needs throughout the digital infrastructure market. Each of these solutions leverages our core SERDES technology and our unique customer-focused design approach. As a result, Credo delivers application-specific, high-speed solutions with optimized energy efficiency and system cost. And our advantage expands as the market moves to 100 gig per lane speeds. Within the data center market today, we've seen dramatically increasing demand for higher bandwidth, higher density, and more energy-efficient networking. This demand is driven by the proliferation of generative AI applications. For the past several years, Credo has been collaborating with our customers on leading-edge AI platforms that are now in various stages of ramping production. In fact, the majority of Credo revenue will be driven by AI applications for the foreseeable future. Now, I'll review our overall business in more detail. First, I'll discuss our optical business. I'm pleased with the traction we've been gaining in this market. In the quarter, we continued shipping to multiple global hyperscale end customers, and we're making progress in positioning Credo to add additional hyperscale end customers in the upcoming quarters, targeting 400-gig and 800-gig applications. Credo also has optical design engagements in various stages with module customers and networking OEMs for the Fibre Channel market and with service providers for 5G infrastructure deployments. Credo plays a disruptive role in the optical DSP market. Our fundamental SERDES technology is leveraged to provide a compelling combination of performance, energy efficiency, and system cost. Additionally, we focus on solving our customers' problems and market challenges through engineering innovation. At the OFC Optical Conference in March of this year, there was an important call to action to address the unsustainable power and cost increases for optical modules in the 800 gig and 1.6T generations. Much industry discussion has ensued this year, especially related to the plausibility of the linear pluggable optics architecture, or LPO, also sometimes referred to as linear direct drive. The LPO architecture is based on eliminating all optical DSP functionality. The industry has widely concluded that the LPO architecture will not be feasible for a material percentage of the optical module market, and that DSP functionality is critical to maintaining industry standards and interoperability, as well as achieving the bit error rate performance necessary for high yields in volume production. However, this does not mean that the industry call to action will go unanswered.
Credo's response following OFC was to look at innovative ways to drastically reduce DSP power, and subsequently cost, through architectural innovation. Today, Credo issued a press release introducing our linear receive optical, or LRO, DSPs. Our LRO DSP products provide optimized DSP capability in the optical transmit path only and eliminate the DSP functionality in the optical receive path. This innovative architecture, as optimized by Credo, effectively reduces the optical DSP power by up to 50%, and at the same time lowers cost by eliminating unneeded circuitry. Our LRO products address the pitfalls of the LPO architecture by maintaining standards and enabling interoperability among many components of an optical system. And the DSP functionality maintains the equalization performance that's critical to high yields in volume production. We've already shipped our Dove 850 800 gig LRO DSP device and evaluation boards to our lead optical and hyperscale end customers for their development and testing. While any revenue ramp will be a ways out, I view this innovation as the latest example of Credo pioneering a new product category that directly addresses the energy and system cost challenges faced by the hyperscalers, especially for AI deployments. Regarding our AEC solutions, Credo continues to be an AEC market leader. While our initial success in our AEC business has been connecting front-end data center networks for general compute and AI appliances, we have seen an expansion in our AEC opportunity in the back-end networks that are fundamental to AI cluster deployments. Due to the sheer bandwidth required by back-end networks, an acceleration in single-lane speeds and networking density is driving the need for AECs, given their significant benefits compared to both passive copper cables and to active optical cables, or AOCs, for in-rack connectivity. We continue to make progress with our first two hyperscale customers for both front-end and back-end networks. And we're especially encouraged to see Credo AECs prominently featured in the leading-edge deployments introduced at their respective conferences in November. Years in the making, we continue to maintain strong and close working relationships with our customers. And I'm pleased to say that in Q2, we made our initial shipments of 800 gig production AECs, an industry first. And again, we've demonstrated our market leadership. We also continue to expand our hyperscale customer base, with one in qual with 400 gig AEC solutions and another in development with 800 gig AEC solutions. Additionally, we've seen the increased need for 400 gig and 800 gig AECs among Tier 2 data center operators and service providers. As a group, these customers contribute meaningful revenue to Credo. I'll also highlight one of Credo's announcements at the recent Open Compute Conference in October. Credo announced the P3 Pluggable Patch Panel System, a multi-tool that gives service providers and hyperscalers the freedom, by using the P3 and AECs, to decouple pluggable optics from core switching and routing hardware. The combination of the P3 and AECs enables network architects to optimize for power distribution and system cost, as well as to bridge varying speeds between switching and optical ports. We're engaged with several customers and believe the efforts will result in meaningful revenue in the future.
To sum up, we remain confident that the increasing demand for greater networking bandwidth driven by AI applications, combined with the extraordinary value proposition of our AEC solutions, will drive continued AEC market expansion. Now, regarding our line card PHY business, Credo is an established market leader with our line card PHY solutions, which include retimers, gearboxes, and MACsec PHYs for data encryption. Our overall value proposition becomes even more compelling as the market is now accelerating to 100 gig per lane deployments. According to our customer base, Credo's competitive advantage in this market segment derives from the common thread across all of our product lines, which is leading performance and signal integrity that is optimized for energy efficiency and system cost. We're building momentum and winning design commitments for our Screaming Eagle 1.6T PHYs and for our customer-sponsored next-generation 1.6T MACsec PHY. We remain excited about the prospects for this business with networking OEMs and hyperscale customers. Regarding our SERDES IP licensing and SERDES chiplet businesses, Credo's SERDES IP licensing business remains a strategically important part of our business. We have a complete portfolio of SERDES IP solutions that span a range of speeds, reach distances, and applications, with process nodes from 28 nanometer to 4 nanometer. And our initial 3 nanometer SERDES IP for 112 gig and 224 gig is in fab now. During Q2, we secured several licensing wins across networking and data center applications. Our wins include new and recurring customers, a testament to our team's execution in contributing to our customers' success. We're also enthusiastic about the prospects for our chiplet solutions. During Q2, we secured a next-generation 112-gig, 4-nanometer SERDES chiplet win that includes customer sponsorship. Credo is aligned with industry expectations that chiplets will play an important role in the highest-performance designs in the future. In conclusion, Credo delivered strong fiscal Q2 results. We remain enthusiastic about our business given the market demand for dramatically increasing bandwidth. This plays directly to Credo's strengths, and we're one of the few companies that can provide the necessary breadth of connectivity solutions at the highest speeds, while also optimizing for energy efficiency and system cost. As we embark on the second half of fiscal 24, we expect continued growth that supports a more diversified customer base across a diversified range of connectivity solutions. Lastly, I'm pleased to announce that yesterday, Credo published our first ESG report, which can be found on our website. As reiterated several times today in my comments, energy efficiency is built into our DNA and is a key part of our report. We aspire to be leaders across the ESG spectrum, and we strive to help enable our customers to be leaders as well. I'm very pleased with how Credo is pursuing our goals, and we look forward to continuing our positive ESG efforts. At this time, Dan Fleming, our CFO, will provide additional financial details. Thank you.
spk11: Thank you, Bill, and good afternoon. I will first review our Q2 results and then discuss our outlook for Q3 of fiscal 24. As a reminder, the following financials will be discussed on a non-GAAP basis, unless otherwise noted. In Q2, we reported revenue of $44 million, up 25% sequentially and down 14% year over year. Our IP business generated $7.4 million of revenue in Q2, up 165% sequentially and up 125% year over year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables to preexisting or new contracts. While the mix of IP and product revenue will vary in any given quarter over time, our revenue mix in Q2 was 17% IP, above our long-term expectation for IP, which is 10% to 15% of revenue. We expect IP as a percentage of revenue to be within our long-term expectations for fiscal 24. Our product business generated $36.7 million of revenue in Q2, up 13% sequentially and down 24% year over year. Our top three end customers were each greater than 10% of our revenue in Q2. In fact, our top four end customers each represented a different product line, which illustrates the increasing diversity of our revenue base. Our team delivered Q2 gross margin of 59.9%, at the high end of our guidance range and up 10 basis points sequentially. Our IP gross margin generally hovers near 100% and was 95.6% in Q2. Our product gross margin was 52.7% in the quarter, down 405 basis points sequentially due to product mix and some minor inventory-related items, and up 39 basis points year over year. Total operating expenses in the second quarter were $27.1 million, at the low end of our guidance range, down 1% sequentially and up 9% year over year. Our year-over-year OPEX increase was a result of an 11% increase in R&D as we continue to invest in the resources to deliver innovative solutions. Our SG&A was up 5% year over year. Our operating loss was $731,000 in Q2, compared to operating income of $3.2 million a year ago. The second quarter operating loss represented a sequential improvement of $5.7 million. Our operating margin was negative 1.7% in the quarter, compared to positive 6.1% last year, due to reduced top-line leverage. We reported net income of $1.2 million in Q2, compared to net income of $2.2 million last year. Cash flow from operations in the second quarter was $5 million, an increase of $3.3 million year over year, due largely to a net reduction of inventory of $5 million in the quarter. CapEx was $2 million in the quarter, driven by R&D equipment spending, and free cash flow was $3 million, an increase of $6.9 million year over year. We ended the quarter with cash and equivalents of $240.5 million, an increase of $2.9 million from the first quarter. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our accounts receivable balance increased 17% sequentially to $32.7 million, while days sales outstanding decreased to 68 days, down from 73 days in Q1. Our Q2 ending inventory was $35.8 million, down $5 million sequentially. Now, turning to our guidance, we currently expect revenue in Q3 of fiscal 24 to be between $51 million and $53 million, up 18% sequentially at the midpoint. We expect Q3 gross margin to be within a range of 59% to 61%. We expect Q3 operating expenses to be between $28 million and $30 million. And we expect Q3 diluted weighted average share count to be approximately 166 million shares.
We are pleased to see fiscal year 24 continue to play out as expected. While we see some near-term upside to our prior expectations, the rapid shift to AI workloads has driven new and broad-based customer engagement. We expect that this rapid shift will enable us to diversify our revenue throughout fiscal year 24 and beyond, as Bill alluded to. However, as new programs at new and existing customers ramp, we remain conservative with regard to the upcoming quarters as we continue to gain better visibility into forecasts at our ramping customers. In summary, as we move forward through fiscal year 24, we expect sequential revenue growth, expanding gross margins due to increasing scale and improving product mix, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage in the coming quarters. And with that, I will open it up for questions.
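As an illustrative aside, the arithmetic behind a few of the reported and guided figures can be sanity-checked with a short script. The 91-day quarter length used for the days-sales-outstanding check is an assumption for illustration, not a disclosed figure; all other inputs are taken from the remarks above.

```python
# Illustrative sanity check of figures cited in the prepared remarks.
# The 91-day quarter length is an assumption; all other inputs are from the call.

revenue_q2 = 44.0           # $M, total Q2 revenue
ip_revenue_q2 = 7.4         # $M, Q2 IP revenue
accounts_receivable = 32.7  # $M, Q2 ending accounts receivable
days_in_quarter = 91        # assumption used only for the DSO check
guide_low, guide_high = 51.0, 53.0  # $M, Q3 revenue guidance range

ip_mix = ip_revenue_q2 / revenue_q2                       # ~0.17, matches the stated 17% IP mix
dso = accounts_receivable / revenue_q2 * days_in_quarter  # ~67.6, matches the stated 68 days
q3_midpoint = (guide_low + guide_high) / 2                # 52.0
sequential_growth = q3_midpoint / revenue_q2 - 1          # ~0.18, matches the stated 18%

print(f"IP mix: {ip_mix:.1%}, DSO: {dso:.0f} days, "
      f"Q3 midpoint: ${q3_midpoint:.0f}M, sequential growth: {sequential_growth:.0%}")
```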
spk19: At this time, I would like to remind everyone in order to ask a question, press star then the number 11 on your telephone keypad. We'll pause for just a moment to compile the Q&A roster.
spk02: Our first question comes from the line of Toshiya Hari of Goldman Sachs.
spk03: Hi, good afternoon. Thank you so much for the question. I had two questions. First one on the revenue outlook. I just wanted to clarify, Dan, I think you mentioned sequential growth throughout the fiscal year. So April, I'm assuming, is up sequentially. I guess that's the first part. And then the second part, as you think about calendar 24, Bill, you gave quite a bit of color by product line. At a high level, the outlook sounds pretty constructive across AEC and your optical business, and I guess your SerDes business as well. But if you can try to quantify the growth that you're expecting into calendar 24 and what the top three key drivers are, that would be helpful. Thank you.
spk11: Yeah, so with regard to fiscal 24 on your first question, you know, generally speaking, we're very pleased with our quarterly sequential growth this year. And as we stated in our prepared remarks, you know, our Q3 guide was at the midpoint up 18%, $52 million at the midpoint. But as we stated on our call previously, you know, we expect modest top line growth fiscal year 23 to 24. So the key takeaway there, there's no change in our overall expectation for fiscal year 24.
spk05: And for the second question, I would just reiterate what Dan has said. As we look at our fiscal 24, it's playing out very much like we expected. So really no change there. We expect, I think, what should be considered fast sequential growth. And it's been, you know, driven by multiple factors, AECs, optical, chiplets. Really, we're firing on all cylinders.
spk03: And, Bill, sorry if I wasn't clear. Calendar 24, fiscal 25, I realize it's early and you've got, you know, many moving parts. But based on, you know, customer engagements, all the color you provided across product lines, how are you thinking about, you know, the overall business into next year? If you could provide some tips.
spk11: Yeah, we're not providing any formal guidance right now at this point for fiscal year 25. However, as you can imagine, we do expect meaningful growth based on all the customer engagements that we have. And as Bill mentioned, we continue to have lots of irons in the fire. But as we've stated, it takes a long time to turn a lot of these engagements into meaningful revenue, which will happen throughout the course of the year.
spk03: Okay, got it. And then as my follow-up on gross margins, as you noted in your remarks, Dan, I think your product gross margins were down sequentially in the October quarter off a really high base in July. But curious what drove the sequential decline there. And then as you look ahead, I think you talked about gross margins expanding over the next couple of quarters, I think you said. And what are the drivers there? And if you can speak to foundry costs potentially going from a headwind to something more neutral into calendar 24, and how the diversification of your customer base helps your gross margins going forward, that would be helpful. Thank you.
spk11: Yeah, so there was a lot to that question. Generally speaking, as you correctly note, our Q2 product gross margin was down sequentially from Q1, which, if you recall, was up substantially, 700 basis points from Q4. It's kind of easy to read probably too much into these movements quarter over quarter at the scale that we're at right now because there are slight product mix changes from quarter to quarter. In Q2, we also had some very minor inventory-related items that impacted product gross margin. But the important thing, or the most important thing, is that there's no change to our long-term expectation. Our gross margin expectation over the upcoming years is to expand to the 63% to 65% range. And from fiscal 23 to 24, you're seeing that play out, although it's not quite linear from quarter to quarter. And that will continue to play out through next year as well.
spk19: Thank you. Our next question comes from the line of Tom O'Malley of Barclays.
spk18: Hey, guys. Good afternoon, and thanks for taking my question. I just wanted to clarify something you said on the call. You guys have talked previously about two customers that you're ramping with AECs. You talked about one customer in qualification with 400G and one in development with 800G. I just wanted to make sure you're still referring to, you know, processes that you've talked about before, or are those new developments that you guys are talking about? Thank you.
spk05: I think we've alluded to those developments in the past, but I think these are additional hyperscale customers. So the first two that we've got, November was kind of a big month. Both of them had shows. So Microsoft Ignite really prominently displayed their Maia AI appliance and rack. And you see the Credo AECs prominently displayed as part of that rack. So that's really something we've messaged in the past, and now it's been publicly announced and shown. And also Amazon is having the reInvent conference right now as we speak. And if you look at the demos on the show floor, you'll see our 50 gig and 100 gig per lane products as part of those demonstrations. And so the two additional, one we're in qual with, and we're expecting qualification to be completed sometime in the upcoming quarter, maybe give or take a month or so. And then the other one is more of a long-term plan as we're putting together an 800-gig customer-specific solution for another hyperscaler.
spk18: Super helpful. And then just on the optical side, you guys had previously talked about a new 400G customer. Is the upside in the near term the beginning of that ramp, or are you just seeing additional traction from customers you've talked about in the past? I know there were some Chinese customers that you were looking to get back into the revenue ramp. Can you just help me understand where the strength you're seeing in the optical DSP side is coming from? Thanks.
spk05: Yeah, so generally we continue to ramp with the partner that we're engaged with serving the U.S. hyperscaler. So that ramp is going to happen for the next several quarters. We're also seeing further signs of life in our customer base in China. And so we've actually got demand that we're seeing from three or four hyperscalers in China. As far as the new U.S. hyperscaler that we've talked about, really that is not baked into any of the numbers that we've talked about. And so, you know, if we can ultimately close that, we expect that will impact revenues in the fiscal 25 timeframe.
spk19: Thank you. Our next question comes from the line of Tore Svanberg of Stifel.
spk02: Tore Svanberg, your line is open. Please go ahead. Please make sure your line is unmuted, and if you're on a speakerphone, lift your handset.
spk06: Yes, can you hear me?
spk19: Yes, sir. Please proceed.
spk07: Yes, sorry about that. Yeah, Bill, my first question was on the tweener slash LRO product that you just announced this afternoon. You did say that this is something that should generate revenues longer term, but I think the market is also very, very hungry for lower costs near term. What kind of timeframe are we looking at here as far as when that product could be in production?
spk05: I think that the first message is that we've shipped samples that are going to be built into modules. We've shipped eval boards that are going to be thoroughly tested by our lead hyperscale customer. And so, you know, T0 is really now. And so the typical development time for an optical module is on the order of 12 months to get to production. And that's really based on, you know, building and qualifying the module and then going through qualification with the hyperscale end customer. And so as we, you know, as we look at kind of best case scenario, we're talking about, you know, something on the order of 12 months from now. So it could impact our fiscal 25
spk07: That's very helpful. And as my follow-up, I know the first half of the year, there were still some headwinds, obviously, from your largest customer inventory digestion on the compute side. I'm just wondering, you know, is that now, as we look at the January quarter, is that headwind completely behind you, or is there still some lingering effects there?
spk05: Well, I think as we think about, you know, the front-end networks at this lead customer of ours, the application is general compute as well as AI. And so, of course, you know, both of these applications are kind of contributing to the digestion of the inventory that was built up as a result of the pivot earlier in the year. And so as we look at fiscal 24, I think we've got good visibility. And exactly when it turns back on, I think we're still being conservative in a sense that we've got to wait for that to really develop in our fiscal 25.
spk07: Great. Thank you very much.
spk05: Thanks, Tore.
spk19: Thank you. Stand by for our next question, which comes from the line of Carl Ackerman of BNP Paribas.
spk10: Yes, thank you, gentlemen. Two questions, if I may. The first question is a follow-up from the previous one, but you are introducing LRO solutions today to address both DSP-based and non-DSP-based optical links. How do you see the adoption of non-DSP-based solutions for back-end network connections in calendar 24? And as you address that question, I guess why not introduce an AEC solution for back-end networks?
spk20: So let me take the first part of that question.
spk05: Really, the two solutions that we've got for optical are what we might call a full DSP, which is kind of the traditional approach where there's a DSP on the transmit path as well as the receive path on a given optical link. That activity is going to continue. The product that we really announced today was eliminating the DSP on the receive path and having it on the transmit path only. And so you might say that that would be half of the DSP on a typical optical link. And so those are really the two solutions that we're promoting. We believe that completely eliminating the DSP is really not something that's going to play out in a big way. Analysts have been out front saying that they don't see it ever being more than 10% of the market if it achieves that level. So you'd have to have a very tight control over the entire link to be able to manage that. And that's just not the typical scenario in the market today. Typically, people are putting together various solutions, and interoperability is really the key, as well as troubleshooting and ultimately yielding in production. Second part of your question was regarding AECs, and we are absolutely building AECs for back-end networks. And the AECs are really covering in-rack, you know, three-meter or less solutions. There are also rack-to-rack connections, and those are all optical connections, whether they're AOCs or transceivers. And especially in that, you know, in that situation for you know, rack-to-rack connectivity within a cluster, that's where we really believe that the LRO DSP is going to be highly applicable and really quite valuable to customers.
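To make the distinction Bill draws here concrete, here is a minimal sketch, in Python, of where DSP functionality sits in each of the three module architectures discussed (full DSP, LRO, and LPO). The class and field names are illustrative assumptions, not Credo product definitions.

```python
from dataclasses import dataclass

@dataclass
class OpticalModuleArchitecture:
    """Toy model of where DSP retiming/equalization sits in a pluggable optical module."""
    name: str
    dsp_on_transmit: bool  # DSP on the electrical-to-optical (transmit) path
    dsp_on_receive: bool   # DSP on the optical-to-electrical (receive) path

ARCHITECTURES = [
    OpticalModuleArchitecture("Full DSP", dsp_on_transmit=True, dsp_on_receive=True),
    OpticalModuleArchitecture("LRO (linear receive optical)", dsp_on_transmit=True, dsp_on_receive=False),
    OpticalModuleArchitecture("LPO (linear pluggable optics)", dsp_on_transmit=False, dsp_on_receive=False),
]

for arch in ARCHITECTURES:
    dsp_paths = sum([arch.dsp_on_transmit, arch.dsp_on_receive])
    print(f"{arch.name}: DSP on {dsp_paths} of 2 paths")
```

The sketch only encodes the structural difference: the full DSP retimes both directions, the LRO keeps the transmit-path DSP (the basis for the roughly halved DSP power cited earlier), and the LPO removes DSP functionality entirely.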
spk10: Thanks for that. For my follow-up, I want to pivot to your IP business. This is primarily tied to data center today, or at least a data center-focused application. But over time, the idea is that as PAM3 ramps, it will transition more toward consumer applications. How do you expect the end market mix of your IP business to transition toward consumer over the next few quarters? Thank you.
spk05: So as we look at our IP business, you know, primarily today it's Ethernet. We've talked about one large consumer license that we've, you know, that we've engaged on for consumer, and that's moving to 40 gig PAM3 for the CIO 80 license, or 80 gigabits per second, two lanes of 40 gig for that market. And that market is going to be out sometime in the future, probably on the order of two to three years before that ramps production. I don't expect it to be a big part of our IP business long term. I expect that our Ethernet IP business will continue very strongly, and I also believe that, from a PCIe perspective, we'll be able to talk about that as we bring our 64-gig and 128-gig solutions to market.
spk19: Thank you. Our next question, please stand by, comes from the line of Vijay Rakesh of Mizuho. Please go ahead, Vijay.
spk15: Yeah, hey, Bill and Dan. Just on the P3, the pluggable patch panel solution. Does that include the AEC, and are all three customers using it, or how do you see that ramping, I guess?
spk05: Yeah, so you broke up a little on the line, but I'll answer the question by saying that this P3 was something that was developed in conjunction with a leading service provider. So they spoke about their challenges as they were connecting ZR optics to routers or switch ports. And so this was really developed with them and their application in mind, also knowing that in developing this solution, it would become a multi-tool in a sense, able to solve different networking problems associated with power and cooling and control plane access. And so our lead customer is a service provider, but we're seeing that there are also applications where this really fits well, when we talk about the situation where switch and router port speeds are different from the optic speeds that a customer wants to use. So a customer could connect 800 gig ZR optics with 400 gig switch ports, or vice versa. They could move to the fastest switches, 800 gig ports, but still use 400 gig ZR. So in a sense, this P3 system can gearbox and really seamlessly connect different speed optics with different speed ports on routers and switches. Also, from a thermal distribution standpoint, this is a really useful tool in a sense because some customers want to use lower-cost, smaller switches that lack the power and cooling envelope for advanced ZR optics. So you would have a lot of stranded ports. So in a sense, you can take that thermal management away from the switch. And so there's, you know, multiple applications. We introduced this at OCP, and we realized that putting out a multi-tool like this, one that basically enables optics to be connected directly with AECs, is a different type of solution. We were surprised at some of the great ideas that the engineers who came by our booth at OCP came up with. So generally, when we think about this product, we think about it in terms of a combination of the P3 and AECs. So we developed the P3 to basically be a catalyst for more AEC demand.
spk15: Got it. And so in better utilizing the stranded ports, I guess, does the P3 with the AEC actually double your content on the server top of the rack or...?
spk05: You know, it's hard to say. I don't think there's a relative reference point on content. You know, these are new applications, and, you know, with our lead customer, we think that the content can be significant. But the nice thing is this is really an application that, you know, as we prove out our lead customer, many service providers we think will pick up.
spk15: Got it. And then the last question, on your 10% customers, how many were there in the quarter? And if you were to look out, let's say, exiting calendar 25, any thoughts on how many 10% customers do you think you would be working on?
spk11: Yeah, so for Q2, we had, as you'll see when our 10-Q is filed, three 10% end customers. Recall last quarter, we added an additional disclosure to show end customers. So three, and you'll see the largest one was 29%. Generally, we don't disclose who our 10% customers are, but obviously the 29% one was Microsoft. Most importantly, we continue to expand our customer base throughout the year. One of those three end customers is a new end customer, as you'll see in our disclosure. So it's hard to answer the latter part of your question, how many we'll have at that point, but, you know, maybe four.
spk19: Thank you. Please stand by for our next question. Our next question comes from the line of Suji De Silva of Roth MKM.
spk17: Hi, Bill. Hi, Dan. My question is on the competitive landscape. I'm wondering what you're seeing in the chip-based AEC efforts, the chip-plus-cable guys competing with you. Are you guys able to provide a faster time to market? Is that one of the reasons you're in some of these demo racks, perhaps? And maybe you can talk about the share you might think you'd be having in the AEC market versus its size. Thanks.
spk05: I think we've been consistent in saying that we don't expect to maintain 100% of the AEC market. And we do see competitors; as this product category becomes really more and more established as a de facto way of making short in-rack connections, we do see more competitors. The way that we're organized, for sure, we're going to be able to deliver better time to market. And what we're seeing is that for the high-volume applications, customers are asking for special features, special functions. And fundamentally, we are responsible for the solution working. I mean, our company, although we're a chip company, I've built a systems organization for AECs. And so, you know, we're the ones that are working directly with the hyperscalers. We're the ones having daily conversations when crunch time comes. And so, for sure, we've got a time-to-market advantage. And so I think the way that this will play out, I think that our market share, you know, will ultimately play out. And, you know, I hope that we maintain more than 50% long-term. And I think that's a function of being first. That's a function of having a model that delivers just a better experience with hyperscale customers directly. Okay.
spk17: All right. Thanks, Bill. And then my other question is on the customer base and where they are in the racks. You talked about Amazon and Microsoft demoing the racks, and they seem like they're a little bit ahead of the rest of the customer base, but perhaps you can clarify that. And if so, Are the other folks really close behind them, or do those guys have maybe a substantial technical lead just trying to figure out how the customers may waterfall in for you?
spk05: Yeah, I think from a timing standpoint, I would expect the third customer would probably ramp in the upcoming two to three quarters. It takes time for these new platforms to be deployed. And then the fourth customer would be following that by a number of quarters. So I think it's one where the first two customers, of course, you know, the architectures that they've decided to take to market, really each one of these customers is different in a sense. So I wouldn't say that they're necessarily ahead from a technology standpoint, you know, it's just that they've chosen to move forward more quickly than the others.
spk19: Thank you. Our next question comes from the line of Richard Shannon of Craig-Hallum.
spk02: Richard, please make sure your line is unmuted.
spk19: And if you're on a speakerphone, lift your handset.
spk13: Can you hear me now?
spk19: Yes, sir. Please proceed.
spk13: All right. Great. Thanks. Dan, I have a question for you based on the comments in your prepared remarks. I'm not sure if I caught it correctly, but I think you said you had three 10% customers, and including your next largest one, the top four each were supporting a different product line. I think we can all guess what the first one is, but I wonder if you can delineate specifically which product lines each of the next three customers were primarily purchasing.
spk11: Yeah, it kind of covers the broad gamut of our product lines, actually. So obviously the largest one being Microsoft is AEC. But for a long time, our line card PHY business has been strong, so you can assume that would be in there. Optical DSP, we have been gaining traction there, starting with Q1, as we described last quarter. And then our chiplet business, we described a bit last quarter as well. So that kind of covers all of the different product lines that are materially contributing at this point in time.
spk13: Okay. Since you didn't say it in your prepared remarks, and you have talked about it in this context in the past, you didn't say optical DSP was 10%, so I would assume that's where the fourth customer is at. Or is one of the 10% customers the DSP one?
spk11: Yeah, I mean, you could assume it's near that if it's not at that, you know, being where it is. And what we've said, you know, we haven't changed our expectation there. We expect for next fiscal year, our target is to be at 10% or more of revenue for optical DSP. And as our first production ramp is occurring with a large hyperscaler, you might expect that we'd have a quarter or two this year where it trips 10%, based upon their build schedule. Okay.
spk13: All right, fair enough, thanks for that characterization. I guess my second question is on product gross margins. We've had a couple of quarters of, I guess, somewhat volatile results, but I think you're still talking directionally upwards over time here. Maybe specifically on the product gross margins here, with the growth in AECs, is it fair to think that that product line's gross margins have continued to grow, and has it been somewhat steady, or is the volatility coming from that line?
spk11: Yeah, I would expect, over the long term, most of our product lines will grow a bit in gross margin, really due to increasing scale. That had been a large part of our story last year, last fiscal year. With the Microsoft reset this year, fluctuations in gross margin have really been more about product mix as opposed to scale. Although, now that we're approaching a point where we'll be exiting the year at record levels of revenue, that scale factor will come in again. So I would expect some uplift in AECs, as well as kind of really across the board, as we stay on target to achieve that 63% to 65% overall gross margin.
spk19: Thank you. Our next question comes from the line of Quinn Bolton of Needham & Company.
spk04: Thanks for taking my question. I guess I wanted to follow up on your comments about both Microsoft Ignite and the reInvent conference for Amazon. You talked about the Maia 100 accelerator racks. I think in the Microsoft blog, there were certainly lots of purple cables, so it's great to see, but can you give us some sense, in that Maia 100 rack, are we talking about, you know, as many as 48, you know, multi-hundred gig AECs for the back-end network, as well as, you know, a number of lower speed for the front-end network? And then for reInvent, is Amazon looking at similar architectures? Or, you know, can you just give us some sense of what the AEC content might look like in some of those AI racks?
spk05: Yeah, so on the Maia platform, I think you've got it absolutely right that the back-end network is comprised of 800 gig, or 100 gig per lane, AECs. The front-end network is also connected with Credo AECs, and those are lower speed. So you're right in terms of, you know, the number total in the rack, and you can kind of visually see that when they introduced that as part of the keynote. I would say that for Amazon, they're also utilizing Credo AECs for front-end connections as well as back-end. And so I think just given the nature of those two different types of networks, there's going to be some strong similarities between the architectures.
spk04: And Bill, I think in the past you had talked about some of these AI applications, and I think you're referring to the back-end networks here, might not ramp until kind of late fiscal 24 and then maybe not until fiscal 25. It sounds like, at least in the Microsoft announcement, that they may be starting to ship these racks as early as kind of early next year. And so I'm kind of wondering, could you give us an update? When do you think you see volume revenue from AECs in the back-end networks? Could that be over the next couple of quarters, or do you still think it may be, you know, further out than that?
spk05: Well, I think that, you know, it's playing out the way that we've expected. And we've spoken about this on earlier calls, that in our fiscal 24, you know, the types of, you know, volume or revenue that we've built into the model is really based on, you know, qualifications, small pilot types of builds. So it's meaningful, but not necessarily what you would expect to see from, you know, a production ramp. And so as we look out into fiscal 25, you know, we still are, you know, being somewhat conservative about when exactly these are going to ramp. And so it was nice to see all of these things talked about publicly in November. However, you know, deploying these at a volume scale, it's a complicated thing that they've got to work through. And so when we talk about when exactly the linear ramp starts, we're confident it's going to happen in fiscal 25, but we can't necessarily pinpoint which quarter.
spk02: Understood. Thank you.
spk19: Our next question, please stand by, comes from the line of Tore Svanberg of Stifel. Please go ahead, Tore.
spk07: Yes, I just had a follow-up. So, Bill, I think you've said in the past that for the AEC business with AI, you're looking at sort of a 5 to 10x opportunity versus general compute. And I guess related to Quinn's question, you know, sort of the timing of how that plays out, is that five to ten primarily on the back-end side, or are you also starting to see the AECs contributing on the front-end side of the AI clusters?
spk05: Yeah, so I think generally as we talk about AI versus general compute, we're starting to think about it in terms of, you know, front-end networks and back-end networks. And so when we see a rack of AI appliances, of course, there's going to be a front-end network that looks very similar to what we see for general compute. And so to a certain extent, the way it plays out from a ratio perspective, serving the front-end network is really something that's common for both general compute and AI. You might see a larger number of general compute servers in a rack, so the per-rack front-end opportunity for general compute might be a little bit larger than AI. But just generally, when we think about the back-end networks, the network that is really connecting every GPU within a cluster, that's where we see the big increase in overall networking density. And Quinn earlier talked about the idea of having 48 connections to the back-end network, or 48 AECs within an AI appliance rack that are dedicated to the back-end network, versus, say, if it's a rack with eight appliances, there'd be eight AECs for the front end. So that's where we see, in an actual appliance rack, you know, we can talk about five to six times the volume. But then when we think about the switch racks that are part of that back-end network, there's also an additional opportunity there. And that's when we can think about the overall opportunity compared to front end being five to ten times the volume.
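As a rough illustration of the ratio described in this answer, the per-rack counts below use the 48 back-end and 8 front-end AECs per AI appliance rack mentioned above; the switch-rack contribution is a hypothetical number included only to show how the ratio can climb toward the cited 5 to 10x range.

```python
# Rough illustration of the front-end vs. back-end AEC ratio described above.
# The 48 back-end and 8 front-end cables per appliance rack come from the discussion;
# the switch-rack contribution is a hypothetical value for illustration only.

front_end_aecs_per_appliance_rack = 8
back_end_aecs_per_appliance_rack = 48
hypothetical_switch_rack_aecs = 16  # assumption: additional back-end cables in switch racks

appliance_rack_ratio = back_end_aecs_per_appliance_rack / front_end_aecs_per_appliance_rack
overall_ratio = (back_end_aecs_per_appliance_rack + hypothetical_switch_rack_aecs) \
    / front_end_aecs_per_appliance_rack

print(f"Appliance rack alone: {appliance_rack_ratio:.0f}x the front-end volume")  # 6x
print(f"Including switch racks (illustrative): {overall_ratio:.0f}x")             # 8x, within the cited 5-10x range
```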
spk07: That's very helpful. That's my last question. Um, and I have to ask you this question, just given, you know, your strong SerDes IP, but as it relates to the chiplet market, obviously, you know, the CPU market is the first to embrace that, but are you starting to see the GPU market moving in the direction of chiplets as well, or is it just way too early for that?
spk05: I think that, you know, the standard that Intel has been promoting, the UCIe standard, I think that is going to be a big market for chiplets. You know, and that, for us, ties in closely with the efforts that we're making on PCIe. And so, you know, one thing I would note is that the acceleration in speeds is happening really across the board. And so we've been targeting the 64-gig PAM4 PCIe Gen 6 and CXL 3.0 market, but I also see an acceleration for the next generation, 128 gig. And so that's very much part of what's happening with this explosion in the AI market, this need for faster and faster speeds. And so I think that you're going to see the same type of thing that's happened in Ethernet. You're going to see that happen with PCIe. And at OCP this year, we presented kind of a vision piece on the possibility of CXL and PCIe being the protocol for back-end network connectivity, as well as an expansion in front-end networks. So there's really exciting things coming in the future as we see that standard accelerating.
spk14: Great. Thank you so much.
spk19: Thank you. There are no further questions at this time. Mr. Brennan, I turn the call back over to you.
spk05: Thank you very much for the questions. We really appreciate the participation and we look forward to following up on the callbacks. Thank you.
spk19: This concludes today's conference call.
spk02: You may now disconnect. you Thank you. Thank you. Thank you.
spk19: Ladies and gentlemen, thank you for standing by. At this time, all participants are in a listen-only mode. Later, we will conduct a question and answer session. At that time, if you have a question, you will need to press star 11 on your push button phone. I would now like to turn the conference over to Dan O'Neill. Please go ahead, sir.
spk08: Good afternoon, and thank you all for joining us on our fiscal 2024 Second Quarter Earnings Call. Today I am joined by Credo's Chief Executive Officer, Bill Brennan, and Chief Financial Officer, Dan Fleming. I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. Given these risks, uncertainties, and assumptions, The forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial performance prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and how reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed using the investor relations portion of our website. I will now turn the call over to our CEO, Bill.
spk05: Thank you, Dan. Welcome to everyone joining our Q2 fiscal 24 earnings call. I'll start with an overview of our fiscal Q2 results. I'll then discuss our views on our outlook. After my remarks, our CFO, Dan Fleming, will provide a detailed review of our Q2 financial results and share the outlook for the third fiscal quarter. We will then be happy to take questions. For the second quarter, Credo reported revenue of $44 million and non-GAAP gross margin of 59.9%. Our Q2 results and our future growth expectations are driven by the accelerating market opportunity for high-speed and energy-efficient connectivity solutions. We target port speeds up to 1.6 terabits per second with solutions including active electrical cables or AECs, optical DSPs, laser drivers and TIAs, line cart PHYs, SERDES chiplets, and SERDES IP licensing, enabling us to address a broad spectrum of connectivity needs throughout the digital infrastructure market. Each of these solutions leverage our core SERDES technology and our unique customer-focused design approach. As a result, Credo delivers application-specific, high-speed solutions with optimized energy efficiency and system costs. And our advantage expands as the market moves to 100 gig per lane speeds. Within the data center market today, we've seen a dramatically increasing demand for higher bandwidth, higher density, and more energy efficient networking. This demand is driven by the proliferation of generative AI applications. For the past several years, Credo has been collaborating with our customers on leading edge AI platforms that are now in various stages of ramping production. In fact, the majority of Credo revenue will be driven by AI applications for the foreseeable future. Now, I'll review our overall business in more detail. First, I'll discuss our optical business. I'm pleased with the traction we've been gaining in this market. In the quarter, we continued shipping to multiple global hyperscale end customers, and we're making progress in positioning Credo to add additional hyperscale end customers in the upcoming quarters. targeting 400-gig and 800-gig applications. Credo also has optical design movements in various stages with module customers and networking OEMs for the fiber channel market and with service providers for 5G infrastructure deployments. Credo plays a disruptive role in the optical DSP market. Our fundamental CERTI's technology is leveraged to provide a compelling combination of performance, energy efficiency, and system costs Additionally, we focus on solving our customers' problems and market challenges through engineering innovation. At the OFC Optical Conference in March of this year, there was an important call to action to address the unsustainable power and cost increases for optical modules in the 800 gig and 1.6T generations. Much industry discussion has ensued this year, especially related to the plausibility of the linear pluggable optics architecture or LPO, also sometimes referred to as linear direct drive. The LPO architecture is based on eliminating all optical DSP functionality. The industry has widely concluded that the LPO architecture will not be feasible for a material percentage of the optical module market, and that DSP functionality is critical to maintaining industry standards and interoperability, as well as achieving the bit error rate performance necessary for high yields in volume production. However, this does not mean that the industry call to action will be unanswered. 
Credo's response following OFC was to look at innovative ways to drastically reduce DSP power and subsequently cost through architectural innovation. Today, Credo issued a press release introducing our linear receive optical, or LRO DSPs. Our LRO DSP products provide optimized DSP capability in the optical transmit path only, and eliminate the DSP functionality in the optical receive path. This innovative architecture, as optimized by Credo, effectively reduces the optical DSP power by up to 50%, and at the same time lowers cost by eliminating unneeded circuitry. Our LRO products address the pitfalls of the LPO architecture by maintaining standards and enabling interoperability among many components of an optical system. And the DSP functionality maintains the equalization performance that's critical to high yields and volume production. We've already shipped our Dove 850 800 gig LRO DSP device and evaluation boards to our lead optical and hyperscale end customers for their development and testing. While any revenue ramp will be a ways out, I view this innovation as the latest example of Credo pioneering a new product category that directly addresses the energy and system cost challenges faced by the hyperscalers, especially for AI deployments. Regarding our AEC solutions, Credo continues to be an AEC market leader. While our initial success in our AEC business has been connecting front-to-end data center networks for general compute and AI appliances, we have seen an expansion in our AEC opportunity in the backend networks that are fundamental to AI cluster deployments. Due to the sheer bandwidth required by back-end networks and acceleration in single-lane speeds and networking density is driving the need for AECs, given the significant benefits compared to both passive copper cables and to active optical cables, or AOCs, for in-rack connectivity. We continue to make progress with our first two hyperscale customers for both front-end and back-end networks. And we're especially encouraged to see Credo AECs prominently featured in the leading edge deployments introduced at their respective conferences in November. Years in the making, we continue to maintain strong and close working relationships with our customers. And I'm pleased to say that in Q2, we made our initial shipments of 800 gig production AECs and industry first. And again, we've demonstrated our market leadership. We also continue to expand our hyperscale customer base with one in qual with 400 gig AEC solutions and another in development with 800 gig AEC solutions. Additionally, we've seen the increased need for 400 gig and 800 gig AECs among tier two data center operators and service providers. As a group, these customers contribute meaningful revenue to Credo. I'll also highlight one of Credo's announcements at the recent Open Compute Conference in October. Credo announced the P3, Plugable Patch Panel System, a multi-tool that enables service providers and hyperscalers the freedom by using the P3 and AECs to decouple pluggable optics from core switching and routing hardware. The combination of the P3 and AECs enable network architects to optimize for power distribution and system cost as well as to bridge varying speeds between switching and optical ports. We're engaged with several customers and believe the efforts will result in meaningful revenue in the future. 
To sum up, we remain confident that the increasing demand for greater networking bandwidth driven by AI applications, combined with the extraordinary value proposition of our AEC solutions, will drive continued AEC market expansion. Now, regarding our LineCard 5 business, Credo is an established market leader with our LineCard 5 solutions, which include retimers, gearboxes, and MACSEC 5s for data encryption. Our overall value proposition becomes even more compelling as the market is now accelerating to 100 gig per lane deployments. According to our customer base, Credo's competitive advantage in this market segment derives from the common thread across all of our product lines, which is leading performance and signal integrity that is optimized for energy efficiency and system costs. We're building momentum and winning design commitments for our Screaming Eagle 1.6T PHYs and for our customer-sponsored next-generation 1.6T MACSEC PHY. We remain excited about the prospects for this business with networking OEMs and hyperscale customers. Regarding our SERTI's IP licensing and SERTI's chiplet businesses, Credo's SERTI's IP licensing business remains a strategically important part of our business. We have a complete portfolio of SERDES-IP solutions that span a range of speeds, reach distances, and applications with process nodes from 28 nanometer to 4 nanometer. And our initial 3 nanometer SERDES-IP for 112 gig and 224 gig is in FAB now. During Q2, we secured several licensing wins across networking data center applications. Our wins include new and recurring customers, a testament to our team's execution in contributing to our customers' success. We're also enthusiastic about the prospects for our chiplet solutions. During Q2, we secured a next-generation 112-gig, 4-nanometer CERTI chiplet win that includes customer sponsorship. Credo is aligned with industry expectations that chiplets will play an important role in the highest performance designs in the future. In conclusion, Credo delivered strong fiscal Q2 results. We remain enthusiastic about our business given the market demand for dramatically increasing bandwidth. This plays directly to Credo's strengths, and we're one of the few companies that can provide the necessary breadth of connectivity solutions at the highest speeds while also optimizing for energy efficiency and system costs. As we embark on second half fiscal 24, we expect continued growth that supports a more diversified customer base across a diversified range of connectivity solutions. Lastly, I'm pleased to announce that yesterday, Credo published our first ESG report, which can be found on our website. As reiterated several times today in my comments, energy efficiency is built into our DNA and is a key part of our report. We aspire to be leaders across the ESG spectrum, and we strive to help enable our customers to be leaders as well. I'm very pleased with how Credo is pursuing our goals, and we look forward to continuing our positive ESG efforts. At this time, Dan Fleming, our CFO, will provide additional financial details. Thank you.
spk11: Thank you, Bill, and good afternoon. I will first review our Q2 results and then discuss our outlook for Q3 of fiscal 24. As a reminder, the following financials will be discussed on a non-GAAP basis unless otherwise noted. In Q2, we reported revenue of $44 million, up 25% sequentially and down 14% year-over-year. Our IP business generated $7.4 million of revenue in Q2, up 165% sequentially and up 125% year-over-year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables tied to preexisting or new contracts. While the mix of IP and product revenue will vary in any given quarter, our revenue mix in Q2 was 17% IP, above our long-term expectation for IP, which is 10% to 15% of revenue. We expect IP as a percentage of revenue to be within our long-term expectations for fiscal 24. Our product business generated $36.7 million of revenue in Q2, up 13% sequentially and down 24% year-over-year. Our top three end customers were each greater than 10% of our revenue in Q2. In fact, our top four end customers each represented a different product line, which illustrates the increasing diversity of our revenue base. Our team delivered Q2 gross margin of 59.9%, at the high end of our guidance range and up 10 basis points sequentially. Our IP gross margin generally hovers near 100% and was 95.6% in Q2. Our product gross margin was 52.7% in the quarter, down 405 basis points sequentially due to product mix and some minor inventory-related items, and up 39 basis points year over year. Total operating expenses in the second quarter were $27.1 million, at the low end of our guidance range, down 1% sequentially and up 9% year over year. Our year-over-year OpEx increase was a result of an 11% increase in R&D as we continue to invest in the resources to deliver innovative solutions. Our SG&A was up 5% year over year. Our operating loss was $731,000 in Q2, compared to operating income of $3.2 million a year ago. The second quarter operating loss represented a sequential improvement of $5.7 million. Our operating margin was negative 1.7% in the quarter, compared to positive 6.1% last year, due to reduced top-line leverage. We reported net income of $1.2 million in Q2, compared to net income of $2.2 million last year. Cash flow from operations in the second quarter was $5 million, an increase of $3.3 million year over year, due largely to a net reduction of inventory of $5 million in the quarter. CapEx was $2 million in the quarter, driven by R&D equipment spending, and free cash flow was $3 million, an increase of $6.9 million year over year. We ended the quarter with cash and equivalents of $240.5 million, an increase of $2.9 million from the first quarter. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our accounts receivable balance increased 17% sequentially to $32.7 million, while days sales outstanding decreased to 68 days, down from 73 days in Q1. Our Q2 ending inventory was $35.8 million, down $5 million sequentially. Now, turning to our guidance, we currently expect revenue in Q3 of fiscal 24 to be between $51 million and $53 million, up 18% sequentially at the midpoint. We expect Q3 gross margin to be within a range of 59% to 61%. We expect Q3 operating expenses to be between $28 million and $30 million. And we expect Q3 diluted weighted average share count to be approximately 166 million shares.
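For reference, a back-of-the-envelope check on two of the figures above (a sketch only; it assumes a roughly 91-day quarter for the days-sales-outstanding convention, which the call does not specify): the Q3 guidance midpoint is ($51M + $53M) / 2 = $52M, and ($52M - $44M) / $44M ≈ 18% sequential growth; DSO ≈ (accounts receivable / quarterly revenue) × 91 days = ($32.7M / $44M) × 91 ≈ 68 days, consistent with the reported figure.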
We are pleased to see fiscal year 24 continue to play out as expected. While we see some near-term upside to our prior expectations, the rapid shift to AI workloads has driven new and broad-based customer engagement. We expect that this rapid shift will enable us to diversify our revenue throughout fiscal year 24 and beyond, as Bill alluded to. However, as new programs at new and existing customers ramp, we remain conservative with regard to the upcoming quarters as we continue to gain better visibility into forecasts at our ramping customers. In summary, as we move forward through fiscal year 24, we expect sequential revenue growth, expanding gross margins due to increasing scale and improving product mix, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage in the coming quarters. And with that, I will open it up for questions.
spk19: At this time, I would like to remind everyone, in order to ask a question, press star, then the number 11 on your telephone keypad. We'll pause for just a moment to compile the Q&A roster.
spk02: Our first question comes from the line of Toshiya Hari of Goldman Sachs.
spk03: Hi, good afternoon. Thank you so much for the question. I had two questions. First one on the revenue outlook. I just wanted to clarify, Dan, I think you mentioned sequential growth throughout the fiscal year. So April, I'm assuming, is up sequentially. I guess that's the first part. And then the second part, as you think about calendar 24, Bill, you gave quite a bit of color by product line at a high level of the outlook. It sounds pretty constructive across AEC and your optical business, and I guess your SERDES business as well. But if you can, you know, try to quantify the growth that you're expecting into calendar 24 and what the top three key drivers are, that would be helpful. Thank you.
spk11: Yeah, so with regard to fiscal 24 on your first question, you know, generally speaking, we're very pleased with our quarterly sequential growth this year. And as we stated in our prepared remarks, you know, our Q3 guide at the midpoint is up 18%, to $52 million. But as we stated on our call previously, you know, we expect modest top-line growth from fiscal year 23 to 24. So the key takeaway there is there's no change in our overall expectation for fiscal year 24.
spk05: And for the second question, I would just reiterate what Dan has said. As we look at our fiscal 24, it's playing out very much like we expected. So really, no change there. We expect, I think, what should be considered fast sequential growth. And it's been driven by multiple factors, AECs, optical, chiplets. Really, we're firing on all cylinders.
spk03: And Bill, sorry if I wasn't clear, calendar 24, fiscal 25, I realize it's early and you've got many moving parts, but based on customer engagements, all the color you provided across product lines, how are you thinking about the overall business into next year?
spk11: We're not providing any formal guidance right now at this point for fiscal year 25. However, as you can imagine, we do expect meaningful growth based on all the customer engagements that we have. And as Bill mentioned, we continue to have lots of irons in the fire. But as we've stated, it takes a long time to turn a lot of these engagements into meaningful revenue, which will happen throughout the course of the year.
spk03: Okay, got it. And then as my follow-up on gross margins, as you noted in your remarks, Dan, I think your product gross margins were down sequentially in the October quarter, off a really high base in July. But curious what drove the sequential decline there? And then as you look ahead, I think you talked about gross margins expanding over the next couple of quarters, I think you said. And what are the drivers there? And if you can speak to foundry costs, you know, potentially going from a headwind to something more neutral into calendar 24, and how the diversification of your customer base helps your gross margins going forward, that would be helpful. Thank you.
spk11: Yeah, so there was a lot to that question. Generally speaking, as you correctly note, our Q2 product gross margin was down sequentially from Q1, and if you recall, Q1 was up substantially, 700 basis points from Q4. It's kind of easy to read probably too much into these movements quarter over quarter at the scale that we're at right now, because there are slight product mix changes from quarter to quarter. In Q2, we also had some very minor inventory-related items that impacted product gross margin. But, you know, the most important thing is that there's no change to our long-term expectation. Our gross margin expectation over the upcoming years is to expand to the 63% to 65% range. And from fiscal 23 to 24, you're seeing that play out, although it's not quite linear from quarter to quarter. And that will continue to play out through next year as well.
spk19: Thank you. Our next question comes from the line of Tom O'Malley of Barclays.
spk18: Hey, guys. Good afternoon, and thanks for taking my question. I just wanted to clarify something you said on the call. You guys have talked previously about two customers that you're ramping with AEC. You talked about one customer in qualification with 400G and one in development with 800G. I just wanted to make sure you're still referring to, you know, processes that you've talked about before, or are those new developments that you guys are talking about? Thank you.
spk05: I think we've alluded to those developments in the past, but, you know, these are additional hyperscale customers. So the first two that we've got, November was kind of a big month. Both of them had shows. So Microsoft Ignite really prominently displayed their Maia AI appliance and rack, and you see the Credo AECs prominently displayed as part of that rack. So that's really something we've messaged in the past, and now it's been publicly announced and shown. And also, Amazon is having the re:Invent conference right now as we speak. And if you look at the demos on the show floor, you'll see our 50 gig and 100 gig per lane products as part of those demonstrations. And so for the two additional, you know, one we're in qual with, and we're expecting qualification to be completed sometime in the upcoming quarter, maybe give or take a month or so. And then the other one is more of a long-term project plan, as we're putting together an 800-gig customer-specific solution for another hyperscaler.
spk18: Super helpful. And then just on the optical side, you guys had previously talked about a new 400G customer. Is the upside in the near term the beginning of that ramp, or are you just seeing additional traction from customers you've talked about in the past? I know there were some Chinese customers that you were looking to get back into revenue. Can you just help me understand where the strength you're seeing in the optical DSP side is coming from?
spk05: Yeah, so generally we continue to ramp with the partner that we're engaged with serving the U.S. hyperscaler. So that ramp is going to happen for the next several quarters. We're also seeing, you know, further signs of life in our customer base in China. And so we've actually got demand that we're seeing from three or four hyperscalers in China. As far as the new U.S. hyperscaler that we've talked about, really that is not baked into any of the numbers that we've talked about. And so, you know, if we can ultimately close that, we expect that will impact revenues in the fiscal 25 timeframe.
spk19: Thank you. Our next question comes from the line of Tore Svanberg of Stifel.
spk02: Tore Svanberg, your line is open. Please go ahead. Please make sure your line isn't muted. And if you're on a speakerphone, lift your handset.
spk06: Yes, can you hear me?
spk19: Yes, sir. Please proceed.
spk07: Yes, sorry about that. Yeah, Bill, my first question was on the tweener slash LRO product that you just announced this afternoon. You did say that, you know, this is something that should generate revenues longer term. But I think the market is also very, very hungry for lower costs near term. So what kind of timeframe are we looking at here as far as when that product could be in production?
spk05: So I think that the first message is that we've shipped samples that are going to be built into modules. We've shipped eval boards that are going to be thoroughly tested by our lead hyperscale customer. And so T0 is really now. And so the typical development time for an optical module is on the order of 12 months to get to production. And that's really based on building and qualifying the module and then going through qualification with the hyperscale end customer. And so as we look at kind of best case scenario, we're talking about something on the order of 12 months from now. So it could impact our fiscal 25.
spk07: That's very helpful. And as my follow-up, I know the first half of the year, there were still some headwinds, obviously, from your largest customer, inventory digestion on the compute side. I'm just wondering, you know, is that now, as we look at the January quarter, is that headwind completely behind you, or is there still some lingering effects there?
spk05: Well, I think as we think about, you know, the front-end networks at this lead customer of ours, the application is general compute as well as AI. And so, of course, both of these applications are kind of contributing to the digestion of the inventory that was built up as a result of the pivot earlier in the year. And so as we look at fiscal 24, I think we've got good visibility. And exactly when it turns back on, I think we're still being conservative, in the sense that we've got to wait for that to really develop in our fiscal 25.
spk07: Great. Thank you very much.
spk05: Thanks, Torrey.
spk19: Thank you. Stand by for our next question, which comes from the line of Karl Ackerman of BNP Paribas.
spk10: Yes, thank you, gentlemen. Two questions, if I may. The first question is a follow-up from the previous one, but you are introducing LRO solutions today to address both DSP-based and non-DSP-based optical links. How do you see the adoption of non-DSP-based solutions for back-end network connections in calendar 24? And as you address that question, I guess why not introduce an AEC solution for back-end networks?
spk20: Let me take the first part of that question.
spk05: Really, the two solutions that we've got for optical are what we might call a full DSP, which is kind of the traditional approach where there's a DSP on the transmit path as well as the receive path on a given optical link. That activity is going to continue. The product that we really announced today was eliminating the DSP on the receive path and having it on the transmit path only. And so you might say that that would be half of the DSP on a typical optical link. And so those are really the two solutions that we're promoting. We believe that completely eliminating the DSP is really not something that's going to play out in a big way. Analysts have been out front saying that they don't see it ever being more than 10% of the market if it achieves that level. So you'd have to have a very tight control over the entire link to be able to manage that. And that's just not the typical scenario in the market today. Typically, people are putting together various solutions, and interoperability is really the key, as well as troubleshooting and ultimately yielding in production. Second part of your question was regarding AECs, and we are absolutely building AECs for back-end networks. And the AECs are really covering in-rack, you know, three-meter or less solutions. There are also rack-to-rack connections, and those are all optical connections, whether they're AOCs or transceivers. And especially in that, you know, in that situation for you know, rack-to-rack connectivity within a cluster, that's where we really believe that the LRO DSP is going to be highly applicable and really quite valuable to customers.
spk10: Thanks for that. For my follow-up, I want to pivot to your IP business. This is primarily tied to data center today, or at least a data center-focused application. But over time, the idea is that as PAM3 ramps, it will transition more toward consumer applications. How do you expect the end-market mix of your IP business to transition toward consumer over the next few quarters? Thank you.
spk05: So as we look at our IP business, you know, primarily today it's Ethernet. We've talked about one large consumer license that we've engaged on for consumer, and that's moving to 40 gig PAM3 for the CIO 80 license, or 80 gigabits per second, two lanes of 40 gig, for that market. And that market is going to be out sometime in the future, probably on the order of two to three years before that ramps production. I don't expect it to be a big part of our IP business long term. I expect that our Ethernet IP business will continue very strongly, and I also believe that, from a PCIe perspective, we'll be able to talk about that as we bring our 64-gig and 128-gig solutions to market.
spk19: Thank you. Our next question, please stand by, comes from the line of Vijay Rakesh of Mizuho. Please go ahead, Vijay.
spk15: Yeah, hey, Bill and Dan. Just on the P3, the pluggable patch panel solution. Is that including the AEC, and are all the three customers using it, or how do you see that ramping, I guess?
spk05: Yeah, so you broke up a little on the line, but I'll answer the question by saying that this P3 was something that was developed in conjunction with a leading service provider. So they spoke about their challenges as they were connecting ZR optics to routers or switch ports. And so, you know, this was really developed with them and their application in mind, also knowing that, in developing this solution, it would become a multi-tool in a sense, able to solve, you know, different networking problems associated with power and cooling and control plane access. And so our lead customer is a service provider, but we're seeing that there's also applications where this really fits well, when switch and router port speeds are different from the optic speeds that a customer wants to use. So a customer could connect 800-gig ZR optics with 400-gig switch ports, or vice versa. They could move to the fastest switches, 800-gig ports, but still use 400-gig ZR. So in a sense, this P3 system can gearbox and really seamlessly connect different speed optics with different speed ports on routers and switches. Also, from a thermal distribution standpoint, this is a really useful tool, in a sense, because some customers want to use lower-cost, smaller switches that lack the power and cooling envelope for advanced DR optics, so you would have a lot of stranded ports. So in a sense, you can take that thermal management away from the switch. And so, you know, there's multiple applications. We introduced this at OCP, and in putting out a multi-tool like this that, you know, basically enables optics to be connected directly with AECs as a different type of solution, we were surprised at some of the great ideas that the engineers who came by our booth at OCP came up with. So generally, when we think about this product, we think about it in terms of a combination of the P3 and AECs. So we developed the P3 to basically be a catalyst for more AEC demand.
spk15: Got it. And so in better utilizing the stranded ports, I guess, does the P3 with the AEC actually double your content on the server top of the rack or...?
spk05: You know, it's hard to say. I don't think there's a relative reference point on content. You know, these are new applications, and, you know, with our lead customer, we think that the content can be significant. But the nice thing is this is really an application, you know, as we prove out our lead customer, this is one that, you know, many service providers we think will pick up.
spk15: Got it. And then the last question, on your 10% customers, how many were there in the quarter? And if you were to look out, let's say exiting fiscal or calendar 25, any thoughts on how many 10% customers do you think you would be working on?
spk11: Yeah, so for Q2, we had, as you'll see when our queue is filed, we had three 10% end customers. Recall last quarter, we added an additional disclosure to show end customers. So three, you'll see the largest one was 29%. Generally we don't disclose who our 10% customers are, but obviously the 29% one was Microsoft. Most importantly, we continue to expand our customer base throughout the year. One of the customers, one of those three end customers is a new end customer, as you'll see in our disclosure. So it's hard to answer the latter part of your question, how many we'll have at the end of the year, but I would guess you know, maybe four.
spk19: Thank you. Please stand by for our next question. Our next question comes from the line of Suji Da Silva of Roth MKM.
spk17: Hi, Bill. Hi, Dan. My question is on the competitive landscape. I'm wondering what you're seeing in the chip-based AEC efforts, chip plus cable guys competing with you. Are you guys able to provide a faster time to market? Is that one of the reasons you're in some of these demo racks, perhaps, and maybe you can talk about the share you might think you'd be having in the AEC market versus the size? Thanks.
spk05: I think we've been consistent in saying that we don't expect to maintain 100% of the AEC market. And we do see competitors, as this product category becomes really more and more established as a de facto way of making short in-rack connections, We do see more competitors. The way that we're organized, for sure, we're going to be able to deliver better time to market. And what we're seeing is that for the high-volume applications, customers are asking for special features, special functions. And fundamentally, we are responsible for working. I mean, our company, although we're a chip company, I've built a system organization for AECs. And so, you know, we're the ones that are working directly with the hyperscalers. We're the ones having daily conversations when crunch time comes. And so, for sure, we've got a time-to-market advantage. And so I think the way that this will play out, I think that our market share, you know, will ultimately play out. And, you know, I hope that – you know, that we maintain more than 50% long-term. And I think that's a function of being first. That's a function of having a model that delivers just a better experience with hyperscale customers directly. Okay.
spk17: All right. Thanks, Bill. And then my other question is on the customer base and where they are in the racks. You talked about Amazon and Microsoft demoing the racks. And they seem like they're a little bit ahead of the rest of the customer base, but perhaps you can clarify that. And if so... Are the other folks really close behind them, or do those guys have maybe a substantial technical lead just trying to figure out how the customers may waterfall in for you?
spk05: Yeah, I think from a timing standpoint, I would expect the third customer would probably ramp in the upcoming two to three quarters. It takes time for these new platforms to be deployed. And then the fourth customer would be following that by a number of quarters. So I think it's I think it's one where the first two customers, of course, you know, the architectures that they've decided to take to market, really each one of these customers is different in a sense. So I wouldn't say that they're necessarily ahead from a technology standpoint or, you know, it's just that they've chosen to move forward more quickly than the others.
spk19: Thank you. Our next question comes from the line of Richard Shannon of Craig-Hallum.
spk02: Richard, please make sure your line isn't muted.
spk19: And if you're on a speakerphone, lift your handset.
spk13: Can you hear me now?
spk19: Yes, sir. Please proceed.
spk13: All right. Great. Thanks. Dan, I have a question for you based on the comments in your prepared remarks. I'm not sure if I caught it correctly, but I think you said you had three 10% customers, and including your next largest one, the top four each were supporting a different product line. I think we can all guess what the first one is, but I wonder if you can delineate specifically which product lines each of the next three customers were primarily purchasing.
spk11: Yeah, it kind of covers the broad gamut of our product lines, actually. So obviously the largest one being Microsoft is AEC. But for a long time our line card PHY business has been strong, so you can assume that would be in there. Optical DSP, we have been gaining traction there, starting with Q1, as we described last quarter. And then our chiplet business, we described a bit last quarter as well. So that kind of covers all of the different product lines that are materially contributing at this point in time.
spk13: Okay. Since you didn't say it in your prepared remarks, and you have talked about it in this context in the past, you didn't say optical DSP was 10%, so should I assume that's where the fourth customer is at, or is one of the 10% customers a DSP customer?
spk11: Yeah, I mean, you could assume it's near that if it's not at that, you know, being where it is. And what we've said, you know, we haven't changed our expectation there. We expect for next fiscal year, our target is to be at 10% or more of revenue for optical DSP. And as our first production ramp is occurring with a large hyperscaler, you might expect that we'd have a quarter or two this year where it trips 10%, based upon their build schedule. Okay.
spk13: All right. Fair enough. Thanks for that characterization. I guess my second question is on product gross margins. We've had a couple of quarters of, I guess, somewhat volatile results, but I think you're still talking directionally upwards over time here. Maybe specifically on the product gross margins here, with the growth in AECs, is it fair to think that product line's gross margins have continued to grow, and have they been somewhat steady, or is the volatility coming from that line?
spk11: Yeah, I would expect all of our, over the long term, most of our product lines will grow a bit in gross margin, really due to increasing scale. You know, that had been a large part of our story last year, last fiscal year. With the Microsoft reset this year, fluctuations in gross margin have really been more about product mix as opposed to scale. Although now that we're approaching a point where we'll be exiting the year, at record levels of revenue, that scale factor will come in again. So I would expect some uplift in AECs as well as kind of really across the board as we stay on target to achieve that 63% to 65% overall gross margin.
spk19: Thank you. Our next question. comes from the line of Quinn Bolton of Needham & Company.
spk04: Thanks for taking my question. I guess I wanted to follow up on your comments about both Microsoft Ignite and the re:Invent conference for Amazon. You talked about the Maia 100 accelerator racks. I think in the Microsoft blog, there were certainly lots of purple cables, so it's great to see, but can you give us some sense, in that Maia 100 rack, what we're talking about? You know, as many as 48, you know, multi-hundred gig AECs for the back-end network, as well as, you know, a number of lower speed for the front-end network? And then for re:Invent, is Amazon looking at similar architectures? Or, you know, can you just give us some sense of what the AEC content might look like in some of those AI racks?
spk05: Yeah, so on the Maya platform, I think you've got it absolutely right that the back-end network is comprised of 800 gig or 100 gig per lane AECs. The front-end network is also connected with Credo AECs, and those are lower speed. So you're right in terms of the number total in the rack, and you can kind of visually see that when they introduced that as part of the keynote. I would say that for Amazon, they're also utilizing Credo AECs for front-end connections as well as back-end. And so I think just the nature of those two different types of networks, there's going to be some strong similarities between the architectures.
spk04: And Bill, I think in the past you had talked about some of these AI applications, and I think you're referring to the back-end networks here, might not ramp until kind of late fiscal 24 and then maybe not until fiscal 25. It sounds like, at least in the Microsoft announcement, that they may be starting to ship these racks as early as kind of early next year. And so I'm kind of wondering, could you give us an update? When do you think you see volume revenue from AECs in the back-end networks? Could that be over the next couple of quarters, or do you still think it may be, you know, further out than that?
spk05: Well, I think that, you know, it's playing out the way that we've expected. And we've spoken about this on earlier calls, that in our fiscal 24, you know, the types of, you know, volume or revenue that we've built into the model is really based on, you know, qualifications, small pilot types of builds. So it's meaningful, but not necessarily what you would expect to see from, you know, a production ramp. And so as we look out into fiscal 25, you know, we still are, you know, being somewhat conservative about when exactly these are going to ramp. And so it was nice to see all of these things talked about publicly in November. However, you know, deploying these at a volume scale, it's a complicated thing that they've got to work through. And so when we talk about when exactly the linear ramp starts, we're confident it's going to happen in fiscal 25, but we can't necessarily pinpoint what quarter.
spk02: Understood. Thank you.
spk19: Our next question, please stand by, comes from the line of Tore Svanberg of Stifel. Please go ahead, Tore.
spk07: Yes, I just had a follow-up. So, Bill, I think you've said in the past that for the AEC business with AI, you're looking at sort of a 5 to 10x opportunity versus general compute. And I guess related to Quinn's question, you know, sort of the timing of how that plays out, is that five to ten primarily on the back-end side, or are you also starting to see it contributing on the front-end side of the AI clusters?
spk05: Yeah, so I think generally, as we talk about AI versus general compute, we're starting to think about it in terms of, you know, front-end networks and back-end networks. And so when we see a rack of AI appliances, of course there's going to be a front-end network that looks very similar to what we see for general compute. And so to a certain extent, the way it plays out from a ratio perspective, serving the front-end network is really something that's common for both general compute and AI. You might see a larger number of general compute servers in a rack, so you might say the per-rack front-end opportunity for general compute might be a little bit larger than AI. But just generally, when we think about the back-end networks, you know, the network that is really networking every GPU within a cluster, that's where we see the, you know, the big increase in overall networking density. And Quinn earlier talked about, you know, the idea of having 48 connections to the back-end network, or 48 AECs within an AI appliance rack that are dedicated to the back-end network, versus, say, if it's a rack with eight appliances, there'd be eight AECs for the front end. So that's where we see, in an actual appliance rack, you know, we can talk about five to six times the volume. But then when we think about the switch racks that are part of that back-end network, there's also an additional opportunity there. And that's when we can think about the overall opportunity compared to front end being five to ten times the volume.
spk07: That's very helpful. That's my last question. And I have to ask you this question, just given, you know, your strong SERDES IP, but as it relates to the chiplet market, obviously, you know, the CPU market is the first to embrace that, but are you starting to see the GPU market, uh, moving in the direction of chiplets as well, or is it just way too early for that?
spk05: I think that, you know, the standard that Intel has been promoting, the UCIe standard, I think that that is going to be a big market for chiplets. You know, and that, for us, ties in closely with the efforts that we're making on PCIe. And so, you know, one thing I would note is that the acceleration in speeds is happening really across the board. And so we've been targeting the 64-gig PAM4 PCIe Gen 6, CXL 3 market, but I also see an acceleration for the next generation, 128 gig. And so that's very much part of what's happening with this explosion in the AI market, is this need for faster and faster speeds. And so I think that you're going to see the same type of thing that's happened in Ethernet. You're going to see that happen with PCIe. And at OCP this year, we had kind of a vision piece that we presented, with the possibility of CXL, you know, really, and PCIe possibly being the protocol for back-end network connectivity as well as an expansion in front-end networks. So there's really exciting things coming in the future as we see that standard accelerating.
spk14: Great. Thank you so much.
spk19: Thank you. There are no further questions at this time. Mr. Brennan, I turn the call back over to you.
spk05: Thank you very much for the questions. We really appreciate the participation and we look forward to following up on the callbacks. Thank you.
spk19: This concludes today's conference call. You may now disconnect.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
