Credo Technology Group Holding Ltd

Q4 2023 Earnings Conference Call

5/31/2023

spk13: Good day, and thank you for standing by. Welcome to the Credo Q4 fiscal year 2023 earnings conference call. At this time, all participants are in a listen-only mode. After the speaker's presentation, there will be a question and answer session. To ask a question during the session, you will need to press star 11 on your telephone. You will then hear an automated message advising your hand is raised. To withdraw your question, please press star 11 again. Please be advised that today's conference is being recorded. I would now like to go ahead and turn the call over to Dan O'Neill. Please go ahead.
spk03: Good afternoon, and thank you all for joining us today for our fiscal 2023 fourth quarter and year-ending earnings call. Joining me today from Credo are Bill Brennan, our Chief Executive Officer, and Dan Fleming, our Chief Financial Officer. I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. Given these risks, uncertainties, and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial performance prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed using the investor relations portion of our website. 
With that, I'll now turn the call over to our CEO, Bill.
spk09: Thanks, Dan, and good afternoon, everyone. Thank you for joining our Q4 fiscal 23 earnings call. I'll begin by providing an overview of our fiscal year 23 and fiscal Q4 results. I will then highlight what we see going forward in fiscal 24. Dan Fleming, our CFO, will follow my remarks with a detailed discussion of our Q4 and fiscal year 23 financial results and share our outlook for the first quarter. Credo is a high-speed connectivity company delivering integrated circuits, system-level solutions, and IP licenses to the hyperscale data center ecosystem, along with a range of other data centers and service providers. All our solutions leverage our core SerDes technology and our unique customer-focused design approach, enabling Credo to deliver optimized, secure, high-speed solutions with significantly better power efficiency and cost. Our electrical and optical connectivity solutions deliver leading performance with port speeds ranging from 50 gig up to 1.6 terabits per second. While we primarily serve the Ethernet market today, we continue to expand into other standards-based markets as the need for higher speed with more power-efficient connectivity increases exponentially. Credo continues to have significant growth expectations within the accelerating market opportunity for high-speed connectivity solutions. In fact, the onset of generative AI applications is already accelerating the need for higher speed and more energy-efficient connectivity solutions. And this is where Credo excels. I'll start with comments on our fiscal 2023 results. Today, Credo is reporting results from our first full fiscal year as a public company. In fiscal 23, Credo achieved just over $184 million in revenue, up 73% over fiscal 22, and we achieved non-GAAP gross margin of 58%. Product revenue increased 87% year over year, primarily due to the ramp of our active electrical cable solutions. License revenue grew 28% year over year, from $25 million to $32 million.
Throughout fiscal 23, we had several highlights across our product lines. For active electrical cables, or AECs, we continued to lead the market Credo pioneered during the last several years. Our team continued to quickly innovate with application-specific solutions, and we've been successful in expanding our engagements to include multiple data centers and service providers. Our customer-focused innovation has led to more than 20 different versions of AECs shipped for qualification or production in the last year, and we remain sole sourced in all our wins. And while our significant power advantage was a nice-to-have a couple years ago, it's increasingly becoming imperative as our hyperscaler customers are pushed to lower their carbon footprint. For optical DSPs, Credo continued to build momentum by successfully passing qualification for 200 gig and 400 gig solutions at multiple hyperscalers with multiple optical module partners. In addition, Credo introduced our 800 gig optical DSPs, laser drivers, and TIAs, and we announced our entry into the coherent optical DSP market. For line card PHYs, we continue to expand our market leadership. In particular, Credo built upon our position as the leader for MACsec PHYs with over 50% market share. We also extended our performance and power efficiency advantages for 100 gig per lane line card PHYs with the introduction of our Screaming Eagle family of retimers and gearboxes with up to 1.6 terabits per second of bandwidth. For IP licensing, we continue to build on our offering of highly optimized SerDes IP. In the year, we licensed SerDes IP in several process nodes from 4 nanometer to 28 nanometer, with speeds ranging from 28 gig to 112 gig, and reach performance ranging from XSR to LR. We believe our ability to innovate to deliver custom solutions remains unparalleled.
We maintain very close working relationships with hyperscalers, and we'll continue to collaborate with them to deliver solutions that are optimized to their needs. Despite recent macroeconomic headwinds in the data center industry, we believe the need for higher speed with better power efficiency will continue to grow. This plays perfectly to Credo's strengths, which is why we remain optimistic about our prospects in fiscal 24 and beyond. I will now discuss the fourth quarter more specifically. In Q4, we delivered revenue of $32.1 million and non-GAAP gross margin of 58%. I'll now provide an overview of key business trends for the quarter. First, regarding AEC, market forecasters continue to expect significant growth in this product category due to the benefits of AECs compared to both legacy direct-attached copper cables and compared to active optical cables, which are significantly higher power and higher cost. With our largest customer, we're encouraged by our development progress on several new AEC programs, including an acceleration in their first 100 gig per lane AI program where they intend to deploy Credo AECs. We saw the initial ramp of a second hyperscale customer, which we expect to grow meaningfully throughout the year. We're ramping 50 gig per lane NIC-to-ToR AEC solutions for both their AI and compute applications. And I'm happy to report that Credo has been awarded this customer's first 100 gig per lane program. We're also actively working to develop several other advanced AEC solutions for their next-generation deployments. We continue to make progress with additional customers as well. We remain in flight with two additional hyperscalers and are also engaged in meaningful opportunities with service providers. We've seen momentum building for AEC solutions across AI, compute, and switch applications, and we continue to expect to benefit as speeds move quickly to 100 gig per lane.
Regarding our progress on optical solutions, in the optical category, we've leveraged our SerDes technologies to deliver disruptive products, including DSPs, laser drivers, and TIAs for 50 gig through 800 gig port applications. We remain confident we can gain share over time due to our compelling combination of performance, power, and cost. In addition to the hyperscalers that have previously production-qualified Credo's optical DSPs, we started the production ramp of a 400-gig optical DSP for a U.S. hyperscaler as the end customer. At OFC in March, we received very positive feedback on our market solutions, including our Dove 800 products, as well as on our announcement to enter the 100-gig ZR coherent DSP market. We're well positioned to win at hyperscalers across a range of applications, including 200-gig, 400-gig, and 800-gig port speeds. We're also engaged in opportunities for Fibre Channel, 5G, OTN, and PON applications with optical partners, service providers, and networking OEMs. Within our line card PHY category, during the fourth quarter, we saw growing interest in our solutions, specifically for our Screaming Eagle 1.6 terabit per second PHYs. We've already been successful winning several design commitments from leading networking OEMs and ODMs for the Screaming Eagle devices. Credo was selected due to our combination of performance, signal integrity, power efficiency, and cost effectiveness. We also made significant development progress with our customer-sponsored next-generation 5-nanometer 1.6 terabit per second MACsec PHY, which we believe will extend our leadership well into the future for applications requiring encryption. Regarding our SerDes IP licensing and SerDes chiplet businesses, our IP deals in Q4 were primarily led by our 5 and 4 nanometer, 112 gig SerDes IP, which, according to customers, offers a significant power advantage versus competition based on our ability to power optimize to the reach of an application.
Our SerDes chiplet opportunity continues to progress. Our collaboration with Tesla on their Dojo supercomputer design is an example of how connectivity chiplets can enable advanced next-generation AI systems. We're working closely with customers and standards bodies, such as the UCIe consortium, to ensure we retain leadership as the chiplet market grows and matures. We believe the acceleration of AI solutions across the industry will continue to fuel our licensing and chiplet businesses. To sum up, the hyperscale landscape has shifted swiftly and dramatically in 2023. Compute is now facing a new horizon, which is generative AI. We expect this shift to accelerate the demand for energy-efficient connectivity solutions that perform at the highest speeds. From our viewpoint, this technology acceleration increases the degree of difficulty and will naturally slim the field of market participants. We remain confident that our technology innovation and market leadership will fuel our growth as these opportunities materialize. We expect to grow sequentially in Q1 and then continue with sequential quarterly revenue growth throughout fiscal 24. We believe our growth will be led by multiple customers across our range of connectivity solutions, which will result in a more diversified revenue base as we exit fiscal 24. I'll now turn the call over to our CFO, Dan Fleming, who will provide additional details. Thank you.
spk04: Thank you, Bill, and good afternoon. I will first provide a financial summary of our fiscal year 23, then review our Q4 results, and finally discuss our outlook for Q1 and fiscal 24. As a reminder, the following financials will be discussed on a non-GAAP basis, unless otherwise noted. Revenue for fiscal year 23 was a record at $184.2 million, up 73% year over year, driven by product revenue that grew by 87%. Gross margin for the year was 58.0%. Our operating margin improved by 13 percentage points, even as we grew our product revenue mix. This illustrates the leverage that we can produce in the business. We reported earnings per share of 5 cents, an 18-cent improvement over the prior year. Moving on to the fourth quarter, in Q4, we reported revenue of $32.1 million, down 41% sequentially and down 14% year-over-year. Our IP business generated $5.7 million of revenue in Q4, down 55% sequentially and down 49% year-over-year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables on preexisting contracts. While the mix of IP and product revenue will vary in any given quarter over time, our revenue mix in Q4 was 18% IP, above our long-term expectation for IP, which is 10% to 15% of revenue. We continue to expect IP as a percentage of revenue to come in above our long-term expectations for fiscal 24. Our product business generated $26.4 million of revenue in Q4, down 37% sequentially and flat year over year. Our team delivered Q4 gross margin of 58.2%, above the high end of our guidance range, and down 94 basis points sequentially due to lower IP contribution. Our IP gross margin generally hovers near 100% and was 97.4% in Q4. Our product gross margin was 49.7% in the quarter, up 245 basis points sequentially and up 167 basis points year-over-year due principally to product mix.
Total operating expenses in the fourth quarter were $27.2 million, within guidance, and up 6% sequentially and 25% year-over-year. Our year-over-year OPEX increase was a result of a 36% increase in R&D as we continue to invest in the resources to deliver innovative solutions. Our SG&A was up 12% year-over-year as we built out public company infrastructure. Our operating loss was $8.5 million in Q4, a decline of $10.7 million year over year. Our operating margin was negative 26.4% in the quarter, a decline of 32.2 percentage points year over year due to reduced top-line leverage. We reported a net loss of $5.7 million in Q4, $8.3 million below last year. Cash flow used by operations in the fourth quarter was $11.8 million, a decrease of $14.2 million year over year, due largely to our net loss and changes in working capital. CapEx was $3.9 million in the quarter, driven by R&D equipment spending. And free cash flow was negative $15.7 million, a decrease of $8.4 million year over year. We ended the quarter with cash and equivalents of $217.8 million, a decrease of $15.2 million from the third quarter. This decrease in cash was a result of our net loss and the investments required to grow the business. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer for uncertain macroeconomic conditions. Our accounts receivable balance increased by 14.6% sequentially to $49.5 million, while days sales outstanding increased to 140 days, up from 72 days in Q3 due to lower revenue. Our Q4 ending inventory was $46.0 million, down $4.3 million sequentially. Now, turning to our guidance. We currently expect revenue in Q1 of fiscal 24 to be between $33 million and $35 million, up 6% sequentially at the midpoint. We expect Q1 gross margin to be within a range of 58% to 60%. We expect Q1 operating expenses to be between $26 million and $28 million.
We expect Q1 basic weighted average share count to be approximately 149 million shares. We feel we have moved through the bottom in the fourth quarter. While we see some near-term upside to our prior expectations, we remain cautious about the back half of our fiscal year due to uncertain macroeconomic conditions. In summary, as we move forward through fiscal year 24, we expect sequential revenue growth, expanding gross margins due to increasing scale, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage as we exit the year. And with that, I'll open it up for questions. Thank you.
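The receivables commentary in the CFO remarks above can be cross-checked with the standard days-sales-outstanding formula. A minimal sketch, assuming a roughly 91-day quarter and backing the Q3 figures out of the sequential changes quoted on the call (the function name is illustrative, not a Credo disclosure):

```python
# Cross-check of the DSO (days sales outstanding) figures quoted on the call.
# Assumes the standard formula DSO = accounts receivable / quarterly revenue * days,
# with ~91 days per quarter. Q3 values are derived from the sequential changes cited.

def dso(receivables_m: float, quarterly_revenue_m: float, days: int = 91) -> float:
    """Days sales outstanding for one quarter (inputs in $ millions)."""
    return receivables_m / quarterly_revenue_m * days

q4_revenue = 32.1                        # $M, reported Q4 revenue
q4_receivables = 49.5                    # $M, reported Q4 accounts receivable
q3_receivables = q4_receivables / 1.146  # AR rose 14.6% sequentially
q3_revenue = q4_revenue / (1 - 0.41)     # Q4 revenue was down 41% sequentially

print(round(dso(q4_receivables, q4_revenue)))  # 140 days, matching the call
print(round(dso(q3_receivables, q3_revenue)))  # 72 days, matching the call
```

As the arithmetic shows, the jump from 72 to 140 days is driven almost entirely by the revenue denominator falling, not by a surge in receivables, which is consistent with Dan's "due to lower revenue" attribution.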
spk13: Thank you. As a reminder, if you have a question, please press star 11 on your telephone. As well, please wait for your name to be announced before you proceed with your question. One moment while we compile the Q&A roster. The first question that we have is coming from Tore Svanberg of Stifel. Your line is open.
spk01: Yes, thank you.
spk05: For my first question, in regards to the Q1 guidance, as far as what's driving the growth, you know, given your gross margin comment, I assume that, you know, AEC will probably continue to be down, with perhaps the growth coming from, you know, optical DSP and IP. Is that sort of the correct thinking, or if not, please correct me.
spk04: Hi, Tore. This is Dan. So you're correct in that if you look at the sequential increase in gross margin from Q3 to Q4, while our product revenue was down, that's really reflective of a favorable product mix, where AEC, as we all know, which is on the lower end of our margin profile, contributed less of the overall product mix. That trend will continue in Q1, and I would characterize that really as broadly across all of our other product lines, not really singling out one specific product line that's taking up the slack from AEC, so to speak.
spk05: Sounds good. And that's my follow-up question for you, Bill. With generative AI, as you mentioned in your script, you know, things are clearly changing. We're just hoping you could talk a little bit more granularly about how it impacts each business. I'm even thinking about sort of the 800 gig PAM4 cycle. I mean, is that getting pulled in? So, yeah, I mean, if you just give us a little bit more color on how, you know, generative AI could impact, you know, each of your four business units at this point. Thank you.
spk09: Sure. Sure. Absolutely. So I think generally, I think that AI applications will create revenue opportunities for us across our portfolio. I think the largest opportunity that we'll see is with AEC. However, optical DSPs, there will definitely be a big opportunity there. Even line card PHYs, chiplets, even SerDes IP licensing will get an uplift as AI deployments increase. So maybe I can start first with AECs. Now it's important to kind of identify the differences between traditional compute server racks, which, as kind of commonly referred to, you know, use the front-end network. So basically a NIC-to-ToR connection, the ToR up to the leaf and spine network. The, you know, the typical compute rack would have 10 to 20 AECs in rack, meaning in-rack connections from NIC to ToR. And, you know, highlight that leading-edge lane rates today for these connections with compute servers is 50 gig per lane. Within an AI cluster, in addition to the front-end network, which is similar, there's a back-end network referred to as the RDMA network. And that basically allows the AI appliances to be networked together within a cluster directly. And, you know, if we start going through the math, this back-end network has 5 to 10x the bandwidth of the front-end network. And so the other important thing is to note within these RDMA networks, there are leaf spine racks as well. And so if we look at one example of a customer that, you know, we're working with in deploying, the AI appliance rack itself will have a total of 56 AECs between the front-end and back-end networks. Each leaf spine rack is a Clos rack or a disaggregated chassis, which will have 256 AECs. And so when we look at it from an overall opportunity for AECs, this is a huge uplift in volume. The volume coincides with the bandwidth. Now, lane rates will quickly move. Certain applications will go forward at 50 gig per lane. Others will go straight to 100 gig per lane.
And so we see probably a 5x plus revenue opportunity difference between the typical, if you were to say apples to apples with the number of compute server racks versus an AI cluster. So to kind of extend into optical, there's typically a large number of AOCs in the same cluster. So you can imagine that the short in-rack connections are going to be done with AECs. These are three meters or less. But these appliances will connect to the back-end leaf spine racks, these disaggregated racks. All of those connections will be AOCs. Those are connections that are greater than three meters. And so if we look at this, this is all upside to, you know, say a traditional compute deployment where there's really no AOCs connecting rack to rack. Okay, so when we look at, you know, the overall opportunity, we think that the additional AEC opportunity within an AI cluster is probably twice as large, twice as many connections as AOCs, but the AOC opportunity for us will be significant in a sense that AOCs represent the most cost-sensitive portion of the optical market. And so it's also a lower technology hurdle since the optical connection is well-defined and it's within the cable. So this is a really natural spot for us to be disruptive in this market. We see some are planning on deploying with 400-gig AOCs. Others are planning to go straight to 800-gig AOCs. So we view AECs as the largest opportunity. Optical DSPs for sure will get an uplift in the overall opportunity set. But also I think that if we look at Tesla as an example, that's an example of where as they deploy, we're going to see really nice opportunity for our chiplets that we did for them for that Dojo supercomputer. And, you know, it's an example of how AI applications are doing things completely differently. And we view that long term. This will be, you know, kind of a natural thing for us to benefit from. We can extend that to SerDes IP licensing.
Many of the licenses that we're doing now are targeting different AI applications. And also, don't forget line cards. The opportunity for, you know, for the network OEMs and ODMs is also increasing. And, you know, of course, line card PHYs are, you know, are something that, you know, go on those switch line cards that are developed. So, you know, generally speaking, I think that AI will drive faster lane rates. And we've been very, very consistent with our message that, you know, as the market hits the knee in the curve on AI deployments, we're naturally going to see lane rates go, you know, more quickly to 100 gig per lane. And that's where we really see our business taking off. So we're getting a really nice revenue increase from 50 gig per lane applications, but we really see acceleration as 100 gig per lane happens. And especially when you start thinking about the power advantages that all of our solutions offer compared to others that are doing similar things. That might have been more than you were looking for, but...
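The in-rack cable math Bill walks through above can be sketched numerically. The per-rack AEC counts (56 per AI appliance rack, 256 per leaf-spine rack, 10 to 20 per traditional compute rack) and the roughly 2:1 AEC-to-AOC ratio are the figures from the call; the cluster size used below is purely hypothetical:

```python
# Back-of-the-envelope count of AEC and AOC connections in an AI cluster,
# using the per-rack figures cited on the call. The cluster dimensions
# (appliance and leaf-spine rack counts) are hypothetical illustrations.

AECS_PER_AI_APPLIANCE_RACK = 56  # front-end + back-end in-rack links
AECS_PER_LEAF_SPINE_RACK = 256   # disaggregated leaf-spine chassis rack
AECS_PER_COMPUTE_RACK = 15       # midpoint of the 10-20 range for compute racks

def cluster_aec_count(appliance_racks: int, leaf_spine_racks: int) -> int:
    """Total in-rack AEC connections across a cluster."""
    return (appliance_racks * AECS_PER_AI_APPLIANCE_RACK
            + leaf_spine_racks * AECS_PER_LEAF_SPINE_RACK)

# Hypothetical cluster: 32 AI appliance racks behind 4 leaf-spine racks.
aecs = cluster_aec_count(appliance_racks=32, leaf_spine_racks=4)
aocs = aecs // 2  # call cites roughly twice as many AEC as AOC connections

print(aecs)  # 2816 in-rack AEC connections (3 m or less)
print(aocs)  # 1408 rack-to-rack AOC connections (greater than 3 m)
print(aecs / (32 * AECS_PER_COMPUTE_RACK))  # ~5.9x the AECs of 32 compute racks
```

The last line illustrates how the "5x plus" opportunity figure falls out of the per-rack counts: the same number of racks, configured for AI, carries several times the AEC content of a compute deployment, before any AOC upside.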
spk05: No, that's a great overview. Thank you so much, Bill. That was great. Thank you.
spk10: Sure.
spk13: Thank you for your question. And one moment while we prepare for the next question. And the next question will be coming from Quinn Bolton of Needham & Company. Your line is open.
spk08: Thanks very much for taking my question. Bill, maybe a follow-up to Tore's question, just sort of the impact of generative AI on the business. Given that most of your AEC revenue today comes from the standard compute racks rather than AI racks, what do you see in terms of potential cannibalization, at least in the near term, as these hyperscalers prioritize building out the AI racks, potentially at the expense of compute deployments, again, in the near term?
spk09: So I feel very good about how we're positioned. It is the case that our first ramp with our largest customer was with compute racks. I think we're very well positioned with our customer as they transition to AI deployments. So we've talked in the past about two different types of deployments at the server level. Of course, compute will continue. And we can all guess as to what ratio it's going to be between compute and AI. We've got the roadmap very well covered for compute. So I think we're well set. And so as that resumes at our largest customer, I think we're going to be in good shape. I'm actually more excited about the acceleration of the AI program that we've been working with this same customer on for close to a year. And so I feel like we're well covered for both compute and AI. And that's really a long-term statement. So a little bit of new information I would say is that with our second hyperscale customer, you know, just to give an update generally on that and then, you know, relate that back to the same point that I was making about the earlier customer. We are right on track with the AEC ramp. The first program is a compute server rack that we've talked about. We saw small shipments in Q4, and we expect to see a continued ramp through fiscal 24. However, during the past several months, a new AI application has taken shape. So if we would have talked 100 days ago, we wouldn't have talked about this program. And so we quickly delivered a different configuration of the AEC that was designed for the compute server rack. So if you recall, we did a straight cable as well as an X cable configuration. So they asked us to deliver a new configuration. The new configuration had, you know, specific changes that were needed for their deployment. And we delivered the new configuration within weeks, which is, you know, that's another example of the benefit to how we're organized.
The qualification is underway, and we expect this AI appliance rack to also ramp in our fiscal 24. It's unclear as to the exact, you know... schedule from a time standpoint and a volume standpoint, but we feel like, you know, this is going to be another significant second program for us. And so I think that, you know, I think that for both our first and our second hyperscale customer, I think we're covering the, you know, the spectrum between compute and AI. So I feel like we're really in great shape. So hopefully that answers your question. Now, if I take it a little bit further and say, okay, long-term, let's say it's 80% compute, 20% AI, and you think maybe because the opportunity for us is five times larger in AI, maybe the opportunity is similar if the ratio is like that. So compute might be equal to AI from an AEC perspective. I think that Any way that goes, we're going to benefit. If it goes 50-50, that's a big upside for us with AEC, given the fact that there's, you know, larger volume, larger dollars associated with an AI cluster deployment. And so I think that, you know, for us, it won't affect us, you know, one way or another. Maybe in the near-term quarters, yeah. But, you know, the situation at our first customer really hasn't changed since the last update. So we think that the year-over-year increase in revenue for that customer will happen in FY25, as we've discussed before.
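Bill's 80/20 illustration above can be made concrete. The 5x per-rack multiple is the figure he cites for AI versus compute; the weighting function itself is just an illustrative sketch, not a company model:

```python
# Sketch of the compute-vs-AI AEC dollar mix discussed above. If an AI rack
# carries ~5x the AEC opportunity of a compute rack, even an 80/20 compute/AI
# unit split leaves the two segments roughly comparable in dollar terms.

AI_REVENUE_MULTIPLE = 5.0  # per-rack AEC opportunity vs. a compute rack (call figure)

def dollar_mix(compute_share: float, ai_share: float) -> tuple:
    """Normalize unit shares into dollar-weighted shares."""
    compute_w = compute_share * 1.0
    ai_w = ai_share * AI_REVENUE_MULTIPLE
    total = compute_w + ai_w
    return round(compute_w / total, 2), round(ai_w / total, 2)

print(dollar_mix(0.8, 0.2))  # (0.44, 0.56): dollars roughly split even at 80/20
print(dollar_mix(0.5, 0.5))  # (0.17, 0.83): a 50-50 unit split skews heavily to AI
```

This is why "compute might be equal to AI from an AEC perspective" at an 80/20 unit ratio, and why any shift toward AI in the deployment mix is upside rather than cannibalization in dollar terms.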
spk08: Okay, but no further push-out or delay of the compute rack at the first hyperscaler, given the potential reprioritization to AI in the near term?
spk09: Well, the new program qualifications, we've talked about two of them, they're still scheduled in the relatively near future. And, of course, as those get you know, qualified in RAMP, we'll see benefit from that. But, you know, it's a little bit tough to track month by month, right? That's a little bit too specific in a timeframe standpoint. So we've seen a slight delay, but it's not something that we're necessarily concerned about.
spk08: Got it, Bill. And then just a clarification on this second hyperscaler, I think the last update you said you may not yet have a hard forecast for that hyperscaler's needs on the AEC side. Have you received sort of a hard PO or at least a more reliable forecast that you're now sort of forecasting that business from in fiscal 24?
spk09: Yeah, it's coming together. It's coming together, and I think we feel comfortable saying that the revenue that will be generated by this second customer will be significant. And I'm not exactly able to talk about how significant. I think that we continue to view this through a conservative lens because we really don't know how the second half is going to shape up. But all the indicators that we've heard over the last 90 days are quite positive. And I think Dan referenced the fact that in Q2, we expect significant material revenue as that starts.
spk15: Perfect. Thank you.
spk13: Thank you. One moment while we prepare for the next question. And our next question will be coming from Suji De Silva of Roth Capital. Your line is open.
spk19: Hi, Bill. Hi, Dan. I just want to talk about the AEC, the products. You have multiple products, and I just wanted to know, are there certain ones that are more relevant to an AI rack versus a traditional compute rack, or are they all applicable across the board?
spk09: I would say that I wouldn't classify all of these solutions. I wouldn't lump them together. We're very much looking at the AEC opportunity as one where we're positioned to implement really customer-specific requests. And so part of what we're seeing is that most of the designs that we're engaged in now have something very specific to a given customer. And so I can say that we're seeing that there's a large number of customers moving to 100 gig per lane quickly. But we're also seeing customers that are reconfiguring existing NICs and building different AI appliances with those NICs. And so they're going to be able to ramp with 50 gig per lane solutions. You know, now as far as configurations go, you know, we see straight cable opportunities. We see wide cable opportunities. We, you know, we see opportunities where, you know, just recently we had a customer ask us to, you know, have 100 gig on one end of the cable and 50 gig on the other end of the cable. And so obviously that's a breakout cable. But it's an interesting challenge because this is the first time we'll be mixing different generations of ICs. And so, again, this is something we're able to do because we're so unique in a sense that we have a dedicated organization internal to Credo that's responsible for delivering these system solutions. You know, it's really that single party that's responsible for collaborating with a customer, designing, developing, delivering, qualifying, and then supporting the designs with our customers. And so, you know, I can't emphasize enough that, you know, you give engineers at these hyperscalers the opportunity to innovate in a space they've never thought of. And it's something that we're getting really good uptake on. And, you know, of course, our investment in the AEC space is really unmatched by any of our competition. I think we're unique in the sense that we can offer this type of flexibility.
So to answer your question, I couldn't really point to one type of cable that is going to be leaned on.
spk19: No, it's helpful. It paints a picture of how the cables are being deployed here. And then also, I believe in the prepared remarks you mentioned 20 AECs being qualified for shipment, if I heard that right. I'm curious across how many customers or how many programs that is, just to understand the breadth of that qualification effort.
spk09: Yeah, I would say that there's a set of hyperscalers that are really the large, large opportunity within the industry for you know, for the AEC opportunity. But we've also had a lot of success with data centers that might not qualify as a capital H hyperscaler, as well as service providers. And so, you know, we can look at the relationships with hyperscalers directly, and there's several SKUs that, you know, that we've delivered. And there's even more in the queue, you know, for these, you know, next generation systems. But even if we look at, you know, I think, you know, we're, you know, if you look at the number of million dollar per, you know, per quarter or per year customers that we've got, the list is really increasing. The product category, I think, has really been solidified over the last six to nine months. And you see that also because a lot of companies are announcing that they intend to compete longer term. All right.
spk14: Okay. Thanks, Bill. Thank you. One moment while we prepare for the next question.
spk13: And our next question will be coming from Karl Ackerman of BNP Paribas. Your line is open.
spk21: Thank you. I have two questions. Good afternoon, Dan and Bill. First off, it's great to see the sequential improvement in your fiscal Q1, but I didn't hear you confirm your fiscal 24 revenue outlook from 90 days ago. Could you just speak to the visibility you have on your existing programs that gives you confidence in the sequential growth you spoke about throughout fiscal 24? If you could just touch on that, that would be helpful.
spk04: Thanks, Karl. This is Dan. So, generally speaking, as we've described, we see some near-term upside, but we still remain a bit cautiously optimistic about the back half of the year. So we're very comfortable, ultimately, with the current estimates for the back half. We certainly have increasing visibility as time passes, and we hope to provide meaningful updates over the upcoming quarters. We're working hard to expand these growth opportunities for FY24 and beyond, and we remain very encouraged by what we're seeing, especially with the acceleration of AI programs.
spk20: Got it. Understood. Thanks for that.
spk21: I guess as a follow-up, on the DSP opportunity that you highlighted in prepared remarks, are you seeing your design engagements primarily in fiscal 24 on coherent offerings, or are you seeing more opportunities in DCI for your 400 gig and 800 gig products? Thank you.
spk09: Yeah, so the large opportunities that we're seeing are really within the data center. And I can say that it's across the board: 200 gig, 400 gig, and 800 gig. All of these hyperscalers have different strategies as to how they're deploying optical. I think we continue to make progress with 200 and 400, and I think we're in a really good position from a time-to-market perspective on 800 gig. So we can talk about the cycles that we're spending with every hyperscaler. We're also aligning ourselves very closely in a strategic go-to-market strategy with select optical module companies. As it relates to DCI and coherent specifically, we're in development for that first solution that we're pursuing, which is 100 gig ZR. We feel that development will take place throughout this year and that we'll see first revenue in the second half of calendar 24. But as far as 400 gig, that would really be a second, follow-on type of DCI opportunity for us. Now, in the ZR space, we're going to be unique because we'll market and sell the DSP to optical module makers. We intend to engage three to four module makers in addition to our partner, Effect Photonics. That makes us somewhat unique in the sense that other competitors are going directly to market with the ZR module. I'd highlight that power is really an enabler here. The key thing is we can do a 100 gig ZR module and fit under the power ceiling for a standard QSFP connector, which is roughly 4.5 watts. So there's a large upgrade cycle from 10 gig modules that we'll enable, but also there are new deployments in addition. So that gives you a little bit of flavor on coherent, but I really see our opportunities more within the data center.
spk14: All right, so thank you.
spk13: Thank you. One moment while we prepare for the next question. And our next question will be coming from Vivek Arya of Bank of America. Your line is open.
spk12: Thanks for taking my question. Bill, I'm curious to get your perspective on some of these technology changes. One is the role of InfiniBand, which is getting more share in these AI clusters. What does that do to your AEC opportunity? Is that a competitive situation or a complementary situation? And then the other technology question: some of your customers and partners have spoken about their desire to consider co-packaged optics and linear direct drive type architectures. What does that do to the need for standalone pluggables?
spk09: Thanks. I appreciate the opportunity to talk about Ethernet versus InfiniBand, because there's been a lot said about that. Generally, we see coexistence. Depending on how you look at the market forecast information, there is a point soon in the future when Ethernet exceeds InfiniBand for AI specifically. Beyond AI, I think it's game over already. Whether you measure the TAM in ports or dollars, Ethernet is forecasted to far exceed InfiniBand in the out years, so calendar 25 and beyond. If we think about it from an absolute TAM perspective, in dollars, forecasters are showing that Ethernet surpasses InfiniBand by 2025. The forecasts show a CAGR for Ethernet of greater than 50%, while for InfiniBand they're showing a CAGR of less than 25%. You can also look at this from a port cost perspective, where InfiniBand is two to four times the ASP per port compared to Ethernet, depending on who you talk to. So, in a sense, it's no secret that the world will continue to do what the world does: pursue cost-effective decisions. And we think from a technology standpoint they're very similar. So if you look at it apples to apples from a cost perspective, and an InfiniBand port is two to four x the cost of an Ethernet port, in a sense you could justify that one to three of those Ethernet ports are free in comparison to InfiniBand. So our position here is that we really believe Ethernet is going to prevail. We're working on so many AI programs, and every single one of them is Ethernet.
spk12: And then, Bill, anything on the move by some of your customers to think about co-packaged optics and direct drive? And while I'm at it, maybe let me just ask Dan a follow-up on fiscal 24. I think, Dan, you suggested you're comfortable with where expectations are right now. That still implies a very steep ramp into the back half. So I'm just trying to square the confidence in the full year with some of the macro caution that came through in your comments.
spk04: Yeah, we are confident in how we have guided. And as I mentioned, we're very comfortable with the current estimates. If we look at FY24, as you allude to, we see strong sequential top-line growth throughout the year in order to achieve those numbers. And it's fairly well documented what's happened at Microsoft for us this fiscal year. So if we exclude Microsoft, what that means is we have in excess of 100% year-on-year growth of product revenue from other customers, which, again, we're very confident, based on all of the traction that we've seen recently, that we'll be able to achieve. And, of course, I'll just reiterate that one of the key drivers is AI in some of those programs. So hopefully that gives you some additional color on our confidence for FY24.
spk09: Yeah, regarding your question about linear direct drive, that was, I think, this year's shiny object at OFC. The idea is really how to address the power challenges, basically to move away from the optical DSP. This is not a new idea. There was a big move towards linear direct drive in the 10 gig space when that was emerging, and the fact that there are really none in existence tells you the DSP was chosen then; it was really critical to close the system. Our feeling is that we'll see much of the same this year. I think Marvell did a great job of setting expectations correctly; they did a long session right after OFC that addressed it quite well. I think you'll see small deployments where every link in the system is very, very controlled, but these are typically very small in terms of the overall TAM. Now, we're fully signed up. If the real goal is power, that's exactly where we play. So we're fully signed up to looking at unique approaches in the future to be able to offer compelling things from a power perspective. It's not like I'm completely dismissing the concept behind the idea of linear direct drive; we're actually viewing that as a potential opportunity for us in solving the problem differently. But generally speaking, I don't think you'll see in the future a world where linear direct drive is measured in any kind of significant way. That's not to say that people aren't spending money trying to prove it out right now; that is happening. And regarding CPO, I think that was something that was talked about for many, many years prior. I think also on that, you'll see smaller deployments if that's ultimately something that some customers embrace. But I don't think you'll see it in a big way. That's simply not what the customer base is looking for.
spk12: Thank you, Bill. Thank you, Dan.
spk13: Thank you. One moment for the next question. And our next question will be coming from David Liu of Mizuho.
spk11: Mr. Liu, please go ahead.
spk02: All right, yeah, thanks. This is David on for Vijay at Mizuho. My first question: assuming that in fiscal 25 data center demand for general compute improves and you see the continued new AI ramps, can you provide any more color on the puts and takes there and the type of operating leverage you can achieve?
spk04: Well, we're not giving specific guidance yet for fiscal 25, but you're right in that the ingredients certainly exist for operating leverage. We should exit FY24 with pretty robust operating leverage, and we would expect, based on what we know now, that to carry forward into FY25. But we haven't framed yet, of course, what that's going to ultimately look like.
spk02: Okay, sure. And I guess for my second question, when you're talking with hyperscalers on these new AI applications, how important is sort of your TCO advantage when they're exploring your solution, or are they currently kind of just primarily focused on time to market and maximum performance and just getting their AI deployments out there?
spk09: So I just want to make sure you said total cost of ownership?
spk06: Yes, yes.
spk09: Yeah, I think it's hands down in favor of AEC. If we look at 100 gig lane rates, I think the conclusion throughout the market is that there are two ways to deploy short cabled solutions: it's really AEC or AOC. If we look at it from a CapEx standpoint, AECs are about half the cost. If we look at it from an OpEx standpoint, also about half the cost, about half the power, half the ASP for an apples-to-apples type solution. So I think the TCO benefit is significant. The other thing you've got to consider is that, especially when you're down in server racks, these are different than switch racks in the sense that having a failure with your cable solution becomes a very urgent item. And so when we think about AOCs, the reliability in terms of number of years is probably anywhere from one-third to one-tenth. For the AECs that we sell, we talk about a 10-year product life, so it matches or exceeds the life of the rack that it's being deployed in. The same cannot be said for any kind of optical solution. So I think across the board, hands down, the TCO is much more favorable for AEC. Okay.
spk16: Thank you.
spk13: Thank you. One moment for the next question. And our next question will be coming from Quinn Bolton of Needham and Company. Your line is open.
spk08: Hey, guys. Thanks for the quick two follow-ups. One, Dan, was there any contra revenue in the April quarter?
spk04: That's an excellent question, Quinn. I'm glad you caught that. Actually, there was, and you will see that when we file our 10-K. In the past, you've been able to see that in our press release, in our GAAP to non-GAAP reconciliation. But from Q4 and going forward, we're no longer excluding that contra revenue from our non-GAAP financials. This really came about through a comment from the SEC, not singling out Credo, but actually all suppliers with whom Amazon has a warrant. The positive of this change is that you'll still be able to track ultimately what that warrant expense is when we file our Q and K, and looking historically, the difference was not material on a non-GAAP reporting basis. It just makes the calculation a little more straightforward going forward. Our only non-GAAP reconciling item going forward, or at least for the foreseeable future, is really just share-based compensation.
spk08: Got it. So the revenue doesn't change; you just won't be making the adjustments for the contra revenue in the non-GAAP gross margin calculation going forward? Got it.
spk04: That's exactly correct. Yeah, revenue is still revenue. It has a portion of it which is contra revenue, which obviously brings down the revenue a little bit.
spk08: Got it. Okay. And then for Bill, would you expect in fiscal 24 a meaningful initial ramp of the 200 or 400 gig optical DSPs, or would you continue to encourage investors to think that the optical DSP ramp is really beyond a fiscal 24 event at this point?
spk09: I think that when we think about significant, we think about crossing the 10% of revenue threshold, and we don't see that until fiscal 25. We do see signs of life in China, and as I said, we're shipping 400 gig optical DSPs to a U.S. hyperscaler now. My expectation is throughout the year we're going to have a lot more success stories to talk about, but those ramps will most likely not take place within the next three quarters. So it's really a fiscal 25 target at this point.
spk08: Got it. But it starts this year. You're not calling it meaningful because it doesn't hit 10% threshold.
spk07: Right. Exactly.
spk08: Got it. Okay. Thank you.
spk13: Thank you. One moment while we take a follow-up question. And that question will be coming from Tore Svanberg of Stifel. Your line is open.
spk05: Yes, Tore here. Bill, maybe a follow-up to the previous question about 200 and 400 gig. I was a little more curious about 800 gig. Are you seeing any changes at all to the timelines there? I think the expectation was that the 800 gig market would maybe take off in the second half of next calendar year. But with all these new AI trends, I'm just wondering if you're seeing any pull-in activity there, or maybe even some cannibalization versus 200 gig and 400 gig.
spk09: My expectation is that this is really a calendar year 24 type of market takeoff. Whether it's the second half or the first half, we, of course, would like to see it in the first half, given that that would imply success in pulling in AI programs. There's a lot of benefit that comes with 800 gig modules and the implication that has for our AEC business. But I definitely see it in that timeframe. I don't really see it as cannibalization of 200 and 400 gig, unless you look at it as these new deployments being in lieu of the old technology. But like I said before, every hyperscaler has their own strategy related to the port speeds that they plan on deploying; everybody's got a unique architecture. Where we see optical is typically in the leaf-spine network, anything above the TOR. In AI, I think the real opportunity is going to be with AOCs, and that, I think, is going to be a very large 800 gig market when those AI clusters really begin deployment, which, again, I think could be in calendar 24. So I appreciate the question, though.
spk05: Great. Thank you.
spk13: Thank you. That concludes the Q&A for today. I would like to turn the call back over to Bill Brennan for closing remarks. Please go ahead.
spk09: Really appreciate the participation today, and we look forward to following up on the callbacks. So thanks very much.
spk13: This concludes today's conference call. Thank you all for joining, and everyone enjoy the rest of your evening.
spk03: With that, I'll now turn the call over to our CEO, Bill.
spk09: Thanks, Dan, and good afternoon, everyone. Thank you for joining our Q4 fiscal 23 earnings call. I'll begin by providing an overview of our fiscal year 23 and fiscal Q4 results. I will then highlight what we see going forward in fiscal 24. Dan Fleming, our CFO, will follow my remarks with a detailed discussion of our Q4 and fiscal year 23 financial results and share our outlook for the first quarter. Credo is a high-speed connectivity company delivering integrated circuits, system-level solutions, and IP licenses to the hyperscale data center ecosystem, along with a range of other data centers and service providers. All our solutions leverage our core SerDes technology and our unique customer-focused design approach, enabling Credo to deliver optimized, secure, high-speed solutions with significantly better power efficiency and cost. Our electrical and optical connectivity solutions deliver leading performance with port speeds ranging from 50 gig up to 1.6 terabits per second. While we primarily serve the Ethernet market today, we continue to expand into other standards-based markets as the need for higher-speed, more power-efficient connectivity increases exponentially. Credo continues to have significant growth expectations within the accelerating market opportunity for high-speed connectivity solutions. In fact, the onset of generative AI applications is already accelerating the need for higher-speed and more energy-efficient connectivity solutions, and this is where Credo excels. I'll start with comments on our fiscal 2023 results. Today, Credo is reporting results from our first full fiscal year as a public company. In fiscal 23, Credo achieved just over $184 million in revenue, up 73% over fiscal 22, and we achieved non-GAAP gross margin of 58%. Product revenue increased 87% year over year, primarily due to the ramp of our active electrical cable solutions. License revenue grew 28% year over year, from $25 million to $32 million.
Throughout fiscal 23, we had several highlights across our product lines. For active electrical cables, or AECs, we continued to lead the market Credo pioneered during the last several years. Our team continued to quickly innovate with application-specific solutions, and we've been successful in expanding our engagements to include multiple data centers and service providers. Our customer-focused innovation has led to more than 20 different versions of AECs shipped for qualification or production in the last year, and we remain sole-sourced in all our wins. And while our significant power advantage was a nice-to-have a couple of years ago, it's increasingly becoming imperative as our hyperscaler customers are pushed to lower their carbon footprint. For optical DSPs, Credo continued to build momentum by successfully passing qualification for 200 gig and 400 gig solutions at multiple hyperscalers with multiple optical module partners. In addition, Credo introduced our 800 gig optical DSPs, laser drivers, and TIAs, and we announced our entry into the coherent optical DSP market. For line card PHYs, we continue to expand our market leadership. In particular, Credo built upon our position as the leader for MACsec PHYs with over 50% market share. We also extended our performance and power efficiency advantages for 100 gig per lane line card PHYs with the introduction of our Screaming Eagle family of retimers and gearboxes with up to 1.6 terabits per second of bandwidth. For IP licensing, we continue to build on our offering of highly optimized SerDes IPs. In the year, we licensed SerDes IP in several process nodes from 4 nanometer to 28 nanometer, with speeds ranging from 28 gig to 112 gig and reach performance ranging from XSR to LR. We believe our ability to innovate to deliver custom solutions remains unparalleled.
We maintain very close working relationships with hyperscalers, and we'll continue to collaborate with them to deliver solutions that are optimized to their needs. Despite recent macroeconomic headwinds in the data center industry, we believe the need for higher speed with better power efficiency will continue to grow. This plays perfectly to Credo's strengths, which is why we remain optimistic about our prospects in fiscal 24 and beyond. I will now discuss the fourth quarter more specifically. In Q4, we delivered revenue of $32.1 million and non-GAAP gross margin of 58%. I'll now provide an overview of key business trends for the quarter. First, regarding AECs: market forecasters continue to expect significant growth in this product category due to the benefits of AECs compared to both legacy direct-attach copper cables and active optical cables, which are significantly higher power and higher cost. With our largest customer, we're encouraged by our development progress on several new AEC programs, including an acceleration in their first 100 gig per lane AI program, where they intend to deploy Credo AECs. We saw the initial ramp of a second hyperscale customer, which we expect to grow meaningfully throughout the year. We're ramping 50 gig per lane NIC-to-TOR AEC solutions for both their AI and compute applications. And I'm happy to report that Credo has been awarded this customer's first 100 gig per lane program. We're also actively working to develop several other advanced AEC solutions for their next-generation deployments. We continue to make progress with additional customers as well. We remain in flight with two additional hyperscalers and are also engaged in meaningful opportunities with service providers. We've seen momentum building for AEC solutions across AI, compute, and switch applications, and we continue to expect to benefit as speeds move quickly to 100 gig per lane.
Regarding our progress on optical solutions: in the optical category, we've leveraged our SerDes technologies to deliver disruptive products, including DSPs, laser drivers, and TIAs for 50 gig through 800 gig port applications. We remain confident we can gain share over time due to our compelling combination of performance, power, and cost. In addition to the hyperscalers that have previously production-qualified Credo's optical DSPs, we started the production ramp of a 400 gig optical DSP for a U.S. hyperscaler as the end customer. At OFC in March, we received very positive feedback on our market solutions, including our Dove 800 products, as well as on our announcement to enter the 100 gig ZR coherent DSP market. We're well positioned to win at hyperscalers across a range of applications, including 200 gig, 400 gig, and 800 gig port speeds. We're also engaged in opportunities for Fibre Channel, 5G, OTN, and PON applications with optical partners, service providers, and networking OEMs. Within our line card PHY category, during the fourth quarter we saw growing interest in our solutions, specifically for our Screaming Eagle 1.6 terabit per second devices. We've already been successful winning several design commitments from leading networking OEMs and ODMs for the Screaming Eagle devices. Credo was selected due to our combination of performance, signal integrity, power efficiency, and cost effectiveness. We also made significant development progress with our customer-sponsored next-generation 5 nanometer 1.6 terabit per second MACsec PHY, which we believe will extend our leadership well into the future for applications requiring encryption. Regarding our SerDes IP licensing and SerDes chiplet businesses: our IP deals in Q4 were primarily led by our 5 and 4 nanometer 112 gig SerDes IP, which, according to customers, offers a significant power advantage versus competition based on our ability to power-optimize to the reach of an application.
Our SerDes chiplet opportunity continues to progress. Our collaboration with Tesla on their Dojo supercomputer design is an example of how connectivity chiplets can enable advanced next-generation AI systems. We're working closely with customers and standards bodies, such as the UCIe Consortium, to ensure we retain leadership as the chiplet market grows and matures. We believe the acceleration of AI solutions across the industry will continue to fuel our licensing and chiplet businesses. To sum up, the hyperscale landscape has shifted swiftly and dramatically in 2023. Compute is now facing a new horizon, which is generative AI. We expect this shift to accelerate the demand for energy-efficient connectivity solutions that perform at the highest speeds. From our viewpoint, this technology acceleration increases the degree of difficulty and will naturally slim the field of market participants. We remain confident that our technology innovation and market leadership will fuel our growth as these opportunities materialize. We expect to grow sequentially in Q1 and then continue with sequential quarterly revenue growth throughout fiscal 24. We believe our growth will be led by multiple customers across our range of connectivity solutions, which will result in a more diversified revenue base as we exit fiscal 24. I'll now turn the call over to our CFO, Dan Fleming, who will provide additional details. Thank you.
spk04: Thank you, Bill, and good afternoon. I will first provide a financial summary of our fiscal year 23, then review our Q4 results, and finally discuss our outlook for Q1 and fiscal 24. As a reminder, the following financials will be discussed on a non-GAAP basis unless otherwise noted. Revenue for fiscal year 23 was a record $184.2 million, up 73% year over year, driven by product revenue that grew by 87%. Gross margin for the year was 58.0%. Our operating margin improved by 13 percentage points even as we grew our product revenue mix, which illustrates the leverage that we can produce in the business. We reported earnings per share of 5 cents, an 18 cent improvement over the prior year. Moving on to the fourth quarter: in Q4, we reported revenue of $32.1 million, down 41% sequentially and down 14% year over year. Our IP business generated $5.7 million of revenue in Q4, down 55% sequentially and down 49% year over year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter to quarter, driven largely by specific deliverables on pre-existing contracts. While the mix of IP and product revenue will vary in any given quarter, our revenue mix in Q4 was 18% IP, above our long-term expectation for IP, which is 10% to 15% of revenue. We continue to expect IP as a percentage of revenue to come in above our long-term expectations for fiscal 24. Our product business generated $26.4 million of revenue in Q4, down 37% sequentially and flat year over year. Our team delivered Q4 gross margin of 58.2%, above the high end of our guidance range and down 94 basis points sequentially due to lower IP contribution. Our IP gross margin generally hovers near 100% and was 97.4% in Q4. Our product gross margin was 49.7% in the quarter, up 245 basis points sequentially and up 167 basis points year over year, due principally to product mix.
Total operating expenses in the fourth quarter were $27.2 million, within guidance, up 6% sequentially and 25% year over year. Our year-over-year OpEx increase was a result of a 36% increase in R&D as we continue to invest in the resources to deliver innovative solutions. Our SG&A was up 12% year over year as we built out public company infrastructure. Our operating loss was $8.5 million in Q4, a decline of $10.7 million year over year. Our operating margin was negative 26.4% in the quarter, a decline of 32.2 percentage points year over year due to reduced top-line leverage. We reported a net loss of $5.7 million in Q4, $8.3 million below last year. Cash flow used by operations in the fourth quarter was $11.8 million, a decrease of $14.2 million year over year, due largely to our net loss and changes in working capital. CapEx was $3.9 million in the quarter, driven by R&D equipment spending, and free cash flow was negative $15.7 million, a decrease of $8.4 million year over year. We ended the quarter with cash and equivalents of $217.8 million, a decrease of $15.2 million from the third quarter. This decrease in cash was a result of our net loss and the investments required to grow the business. We remain well capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer for uncertain macroeconomic conditions. Our accounts receivable balance increased by 14.6% sequentially to $49.5 million, while days sales outstanding increased to 140 days, up from 72 days in Q3, due to lower revenue. Our Q4 ending inventory was $46.0 million, down $4.3 million sequentially. Now, turning to our guidance: we currently expect revenue in Q1 of fiscal 24 to be between $33 million and $35 million, up 6% sequentially at the midpoint. We expect Q1 gross margin to be within a range of 58% to 60%. We expect Q1 operating expenses to be between $26 million and $28 million.
We expect Q1 basic weighted average share count to be approximately 149 million shares. We feel we have moved through the bottom in the fourth quarter. While we see some near-term upside to our prior expectations, we remain cautious about the back half of our fiscal year due to uncertain macroeconomic conditions. In summary, as we move forward through fiscal year 24, we expect sequential revenue growth, expanding gross margins due to increasing scale, and modest sequential growth in operating expenses. As a result, we look forward to driving operating leverage as we exit the year. And with that, I'll open it up for questions. Thank you.
spk13: Thank you. As a reminder, if you have a question, please press star 11 on your telephone. As well, please wait for your name to be announced before you proceed with your question. One moment while we compile the Q&A roster. The first question that we have is coming from Tore Svanberg of Stifel. Your line is open.
spk01: Yes, thank you.
spk05: For my first question, in regards to the Q1 guidance, as far as what's driving the growth: given your gross margin comment, I assume that AEC will probably continue to be down, with perhaps the growth coming from PAM4 DSP and IP. Is that the correct thinking, or if not, please correct me.
spk04: Hi, Tore. This is Dan. So you're correct in that if you look at the sequential increase in gross margin from Q3 to Q4, while our product revenue was down, that's really reflective of a favorable product mix, where AEC, which as we all know is on the lower end of our margin profile, contributed less of the overall product mix. That trend will continue in Q1, and I would characterize it really as broad across all of our other product lines, not really singling out one specific product line that's taking up the slack from AEC, so to speak.
spk05: Sounds good. And as my follow-up question for you, Bill: with generative AI, as you mentioned in your script, things are clearly changing. I was just hoping you could talk a little more granularly about how it impacts each business. I'm even thinking about the 800-gig PAM4 cycle. Is that getting pulled in? So if you could give us a little bit more color on how generative AI could impact each of your four business units at this point. Thank you.
spk09: Sure. Absolutely. So generally, I think that AI applications will create revenue opportunities for us across our portfolio. I think the largest opportunity we'll see is with AECs. However, optical DSPs, there will definitely be a big opportunity there. Even line card PHYs, chiplets, and SerDes IP licensing will get an uplift as AI deployments increase. So maybe I can start first with AECs. It's important to identify the differences from traditional compute server racks, which commonly use the front-end network, so basically a NIC-to-TOR connection, with the TOR up to the leaf and spine network. The typical compute rack would have 10 to 20 AECs in rack, meaning in-rack connections from NIC to TOR. And I'd highlight that the leading-edge lane rate today for these connections with compute servers is 50 gig per lane. Within an AI cluster, in addition to the front-end network, which is similar, there's a back-end network referred to as the RDMA network, which basically allows the AI appliances to be networked together directly within a cluster. And if we start going through the math, this back-end network has 5 to 10x the bandwidth of the front-end network. The other important thing to note is that within these RDMA networks, there are leaf-spine racks as well. So if we look at one example of a customer that we're working with in deploying, the AI appliance rack itself will have a total of 56 AECs between the front-end and back-end networks. Each leaf-spine rack is a Clos rack or a disaggregated chassis, which will have 256 AECs. So when we look at it as an overall opportunity for AECs, this is a huge uplift in volume. The volume coincides with the bandwidth. Now, lane rates will quickly move. Certain applications will go forward at 50 gig per lane. Others will go straight to 100 gig per lane.
And so we see probably a 5x-plus revenue opportunity difference between, apples to apples, a number of compute server racks versus an AI cluster. To extend into optical, there's typically a large number of AOCs in the same cluster. You can imagine that the short in-rack connections are going to be done with AECs. These are three meters or less. But these appliances will connect to the back-end leaf-spine racks, these disaggregated racks, and all of those connections will be AOCs. Those are connections that are greater than three meters. If we look at this, it's all upside compared to, say, a traditional compute deployment, where there are really no AOCs connecting rack to rack. Okay, so when we look at the overall opportunity, we think the additional AEC opportunity within an AI cluster is probably twice as large, twice as many connections, as AOCs. But the AOC opportunity for us will be significant in the sense that AOCs represent the most cost-sensitive portion of the optical market. It's also a lower technology hurdle, since the optical connection is well-defined and it's within the cable. So this is a really natural spot for us to be disruptive in this market. We see some who are planning on deploying with 400-gig AOCs. Others are planning to go straight to 800-gig AOCs. So we view AECs as the largest opportunity. Optical DSPs for sure will get an uplift in the overall opportunity set. But also, if we look at Tesla as an example, that's an example of where, as they deploy, we're going to see a really nice opportunity for the chiplets that we did for them for that Dojo supercomputer. It's an example of how AI applications are doing things completely differently, and we view that long term as a natural thing for us to benefit from. We can extend that to SerDes IP licensing.
Many of the licenses that we're doing now are targeting different AI applications. And don't forget line cards. The opportunity for the network OEMs and ODMs is also increasing, and of course, line card PHYs are something that go on those switch line cards being developed. So generally speaking, I think that AI will drive faster lane rates. We've been very consistent with our message that as the market hits the knee in the curve on AI deployments, we're naturally going to see lane rates go more quickly to 100 gig per lane. And that's where we really see our business taking off. So we're getting a really nice revenue increase from 50 gig per lane applications, but we really see acceleration as 100 gig per lane happens, especially when you start thinking about the power advantages that all of our solutions offer compared to others that are doing similar things. That might have been more than you were looking for, but...
spk05: No, that's a great overview. Thank you so much, Bill. That was great. Thank you.
spk10: Sure.
spk13: Thank you for your question. And one moment while we prepare for the next question. And the next question will be coming from Quinn Bolton of Needham & Company. Your line is open.
spk08: Thanks very much for taking my question. Bill, maybe a follow-up to Tore's question on the impact of generative AI on the business. Given that most of your AEC revenue today comes from the standard compute racks rather than AI racks, what do you see in terms of potential cannibalization, at least in the near term, as these hyperscalers prioritize building out the AI racks, potentially at the expense of compute deployments, again, in the near term?
spk09: So I feel very good about how we're positioned. It is the case that our first ramp with our largest customer was a compute rack, and I think we're very well positioned with that customer as they transition to AI deployments. We've talked in the past about two different types of deployments at the server level. Of course, compute will continue, and we can all guess as to what the ratio is going to be between compute and AI. We've got the roadmap very well covered for compute, so I think we're well set. And as that resumes at our largest customer, I think we're going to be in good shape. I'm actually more excited about the acceleration of the AI program that we've been working on with this same customer for close to a year. So I feel like we're well covered for both compute and AI, and that's really a long-term statement. A little bit of new information, I would say, is with our second hyperscale customer, just to give an update generally on that and then relate it back to the same point I was making about the earlier customer. We are right on track with the AEC ramp. The first program is a compute server rack that we've talked about. We saw small shipments in Q4, and we expect to see a continued ramp through fiscal 24. However, during the past several months, a new AI application has taken shape. If we had talked 100 days ago, we wouldn't have talked about this program. So we quickly delivered a different configuration of the AEC that was designed for the compute server rack. If you recall, we did a straight cable as well as an X cable configuration. They asked us to deliver a new configuration that had specific changes that were needed for their deployment, and we delivered the new configuration within weeks, which is another example of the benefit of how we're organized. The qualification is underway, and we expect this AI appliance rack to also ramp in our fiscal 24.
It's unclear as to the exact schedule from a time standpoint and a volume standpoint, but we feel like this is going to be another significant second program for us. And so I think that for both our first and our second hyperscale customers, we're covering the spectrum between compute and AI. So I feel like we're really in great shape. Hopefully that answers your question. Now, if I take it a little bit further and say, okay, long term, let's say it's 80% compute, 20% AI, and you think, maybe because the opportunity for us is five times larger in AI, maybe the opportunity is similar if the ratio is like that. So compute might be equal to AI from an AEC perspective. I think that any way that goes, we're going to benefit. If it goes 50-50, that's a big upside for us with AEC, given the fact that there are larger volumes, larger dollars associated with an AI cluster deployment. So I think that for us, it won't affect us one way or another. Maybe in the near-term quarters, yeah. But the situation at our first customer really hasn't changed since the last update. So we think that the year-over-year increase in revenue for that customer will happen in FY25, as we've discussed before.
spk08: Okay, but no further push-out or delay of the compute rack at the first hyperscaler, given the potential reprioritization to AI in the near term?
spk09: Well, the new program qualifications, we've talked about two of them, are still scheduled in the relatively near future. And, of course, as those get qualified and ramp, we'll see benefit from that. But it's a little bit tough to track month by month, right? That's a little bit too specific from a timeframe standpoint. So we've seen a slight delay, but it's not something that we're necessarily concerned about.
spk08: Got it, Bill. And then just a clarification on this second hyperscaler, I think, you know, the last update, you said you may not yet have a hard forecast for that hyperscaler's needs on the AEC side. Have you received sort of a hard PO or at least a more reliable forecast that you're now sort of forecasting that business from in fiscal 24?
spk09: Yeah, it's coming together. The revenue that will be generated by this second customer will be significant, and I'm not exactly able to talk about how significant. I think we continue to view this through a conservative lens because we really don't know how the second half is going to shape up. But all the indicators that we've heard over the last 90 days are quite positive. And I think Dan referenced the fact that in Q2, we expect material revenue as that starts.
spk15: Perfect. Thank you.
spk13: Thank you. One moment while we prepare for the next question. And our next question will be coming from Suji De Silva of Roth Capital. Your line is open.
spk19: Hi, Bill. Hi, Dan. I just want to talk about the AEC products. You have multiple products, and I just wanted to know, are there certain ones that are more relevant to an AI rack versus a traditional compute rack, or are they all applicable across the board?
spk09: I would say I wouldn't lump all of these solutions together. We're very much looking at the AEC opportunity as one where we're positioned to implement really customer-specific requests. Part of what we're seeing is that most of the designs we're engaged in now have something very specific to a given customer. I can say that we're seeing a large number of customers moving to 100 gig per lane quickly. But we're also seeing customers that are reconfiguring existing NICs and building different AI appliances with those NICs, and so they're going to be able to ramp with 50 gig per lane solutions. Now, as far as configurations go, we see straight cable opportunities, we see Y cable opportunities, and we see opportunities where, just recently, a customer asked us to have 100 gig on one end of the cable and 50 gig on the other end of the cable. Obviously, that's a breakout cable. But it's an interesting challenge because this is the first time we'll be mixing different generations of ICs. Again, this is something we're able to do because we're so unique in the sense that we have a dedicated organization internal to Credo that's responsible for delivering these system solutions. It's really that single party that's responsible for collaborating with a customer, designing, developing, delivering, qualifying, and then supporting the designs with our customers. I can't emphasize enough that when you give engineers at these hyperscalers the opportunity to innovate in a space they'd never thought of, it's something that we're getting really good uptake on. And, of course, our investment in the AEC space is really unmatched by any of our competition. I think we're unique in the sense that we can offer this type of flexibility.
So to answer your question, I couldn't really point to one type of cable that is going to be leaned on.
spk19: No, it's helpful. It paints a picture of how the cables are being deployed here. And then also, I believe in the prepared remarks you mentioned 20 AECs being qualified for shipment, if I heard that right. I'm curious across how many customers or how many programs that is, just to understand the breadth of that qualification effort.
spk09: Yeah, I would say that there's a set of hyperscalers that are really the large, large opportunity within the industry for AECs. But we've also had a lot of success with data centers that might not qualify as a capital-H hyperscaler, as well as service providers. We can look at the relationships with hyperscalers directly, and there are several SKUs that we've delivered, and there are even more in the queue for these more advanced next-generation systems. But even if you look at the number of million-dollar per quarter or per year customers that we've got, the list is really increasing. The product category, I think, has really been solidified over the last six to nine months. And you see that also because a lot of companies are announcing that they intend to compete longer term. All right.
spk14: Okay. Thanks, Bill. Thank you. One moment while we prepare for the next question.
spk13: And our next question will be coming from Karl Ackerman of BNP Paribas. Your line is open.
spk21: Thank you. I have two questions. Good afternoon, Dan and Bill. I guess, first off, it's great to see the sequential improvement in your fiscal Q1. But I didn't hear you confirm your fiscal 24 revenue outlook from 90 days ago. Could you just speak to the visibility you have on your existing programs that gives you confidence in the sequential growth that you spoke about throughout fiscal 24? If you could just touch on that, that would be helpful.
spk04: Thanks, Karl. This is Dan. So, yeah, generally speaking, as we've described, we see some near-term upside, but we still remain a bit cautious about the back half of the year. So we're very comfortable ultimately with the current estimates for the back half. We certainly have increasing visibility as time passes, and we hope to provide meaningful updates over the upcoming quarters. We're working hard to expand these growth opportunities for FY24 and beyond, and we remain very encouraged by what we're seeing, especially with the acceleration of AI programs.
spk20: Got it. Understood. Thanks for that.
spk21: I guess as a follow-up, on the DSP opportunity that you highlighted in prepared remarks, are you seeing your design engagements in fiscal 24 primarily on coherent offerings, or are you seeing more opportunities in DCI for your 400 gig and 800 gig offerings? Thank you.
spk09: Yeah, so the large opportunities that we're seeing are really within the data center. And I can say that it's across the board: 200 gig, 400 gig, and 800 gig. All of these hyperscalers have different strategies as to how they're deploying optical. I think we continue to make progress with 200 and 400, and I think we're in a really good position from a time-to-market perspective on 800 gig. We spend cycles with every hyperscaler, and we're also aligning ourselves very closely in a strategic go-to-market strategy with select optical module companies. As it relates to DCI and coherent specifically, we're in development for that first solution that we're pursuing, which is 100 gig ZR. We feel that development will take place throughout this year and that we'll see first revenue in the second half of calendar 24. But as far as 400 gig, that would really be a second, follow-on type of DCI opportunity for us. Now, in the ZR space, we're going to be unique because we'll market and sell the DSP to optical module makers. We intend to engage three to four module makers in addition to our partner, EFFECT Photonics. And that makes us somewhat unique in the sense that other competitors are going directly to market with the ZR module. I'd highlight that power is really an enabler here. The key thing is we can do a 100 gig ZR module and fit under the power ceiling for a standard QSFP connector, which is roughly 4.5 watts. So there's a large upgrade cycle from 10 gig modules that we'll enable, but there are also new deployments in addition. So that gives you a little bit of flavor on coherent, but I really see our opportunities more within the data center.
spk14: Understood. Thank you.
spk13: Thank you. One moment while we prepare for the next question. And our next question will be coming from Vivek Arya of Bank of America. Your line is open.
spk12: Thanks for taking my question. Bill, I'm curious to get your perspective on some of these technology changes. One is the role of InfiniBand, which is getting more share in these AI clusters. What does that do to your AEC opportunity? Is that a competitive situation or a complementary situation? And then the other technology question: some of your customers and partners have spoken about their desire to consider co-packaged optics and linear direct drive type architectures. What does that do to the need for standalone pluggables?
spk09: Thanks. I appreciate the opportunity to talk about Ethernet versus InfiniBand, because there's been a lot said about that. Generally, we see coexistence. Depending on how you look at the market forecast information, there is a point soon in the future when Ethernet exceeds InfiniBand for AI specifically. Beyond AI, I think it's game over already. Whether you measure the TAM in ports or dollars, Ethernet is forecasted to far exceed InfiniBand in the out years, so calendar 25 and beyond. From an absolute TAM dollars perspective, forecasters are showing that Ethernet surpasses InfiniBand by 2025. The forecasts show a CAGR for Ethernet of greater than 50%, while for InfiniBand they're showing a CAGR of less than 25%. You can also look at this from a port cost perspective, where InfiniBand is two to four times the ASP per port compared to Ethernet, depending on who you talk to. And so in a sense, it's no secret that the world will continue to do what the world does. They'll pursue cost-effective solutions. And we think from a technology standpoint, they're very similar. So if you think from a cost perspective, apples to apples, if an InfiniBand port is 2 to 4x the cost of an Ethernet port, in a sense you could justify that one to three of those Ethernet ports are free in comparison to InfiniBand. So our position here is that we really believe Ethernet is going to prevail. We're working on so many AI programs, and every single one of them is Ethernet.
spk12: And then, Bill, anything on the move by some of your customers to think about co-packaged optics and direct drive? And while I'm at it, maybe let me just ask Dan a follow-up on fiscal 24. I think, Dan, you suggested you're comfortable with where expectations are right now. That still implies a very steep ramp into the back half. So I'm just trying to square the confidence in the full year with some of the macro caution that came through in your comments.
spk04: Yeah, we are confident in how we have guided. And as I mentioned, we're very comfortable with the current estimates. If we look at FY24, as you allude to, we see strong sequential top-line growth throughout the year in order to achieve those numbers. And it's kind of well documented what's happened at Microsoft for us this fiscal year. So if we exclude Microsoft, what that means is we have in excess of 100% year-on-year growth of product revenue from other customers, which, again, we're very confident, based on all of the traction that we've seen recently, that we'll be able to achieve. And of course, I'll just reiterate, one of the key drivers is AI in some of those programs. So hopefully that gives you some additional color on our confidence for FY24.
spk09: Yeah, regarding your question about linear direct drive, that was, I think, this year's shiny object at OFC. The idea is really about how to address the power challenge by basically moving away from the optical DSP. This is not a new idea. There was a big move towards linear direct drive in the 10 gig space when that was emerging, and the fact that there are really none in existence tells you that the DSP was chosen then. It was really critical to close the system. Our feeling is that we'll see much of the same this year. I think Marvell did a great job in setting expectations correctly. They did a long session right after OFC that I think addressed it quite well. I think you'll see small deployments where every link in the system is very, very controlled, but these are typically very, very small in terms of the overall TAM. Now, we're fully signed up. If the real goal is power, that's exactly where we play. So we're fully signed up to looking at unique approaches in the future to be able to offer compelling things from a power perspective. And it's not like I'm completely dismissing the concept that was really behind the idea of linear direct drive. We're actually viewing that as a potential opportunity for us in solving the problem differently. But generally speaking, I don't think you'll see in the future a world where linear direct drive is measured in any kind of significant way. It's not to say that people aren't spending money trying to prove it out right now. That is happening. And regarding CPO, I think that was something that was talked about for many, many years prior. I think on that also, you'll see smaller deployments if that's ultimately something that some customers embrace, but I don't think you'll see it in a big way. That's simply not what the customer base is looking for. Thank you, Bill.
Thank you, Dan.
spk13: Thank you. One moment for the next question. And our next question will be coming from David Liu of Mizuho.
spk11: Please go ahead.
spk02: All right, yeah, thanks. This is David on for Vijay at Mizuho. My first question: assuming that in fiscal 25 data center demand for general compute improves and you see the continued new AI ramps, can you provide any more color on the puts and takes there and the type of operating leverage you can drive?
spk04: Well, we're not giving specific guidance yet for fiscal 25, but you're right in that the ingredients certainly exist for operating leverage. We should exit FY24 with pretty robust operating leverage, and based on what we know now, we would expect that to carry forward into FY25. But we haven't framed yet, of course, what that's going to ultimately look like.
spk02: Okay, sure. And I guess for my second question, when you're talking with hyperscalers on these new AI applications, how important is sort of your TCO advantage when they're exploring your solution, or are they currently kind of just primarily focused on time to market and maximum performance and just getting their AI deployments out there?
spk09: So I just want to make sure you said total cost of ownership?
spk06: Yes, yes.
spk09: Yeah, I think it's hands down in favor of AEC. If we look at 100 gig lane rates, I think the conclusion throughout the market is that there are two ways to deploy short cabled solutions: it's really AEC or AOC. If we look at it from a CAPEX standpoint, AECs are about half the cost. If we look at it from an OPEX standpoint, also about half the cost, about half the power, half the ASP for an apples-to-apples type solution. So I think the TCO benefit is significant. The other thing you've got to consider, especially when you're down in server racks, is that these are different from switch racks in the sense that a failure with your cable solution becomes a very urgent item. And so when we think about AOCs and the reliability in terms of number of years, it is probably anywhere from one-third to one-tenth. The AECs that we sell, we talk about a 10-year product life, so it matches or exceeds the life of the rack that it's being deployed in. The same cannot be said for any kind of optical solution. So I think across the board, hands down, the TCO is much more favorable for AEC. Okay.
spk16: Thank you.
spk13: Thank you. One moment for the next question. And our next question will be coming from Quinn Bolton of Needham & Company. Your line is open.
spk08: Hey, guys. Thanks for the quick two follow-ups. One, Dan, was there any contra revenue in the April quarter?
spk04: That's an excellent question, Quinn. I'm glad you caught that. Actually, there was, and you will see that when we file our 10-K. In the past, you've been able to see that in our press release, in our GAAP to non-GAAP reconciliation. But from Q4 and going forward, we're no longer excluding that contra revenue from our non-GAAP financials. And this really came about through a comment from the SEC, not singling out Credo, but actually all suppliers with whom Amazon has a warrant. The positive side of this change is that you'll still be able to track ultimately what that warrant expense is, but when we file our Qs and Ks, looking historically, it doesn't really make a reporting difference on a non-GAAP basis. The difference was not material. And it just makes the calculation a little bit more straightforward going forward. Our only non-GAAP reconciling item going forward, or at least for the foreseeable future, is really just share-based compensation.
spk08: Got it. So the revenue doesn't change. You just won't be making the adjustments for the contra revenue in the non-GAAP gross margin calculation going forward? Got it.
spk04: That's exactly correct. Yeah, revenue is still revenue. It has a portion of it which is contra revenue, which obviously brings down the revenue a little bit.
spk08: Got it. Okay. And then for Bill, would you expect in fiscal 24 a meaningful initial ramp of the 200 or 400 gig optical DSPs, or would you continue to encourage investors to think that the optical DSP ramp is really beyond a fiscal 24 event at this point?
spk09: I think that when we think about significant, we think about crossing the 10% of revenue threshold. And we don't see that until fiscal 25. We do see signs of life in China. And as I said, we're shipping 400 gig optical DSPs to a U.S. hyperscaler now. My expectation is throughout the year we're going to have a lot more success stories to talk about, but those ramps will most likely not take place within the next three quarters. So it's really a fiscal 25 target at this point.
spk08: Got it. But it starts this year. You're not calling it meaningful because it doesn't hit 10% threshold.
spk07: Right. Exactly.
spk08: Got it. Okay. Thank you.
spk13: Thank you. One moment while we take a follow-up question. And that question will be coming from Tore Svanberg of Stifel. Your line is open.
spk05: Yes, Tore here. Bill, maybe a follow-up to the previous question about 200, 400 gig. I was a little bit more curious about 800 gig. Are you seeing any changes at all to the timelines there? I think the expectation was that the 800 gig market would maybe take off second half of next calendar year. But with all these new AI trends, I'm just wondering if you're seeing any pull-in activity there, or maybe even some cannibalization versus 200 gig and 400 gig.
spk09: I think my expectation is that this is really a calendar year 24 type of market takeoff. Whether it's the second half or first half, we, of course, would like to see it in the first half, given that that would imply success in pulling in AI programs. There's a lot of benefit that comes with 800 gig modules and the implications for our AEC business. But I definitely see it in that timeframe. I don't really see it as a cannibalization of 200 and 400 gig, unless you look at it as these new deployments being in lieu of the old technology. But like I said before, every hyperscaler has their own strategy for the port size that they plan on deploying. Everybody's got a unique architecture. Where we see optical is typically in the leaf-spine network, anything above the TOR. In AI, I think the real opportunity is going to be with AOCs. And that, I think, is going to be a very large 800-gig market when those AI clusters really begin deployment, which, again, I think could be in calendar 24. So I appreciate the question, though.
spk05: Great. Thank you.
spk13: Thank you. That concludes the Q&A for today. I would like to turn the call back over to Bill Brennan for closing remarks. Please go ahead.
spk09: Really appreciate the participation today, and we look forward to following up on the callbacks. So thanks very much.
spk13: This concludes today's conference call. Thank you all for joining and everyone enjoy the rest of your evening.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
