This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
Astera Labs, Inc.
11/4/2024
Hello, good afternoon. My name is Jeremy, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q3 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management's remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, simply press the pound key. We do ask if you're asking a question to limit yourself to one question plus one follow-up. Thank you. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.
Thank you, Jeremy, and good afternoon, everyone, and welcome to the Astera Labs Third Quarter 2024 Earnings Conference Call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in today's earnings release and the periodic reports we file from time to time with the SEC, including risks set forth in the final prospectus relating to our IPO. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events, or changes in our expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures are available in the earnings release we issued today, which can be accessed through the investor relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the investor relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our third quarter conference call for fiscal year 2024. Today, I will provide a quick Q3 financial overview and discuss several of our recent company-specific product and strategic announcements, followed by a discussion around the key secular trends driving our market opportunity. I will then turn the call over to Sanjay to delve deeper into our growth strategy. Finally, Mike will provide additional details on our Q3 results and our Q4 financial guidance. Astera Labs delivered strong Q3 results, setting our fifth consecutive record for quarterly revenue at $113 million, which was up 47% from the last quarter and up 206% versus the prior year. Our business has entered a new growth phase with multiple product families ramping across AI platforms, featuring both third-party GPUs and internally developed AI accelerators, which drove the Q3 sales upside versus our guidance. We also demonstrated strong operating leverage during Q3, with non-GAAP operating margin expanding to over 32%, and delivered non-GAAP EPS of 23 cents. Looking into Q4, we expect our revenue momentum to continue, largely driven by the Aries PCIe and Taurus Ethernet product lines, with Scorpio fabric switches continuing to ship in pre-production volumes. The criticality of connectivity in modern AI clusters continues to grow with trillion-parameter model sizes, multi-step reasoning models, and faster, more complex AI accelerators. These developments present a tremendous opportunity for Astera Labs' intelligent connectivity platform to enhance AI server performance and productivity with our differentiated hardware and software solutions. At the start of Q4, we announced our fourth product line, the Scorpio Smart Fabric Switch family, which expands our mission of solving the increasingly complex connectivity challenges within AI infrastructure, both for scale-out and for scale-up networks. Extending beyond our current market footprint of PCI Express and Ethernet retimer-class products and controller-class devices for CXL memory, our Scorpio Smart Fabric Switch family delivers meaningfully higher functionality and value to our AI and cloud infrastructure customers. We estimate that Scorpio will expand the total market opportunity for our four product families to more than $12 billion by 2028. The Scorpio family unlocks a large and growing opportunity across AI server head node scale-out applications with the P-Series, and AI accelerator scale-up clustering use cases with the X-Series. The P-Series devices directly address the challenge of keeping modern GPUs fed with data at ever-increasing speeds, while the X-Series devices improve the efficiency and size of AI clusters. The Scorpio family was purpose-built from the ground up for these AI-specific workloads in close alignment with our hyperscaler and AI platform partners. At the recent 2024 OCP Global Summit, we demonstrated the industry's first PCIe Gen 6 fabric switch, which is currently shipping in pre-production volumes for AI platforms. We are happy to report that we already have design wins for both Scorpio P-Series and X-Series, and that our recent product launches have further accelerated strong customer and ecosystem interest. Over the coming quarters, we expect to further expand our business opportunities for the Scorpio product family across PCIe Gen 5, PCIe Gen 6, and platform-specific customized connectivity platforms.
Further expanding our market opportunity, Astera Labs has joined the Ultra Accelerator Link (UALink) Consortium as a promoting member on the board of directors, along with industry-leading hyperscalers and AI platform providers. This important industry initiative places us at the forefront of developing and advancing an open, high-speed, low-latency interconnect for scale-up connectivity between accelerators. Astera Labs' deep expertise in developing advanced silicon-based connectivity solutions, along with a strong track record of technology execution, makes us uniquely suited to contribute to this compelling and necessary industry initiative. With a shift towards shorter AI platform refresh cycles, hyperscalers are increasingly relying on their trusted partners as they deploy new hardware in their data center infrastructure at an unprecedented pace and scale. To date, we have demonstrated strong execution with our Aries, Taurus, Leo, and now Scorpio product families. Our products increase data, networking, and memory bandwidth and capacity, and our Cosmos software provides our hyperscaler customers with the tools necessary to monitor and observe the health of their expensive infrastructure to ensure maximum system utilization. To conclude, Astera Labs' expanding product portfolio, including the new Scorpio smart fabric switches, is cementing our position as a critical part of AI connectivity infrastructure, delivering increased value to our hyperscaler customers and unlocking additional multi-year growth trajectories for Astera Labs. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy. Thanks, Jitendra, and good afternoon, everyone.
We are pleased with our Q3 results and strong financial outlook for Q4. Overall, we believe we have entered a new phase as a company based on two key factors. First is the increased diversity of our business. In Q3, our business diversified significantly with new product lines and form factors going to high-volume production. We also started ramping on multiple new AI platforms based on internally developed AI accelerators at multiple customers, to go along with the continued momentum with third-party GPU-based AI platforms. These together helped achieve record sequential growth in Q3. Second, with the introduction of Scorpio smart fabric switches, our market opportunity has significantly expanded. This new product line delivers increased value to our hyperscaler customers and, for Astera Labs, unlocks higher dollar content in AI platforms and additional multi-year growth trajectories. Let me explain our business drivers in more detail. As noted, we now have multiple product lines, generations, and form factors in high-volume production. This includes Aries smart DSP retimers and smart cable modules for PCIe 5.0, and Taurus smart cable modules for 200 gig and 400 gig Ethernet active electrical cables. We are also shipping pre-production volumes of Leo CXL memory controllers, Aries retimers for PCIe 6.0, and Scorpio fabric switch solutions for PCIe head node connectivity. All these new products deliver increased value to our customers and therefore command higher ASPs. Our first-to-market Scorpio PCIe Gen 6 fabric switch addresses a multi-billion-dollar opportunity with a ground-up architecture designed for AI data flows, and delivers maximum predictable performance per watt in mixed-mode PCIe head node connectivity compared to incumbent solutions. We are currently shipping Scorpio P-Series in pre-production quantities to support qualification for customized AI platforms based upon leading third-party GPUs. Interest in Scorpio P-Series has accelerated since the formal launch given its differentiated features, and as a result, we are engaging in multiple design opportunities across a diverse spectrum of AI platforms. These include both PCIe Gen 6 and PCIe Gen 5 implementations on third-party GPU and internal accelerator-based platforms. Overall, we are very bullish on the market for our entire product portfolio across third-party GPU-based systems, with increasing dollar content per GPU on our new design wins. And we believe that Astera's opportunity with internally developed AI accelerator platforms can be even larger, with opportunities both in the scale-out and back-end scale-up clustering use cases. This additional market opportunity has unlocked design activity for the Aries and Taurus product lines for reach extension within and between racks, and for our newly introduced Scorpio X-Series fabric switches for homogeneous accelerator-to-accelerator connectivity with maximum bandwidth. The Scorpio X-Series is built upon our software-defined architecture and leverages our Cosmos software suite to support a variety of platform-specific customizations, which provide the hyperscalers with valuable flexibility. As we look ahead to 2025, we will begin to ramp designs across new internally developed AI accelerator-based platforms that will incorporate multiple Astera Labs product families, including Aries, Taurus, and Scorpio. As a result, we will continue to benefit from increased dollar content per accelerator in these next-generation AI infrastructure deployments.
Though we remain laser-focused on AI platforms, we continue to see large and growing market opportunities within the general-purpose compute space for our PCIe, Ethernet, and CXL product families. While the transition to faster bandwidth requirements within general-purpose computing trails the leading-edge adoption across AI systems, the market size remains substantial. Our Aries business within general-purpose compute is poised to benefit from the transition of PCIe peripherals to Gen 5 speeds and the introduction of new CPU generations from Intel and AMD. We are also ramping volume production of our Taurus SCMs for front-end networking across hyperscaler general-purpose server platforms. We expect to see broadening adoption of Leo CXL memory controllers across the ecosystem as CXL-capable server CPUs are deployed in new cloud infrastructure over the coming years. In summary, we believe Astera Labs has entered a new growth phase, and we are well positioned to outpace industry growth rates through a combination of strong secular tailwinds and the expansion of our silicon content opportunity in AI and general-purpose cloud platforms. We have become a trusted and strategic partner to our customers, with over 10 million smart connectivity solutions widely deployed and field tested across nearly all major AI infrastructure programs globally. The introduction of the Scorpio smart fabric switch family and our strategic involvement with the UALink standard for scale-up connectivity is the next critical step in our corporate journey. We are hard at work collaborating with our partners to identify and develop new technologies that will expand Astera's footprint from retimer solutions for connectivity over copper within the rack to fabric and optical solutions that connect AI accelerators across the data center. While we have come a long way, there's a remarkable sense of urgency and energy within the company for the opportunities that lie ahead. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q3 financial results and our Q4 outlook.
Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q3 financial results and Q4 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the investor relations section of our website, for more details on both our GAAP and non-GAAP Q4 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q3 of 2024, Astera Labs delivered record quarterly revenue of $113.1 million, which was up 47% versus the previous quarter and 206% higher than the revenue in Q3 of 2023. During the quarter, we shipped products to all major hyperscalers and AI accelerator manufacturers, and we recognized revenue across all four of our product families. Our Aries product family continues to be our largest sales contributor and helped drive the upside in our revenues this quarter. Aries revenues are being driven by continued momentum with third-party GPU-based AI platforms, as well as strong ramps on new platforms featuring internally developed AI accelerators from multiple hyperscaler customers. Also in Q3, Taurus revenue started to diversify beyond 200 gig applications with an initial production ramp of our 400 gig Ethernet-based systems, which are designed into both AI and general-purpose compute systems. Q3 Leo CXL revenues continued to be driven by our customers purchasing pre-production volumes for the development of their next-generation CXL-capable compute platforms. Lastly, we began to ship pre-production volumes of our recently announced Scorpio Smart Fabric Switch family during Q3. Q3 non-GAAP gross margin was 77.8%, down 20 basis points compared with 78% in Q2 of 2024 and better than our guidance of 75%, driven by higher sales volume and a favorable product mix. Non-GAAP operating expenses for Q3 were $51.2 million, up from $41.1 million in the previous quarter, driven by greater-than-expected hiring conversion during the quarter as we aggressively pushed to support additional commercial opportunities. Within Q3 non-GAAP operating expenses, R&D expenses were $36 million, sales and marketing expenses were $7 million, and general and administrative expenses were $8.3 million. Non-GAAP operating margin for Q3 was 32.4%, up from 24.4% in Q2, and demonstrated strong operating leverage as revenue growth outpaced higher operating expenses. Interest income in Q3 was $10.9 million. Our non-GAAP tax provision was $7.3 million for the quarter, which represented a tax rate of 15% on a non-GAAP basis. Non-GAAP fully diluted share count for Q3 was 173.8 million shares, and our non-GAAP diluted earnings per share for the quarter was 23 cents. Cash flow from operating activities for Q3 was $63.5 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $887 million. Now turning to our guidance for Q4 of fiscal 2024, we expect Q4 revenue to increase to within a range of $126 million to $130 million, up roughly 11% to 15% from the prior quarter. For Q4, we expect continued strong growth from our Aries product family across a diverse set of AI platforms, some of which are just starting to ramp, and also from our Taurus SCMs for 400 gig applications and additional pre-production shipments of our Scorpio P-Series switches. We expect non-GAAP gross margins to be approximately 75%.
The sequential decline in gross margins is driven by an expected product mix shift towards hardware solutions during the quarter. We expect non-GAAP operating expenses to be in a range of approximately $54 million to $55 million as we continue to expand our R&D resource pool across headcount and intellectual property. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%. And our non-GAAP fully diluted share count is expected to be approximately 179 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately 25 cents to 26 cents. This concludes our prepared remarks. And once again, we appreciate everyone joining the call. And now we will open the line up for questions. Operator?
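As an illustrative aside, the guided Q4 figures above can be tied together with simple arithmetic. The short Python sketch below is a hypothetical back-of-the-envelope reconstruction using midpoints of the guided ranges; the midpoint choices are assumptions for illustration, not company-provided math.

    # Hypothetical reconstruction of the guided Q4 FY2024 non-GAAP EPS range,
    # using midpoints of the ranges stated on the call (illustrative only).
    revenue_mid = (126.0 + 130.0) / 2   # $M, midpoint of guided revenue range
    gross_margin = 0.75                 # ~75% non-GAAP gross margin
    opex_mid = (54.0 + 55.0) / 2        # $M, midpoint of guided opex range
    interest_income = 10.0              # $M, approximate guided interest income
    tax_rate = 0.10                     # ~10% non-GAAP tax rate
    diluted_shares = 179.0              # millions of fully diluted shares

    gross_profit = revenue_mid * gross_margin
    operating_income = gross_profit - opex_mid
    pretax_income = operating_income + interest_income
    net_income = pretax_income * (1 - tax_rate)
    eps = net_income / diluted_shares

    print(f"Implied operating margin: {operating_income / revenue_mid:.1%}")  # ~32.4%
    print(f"Implied non-GAAP EPS: ${eps:.2f}")  # ~$0.26, consistent with the 25-26 cent guide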
All right. Thank you. And as a reminder, if you would like to ask a question today, please simply press star followed by the number one. I'll give you one moment to compile the roster. Okay, our first question comes from Harlan Sur from JPMorgan. Please go ahead.
Good afternoon and congratulations on the strong results. You know, your core retimer business looks, you know, very strong this year and looks strong next year. The majority of the XPU shipments are still going to be, I think, Gen 5 based, where your market share is still somewhere in the range of, I think, like 95%. And your customers, you know, both merchant and ASIC, are ramping new SKUs starting second half of this year, first half of next year. You've also got the ramp of your Gen 6 products, retimers, Scorpio switch products with your lead GPU customer, which they're ramping now. AEC firing, SCM is firing. So given all of this activity, I assume your visibility and backlog is quite strong. Maybe if you can just qualitatively sort of articulate your confidence on driving sequential growth over the next several quarters, or at least visibility for a strong first half of next year versus second half of this year.
Yeah, thanks, Harlan. Yeah, right now our visibility is very strong, both, you know, as always with our backlog position, but also the breadth of designs we have. Right now, we're really kind of entering a new phase of growth here where our revenue streams are clearly diversifying. If you look at right now versus a year ago, we're very excited about the breadth of designs and product lines that we're ramping. So in the back half of this year, obviously, Taurus has been very incremental, and that continues in Q4. And also, the programs that we're ramping on Taurus are just getting started. So we have good confidence that Taurus will continue to grow nicely into 2025. Aries, as you alluded to, the Gen 5 story has still got a lot of legs, both with the merchant GPUs but also on these internal ASIC designs, where a lot of those programs are just starting. And with Gen 5, you know, we have both the front-end connectivity and the back-end connectivity. Incrementally, Gen 6 will start to play out as well, and with Gen 6, we get an ASP boost on top of that. And then finally, with Leo, you know, we've been talking about Leo for quite a while, but these are new types of technologies that are being adopted in the marketplace, and we're happy to report that we do have line of sight to our first production shipments starting in the middle of the year next year.
Perfect. Thank you for that. And then on the early traction you're getting with your Scorpio switch portfolio, you know, the team has talked about some of the performance benefits, both at the silicon level and chip level, but how much of the differentiator is your Cosmos software stack, which your customers have already built into their data center management systems, having adopted it with your Aries retimer products? And now that same software stack is integrated into your switch products, which enables telemetry, link performance, link monitoring, tuning of parameters, all that kind of stuff, which is so critical for data center managers. So is the software stack familiarity, you know, current adoption, you know, sort of a big differentiator on the traction with Scorpio?
Harlan, this is Jitendra, and you're absolutely right. There are several things that differentiate the Scorpio family. First and foremost, I would say that we built this ground up for AI applications. If you think about historical deployments of PCI Express switches, they were generally built for server applications, for storage, and things like that, which are quite different from AI. So we designed the chip for AI applications. We put in the performance that's required for these AI applications. In addition, even the form factor has been designed for AI. So rather than building a large switch, we ended up building a smaller device such that you're not running these high-speed signals all over the board. So all of that is very purpose-built for AI. Now to your point about software, as you remember, our chips are architected with a software-first approach. So we can deliver a lot of customization based on Cosmos, which is something that our hyperscaler customers are looking for, especially for the X family, which is deployed in a more homogeneous GPU-to-GPU connectivity. Where the Scorpio family sits, we have access to a lot more diagnostic information, and we can couple that with the information that we are collecting from our other families deployed, such as Aries and even Taurus, to provide a holistic view of the AI infrastructure to the data center operators. So both the hardware side, the purpose-built nature of these devices, as well as the software stack that comes with it, is a big differentiator for us.
Very insightful.
Thank you. Our next question comes from the line of Joe Moore from Morgan Stanley. Please go ahead.
Great. Thank you, and congratulations on the numbers. I wonder if you could talk about the Scorpio business; you gave a number for 2028. Can you give us a sense for what that ramp looks like? Any kind of qualitative discussion of how big it could be in calendar 2025 would be very helpful. Thank you.
Yeah, thanks, Joe. What's exciting is, you know, since our public release, we're getting a lot of incremental excitement from the customer base. And what's really neat about the Scorpio product family is the diversity of designs that we're seeing. So clearly, being first to Gen 6, there's a lot of interest in that application. But there's still a lot of Gen 5 opportunities that are developing that we're addressing as well, because a Gen 6-capable switch is backwards compatible. So both design opportunities are open for us. And then incrementally, we have the X-Series that does the back-end connectivity, and that's kind of a greenfield opportunity as well. So these designs are generally a little more customized systems. The bring-up and the qualification cycle is a little bit longer, so we take a conservative view on how they ramp, like we always do with most of our business. But overall, given that, we expect Scorpio to exceed 10% of our revenues in 2025 as the deployments get into production during the course of the year and exit the year at a very good run rate and good momentum going into 2026.
That's great. Thank you. And then on LEO, you talked about some of the ramps there. I guess this application in particular of these large memory servers being able to actually reduce the CPU count and maintain this high memory bandwidth, can you just talk about that application, and are you seeing that as a material factor next year as well?
Yeah, so Joe, this is Sanjay here. Yes, I think, like we've been maintaining, with any new technology, it takes a little bit of time for things to mature. So right now, the way we look at CXL is it's a transition from sort of the crawl stage to the walk stage. There are three or four key use cases that have emerged. One of them is what you noted, which is the large in-memory database applications. And there, the use case becomes one of how do you enable more memory capacity. In the past, this was done by adding additional CPUs into the server box to provide for more memory channels. But what we have demonstrated is that by using Leo, you're not only able to get the higher performance from the added memory, but from an overall TCO standpoint, it's significantly less than adding additional CPUs. So that's one key use case that we see from a deployment standpoint. But having said that, I think at OCP this year, you might have seen some of our videos and all that, there's been a tremendous amount of different platform-level solutions being deployed for various high-bandwidth applications, HPC applications, including some of the rack-level disaggregation type of use cases. So to that standpoint, there are many different ways in which the technology can develop. But just like any new technology, it will take some time before the requisite ecosystem and software is built. So we are in that period right now, getting those pieces in place. And 2025 is when we expect the production volumes to begin.
Great. Thank you.
Our next question comes from the line of Blaine Curtis from Jefferies. Please go ahead.
Hey, good afternoon, and congrats on the results. I want to ask you, last quarter you kind of talked about the September growth. I think it was like 20 million, and you kind of loosely said it was kind of one-third retimers, one-third the PCIe cabling, and one-third Taurus. I'm not expecting you to dial this in completely, but you kind of did double that. So I'm kind of just curious the strength you're seeing between your products, if you could add a little bit more color. And then also just between AI accelerators and GPUs, obviously the big GPU vendor has a ramp coming with Blackwell, but that's not exactly now. So I'm just kind of curious, you know, what's driving the strength in September and December a little bit more.
Yeah, thanks, Blaine. Yeah, you know, in Q3, our business really benefited from the strong contribution of multiple product lines. And, you know, the Aries SCM and Taurus both had, you know, really big, strong ramp quarters. Those ramps were, you know, consistent with our expectations. The upside to the guidance was driven largely from Aries revenue, both for the third-party GPUs, but also with the strong ramps on new platforms with the internally developed AI accelerators. And we're seeing that across multiple hyperscaler customers, so it's not just one. So the upside was largely driven by that Aries revenue.
Thanks. And then, Mike, I want to ask you about the addition of the Scorpio product line. Before, you kind of talked about how when some of the products like Taurus or the PCIe modules ramped, it would be a little bit dilutive to margins because it's not a chip sale. How do you think about it if you do have switches greater than 10%, you know, maybe switch margins versus Aries? How do we think of that blend next year?
Yeah. So for Scorpio, there will be a broader range of margins. There are different use cases, so it depends on the functionality and the volume. But at this point, we believe Scorpio will not impact our long-term gross margin target of 70%, and it was kind of contemplated when we set up those targets. I'd say overall, beyond just Scorpio, though, we are seeing a wider range of margin profiles across all our product portfolios, so, you know, mix will be an important contributor from a quarter-to-quarter perspective. But we still, you know, feel good about the 70% target.
Thanks, Blaine. Our next question comes from the line of Thomas O'Malley from Barclays. Please go ahead.
Hey, guys. Thanks for taking my questions. My first one's just on the X-Series for Scorpio. I think this is the first real kind of back-end switch that we've seen in the marketplace for PCIe. Could you talk about your positioning there, how far you think you're ahead, and would you expect the same kind of competitive dynamic that you're seeing in the P-Series switch? Just talk about where you are competitively, and from an opportunity perspective, do you think longer term the X-Series is a bigger opportunity than the P-Series?
Tom, let me take that question. This is Jitendra. So you have three questions in there, so you don't get a follow-up. For the first question, let me start with the P-Series. The P-Series is actually broadly applicable to all of the accelerators. All of the accelerators require connectivity from the GPU or the accelerator to the head node or to the NIC and so on. So the P-Series is applicable to all of them. The P family for us supports PCI Express Gen 6. So that's where the deployment will happen. I already mentioned some of the advantages of the family. But at the same time, we should not underestimate the Gen 5 socket. There are also Gen 5 designs that are taking place right now, both with the third-party GPUs as well as with internally developed ASICs. So I think that's the market opportunity we see with the P-Series. We estimate the TAM for the P-Series to be about a billion dollars, plus or minus, today, and growing to double that over time. But you're also correct that we do think over the longer period, the X-Series will have a bigger TAM. The TAM today is nearly zero; it's not very commonly used outside of the NVIDIA ecosystem. We do expect many hyperscalers to start deploying this, starting with the X family and the designs that we have. And we are able to do that because of the architecture that we have. Because of our software-defined architecture, we can customize many parts of the X-Series to cater to the specific requirements of the hyperscalers, both on the side of performance, the exact configuration that they require, lane counts and so on, and also the diagnostic framework that they require to monitor their infrastructure. So over time, we do expect the X-Series to become larger. Now, I also mentioned during the remarks that we have joined the UALink Consortium, and that gives us another market, another opportunity where we can play with back-end interconnect.
Helpful. Let me sneak in another one. I know I broke the rules there on the first one, but the second is just on the Taurus product family. So historically, you've been concentrated within one customer. Can you talk about the breadth of your engagements there? Has that kind of expanded to multiple customers? And when you look out into next year, is that going to be largely consolidated to one or two customers? Do you see that kind of proliferating across multiple? Thanks again.
Yeah, this is Sanjay here. Let me take that. Yeah, so 2025 is the year that we think, you know, the business will get broader. As we've always said, AEC, or active electrical cable, is a case-by-case situation, meaning it's not like every hyperscaler uses active electrical cables. So to that standpoint, we do expect, as data rates go higher with 800 gig and so on, that market to be much more diversified than what it is today. With that said, I mean, today, if you look at our business, we do have our AEC modules, or Taurus modules, going into both AI and compute platforms. There are different kinds of cables in terms of the various configurations that they need to support. So overall, I want to say we are fairly diversified with our business today, but as the speeds increase, in 2025 and beyond, we do expect that the customer base will continue to evolve. With the note that I highlighted, every infrastructure will be different. The places where AECs would be used will differ between the various hyperscalers.
Our next question comes from the line of Tore Svanberg from Stifel. Please go ahead.
Yes, thank you, and congrats on the strong results. You mentioned that PCIe Gen 6 is going to be in pre-production this quarter. When do you expect it to be in actual volume production? Would that be in the first half of next year, Q1?
I want to say it depends on the customer's timeline, so we don't want to speak for any of our customers on what they are communicating. But in general, what I would say, though, similar to what we've shared with you in the past, is that our design wins are in customized rack implementations. So to that standpoint, the timing of qualification and deployment would be based on that. But in general, between the initial design wins we had to where we are now, we are engaging with multiple opportunities, both for Gen 5 and Gen 6, and both for third-party GPUs as well as internally developed GPUs. So to that standpoint, our opportunities on Scorpio continue to grow. And as you look at 2025 overall, like Mike suggested, we do expect the contribution from Scorpio to be in excess of 10% of overall revenue. That's very helpful.
And I had a follow-up question on Scorpio in relation to your PCIe retimer business. So would those typically pull each other? Meaning, you know, could there be instances where there's a switch, a PCIe switch, with somebody else's retimers? Or, I mean, do they pretty much go hand in hand, especially given the Cosmos software?
So if you look at today's deployments with Gen 5, at least from an industry snapshot standpoint, it's mix and match, right? Meaning our retimers get used with switch products from other vendors. We have won because our software-based architecture allows us to uniquely customize and optimize for different system-level configurations. So that is what it is today. Going forward with Cosmos, we do see an advantage because we have integrated the management framework, the customization framework, and the optimization type of feature set across all of our products. Meaning if a customer is using Cosmos today for Aries, they will very easily be able to extend the infrastructure that they've already built to run on top of our Scorpio devices. That's a unique advantage we bring compared to some of the alternatives out there. Very helpful. Thank you.
Our next question comes from Ross Seymour from Deutsche Bank. Please go ahead.
I think so, yes. A couple questions, and congrats on the strong results. The first question, Jitendra, you mentioned, and Sanjay, you too, the importance of the ASIC side of the business really ramping up strongly. What was the inflection point that's really driving that? Is that something where the market itself is just getting stronger, or is there something with the inflection point of your technology that's being adopted and penetrating the market faster?
Yeah, so I think in terms of the ASIC designs, I think it's fairly public knowledge now that all the hyperscalers have doubled down on the amount of investment they're doing for their own internal ASIC programs. The third-party GPUs obviously have done a great job, but also hyperscalers are starting to realize where the money is in terms of their AI use cases and workloads. So to that standpoint, we have been seeing an increased investment from hyperscalers in terms of their internal programs. And we are, of course, addressing those across all of our product lines. So if you look at our business today, like we highlighted in the prepared remarks, we have truly entered a new phase in terms of our overall business, where we not only have the third-party GPU-based designs, we also have several internally developed accelerator-based AI platforms, and then we have multiple of our product lines that are ramping on these platforms. The one caveat, one additional point I would note, is that for the internally developed AI platforms, we get to play not only in the front-end network connecting the GPU to CPU and storage, we also get to play in the back end, which generally tends to be, like I call it, a fertile land where there are multiple connectivity requirements that we can service with our Aries and Taurus product lines, and now, of course, with the Scorpio X-Series product line.
Thanks for that. And I guess, as my one follow-up, a quarter ago we were having significant debates about your statements about the average content per GPU. Obviously, that's not as big a topic today now that we know about Scorpio in more detail. But if I revisit that, and you still talk on this call about the average content going up, is that just because of Scorpio, something you had in your back pocket before that you obviously couldn't tell us about, or do you still believe the retimer content in some way, shape, or form will still be bigger on most of these platforms going forward, especially for the third-party merchant suppliers?
Yeah. So I think when we talked about it before, there were two reasons we highlighted. One is, generally speaking, with each new generation of a protocol like PCIe going from Gen 5 to Gen 6, there is an ASP uplift. That's number one. Number two, of course, we were hinting at the Scorpio product line, which, because of the value it delivers to customers, is at a higher ASP, as you can imagine. So overall, if you look at the design wins we have today, the dollar content per GPU goes up. That's one way to look at it based on what we've shared before. The other way to look at it is that for internally developed platforms, we get to play also in the back-end network, like I noted, and we get to also address some of these scale-out networks that are based on Ethernet using our Taurus modules. So overall, if you look at sort of the increasing speeds, the additional product lines, as well as the fact that the internally developed AI accelerator-based platforms are starting to gain more and more traction, when you look at all of them, on average, our content is on the up.
Thank you.
Our next question comes from the line of Quinn Bolton from Needham. Please go ahead.
Hey, guys. Thanks for squeezing me in. I guess I wanted just a follow-up clarification, maybe. But for the Scorpio family being over 10% of revenues, is that largely from the P-Series, or would you expect any X-Series production revenue in 2025, given the longer, I think, design-in cycle for the back-end scale-up networks?
We have designs for both. Both P and X will contribute to revenues. The P designs will generally be first, but we do see X starting in the back half of the year as well.
Excellent. And then can you give us some sense for the P-Series and the X-Series? On the retimer side, I think the market sort of generally looked at, you know, the Aries retimer attach rate per GPU or per accelerator. You know, is there any framework you could provide for us for P-Series and X-Series? Would you expect a typical, you know, attach rate per GPU or accelerator? Would that be one-to-one? Would it be less than one-to-one? Could it be higher than one-to-one? Any thoughts on attach rates for P-Series and X-Series? Thank you.
Yeah, so I'd say it's a very broad question because there are all kinds of implementations that are out there. At a high level, I'll probably share three points. The first is that the P-Series is broadly applicable in the sense that it could work for a third-party GPU or an internally developed accelerator, because every accelerator, doesn't matter where it comes from, needs to connect to the head node side, which generally includes the networking, the storage, as well as the CPU. So to that standpoint, that will be a very broadly used device. And when it's used, it's one-to-one, meaning every GPU would need one of our Scorpio P-Series devices. That's number one. Number two is the X-Series. Now, these are generally used for GPU-to-GPU interconnect. So to that standpoint, depending on the configuration, the number of devices is a function of the number of ports that an X-Series device exposes, and it really depends on how the back-end fabric is built. And to that standpoint, again, it truly depends on how the configuration is being built. And this one, like Mike noted, is a greenfield use case, meaning if you keep NVIDIA and their NVSwitch aside, everyone else is starting to build configurations that are obviously going to need some kind of a switching functionality, which is what we're addressing with our X-Series device. So that's the second point to keep in mind. And then in general, what I would say is that overall, you know, depending on where things are and how big of a chip we're building, the attach rate will continue to evolve. But in general, the dollar content that we're talking about is expected to continue to grow, both because we are adding more functionality with devices like Scorpio, and at the same time, we are seeing additional pull for products like Aries and Taurus and other things that we're working on.
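To illustrate the point that the number of back-end switch devices is a function of per-device port count and how the scale-up fabric is built, here is a minimal, hypothetical Python sketch. The accelerator count, lanes per accelerator, and per-switch lane count below are made-up illustration parameters, not Astera Labs product specifications.

    import math

    # Toy model of a single-tier scale-up fabric: how many switch devices are
    # needed so every accelerator gets its full lane count into the fabric?
    # All parameter values are hypothetical and for illustration only.
    def switches_needed(num_accelerators: int,
                        lanes_per_accelerator: int,
                        lanes_per_switch: int,
                        uplink_fraction: float = 0.0) -> int:
        # Reserve a fraction of each switch's lanes for uplinks or other traffic,
        # then divide total accelerator-facing lane demand by what remains.
        accel_facing_lanes = lanes_per_switch * (1.0 - uplink_fraction)
        total_accel_lanes = num_accelerators * lanes_per_accelerator
        return math.ceil(total_accel_lanes / accel_facing_lanes)

    # Example: 8 accelerators, each connecting x16 into the fabric, with a
    # hypothetical 64-lane switch device -> 2 devices in this toy configuration.
    print(switches_needed(num_accelerators=8, lanes_per_accelerator=16, lanes_per_switch=64))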
Our next question comes from the line of Mark Lipacis from Evercore ISI. Please go ahead.
Hi, thanks for taking my question. A question on the diversification. If you think just longer term, you can diversify by your customer base and then by your product line. So, you know, pick your time in the future, three years out, five years out. What do you think your split will be between, you know, the merchant GPU player versus, you know, the custom AI accelerator player, like how your products will be attached to either type of solution? And then, you know, let's just say three years out, you have five product lines. Would you expect to still have a skew to one, or would you expect to have like 20% in each product line bucket, or something like that? If you could help us out with how you're thinking about diversification, like three years out, I think that'd be helpful. And then I had a follow-up. Thanks.
Mark, sorry, we're not going to be too much of a help. But what I will say, this is Jitendra, by the way, what I will say is we like all of the growth vectors that you mentioned. We are going to grow with the third-party GPUs. We are already seeing significant growth with the internally developed ASIC platforms. All of our product families are contributing to revenue, as Sanjay and Mike said. And we are hard at work building new products, both new generations of the devices we already have, as well as new product families, that will bear fruit over the next three to five years. So in terms of which will become more dominant or less dominant, fortunately, we don't need to predict that. We are very happy to service both opportunities. What I would say, as Sanjay mentioned earlier, is the hyperscalers are putting a lot of dollars into building out their own internal solutions and deploying them and customizing them to their own infrastructure requirements. And because we are able to play in the back-end interconnect with those customers, we do expect that to, over time, become a larger portion of our revenue. Gotcha. That's very helpful.
And a follow-up, if I may, maybe for Mike. You guys delivered a lot of upside to the outlook you originally provided. Can you help us understand the mechanics of how you're able to deliver that kind of upside? Do you necessarily have to have the inventory on your balance sheet in order to meet that? Can you get an upside to an order and then have the chip, you know, fabbed to a die and then packaged, assembled, and tested within a quarter? If you could just help us think about how you met the upside and, you know, the potential to do that again going forward. Thanks.
Sure. Yeah, we always want to be able to supply to upsides to our customers' needs, given that we're such a critical component in their systems. So, as you look at our balance sheet, we typically carry a pretty healthy level of weeks of inventory. Now, you see in Q3 that those weeks dropped down, you know, reasonably from the previous quarter. So that was inventory that was there to supply into the upsides. Having seen that play out in Q3, we've already started to increase our inventory buys to get those days or weeks of inventory built back up for Q4. You know, we never want to be in a position where we can't meet our customers' demands, even if they come in in the short term. We also do that so they don't feel like they need to stock up on their end. So we try to keep a very lean inventory channel at the same time.
Very helpful. Thank you. Just to add to that, one thing I would say is that the way Astera's supply chain was designed from the early days was to address the demand profile of what you would expect from hyperscalers, which folks that have served hyperscaler customers will understand. It's a very volatile type of thing in terms of how the upsides come in. So what we have done is, for each one of our products, made sure that there are multiple sources, if you will, in terms of OSAT and substrate vendors and all that stuff. So it's been done very thoughtfully, very purpose-built in some ways for the demand profiles that we expect in these kinds of markets. So we stand to gain from some of the work that's been done as part of the initial supply chain structure and how we have laid it out. Very helpful. Thank you.
Our next question comes from the line of Suji De Silva from Roth Capital. Please go ahead.
Hi, Jitendra, Sanjay. My congrats on the strong quarter here. On the mix of revenues, I'm not really sure where Taurus is this year, but looking ahead to next year, I'm curious how much that grows and helps diversify the products. Is it going to be roughly a third, kind of plus or minus? Is that too high? Just to get a sense of how Aries and Taurus diversify next year.
Clearly for Q4, it's going to increase as a percentage of the Q4 revenues. Beyond that, given the broadening, you know, of Aries, Scorpio, and other contributors, I don't think we can be as granular as to predict what it represents as a percentage of revenues, but it definitely will be one of our, you know, nice, strong growth drivers for the company.
Okay, Michael, that's helpful. Thanks. And then just trying to understand the share situation on Aries. In Gen 5, clearly you're very strong here. Is the expectation that in Gen 6 you guys maintain a similar share, or will there be competitive share opportunities across that? Just what are the thoughts on Gen 6 share for Aries versus a very strong Gen 5 share?
I mean, obviously, we have a biased look on this. We have been sampling our Gen 6 devices since February of this year. And clearly, we have learned a lot in the last three, four years by being deployed in several different AI platforms worldwide. So we've taken the learnings from it. We've already been engaged with the market with our Gen 6 retimers. We've already learned a lot about how some of the Gen 6-based systems are being designed and deployed, including some of the customizations that are required for the various accelerators. So to that standpoint, we do believe that we have a strong start, and we have backlog and everything to support that. We've been shipping that for some time now from a pre-production standpoint. So overall, we do believe that we'll continue to gain ground. But having said that, this is a big market, and we do expect competition to come in. There are no surprises on that front.
That's very helpful. Thank you.
Thanks, guys.
Our final question comes from the line of Richard Shannon from Craig-Hallum. Please go ahead.
Well, thanks, guys, for taking my questions. Maybe one, I guess, for either Jitendra or Sanjay regarding Cosmos. Maybe you can talk about any competitive response you're seeing from others who are getting into this market, either from the retimer or switch side or both. It seems like their press releases have alluded to capabilities in this direction. Obviously, your first-to-market move seems to have a lot of stickiness, and I'm sure you're not standing still and are moving forward here. Maybe you can talk about what you're seeing there from a competitive response and how you're trying to maintain that barrier to switching, please.
That's a great question. We've been talking about our Cosmos software fairly openly for about a year now. And so it's kind of fair to assume that other people will want to try to copy something similar. But what you have to realize is Cosmos software is not just a collection of APIs and so on. It absolutely is that, but there is also a lot of knowledge that has gone into the software by us being in the trenches over the last three to four years, understanding what works in the ecosystem, what issues some of the ASICs and other products have, and how we work around them. So all of those learnings have gone into this Cosmos software to make it very rich. And as Sanjay mentioned, we've been sampling our Gen 6 product for over six months now. And all of the learnings that we've seen firsthand by being shoulder to shoulder with our customers have again gone in to make our Cosmos system software very rich and reliable. So anybody who wants to come in will have to have that level of experience and soak time to make their software as good. Now, the other interesting thing is, with the launch of the Scorpio family, we get even more access to diagnostics and what's going on with network congestion and things like that, which we can now very uniquely enable for our customers, and which somebody without all of these components in play would find difficult to do. So we believe Cosmos is a very differentiated software for us, and we continue to make it better and better over time.
And just to add to what Jitendra said, again, with Cosmos, you've got to look at it in two ways. There is the part that runs in the operating stacks of the hyperscalers, but there is also the part that runs within our chip. The point I'm trying to make is that our chips have been defined and developed with a software-centric approach. So to that standpoint, it's not just about the diagnostics. It's about how the fundamental architecture of the chip is done in order to make it more of the eyes and ears from a diagnostic and treatment standpoint, but also in terms of customization and the various different things that hyperscalers care about in terms of making it uniquely fit into their infrastructure requirements.
Wonderful. Thanks for all that detail. My follow-on question: while I'd love to ask yet another question on the Scorpio line, which is obviously a very interesting product announcement, maybe I'll ask one on the Leo product line in that general area. I guess, kind of looking back from, say, the beginning of the year or a year ago, when hyperscalers were looking at CXL and trying to examine the use cases that made sense, I want to get your sense of whether you think they're developing and getting certainty on them at the pace and in the timeframe that you originally thought. You obviously talked about a design, I think, ramping by the middle of next year. Is that kind of the timeframe you expected back then? Or maybe just kind of talk about the hyperscalers' experience in figuring out those use cases.
It's a combination of both, right? In the sense that CXL is a new technology, there needs to be a clear use case and ROI. And that, we believe, at this point is starting to get established based on the work that we have done and others have done from a CPU and memory vendor standpoint. So that's becoming clear. But I think probably a year, two years ago, maybe there was a little bit of overhype on things like pooling and other use cases that got played up. But I think those have been clarified now, so the use cases are much more real. And to that standpoint, it's really a matter of dollars flowing back into the general compute area, and the CPU launches, which of course both Intel and AMD did recently with support for CXL. So I think it's all starting to add up, but I will add the caution, like I noted before: I think we're still in that, you know, crawl-to-walk stage. There is some more work that needs to be done before we are up and running at full speed with CXL.
Wonderful. Thanks for that detail, and congratulations on the great numbers, guys. Thank you. Thank you. Thank you. All right, there are no further questions at this time. I turn the call back over to Leslie Green for closing remarks. Thank you, everyone, for your participation and questions, and we look forward to updating you on our progress. This concludes today's conference call. You may now disconnect.