This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Astera Labs, Inc.
5/6/2025
Welcome to the Astera Labs Q1 2025 earnings conference call. All lines have been placed on mute to prevent any background noise. After the management remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, again, press star and the number one. Thank you.
It is now my pleasure to turn the call over to Leslie Green, Investor Relations, Astera Labs. Leslie, you may begin.
Thank you, Amy. Good afternoon, everyone, and welcome to the Astera Labs first quarter 2025 earnings conference call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, the expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures, and reconciliations between our GAAP and non-GAAP financial measures, is available in the earnings release we've issued today, which can be accessed through the investor relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first quarter conference call for fiscal year 2025. Today, I'll provide an overview of our Q1 results, followed by a high-level discussion around our long-term growth strategy. I will then turn the call over to Sanjay to walk through the key AI and cloud infrastructure applications that are driving our market opportunity. Finally, Mike will give an overview of our Q1 2025 financial results and provide details regarding our financial guidance for Q2. Astera Labs had a strong start to 2025, with Q1 results coming in above guidance. Quarterly revenue of $159.4 million was up 13% from the prior quarter, and up 144% versus Q1 of last year. Our Aries product family continues to see strong demand and is diversifying across both GPU and custom ASIC-based systems for a variety of applications, including scale-up and scale-out connectivity. Our Taurus product family also demonstrated strong growth, driven by continued deployment on AI and general-purpose systems at our leading hyperscaler customers. Leo continues to ship in pre-production volumes as customers progress through the qualification of their next-generation systems, leveraging new CXL-capable data center server CPUs. Finally, we expect our Scorpio P-series switches and Aries 6 retimers to shift from pre-production builds to volume production in the late Q2 timeframe to support the ramp of customized GPU-based rack-scale AI system designs. On the organizational front, we are very excited to announce the appointment of Dr. Craig Barrett as an addition to our board of directors. Craig brings a wealth of experience across execution and innovation that will help Astera Labs expand our connectivity leadership position in cloud and AI infrastructure.
Our vision is to provide a broad portfolio of connectivity solutions for the entire AI rack through purpose-built silicon, hardware, and software, to support computing platforms based on both custom ASICs and merchant GPUs. Significant progress towards this vision began in the second half of 2024 as the company transitioned from primarily supplying PCIe retimers for NVIDIA AI servers to now becoming an integral supplier for AI rack-level connectivity topologies. We are in an ideal position to enable these new rack-level connections with our broadening portfolio of Scorpio fabric switches, Ethernet and PCIe active cable modules, retimers, CXL memory expansion, optical interconnects, and other products under development. Our revenue is diversifying across multiple AI platforms based on both custom ASICs and merchant GPUs across different product families, thereby enhancing our revenue profile and delivering increasing value across the AI rack. We are well positioned to continue to deliver above-market growth over the long term, and we will continue to increase our investments in R&D to support our vision to own the connectivity infrastructure within the AI rack. On the product front, we recently expanded our market-leading PCIe 6 connectivity portfolio to now also include gearboxes and optical connectivity technology to complement our existing fabric switches, retimers, and smart cable modules. The new Aries 6 PCIe Smart Gearbox is purpose-built to bridge the speed gap between the latest generation PCIe 6 devices and the existing PCIe 5 ecosystem. Multiple hyperscalers are designing the gearbox into both AI and general-purpose compute platforms. We also demonstrated our PCIe 6 over optics technology to enable AI accelerator scale-up clustering across racks. This holistic portfolio approach is essential given the increasing complexity of PCIe 6 topologies. Point solutions are no longer sufficient.
Our broad first-to-market PCIe 6 connectivity portfolio once again puts us in a leadership position to deliver the most reliable and widely interoperable solutions into the ecosystem. Increasing our market opportunity also remains a crucial focus for Astera Labs. We believe we are in a great position to address the large emerging opportunity associated with the scale-up connectivity open industry standard, Ultra Accelerator Link, or UA-Link. Last month, the UA-Link 1.0 specification was released to enable 200 gig per lane connections supporting up to 1,024 accelerators. UA-Link combines the best of two worlds, natively offering the memory semantics of PCIe and the fast speed of Ethernet, but without the software complexity and performance limitations of Ethernet. The adoption of UA-Link will enable the industry to move beyond proprietary solutions towards a scalable and interoperable AI ecosystem. With broad industry support and adoption, the proliferation of UA-Link can represent a multibillion-dollar additional market opportunity for Astera Labs by 2029. Beyond UA-Link, we look for next-generation standards, including PCIe Gen 7, 800-gig Ethernet, and CXL 3, to drive additional market opportunity for Astera Labs through increased unit shipments and higher dollar content per platform. Scaling our platform approach is another important strategic priority for the company. Our rack-scale connectivity focus encompasses our complete product portfolio spanning standalone silicon, hardware solutions, and the Cosmos software suite. The Astera Labs Intelligent Connectivity Platform provides technology breadth to our hyperscaler customers while also enhancing performance and productivity, driven by a better-together product portfolio design approach. As an example, our customers can integrate Scorpio and Aries solutions in combination to obtain even more advanced diagnostics and telemetry capabilities.
While Aries can track the reliability and robustness of PCIe 6 links, Scorpio provides packet-level visibility for increased observability of data center traffic. The synergy between our products, underpinned by Cosmos, ensures comprehensive support and seamless connectivity to drive system efficiency and performance across various applications. In summary, we continue to take advantage of robust secular trends and strong business momentum by accelerating the pace of our R&D investment to access new and emerging market growth opportunities and to service the entire AI rack. With that, let me turn the call over to our president and COO, Sanjay Gajendra, to discuss our growth strategy in more detail.
Thanks, Jitendra, and good afternoon, everyone. Today, I want to provide an update on our progress and opportunities within three key AI and cloud infrastructure application categories as we establish Astera as a critical connectivity supplier for the entire AI rack. First off, scale-up connectivity for AI and cloud infrastructure represents a significant and rapidly growing marketplace. Increasing accelerator cluster sizes, faster interconnect requirements, and overall system complexity challenges are creating substantial dollar content opportunities. These opportunities are driving strong demand for our reach extension solutions in the near term, and we expect these trends to drive additional revenue growth across multiple product lines over the longer term. We're also pleased by the growing interest in our Scorpio X-series solutions for scale-up connectivity. Designed to maximize AI accelerator utilization with consistent performance and reliability, Scorpio X solutions will be central to next-generation AI racks. This trend will increase our silicon content opportunity to hundreds of dollars per accelerator and serve as an anchor socket for integrating additional Astera Labs connectivity solutions along with our Cosmos software suite at rack scale. I'm excited to share that we will begin shipping pre-production volumes of the Scorpio X-series starting late this quarter. Longer term, hyperscalers will also look to the UA-Link protocol to deliver faster data transfer rates with a more scalable architecture. Our expanding roadmap is providing customers with a long-term strategy towards scaling their AI accelerator clusters and infrastructure. We expect to deliver UA-Link solutions in 2026 to solve scale-up connectivity challenges for next-generation AI infrastructure. Next, the scale-out connectivity application.
Front-end scale-out connectivity topologies are becoming increasingly intricate as next-generation AI accelerators necessitate faster speeds while also supporting comprehensive interoperability with other peripherals. Over the past few years, we have established a robust business within scale-out topologies through our reach extension portfolio of Aries and Taurus products across PCIe and Ethernet protocols. As the market transitions to PCIe 6 capable GPUs, we now see an expanded market opportunity that includes our Scorpio P-series product family and Aries 6 retimer and gearbox solutions. Our Cosmos software framework enables seamless expansion into these additional sockets in our customers' platforms. Utilizing PCIe 6 data rates to support 800-gig scale-out connectivity will be a primary focus for AI infrastructure providers in the coming years. The Scorpio P-series, combined with Aries 6, represents the first-to-market solution specifically designed to achieve this objective. Looking ahead to Q2, we anticipate accelerated shipments of Scorpio P-series switches and Aries 6 retimers on customized rack-scale AI platforms based on market-leading GPUs. Additionally, we continue to identify further opportunities for the Scorpio P-series outside of rack-scale systems, with multiple engagements on modular topologies that support enhanced customization. Our reference design and collaboration with Wistron on NVIDIA Blackwell-based MGX systems exemplifies this expanding opportunity set as we aim to bolster our presence within OEM and enterprise channels. We remain the sole connectivity provider that has demonstrated complete end-to-end PCIe 6 interoperability with NVIDIA's Blackwell GPUs, and we are actively working across the ecosystem to enable future-proof infrastructure capable of leveraging the increased throughput and performance of the PCIe 6 standard. Finally, we continue to see large and growing opportunities within the general-purpose compute infrastructure market.
We expect revenue growth from general compute-based platform opportunities featuring next-generation CPUs, network cards, and SSDs with our Aries PCIe 6 retimers, Aries PCIe gearboxes, Taurus Ethernet smart cable modules, and Leo CXL product families. For the next couple of years, the transition of data center server CPUs to support PCIe 6 will be a catalyst for additional unit growth and higher ASPs for our Aries product family. For Ethernet, Taurus continues to see growth on general-purpose platforms leveraging 400-gig switching port speeds. CXL will also expand our general-purpose compute exposure, with expected volume ramps on hyperscaler customer programs in the second half of 2025. Overall, general-purpose compute remains an important application for our intelligent connectivity platform and is expected to drive diversification of our revenue profile over the long term. In conclusion, the strong customer traction with Scorpio, along with increasing opportunities in AI connectivity and general-purpose computing, allows us to drive future growth. By innovating and expanding our product offerings, we aim to meet the evolving needs of our customers and capitalize on our vision to deliver high-performance connectivity solutions for AI racks, supporting PCIe and UA-Link-based scale-up, Ethernet-based scale-out, PCIe-based peripheral, and CXL-based memory connectivity, with all these components seamlessly integrated with our Cosmos software suite for advanced observability, fleet management, and rapid market deployment. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and our Q2 outlook.
Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the investor relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook, as well as the reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q1 of 2025, Astera Labs delivered quarterly revenue of $159.4 million, which was up 13% versus the previous quarter and 144% higher than the revenue in Q1 of 2024. During the quarter, we enjoyed strong revenue growth from both our Aries and Taurus product lines, supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations. Leo CXL controllers and Scorpio Smart Fabric switches both shipped pre-production volumes as customers work to qualify their platforms for volume deployment in mid to late 2025. Q1 non-GAAP gross margin was 74.9%, up slightly from December quarter levels as product mix remained largely constant. Non-GAAP operating expenses for Q1 of $65.6 million were up from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q1 non-GAAP operating expenses, R&D expenses were $45.4 million, sales and marketing expenses were $9.4 million, and general and administrative expenses were $10.9 million. Non-GAAP operating margin for Q1 was 33.7%. Interest income in Q1 was $10.4 million. Our non-GAAP tax rate for Q1 was 7.1%. Non-GAAP fully diluted share count for Q1 was 178 million shares, and our non-GAAP diluted earnings per share for the quarter was 33 cents.
Cash flow from operating activities for Q1 was $10.5 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $925 million. Now turning to our guidance for Q2 of fiscal 2025. We are aware of and focused on navigating the rapidly changing and dynamic macro environment. Policy initiatives, including tariffs and changing export restrictions, are a few of the variables that are likely to have at least some impact on demand across the global economy, including the AI and cloud infrastructure markets. Despite these factors, our business continues to have strong momentum as we execute on our long-term growth strategy. With that said, we expect Q2 revenue to increase to within a range of $170 million to $175 million, up roughly 7% to 10% from the prior quarter. For Q2, we expect Aries and Taurus revenues to grow on a sequential basis. Our Leo CXL controller family will continue shipping in pre-production quantities to support ongoing qualifications ahead of an expected production ramp in the second half of 2025. Finally, we expect our Scorpio product revenues to grow sequentially in Q2 as the initial designs of customized GPU-based rack-level systems begin to ramp in volume late in the quarter. We continue to expect Scorpio revenue to comprise at least 10% of our total revenue for 2025. We expect non-GAAP gross margins to be approximately 74%, as the mix between our silicon and hardware modules remains largely consistent with Q1. We expect second quarter non-GAAP operating expenses to be in a range of approximately $73 million to $75 million. Operating expense growth in Q2 is being driven by continuing investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity. Interest income is expected to be approximately $10 million.
Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 178 million shares. Adding this all up, we are expecting our non-GAAP fully diluted earnings per share in a range of approximately 32 cents to 33 cents. This concludes our prepared remarks. And once again, we appreciate everyone joining the call. And now we'd like to open the line for questions. Operator?
Thank you. The floor is now open for questions. As a reminder, to enter the queue, press star followed by the number one on your telephone keypad. You may also withdraw your question via star one. If you are called upon to ask your question and are listening via loudspeaker on your device, please pick up your handset and ensure that your phone is not on mute when asking your question. For today's session, we request that you please limit yourself to one question and one follow-up. We invite you to reenter the queue for additional questions. Again, press star and the number one. Our first question comes from the line of Harlan Sur with J.P. Morgan. Your line is now open.
Good afternoon. Thanks for taking my question. You know, on the overall AI and data center spending environment, there have been some concerns on the CapEx spending momentum potentially this year, maybe some AI compute digestion. We also recently saw some of the AI export bans to China, and then obviously tariff and trade concerns, as you guys articulated in your prepared remarks. On the flip side, you've got the strong ramp of your merchant GPU customers on the next-generation AI platforms, strong new AI ASIC XPU ramps, so you've got new entrants into the AI ASIC XPU market. Since last earnings, has anything changed meaningfully, positive or negative, on the customer programs or the demand outlook for this year? And more importantly, your confidence level on continued strong growth for next year?
Thanks, Harlan. Yeah, first off, on the tariffs, we have not seen any material impact on our business, but it is fluid and the rules are still subject to change. So it's something we're watching closely. But so far, we have not seen an impact. We do note that the hyperscalers stuck to their CapEx in the recent calls, and one actually increased. So that was encouraging. So we'll continue to monitor that. In regards to restrictions to China, we did see an impact there this year. We do have designs where we were the retimer on those programs. To the extent our customers cannot procure the GPUs, that does create a headwind for us, which we contemplated in our guidance. Yeah, and then on the customer side...
I appreciate that.
Yeah, on the customer and the business side, just to touch on that question that you asked. You know, the great thing about our overall revenue profile is that there are multiple ways in which we are approaching the market. The diversity across both custom ASIC-based platforms versus merchant GPU-based platforms, scale-up versus scale-out, and the multiple product lines that we have enables us to approach the market in many different ways. To that standpoint, for the first half, what we're expecting is that our revenue would be driven largely by the PCIe scale-up and the Ethernet scale-out opportunities, along with the initial shipments of Scorpio P-series and Aries 6 going into the customized racks. And the second half, of course, layers nicely on top with some of the production ramps that we're expecting with the customized racks, which, again, for us is the Scorpio switches along with the PCIe 6 retimers. These are now qualified, so we are starting to see those shipments become significant. So that's part of the second half. And in the second half, of course, we have the CXL initial shipments that we're expecting for production volumes. And the Scorpio X switches for the scale-up going into the custom ASICs, those are also expected to start hitting initial production volumes in the second half of this year, which essentially gives us multiple waves, if you will, and sets us up nicely for future revenue growth even beyond 2025.
Thank you. Your next question comes from the line of Blaine Curtis with Jefferies. Your line is now open.
Hey, guys. Nice to meet you. Thanks for taking my question. I want to talk about scale-up. You mentioned it several times. I think you even said a couple hundred dollars per accelerator. Today, you know, I think you're selling some retimers and then some PCIe cabling. Can you walk us through the progression of scale-up in your participation and kind of maybe set some timing? Because I know UAL is probably, you know, later next year. So what's the scale-up opportunity for you in between now and then?
Hi, Blaine. This is Jitendra. Scale-up presents a very good opportunity for us. As you know, so far our revenues have been driven primarily by scale-out opportunities. But for the first half, as Sanjay laid out, we have a significant contribution from scale-up. And the reason that's so important for us is scale-up is really a very rich opportunity of high-speed interconnects that need to deliver low latency and high throughput. And that's where we play today with our Aries retimer products and the starting shipments of the Scorpio X family. And we do expect this opportunity to continue to grow as cluster sizes grow and the data rates increase. So we have significant opportunities that we are working on for PCI Express-based scale-up networks based on our current Scorpio X family. But then it also dovetails very nicely into UAL, and we expect this to be a multibillion-dollar opportunity as we provide a full, holistic portfolio of devices to address UAL infrastructure. And as far as UAL itself is concerned, the spec is now final. It's been released as the 1.0 spec. And so you can imagine that the products will start to be worked on now, and we'll start to see first samples in 2026 with revenue contribution the following year. So that is a very big opportunity that we are very well positioned to take advantage of.
Thanks so much. And then I want to ask you on Taurus, you called out growth in March and then continued growth in June. Can you talk about – I don't know if you want to frame how big that business has gotten, whether there's some rough metrics you can kind of give us, and then just kind of curious the diversity of the customer base beyond the lead customer as well.
Currently, we're shipping the 50 gig solution. That continues to grow. We have multiple designs at our lead customer, the largest one being the internal AI accelerator-based platform, which is still in kind of a ramp phase. We do look to broaden out beyond our lead customer. That's probably going to be with the next technology jump to 100 gig speeds.
Thank you. Your next question comes from the line of Ross Seymour with Deutsche Bank. Your line is now open.
Hi, guys. Thanks for taking my question. On last quarter's call, you really made a big point about the diversification of your customer base. Tonight, it seems like it's a lot more about the diversification of the products that you're offering. So I guess if we kind of blend those two together, what are you seeing as the changes in the environment on the ASIC side versus the merchant GPU side, and how is your broadening technology being applied differently between them?
Yes. So for us, again, we play in both spaces with different strategies and different products. Now, the key thing that we are excited about is the growing interest in the Scorpio X family. These are fabric switches that are used to interconnect multiple accelerators together. To that standpoint, A, it's not only a significant dollar opportunity, because the ASP of this product tends to be high, but these are also products that are turning out to be anchor sockets for us. If you think of an AI rack being built, you have the accelerators and then you have the fabric that interconnects the accelerators. So what we are transitioning to, and what we're excited about, is that the Scorpio X device is now translating to be an anchor socket. Think of it as like a mothership around which we are able to now add a lot more products that go along with it, whether it's silicon-level products or a module or other form factors that we're considering. So overall, I want to say that from an opportunity space standpoint, for Astera, the custom ASIC-based implementation tends to offer a lot more opportunities. And with Scorpio X gaining more and more traction, we do believe that we are in a good position looking forward, not just to service near-term business, but also longer term with potentially UA-Link being the industry-wide standard for scale-up topologies.
Thanks for all those details. I guess as my follow-up, just sticking on the custom silicon side of things, does the competitive environment change at all there? I think this was a question that was asked on the last call as well, but considering the XPU providers are oftentimes your primary connectivity competitors, I know the best technology always wins, but does the bundling capability that could occur in that XPU market actually lead to more competition on your side, or is that something you're not seeing?
The competition will always be there, and competition will always try to sell more. I think that's a given. What you need to keep in mind is that we are working with large hyperscalers that need to also consider the supply chain and ecosystem and other considerations from a risk management standpoint. So that's what we see. And in general, it also comes down to technology differentiation. For us, the fabric devices and, in fact, all of our connectivity devices are developed from the ground up for AI types of workloads. So there is a clear advantage and benefit that we offer, which is recognized and valued by our customers. Our Cosmos software provides unprecedented visibility into what's happening in the network, being able to predict performance, being able to predict upcoming failures, and so on, which are all critical requirements if you think about how complex an AI rack is, and it will continue to become more complex. So all in all, for the reasons I noted, both from a risk management, commercial, and technology differentiation standpoint, we are seeing that customers continue to work with us, and we see increasing interest in our Scorpio line of products.
Thank you. Your next question comes from the line of Thomas O'Malley with Barclays. Your line is now open.
Hey, guys. Thanks for taking my questions and nice results. First one's for you, Mike. You mentioned that there was a China impact on your sales. It's never been a significant portion of your model, but could you give us a feeling just of how large that impact was and what that impact will be over the next couple of quarters?
Yeah. So, you know, we ship into China with our retimers predominantly right now, and they were attached to third-party merchant GPU systems. Those were hard-stopped by the restrictions during the quarter. So there was a modest impact that we have to overcome. China revenues, when you look at end customer demand, are less than 10% of our revenues. So it's been manageable enough, given the strength of our business and other product lines, to continue to grow through this challenge.
Helpful. And then maybe a broader question. So you guys have been very consistent in describing the year as first half, PCIe scale-up, Ethernet scale-out, and a lot of the custom silicon, with the second half seeing more Scorpio, more retimers. There's been a ton of noise in the market, and that's just the price you pay for being very visible and attached to NVIDIA, but we've heard about a lot of differences in terms of what the ramp cadence was coming into this year versus where we are today. Some large hyperscalers, maybe some weakness in their programs, and then potentially with your large customers, some systems that are maybe delayed and moving to more DGX-style solutions. I understand you have to be sensitive about talking about customers, but to the best extent that you can, coming into this year versus where you are today, could you maybe describe if there are any differences in what you saw in the ramp of your revenue, and maybe comment on anything that has changed that can help us understand what's going on? Thank you.
Tom, that's a good point that you bring up. These systems are incredibly complex, and a lot of things have to go right in order for the whole system to not only get deployed, but then get deployed at scale. And we, of course, try to do our best to make sure that we are never the bottleneck in terms of the deployment of these systems. And as Sanjay mentioned earlier, we've done a pretty good job so far with our pre-production shipments and getting the products qualified and so on. But we always have to take some buffer, or some kind of conservativeness, in terms of what it would take for our end customers to complete that qualification and deployment. And so far, I would say that our expectations have largely come true and haven't changed from when we started the year, and our judgment was based on what we were hearing from our customers. And do you want to add any more color?
Yeah, I think we will always continue to be conservative, just to underline what you can expect. But having said that, the revenue models and the guidance that we are providing, or the outlook that we are sharing, you know, comprehend all this, because we are so close to these customers that we see a lot, and we're able to consider and contemplate that when we provide guidance. So to that point, you know, we do feel comfortable and confident about where we are. We just need to continue to make sure that we execute and deliver our part of the subsystem. And then for the rest of it, like we noted, we will account for that in a conservative model, knowing how complex these systems are to get deployed.
Thank you. Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is now open.
Great, thank you. I guess we've heard a lot of the large language model developers talk about sort of tightness in inference markets and kind of a lack of hardware to deal with inference. Are you guys seeing that? Does that translate into strength in any part of the business for you or just anything you can do to corroborate or mitigate those concerns?
Thanks, Joe. For us, to the first level, inference and training are about the same. The same products get used in both of these systems. So we do benefit from both training and inference. And as you look at some of these larger models, mixture of experts and so on, what we are finding is that the amount of compute that's required to draw inference from these models is even higher, 10x higher than previously. And as a result, the basic unit of compute is starting to become a rack, kind of a rack-level system of GPUs, which also happens to be the same basic unit for training. And so with the increased complexity of these rack-level systems, we actually see more opportunities overall for both inference as well as for training. As Sanjay mentioned earlier, this quarter we also released a Scorpio-based inference system for smaller-scale inference. So that should allow us to benefit from some of these smaller-scale inference systems as well. So having said all of that, we do see today that our customers are also using the same set of systems to do both inference and training, and we benefit from both.
Great. That's helpful. And for my follow-up, you mentioned racks. A lot of the rack-scale systems seem to be sort of tricky to get up and running. I guess those issues seemed to be worse a few months ago. But does that affect you guys? I know you have good content across, you know, both rack and non-rack merchant solutions and ASICs, but just, you know, is that kind of a change in ramp schedule having any impact on you guys?
There is always complexity, Joe, I would say. But like we noted, we are modeling that and providing guidance that takes a conservative view on how some of these systems are being built and deployed. That's probably the right thing to do from a business outlook standpoint. But having said that, going forward, what we're doing is also something I do want to share, Joe, which is to take more of an AI rack-level view in how we approach the market. You know, the vision that we outlined is to be the connectivity supplier for the entire AI rack. And as noted in my prepared remarks, we focus on four main protocols: PCIe and UALink for scale-up, Ethernet for scale-out, PCIe for peripheral connectivity, and CXL for memory expansion. And we approach it holistically, providing a variety of different products, whether it's retimers, gearboxes, fabric devices, switches, you know, across both copper and optical, so that going forward, for next-generation AI systems, at least from a connectivity infrastructure standpoint, it's a holistic solution that not only considers silicon products, but also hardware and software. That's how we see the evolution, if you will. And Astera, we believe, is well suited to deliver the entire connectivity at a rack level, which we are executing both in terms of what we are servicing today and then going forward, of course, with UALink.
Thank you. Your next question comes from the line of Tori Svanberg with Stifel. Your line is now open.
Yes, thank you, and congrats on the quarter. Jitendra, I was hoping you could elaborate a little bit more on the Aries 6 upgrade cycle here, especially in reference to gearbox products. You know, if you could add any color on how diversified the use cases are going to be for Aries 6 gearbox products?
Yeah, so let me just take a second to explain what a gearbox device does. It's primarily used to match two generations of the same standard, meaning in this particular case, one side of the device talks PCIe Gen 5, and the other side talks Gen 6. The reason these products are essential is, if you look at the CPUs today, they're still at Gen 5. GPUs have already transitioned to Gen 6. The same thing could happen with the networking and storage interfaces. So what a gearbox device does is two things. One, it takes care of the signal quality, similar to a retimer. And on top of that, it takes care of matching the protocol generations on the two ends of the chip. So in terms of opportunities, we have multiple engagements today that we're servicing for the gearbox device. In fact, we've already started shipping pre-production volume to support some of these opportunities. And this would, again, create additional TAM for our Aries business, because it's adding to the retimer TAM, but essentially bringing in a higher level of ASP, simply because you're able to not only do retiming, but also some of the speed matching that I noted. And we will continue to offer products like that, which is critical, because if you think of a typical engineer that's working on an AI system, they need multiple tools. And Astera is providing that to them, both in terms of retimers, gearboxes, fabric devices, and providing the software. So overall, we're providing a holistic portfolio that is also opening up added interest and momentum for customers to use our products.
Okay, thank you for that color, Sanjay. As my follow-up for Mike, not to nitpick here, but I mean, your DSOs came in at 40 days. I think throughout most of last year, they were around mid-20s. Anything going on there to note?
It's really just that the linearity was more balanced in the quarter than previous quarters. I think going forward this will be a more typical level of DSOs as we grow as a company and have multiple product lines shipping. Perfect. Thank you, Mike.
Thank you. The next question comes from the line of Quinn Bolton with Needham and Company. Your line is now open.
Maybe just a quick clarification, guys. Mike, I think in your prepared comments, you said that Aries and Taurus would grow in the June quarter. Just wanted to make sure on Aries, is that only the on-the-board retimer products? Does that include the Aries SCM for scale-up applications? I know you talked about strength in scale-up, but just wanted to see if you could make a specific comment on the Aries SCM product, because it certainly sounds like the lead ASIC platform is still in a ramp phase.
Yes, it is. The Aries SCM is doing very well for us, but also, other internal AI accelerator cloud platform providers are ramping. So we're seeing growth in both the chip-on-board retimers for scale-out and the Aries SCMs for scale-up. So we're seeing both contributing growth. And these are on internal AI accelerator systems. Got it.
Okay. Thank you for that. And then maybe, Jitendra or Sanjay, just longer term, I know you're ramping Scorpio P-Series switches first, but it sounds like the X-Series is a larger TAM. If that starts to enter production late this year with higher dollar content per accelerator, would you expect X-Series to potentially cross over P-Series revenue, say, by the end of '26, or would that be more of a '27 event? Just trying to get some sense of what you think the pace of the ramp might be once X-Series enters production.
Yeah, so if you consider the P-Series and the X-Series TAMs, we kind of outlined them to both be about $2.5 billion each. But if you look at the X-Series, the available TAM today is nearly zero outside of NVIDIA. So it's a very, very rapidly growing market for us. And we do estimate that it will be our single largest product line. As Sanjay mentioned earlier, the X family shipments have just started in this quarter, and we'll start to ramp later this year, with the full volume really coming in 2026. So we do expect, in the '26 and '27 timeframe, the X family to become a larger contributor to our revenues.
If I can add to that, it's not just revenue, like noted. The X-Series is our anchor socket. It's the socket around which we are building the entire product line, operating at a completely higher level when it comes to supporting the X-Series. So overall, we do believe that that anchor socket, and the fact that this is a greenfield TAM that's rapidly growing, puts us in a great position to be able to, you know, offer multiple product lines in order to service the entire AI rack.
Thank you. Your next question comes from the line of Atif Malik with Citi. Your line is now open.
Thank you for taking my question. This is Papak Silla in for Atif. I guess my first question might be more of a broader question for Jitendra or Sanjay. I believe one year ago Astera announced some progress made around PCIe over optics, and this year at OFC we saw various demos of the technology. I guess at a higher level, does Astera still see PCIe over optics as a path forward to extend PCIe beyond copper and beyond scale-up, or is the focus still on copper?
Okay, so good question. The focus right now is definitely on copper, as we have discussed through the entirety of this call. But as Sanjay mentioned earlier, our job really is to provide all of the tools that our customers require in order for them to deploy their AI infrastructure. Our customers usually prefer to deploy over copper just because it's more reliable, lower power, you know, better TCO and so on. And so we will continue to support that for as long as, you know, we can. But at some point in time, as data rates go up and the reach requirements increase, you will have to go to optical solutions, and that's where we have our very innovative PCI Express over optical demonstrations to provide that additional capability to our customers. More broadly speaking, as you go from PCI Express into UALink, where the line rates are even higher, we believe that copper will still be the dominant media at 200 gigabits per second. Now, if you start looking at speeds even beyond that, that's where optical may start to have a play, and we are very actively working with our customers to figure out what that intercept is and the different types of innovations that are required to deploy scale-up optics-based solutions at those data rates.
Got it. No, that's helpful. And for my kind of follow-up, this question might be more for Mike, and I totally get you don't guide beyond the next quarter, but if our math is right, Scorpio sales in September and December could reach mid-teens to 20% of sales on a quarterly basis, if we take into account the 10%-plus of sales you gave out for the year. With that in mind, can we think of the second half gross margin actually being up versus the first half, qualitatively?
Overall for gross margins, as we diversify our product line, we're going to see a wider range of margins per product offering. Just given how quickly the market's moving, you know, you have less time to optimize the product portfolio for cost; you're always chasing the next generation for revenue growth. So with that wider range of margins, we still expect our longer-term gross margin target of 70% to be the direction we're heading, you know, not only this year, but over time. So I would still encourage people to think about the margins, as we grow the company, trending towards 70%.
Thank you. Your next question comes from the line of Srini Pajuri with Raymond James. Your line is now open.
Thank you. A question on the custom racks, and also what sort of mix you're assuming for custom racks as we go into the second half. If you can talk about what's driving that inflection. I know it's always been the middle of the year for custom racks, but is there any, I guess, software, or is it related to Blackwell Ultra or some other hardware component, that's enabling customers to switch over to custom racks? And, as I said before, if you could talk about what sort of mix assumptions you're making as we go into the second half of the year.
Great question. It's unfortunately difficult to give out an exact mix because it actually keeps changing, keeps evolving. But to go back to the point that you made, Blackwell, first of all, is a fantastic architecture, a fantastic platform for customers to take advantage of. But they also have the challenges or restrictions of what their data center looks like and so on. So there is a lot of incentive for our customers to customize this rack to take advantage of their existing data centers, their existing hardware, the software infrastructure that they have, security, even their supply chain. So that's why we do believe that many of the hyperscalers will customize the racks and deploy them in their data centers. However, the timing of that will vary. So hyperscaler customers that are more focused on time to market or low engineering effort will likely, you know, side with the reference designs to get started and then, you know, customize the racks later on. We can definitely talk about the information that's now publicly released, where the first application for this customization was to use in-house SmartNICs in order to attach them to the Blackwell platform. And that's a fantastic opportunity for us, and we expect that to start to ramp in the second half of this year.
Got it. Thanks for that. And then, more of a longer-term question on UALink. You know, obviously, it seems like a very large market potential out there, but at the same time, some of your competitors are focusing on Ethernet. You know, these are obviously pretty large competitors, and I'm guessing some of your customers are probably going to stick with Ethernet as well. Just wondering how you think the market might shake out longer term, you know, Ethernet versus UALink, in terms of what applications might adopt Ethernet for scale-up versus UALink. Thank you.
Yeah. Um, so UALink, as you correctly pointed out, is a brand new standard, right? It's been purpose-built for AI, for specific workloads around training as well as inference. Whereas Ethernet has been very widely deployed. I mean, it's probably the most widely deployed protocol for scale-out. And now what's happening is, with the UEC, the Ultra Ethernet Consortium, they're trying to retrofit Ethernet to maybe work for scale-up. So if you look at the longer-term horizon, it's probably fair to say that you would be able to build scale-up networks using both UALink as well as UEC. However, there are pretty significant differences between the two, and I can maybe categorize them into two buckets. One is technical performance, and the other one is ecosystem diversity. If you just look at it from a technical performance standpoint, UALink brings the best of both worlds, right? It gets the memory semantics, the ease of software, the lower latencies of a PCI Express protocol, and then combines it with the fastest speeds that are available for Ethernet. At the end of the day, you get a very efficient system with lossless packet transmission, very low latency, and very high throughput. Whereas in the case of Ethernet, you're trying to retrofit some of these features while keeping Ethernet backwards compatible, or keeping UEC backwards compatible. So overall, our belief is that, purely from a performance standpoint, UALink will be more performant compared to UEC. The other part of it is ecosystem diversity. And being on the board of UALink, we see a lot of traction, both from customers as well as from vendors, to build products into this ecosystem. We believe that over time we will have a rich ecosystem where multiple vendors are building different components that will go into a UALink-based rack or UALink infrastructure. And certainly, Astera Labs is also very well positioned to play into that ecosystem.
Overall, we believe that hyperscaler customers will prefer this diverse ecosystem over one that is more proprietary or locked into one large component provider.
Thank you. And our final question comes from the line of Suji Da Silva with Roth Capital Partners. Your line is now open.
Hi, Jitendra, Sanjay, Mike. Quick question on the Scorpio products. Should we think of there being any appreciable price difference between the X product and the P product, or are they fairly similarly priced?
Yeah, so in general, the functionality is much more valuable, if you will, on the X-Series, because it's used to interconnect GPUs. And GPU utilization, of course, is a prime factor if you think of the overall performance of a training cluster or an inference cluster. So from that standpoint, the X-Series does bring in a lot more value, and therefore, you can assume that the ASPs tend to be significantly higher. And again, the X-Series is not one device, to be very clear. There are multiple part numbers. So there could be situations where, you know, maybe one part number is not at the same level as the P-Series. But in general, you can just look at it from a per-lane or per-port standpoint and look at the value delivered. And on that basis, the X-Series will always be a much more valuable, much higher ASP product than the P-Series.
Okay, great. And then the other question I have is sort of an update on your thoughts. You know, coming out of the IPO it was mostly chips, but then Taurus being a paddle board. Can you update us on your thoughts on chips versus board-level solutions? Or do module-level solutions even make sense here, given how much content you have to integrate all of that?
Yeah, so I think, like Jitendra laid out, right, our vision really is to be a connectivity supplier at an AI rack scale, right? We want to solve customers' needs and challenges. And we are sort of uniquely set up as a company, in the sense that we have our silicon engineering team, we have our hardware engineering team, and we have our software engineering team, which includes what we service on the COSMOS side. So what I want to say is, in that context, obviously, we are trying to provide a rack-scale solution to our customers, with the one variable being the compute, coming from either third-party or internally developed ASIC platforms. But the rest of the connectivity, whether it's based on copper or optical, we seek to address and service that. And the exact form factors that we will take really depend on the customer needs. But we have the capability to go from silicon to hardware to software. So we'll always look at trying to maximize the opportunity that's available to Astera, so we can continue to grow and thrive as a company.
Thank you. There are no further questions at this time, so I'd like to turn the call back over to Leslie Green for closing remarks.
Thank you, Amy, and thank you, everyone, for your participation and questions. We look forward to updating you on our progress. As we announced today, we will be conducting a webinar hosted by J.P. Morgan on expanding opportunities in AI infrastructure with UALink. You can find full details in our press release, and you can also check the investor relations portion of our website for this and other upcoming financial conferences and events. Thank you. This concludes today's conference call. You may now disconnect.