This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Astera Labs, Inc.
8/5/2025
At this time, I would like to welcome everyone to the Astera Labs second quarter earnings conference call. All lines have been placed on mute to prevent any background noise. After management's remarks there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press the pound key. Thank you. I will now turn the call over to Leslie Green, investor relations for Astera Labs. Leslie, you may begin.
Thank you, Rebecca. Good afternoon everyone and welcome to the Astera Labs second quarter 2025 earnings conference call. Joining us on the call today are Jitendra Mohan, chief executive officer and co-founder; Sanjay Gajendra, president, chief operating officer and co-founder; and Mike Tate, chief financial officer. Before we get started I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call, except as required by law. Also, during this call we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance.
These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with US GAAP. A discussion of why we use non-GAAP financial measures and a reconciliation between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website. And with that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra.
Thank you, Leslie. Good afternoon everyone and thanks for joining our second quarter conference call for fiscal year 2025. Today I'll provide an overview of our Q2 results, followed by a discussion around our rack scale connectivity vision. I will then turn the call over to Sanjay to walk through Astera Labs' near and long-term growth profile. Finally, Mike will give an overview of our Q2 2025 financial results and provide details regarding our financial guidance for Q3. Astera Labs delivered strong results in Q2 with all financial metrics coming in favorable to our financial guidance. Quarterly revenue of $191.9 million was up 20% from the prior quarter and up 150% versus Q2 of last year. Growth within the quarter was driven by both our signal conditioning and switch fabric product lines, establishing a meaningful new revenue baseline for the company to build upon. This quarter we achieved a key milestone with our market-leading Scorpio P Series switches supporting PCIe 6 scale out applications ramping into volume production to support the deployment and general availability of customized rack scale AI system designs based on merchant GPUs. Strong demand for our PCIe 6 solutions helped to drive material top line upside during the quarter. Scorpio exceeded 10% of total revenue, making it the fastest ramping product line in the history of Astera Labs. Furthermore, we continue to see strong activity and engagement across both our Scorpio P Series and X Series PCIe fabric switches, and we are pleased to report that we won new designs across multiple new customers during the quarter. We remain on track for Scorpio to exceed 10% of total revenue in 2025 while becoming the largest product line for Astera Labs over the next several years. Our Aries product family grew during the quarter and continues to diversify across both GPU and custom ASIC based systems for a variety of applications including scale up and scale out connectivity.
Additionally, our first-to-market Aries 6 solutions supporting PCIe 6 began their volume ramp during the quarter within rack scale merchant GPU based systems. Our Taurus product family demonstrated strong growth driven by AEC demand supporting the latest merchant GPUs, custom AI accelerators, as well as general purpose compute platforms. Leo continues to ship in pre-production quantities as customers expand their development rack clusters to qualify new systems leveraging the recently introduced CXL capable data center CPU platforms. In addition to strong financial and operational performance during Q2, we continued to expand our strategic relationships across both customers and ecosystem partners as the industry pushes forward with innovative new technologies. First, we've broadened our collaboration with Nvidia to support NVLink Fusion, providing additional optionality for customers to deploy Nvidia AI accelerators while leveraging high performance scale up networks based on NVLink technology. Next, we announced a partnership with Alchip Technologies to advance the silicon ecosystem for AI rack scale infrastructure by combining our comprehensive connectivity portfolio with their custom ASIC development capabilities. Within the CXL ecosystem, industry progress continues, with SAP recently highlighting their collaboration with Microsoft featuring Intel's Xeon 6 processors to optimize SAP HANA database performance by utilizing CXL memory expansion. Lastly, we joined AMD on stage during their Advancing AI 2025 keynote presentation as a trusted partner to showcase UALink, which is the only truly open memory semantic based scale-up fabric purpose built for AI workloads. To continue the relentless pursuit of AI model performance, data center infrastructure providers are beginning a transition to what we call AI infrastructure 2.0.
We define this AI infrastructure 2.0 transition as the proliferation of open standards based AI rack scale platforms that leverage broad innovation, interoperability, and a diverse multi-vendor supply chain. This transition is in its early stages and we are strategically crafting our roadmaps to help lead the secular connectivity trends over the coming years. The transition to AI infrastructure 2.0 is especially significant at the rack level as modern AI workloads demand ultra-low latency communication between hundreds of tightly integrated accelerators over a scale-up network. Astera Labs is well positioned to support this infrastructure transformation as an anchor solution partner with expertise across the entire rack. First, we support a variety of interconnect protocols including UALink and PCIe for scale-up, Ethernet for scale-out, and CXL for memory. We are very excited about the momentum behind the UALink scale-up connectivity standard, which exemplifies the open ecosystem approach by combining the low latency of PCIe and the fast data rates of Ethernet to deliver best-in-class end-to-end latency and bandwidth. Next, we provide a broad suite of intelligent connectivity products to address the entire rack across both purpose-built silicon and hardware solutions, all featuring our Cosmos software for best-in-class fleet monitoring and management. Lastly, our deep partnerships across the entire ecosystem continue to expand as we work closely with ASIC and GPU vendors to align features, interoperability, and roadmaps to solve the rack-scale connectivity challenges of tomorrow. In summary, Astera Labs has demonstrated strong momentum in our business, and the prospects for continued diversification and scale are driving our roadmaps and R&D investment. We are in the early stages of the AI infrastructure 2.0 transformation, which Astera Labs is uniquely positioned to help proliferate over the coming years.
Scale-up connectivity for rack-scale AI infrastructure alone will add close to $5 billion of market opportunity for us by 2030. And we remain committed to supporting our customers as they choose the architectures and technologies that best suit their AI performance goals and business objectives. With that, let me turn the call over to our President and COO Sanjay Gajendra to outline our vision for growth over the next several years.
Thanks, Jitendra, and good afternoon, everyone. Today, I want to provide an update on our recent execution followed by an overview of the meaningful market opportunities and growth catalysts that Astera Labs will address within the forthcoming transition to AI infrastructure 2.0. Our goal is to deliver a purpose-built connectivity platform that includes silicon, hardware, and software solutions for rack-scale AI deployments. To achieve this goal, our approach has been to increase our addressable dollar content in AI servers by rapidly expanding our product lines to provide a comprehensive connectivity platform and capture higher value sockets that include smart cable modules, gearboxes, and fabric solutions. We also see increasing attach rates driven by higher speed interconnects in platforms deployed by customers who are collectively investing hundreds of billions of dollars on AI infrastructure annually. Starting in Q2 of 2025, Astera Labs executed the next step in its high-growth evolution by ramping our Scorpio PCIe 6 fabric switches and Aries 6 retimers into volume production. This latest wave of growth has further diversified our overall business, as we now have three product lines contributing about 10% of total sales. During this transition, our silicon dollar content opportunity has expanded into the range of multiple hundreds of dollars per AI accelerator, which has effectively established a new revenue baseline for the company. Looking ahead, we are excited about the opportunities enabled by scale-up interconnect topologies. Given the extreme importance of scale-up connectivity to overall AI infrastructure performance and productivity, we see the Scorpio X-Series solution as the anchor socket within the next-generation AI ramp. We are engaged with over 10 unique AI platform and cloud infrastructure providers who are looking to utilize our fabric solution for their scale-up networking requirements.
We look for Scorpio X-Series to begin shipping for customized scale-up architectures in late 2025, with a shift to high-volume production over the course of 2026. With the ramp of Scorpio X-Series for scale-up connectivity topologies next year, we expect our overall silicon dollar content opportunity per AI accelerator to significantly increase. Overall, we expect this to be another step up from a baseline revenue standpoint. Also, given the scale-up connectivity opportunity, we expect our Scorpio X-Series revenue to quickly outgrow Scorpio P-Series revenue. In 2026 and beyond, cloud platform providers and hyperscalers will begin to deploy next-generation platforms as the industry transitions to AI infrastructure 2.0. We believe the fastest path to this transformation lies in purpose-built solutions developed within open ecosystems with a multi-vendor supply chain. For Astera Labs, this transformation will be the catalyst for the next wave of overall market opportunity and revenue growth. Our expertise and support for major interconnect protocols, including PCIe, Ethernet, CXL, and UALink, puts us in an excellent position to participate in these next-generation design conversations. UALink represents the cleanest and most optimized scale-up strategy for AI accelerator providers, given its robust performance potential, open ecosystem, diverse supply chain, and purpose-built approach. Early industry momentum has been very encouraging, with multiple hyperscalers and several compute platform providers looking to incorporate UALink into their accelerator roadmaps and engaging with RFPs as an indication of strong interest. As the leading promoter of UALink, Astera Labs is committed to developing and commercializing a broad portfolio of UALink connectivity solutions ranging from AI fabrics to signal conditioning solutions and other I/O components. Proliferation of UALink in 2027 and beyond will represent a long-term growth vector for Astera Labs.
In conclusion, we are proud of our execution over the past several years, demonstrating strong and profitable revenue growth, diversification of customers and applications, and exposure to a broadening range of AI infrastructure applications and use cases. We believe this momentum is in its early stages as we fully embrace an industry transition to AI infrastructure 2.0, which will expand our opportunity across even more customers and platforms. Over the next several years, we look to build upon this newly established baseline of business as we partner tightly with customers and the broader ecosystem to deliver and deploy best-in-class rack-scale solutions to fuel the next wave of AI evolution. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.
Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q2 of 2025, Astera Labs delivered quarterly revenue of $191.9 million, which was up 20% versus the previous quarter and 150% higher than the revenue in Q2 of 2024. During the quarter, we enjoyed revenue growth from both our Aries and Taurus product lines, supporting both scale up and scale out PCIe and Ethernet connectivity for AI rack-level configurations. Scorpio Smart Fabric switches transitioned to volume production in Q2 with our P Series product line for PCIe 6 scale out applications deployed within leading GPU customized rack scale systems. Leo CXL controllers shipped pre-production volumes as customers continued to work towards qualifying platforms ahead of volume deployment. Q2 non-GAAP gross margin was 76% and was up 110 basis points from March quarter levels, with product mix remaining largely constant across higher volumes. Non-GAAP operating expenses for Q2 of $70.7 million were up roughly $5 million from the previous quarter as we continued to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q2 non-GAAP operating expenses, R&D expenses were $48.9 million, sales and marketing expenses were $9.4 million, and general and administrative expenses were $12.4 million. Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter. Interest income in Q2 was $10.9 million. Our non-GAAP tax rate for Q2 was 9.4%.
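The Q2 figures quoted above are internally consistent, which can be verified with simple arithmetic. The sketch below uses only the numbers stated in the remarks; small differences versus reported values are expected from rounding.

```python
# Back-of-the-envelope check of the Q2 FY2025 non-GAAP figures.
# All inputs are taken directly from the prepared remarks above.

revenue = 191.9          # $M, Q2 revenue
gross_margin = 0.76      # 76% non-GAAP gross margin
opex = 70.7              # $M, non-GAAP operating expenses
interest_income = 10.9   # $M
tax_rate = 0.094         # 9.4% non-GAAP tax rate
shares = 178.1           # M, non-GAAP fully diluted share count

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
operating_margin = operating_income / revenue           # ~39.2%, as stated
pretax_income = operating_income + interest_income
net_income = pretax_income * (1 - tax_rate)
eps = net_income / shares                               # ~$0.44, as stated

print(round(operating_margin * 100, 1), round(eps, 2))
```

The derived operating margin (~39.2%) and diluted EPS (~$0.44) match the reported figures.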
Non-GAAP fully diluted share count for Q2 was 178.1 million shares, and our non-GAAP diluted earnings per share for the quarter was 44 cents. Cash flow from operating activities for Q2 was $135.4 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $1.07 billion. Now turning to our guidance for Q3 of fiscal 2025. We expect Q3 revenues to increase to within a range of $203 million to $210 million, up roughly 6% to 9% from second quarter levels. For Q3, we expect Aries, Taurus, and Scorpio to provide growth in the quarter. For Aries, we are seeing growth from a number of end customer platforms where we support scale up and scale out connectivity. Taurus growth is driven by new designs going into volume production for scale out connectivity. Scorpio will primarily be driven by the continued deployment of our P series solutions for scale out applications on third party GPU platforms. We expect non-GAAP gross margins to be approximately 75%, with the mix between our silicon and hardware module businesses remaining largely consistent with Q2. We expect third quarter non-GAAP operating expenses to be in the range of approximately $76 million to $80 million. Operating expense growth in Q3 is driven by continued investment in our research and development function as we look to expand our product portfolio and grow our market opportunity. Interest income is expected to be $10 million. Our non-GAAP tax rate should be approximately 20%. The increase in our non-GAAP Q3 tax rate reflects the impact of the recent change in the tax law passed in July, with the expectation that our full year 2025 non-GAAP tax rate will now be approximately 15% following this tax law change. Our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of $0.38 to $0.39.
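The Q3 guidance pieces also tie out to the stated EPS range. The sketch below assumes the midpoint of each guided range for illustration; the company guided ranges, not point estimates.

```python
# Implied Q3 FY2025 non-GAAP EPS from the guidance above.
# Midpoints are an illustrative assumption, not company figures.

revenue_mid = (203 + 210) / 2        # $M, midpoint of guided revenue
gross_margin = 0.75                  # ~75% guided gross margin
opex_mid = (76 + 80) / 2             # $M, midpoint of guided opex
interest_income = 10                 # $M guided
tax_rate = 0.20                      # ~20% guided Q3 tax rate
shares = 180                         # M guided fully diluted shares

operating_income = revenue_mid * gross_margin - opex_mid
net_income = (operating_income + interest_income) * (1 - tax_rate)
eps = net_income / shares

print(round(eps, 2))                 # lands within the guided $0.38-$0.39
```

The midpoint assumptions imply roughly $0.39, consistent with the guided range.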
This concludes our prepared remarks and once again we appreciate everyone joining the call and now we will open the line for questions. Operator?
At this time I would like to remind everyone, in order to ask a question, press star then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from the line of Harlan Sur with JP Morgan. Your line is open.
Good afternoon. Congratulations on the very strong results. You know, within your Scorpio family of switching products it is good to see the strong ramp of Scorpio P this past quarter. Within the same portfolio it looks like the team is qualified and set to ramp the Scorpio X series for XPU-to-XPU scale-up connectivity. You talked about 10 platform wins. What's been the biggest differentiator? Is it performance, i.e. latency, throughput? Is it fully optimized with your signal conditioning products, is that a consideration, and how much does the familiarity with Cosmos software play a role? And you guys have always called this an anchor product which pulls in more of your solutions alongside your Cosmos software suite. Is this how it's playing out with your ASIC XPU customers? You lead with Scorpio X and you've been successful at driving higher attach with your other products?
Thank you so much for the question, and you're absolutely right. The success that we have enjoyed so far is rooted in primarily, I would say, three things. First is just our closeness to our customers. Over this time period we won the kind of trusted partner status with our customers, so we get a ringside view of what their plans are, what it is that they're planning to deploy and when. The second part of that is really our execution track record. We have shown time and again that our team executes with purpose and we deliver on our promises. So with both of these we get the first call for developing new products going into new platforms at our customers. And that's where the Cosmos software suite comes in. Cosmos, for the audience here, is our software suite that unites all of our products together, and this is how we allow our products to be customized and optimized for unique applications, as well as collect a lot of very rich diagnostics information that allows our customers to really see how their connectivity infrastructure is operating. So with the use of Cosmos we can customize our products to deliver higher performance, which translates to sometimes lower latency, sometimes higher throughput, sometimes different diagnostics features for our customers. And as a result of that we've been able to use Scorpio as an anchor socket in these applications, because this is something that gets designed in upfront, and then we figure out signal conditioning opportunities with our Aries and Taurus products in these platforms. And with Scorpio X in particular, because the customers use derivatives of PCI Express, we have been able to customize Scorpio X to deliver this lower latency and higher throughput.
Thank you for that, very insightful. And for my second question, you know, just over the past 90 days there's been a lot of focus and announcements on scale up networking connectivity. On UALink, as you mentioned, right, the team did the Wall Street teach-in back in May, and obviously the team is a key member of the UALink consortium. AMD recently fully endorsed UALink as its scale up networking architecture of choice for all future generations of its rack-scale solutions, and we know of at least one other ASIC XPU vendor that's going to be moving to UALink as well. Beyond this, what's been the reception and interest level on UALink, and can or will the Astera team speed up its time to market on UALink based products, or is the timing still to sample products next year with volume deployment in calendar '27?
Yeah, Harlan, this is Sanjay here, thank you for the question. To your point, absolutely, we see a tremendous amount of interest in UALink. There are obviously the technical advantages that you get with low latency and familiarity with how the transport layer works, which is based on PCIe. Also the fact that it supports memory semantics natively is a strong reason why customers are liking that interface. The big upside of course is the physical layer, which now has been upgraded to support up to 200 gig on the Ethernet side. So there are several technical reasons that are going in favor of UALink. Customers that were using PCIe or PCIe-like fabrics see this as a natural progression in order to support the AI infrastructure needs going forward. Now, what we'll also note is that it's not just about technical stuff, it's about the ecosystem and the broad availability of components that are required for scale up, and that's again where UALink shines, in the sense that it's truly an open standard, it's truly a multi-vendor supply chain, and those are additional reasons why customers tend to gravitate towards UALink. We do have, like noted, several customers, we're counting 10 plus right now, that are looking at leveraging some of the open standards, whether it's PCIe in the short term, a combination of PCIe and UALink in the midterm, and transitioning perhaps to a broader UALink deployment in 2027 and later. So overall, I think the momentum is shifting positively, and we're excited to be in the middle of it, driving the adoption of an open and scalable supply chain in the market.
Great thank you.
Your next question comes from the line of Ross Seymore with Deutsche Bank. Your line is open.
Hi guys, thanks for letting me ask a couple questions, and congrats on the strong results and guidance. Maybe to no surprise I wanted to stay on the Scorpio family. The diversity of engagements is also interesting to me, and as far as you're talking about it as an anchor tenant, I just wondered if you could go into a little bit of the profile of the types of customers, how it's changed from your initial customer, and then perhaps how much incremental business and interest those customers are showing in other products as they realize as well it's an anchor tenant. Sort of how are you leveraging that Scorpio relationship to bring in more business? Any sort of illustrations of that would be helpful.
Absolutely, again, thank you for that question. So just to remind everyone, we have two product series within Scorpio. One is the Scorpio P series, which just started ramping to production to support some of the third party GPUs that are ramping. The P series is designed for scale out connectivity, a very broad use case, from interconnecting GPUs to custom NICs to storage and things like that. So with Scorpio P series we have a broad base of customers that are leveraging that solution, designing in, going to production, deep in technical evaluations and so on. So that would be a broad play for us with PCIe based scale out interconnect and storage type of interconnect. Scorpio X series is designed for scale up networking, to interconnect the GPUs and accelerators. This we see, like you noted, as an anchor socket, because that is truly the socket that holds all the GPUs together. And today, like we noted, we have 10 plus customers that we're engaging when it comes to scale up networking using Scorpio X series. And this is also pulling in the rest of our products, both because of the advantages that Cosmos brings to the table by unifying all of our products, plus, at the same time, the fact that someone using a fabric solution would also need a gearbox or a retimer or other controller type products. Those all play into having that first call with the customer, or having that early access at an architectural stage, which translates into an opportunity for us where we can not only offer the fabric device but also the surrounding components that come along with it as a connectivity platform.
Thanks for that color. And I guess as my second question, one for Mike, and I think the first one's going to be pretty quick, so I might add a clarification in there as well. The gross margin beat and you're staying solidly above your 70% long-term target. So I guess the question is, is there anything that slows down your trajectory to the 70%? And the clarification would be the tax rate at 20%. Is that this year but not next year? Which is the number we should think of going forward, the 15, the 20, or the 10 it used to be? Thank you.
Okay, thanks, Ross. I'll start with the taxes. The 20% is specific to Q3 because that was the quarter that the tax law changed, so we had to catch up for the previous two quarters. For Q4 you should expect it to normalize around 15%, and then longer term, with this new tax law in place, it's probably around the 13% range. For the gross margins, when we have an inflection up in revenues like we did, you do have the benefit of higher revenues over fixed operating costs, so that was the incremental benefit for us. We do expect to see some pretty good growth from our hardware modules going into the back half of this year and into 2026, so as we make it through 2026 we still encourage people to think of our long-term target model of 70% as something that we'll be delivering. Thank you.
Your next question comes from the line of Blaine Curtis with Jefferies. Your line is open. Hey guys, I'll echo
the congrats on the results. I guess I want to ask on the Scorpio products. I mean, I think 10% in the June quarter was ahead of what many people were looking at, so maybe you can just help us with the shape of that product. I mean, you still said 10% for the year, or greater than 10%, but I'm sure it's much greater than that. Can you help us a little bit, as you look to September, you know, you have 15 million dollars of growth, how to think about Aries versus Scorpio, and any thoughts on how to guide us to model this Scorpio product line this year?
Yeah, this is Mike. For Q2, Scorpio P launched into volume production a little ahead of what we anticipated, so it provided the upside in the quarter. From this base level it continues to grow in Q3 and Q4, but we have more P series designs coming into play that will layer on top of it, and that's more in 2026. For the X series, we do have pre-production volumes here, but that really starts to go into high volume production during the course of 2026, layering on even more growth. Ultimately, what we called out is that the X series is going to grow to be bigger than the P series, so it's a very exciting opportunity, just given that the dollar value of the design opportunities is much higher than the P series, given the use cases of scale-up connectivity. So both will grow. We did reiterate that it will exceed 10 percent of our revenues for the year, which is quite an accomplishment for the first year of a product line. It is poised to be the largest product line of the company as we make it through the following two years.
Thanks, and I just want to ask, in terms of the scale up opportunity, clearly you were clear that X will be more material next year, kind of pre-production this year. I just want to ask this because there were a lot of rumors out there: are there any opportunities for scale up with Scorpio P, or maybe in short, are you going to be shipping anything material this year for scale up, versus the scale out you already talked
about? The scale up this year is predominantly pre-production volumes, and these systems they're shipping into are pretty complex, so we've tried to be conservative on how we telegraph those going forward. But the volume opportunity in scale up connectivity for switching is a much bigger dollar opportunity for us as we look forward. Those designs really will start to enter into full volume production during the course of 2026, so it's not a driver in the next couple quarters.
Thanks Mike.
Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.
Great, thank you. I wonder if you could talk about UALink versus other architectures, and I guess your involvement with NVLink Fusion. Are you agnostic to those various solutions? Are you more favorable towards open source or proprietary? Just kind of walk us through the potential outcomes for you as these battles are being fought.
Yeah, Joe, this is Jitendra. I'm happy to do that. So let's start with NVLink, just because NVLink is perhaps the most widely deployed scale up architecture that's available today, and we are very happy to be part of the NVLink Fusion ecosystem. If you look at the history of NVLink, it really is a fabric that is built ground up for AI. It uses memory semantics to make sure that all of the GPUs can be addressed as if they are one large GPU. It has low latencies. It does add Ethernet-based SerDes to get to the higher speeds, and of course NVIDIA has popularized that with their NVL72 deployment. If you go from there to, let's say, UALink, you find many similarities. UALink also has its genesis in PCI Express. It is a memory semantics based protocol. It uses lossless networking and several other technical advancements that are suitable for AI workloads, and the whole protocol is really custom built for optimizing the throughput for AI type of traffic. So I think it compares well against other approaches, some of which happen to be Ethernet based and some of which are completely proprietary. The other advantage of UALink is that it's open. It's an open ecosystem. We know that many hyperscalers are among the promoter board members, as well as many vendors, frankly, who are working to deploy solutions for UALink, and as a result we expect to see a very vibrant ecosystem of providers, of vendors and customers, with UALink. I think that will be its defining characteristic and why we believe UALink will be adopted widely over time. And as promoter members of the UALink consortium ourselves, we are very happy to not only participate in this standard but also come up with a full portfolio of solutions that include switches, retimers, cables, and what have you, to enable our customers to build a full UALink solution.
So to answer your other question: with UALink we have a lot of dollar-content opportunity, but at the same time we will continue to serve our customers who are today using PCI Express, where we have a huge opportunity as well, along with Ethernet for scale-out and cabling applications, and over time also with NVLink Fusion.
That's very helpful, thank you. And then I get this question a lot: can you size your exposure to merchant GPU platforms versus ASICs? I know there's probably a little bit higher content opportunity for you on the ASIC side, but any sense for what that split looks like and where it may be going over time?
Yeah, Joe, so we do address both of these opportunities. Our opportunity on the merchant GPU platforms comes when our customers customize the rack designs. This is the opportunity for both our Aries and Scorpio P-Series products that Sanjay and Mike touched upon earlier, and we saw a lot of ramp happening there in this last quarter. In addition to that, we are also shipping the Taurus Ethernet cables for scale-out applications. But when you go to scale-up, that becomes a very big opportunity for us just because of the density of interconnect when you're trying to connect all of these GPUs together, and when that network happens to be based on PCI Express, we have an even larger attach rate, which drives our dollar content on these XPU platforms into several hundreds of dollars per XPU. So over time we do see the Scorpio family as our largest revenue contributor, largely deployed on XPUs.
Great, thank you very much.
Your next question comes from the line of Tom O'Malley with Barclays. Your line is open.
Hey guys, thanks for taking my question. You mentioned that you were engaged with 10-plus customers on the Scorpio X-Series switch side. Could you just give us a picture of how many of those are engaged on PCIe today and how many are engaged on the UALink side? And if you're engaged with one on PCIe, are you often engaged with them on UALink as well? Can you maybe talk about that split right now?
Yeah, so this is Sanjay here. What we can note is that the 10-plus opportunities we highlighted are both hyperscalers as well as AI platform providers, and these are all today based on PCIe, so these are nearer-term opportunities that we're tracking. Having noted that, like Jitendra highlighted, UALink is an open standard that contemplates the requirements of scale-up networking in terms of speed and other capabilities going forward. So many of these customers that we're engaging with today on PCIe are also looking at UALink. Some of them might continue to stay with PCIe, and some of them will transition to UALink in the midterm. But longer term, as the UALink ecosystem develops and matures, we do expect that UALink will be a solution that both merchant GPU as well as custom accelerator providers standardize on.
Helpful. And then as my follow-up: there have obviously been a lot of news articles intra-quarter about switch attach rates with XPUs and also general-purpose silicon. If you look at the large player in the market, in a 72-GPU array there are nine switch trays with a couple of switches per tray, so something like a 0.25 switch attach rate per XPU or piece of general-purpose silicon. When you're ramping an XPU with a custom silicon customer, can you maybe walk us through, specifically with the X-Series switch, whether that attach rate is higher or lower and what the reason for that is? That would be super helpful. Thank you.
So obviously we don't comment on individual platforms and customer deployment scenarios, but in general the Scorpio X-Series switches interconnect GPUs, and depending on the platform there are different configurations for the number of GPUs in a pod. Within Astera, the product portfolio that we are developing is designed in a way that addresses a variety of different use cases, and the attach rates vary, so this will probably be a broad answer to your question. But in general we have the engagements, we have the design wins; now it's a matter of all of these platforms getting qualified and ramping to production in due course. Of course, as they get into production, we'll be able to add more color on how that's shaping our revenue and our growth.
Your next question comes from the line of Tore Svanberg with Stifel Financial Corp. Your line is open.
Yes, thank you, and let me add my congratulations as well. I guess my first question is on this new revenue base you talked about. You now have three product lines in production, which obviously doubled your revenue base, and now you're talking about AI Infrastructure 2.0 and the Scorpio X-Series really creating a new revenue level. So should we infer from that that you will double that run rate again as the X-Series starts to ramp? Is that the way we should look at it?
A great question, though I always like to make a correction there, just to keep our engineering folks happy. But you make a great point, and that's exactly what we believe is the beauty of our business model, where we have approached the business as a series of growth steps. We started the journey being on all the NVIDIA-based platforms with the PCIe retimers, which got the company off the ground from a revenue-growth standpoint. The second step was to expand our PCIe retimer and Ethernet retimer business to go after custom ASICs; that transition happened in Q3 of last year. Now we are at the third step in that growth journey, where we have ramped our Scorpio P-Series PCIe-based switch products, along with our Gen 6 retimers, on the third-party NVIDIA-based GPU platforms that are ramping. The fourth step that we are highlighting as part of the call today is the Scorpio X-Series, which is designed for scale-up networking. That is currently underway in the sense that we are still in pre-production, and like we highlighted, throughout 2026 we expect that wave to transition to high-volume production, providing us a new baseline for revenue. These are, of course, higher-value sockets, meaning the dollar content with the Scorpio X-Series switches is significantly higher than what we have done so far, so you could expect that to play into the overall revenue projections as we get towards 2026. And the fifth step that we called out as part of this communication is UALink, which is going to be a growth story in 2027. That is a greenfield application for us, with a much broader deployment of scale-up networking along with a variety of other products that we intend to build for UALink, and that is the fifth step we are executing towards.
Yeah, thank you for walking through all that, Sanjay, really appreciate it. And as my follow-up, related to UALink: it does feel like the standard is regaining a lot of traction. I'm just curious why that is. Is it because of AI moving more into inferencing? Is it because of the 128-gig version? It just feels like there's been a bit of a change in the last few months, so any color you can add on that would be great.
If you don't mind, could you repeat your question? We didn't quite get it.
Yeah, I was asking about UALink regaining a lot of traction, at least that's the way it feels to us, and I'm just wondering why that is. Is it because of AI moving more towards inferencing? Is it because of the 128-gig version? Or is there anything else going on there?
Thank you for clarifying that, Tore. So UALink is actually gaining a lot of traction. Just as a reminder, the UALink specification was only introduced towards the end of Q1 of this year, and since then it has gained a tremendous amount of traction. AMD talked about it very recently in Taipei as part of the OCP summit, and several of the hyperscalers are very closely engaged in figuring out what their roadmap intercepts will look like for UALink, for all the reasons we talked about earlier in the call. I will also say that the majority of these engagements are at the 200-gigabit-per-second-per-lane rate, not at 128.
Perfect thank you.
Your next question comes from the line of Sebastian Naji with William Blair. Your line is open.
Yeah, good afternoon, thank you for taking the questions. I know a lot of the focus is rightfully on the AI tailwinds, but could you maybe comment on what you're seeing in non-AI adoption, in particular PCIe Gen 5 adoption in general-purpose servers, and whether that could be a meaningful contributor to Aries growth going forward?
Yeah, absolutely, thanks for highlighting that. We always overlook general compute nowadays, but to your point, that's a transition we're tracking. AMD released their Venice CPU, which supports PCIe Gen 6 as well, so we do see that playing out in terms of design opportunities and a new set of production ramps for our Aries product line, both on the retimer-class devices as well as other sockets that we develop, whether it is the Taurus modules or gearbox devices. In general, those are additional opportunities for us to grow our business, and we are tracking them as part of our overall outlook. And let's not forget the Leo products, our CXL controllers, which are designed for memory expansion for CPUs in particular. Finally we have CPUs that support CXL technology and are ready for deployment, so we are excited about the opportunities we're tracking across all three product lines, Aries, Taurus, and Leo, going into the general-compute use cases.
Great, okay, that's really helpful. And if I could ask a second question, I want to ask about the use of Ethernet in scale-up going forward. You have Broadcom positioning itself to address both the scale-out and scale-up parts of the network with its latest generation of Ethernet chips, and I'm wondering how you see scale-up Ethernet potentially eating into the PCIe part of the market where Astera has such a strong position.
This is Jitendra, maybe I'll take this question. If you look at our customers today, they are deploying scale-up networks with the technologies that are available to them: NVLink for NVIDIA designs, of course; PCI Express for several of the customers we touched upon earlier in the call; and some customers are also using Ethernet. Largely this has to do with the availability of the switching infrastructure. The two protocols, PCI Express as well as NVLink, are basically custom-built for memory access, for memory semantics, so you can use them to make multiple GPUs in a cluster look like one large GPU. Ethernet is a fantastic protocol, but it was never designed for scale-up; it was designed for large-scale internet traffic, and it is very, very good at that. However, because of the availability of the switches, some customers have tried to run RDMA and other proprietary protocols over Ethernet to do scale-up, and in that scenario it does suffer from higher latencies and lower throughput. Now, I think what you are referring to is Scale-Up Ethernet, where Broadcom is trying to borrow several of the same features that are present in PCI Express and UALink, such as memory semantics, lossless networking, et cetera, and put them on top of Ethernet. At that point it looks quite different from Ethernet, and so the switching infrastructure as well as the XPU infrastructure has to evolve for somebody to use it. But I believe the real differentiation between the two has to do with the openness of the ecosystem. SUE is still dominated by Broadcom, whereas if you look at UALink, it's a very open, very vibrant ecosystem, with multiple vendors working on products and multiple hyperscalers looking to really take their destiny into their own hands and rely on UALink over time.
Great, that's really helpful, thank you so much, and congrats on the quarter.
Thank you. Your next question comes from the line of Quinn Bolton with Needham and Company. Your line is open.
Hey, Jitendra, I just wanted to follow up on that question about SUE. Broadcom introduced their Tomahawk Ultra switch recently with a 250-nanosecond latency, which seems to significantly reduce the latency problems that Ethernet has traditionally had. Can you give us some sense of how that 250-nanosecond latency compares to what you're able to achieve on PCI Express and UALink? And then I've got a follow-up.
Yes, we're able to achieve even lower latencies with some of the products that we have and other products that we have in development. But again, it comes back to designing something that is purpose-built for AI. It is not just about the point-to-point latency; if you look at the end-to-end latency in the system, we believe that UALink and PCI Express today are going to be lower latency. The second point is utilization of bandwidth. The current offering from Broadcom uses 100 gigabits per second per lane, but over time every standard will migrate towards 200 gigabits per second per lane, both UALink and Ethernet, and NVLink is already there today. However, how efficiently you use that raw data rate varies from protocol to protocol. UALink has been designed to be extremely efficient, and we see very high utilization of the available data pipe. So on a technical basis I do think that UALink will be superior to other protocols. But again, not to mention this yet again, the big advantage of UALink is its openness: it's an open standard, so our customers, the hyperscalers, can build their infrastructure once and then ideally plug in whichever GPU or XPU they want that supports an open, interoperable ecosystem like UALink.
Got it. My follow-up question: I think in the script you talked about an expansion in the opportunities with Taurus, and I'm wondering if you could expand on that. Are you seeing adoption of higher per-lane speeds on the Taurus product and adoption of 800-gig cables? Are you seeing adoption beyond your lead customer on Taurus? Just any additional color you could provide on Taurus would be helpful. Thank you.
Yeah, so like you correctly said, and as we have shared in the past, we expect broader adoption of AECs when Ethernet data rates transition to 800 gig. That's starting to happen, and we expect most of those deployments to be ramping in volume in 2026; to that point, we're tracking and engaged with the customers that are deploying it. One point to keep in mind is that our business model for AECs is designed for scale. In other words, we develop cable modules that fit into the cable assemblies of existing cable vendors, and there are a variety of them that service the data center market. So our business model is to go after the ramp, and not necessarily the initial small volumes that might be deployed. To that point, we're tracking and engaged with the right customers, and as the volume starts ramping we do expect significant diversification and growth in our Taurus module business, but most of this we are modeling in 2026 versus this year.
Okay, so it sounds like the volume this year continues to be more 50 gig per lane, and then you see that diversification in 2026 as 100 gig per lane sees wider adoption.
Exactly.
And our business model, like I noted, is designed for that multi-vendor cable supply chain, and we do believe that's the right strategy and what hyperscalers look for. For the initial POC, limited-volume deployment, they might go with one vendor, but very quickly each one of these hyperscalers wants the diversity, as well as the supply chain capacity, to drive volume, and that has essentially been our focus when it comes to the business model on the AEC side.
Got it, thank you.
Your next question comes from the line of Papa Sylla with Citi. Your line is open.
Thank you for taking my question, and congrats on the great results. I guess my first question is following your recent announcement of a partnership with Alchip. Can you touch a little bit more on the extent of that collaboration? Is it more at the chip level, in terms of an I/O chiplet type of partnership, or more at the device level with your Aries portfolio?
Yeah, so I'll answer that question by sharing the vision and goal that we're executing towards. Our vision is to provide a purpose-built connectivity platform for AI infrastructure that includes silicon products, hardware products, and software products. Of course, the focus for us has been on the connectivity side of the AI rack. When you think of an AI rack, there are other components that go in, primarily the compute nodes, whether based on third-party merchant GPUs and CPUs or the custom ASICs that Alchip and others develop for hyperscalers. What we strongly believe is that the AI rack, the way it's defined today, is not scalable, in the sense that it's more proprietary. As the industry transitions to what we are calling AI Infrastructure 2.0, the entire AI rack has to be based on an open, scalable, multi-vendor type of approach. To that point, we are not only developing the connectivity products to address the various aspects of an AI rack, whether it's scale-up, scale-out, or other connectivity; at the same time we are partnering with third-party GPU vendors (we talked about the announcement we did with AMD) and engaging with custom ASIC providers, including Alchip, so that at the end of the day the hyperscalers who are our common customers get a rack that is well tested and interoperable, where the software is all consistent and so on, to ensure it delivers the highest level of performance. That is the scope of the collaboration we are having with Alchip and other providers, and over time you will see us announce more partnerships as we seek to establish the open rack that we believe is critical for deploying AI at scale.
Got it, no, that's very helpful. And if I can sneak in just one more, this might be more for Mike, on the gross margin. It seems like over the last two quarters in particular gross margin keeps going up, but in the September quarter you're guiding to 75%, which at least at the midpoint seems to be down a little bit. I'm just curious about any additional color on that, because by all indications Scorpio will continue to go up, and the mix trend we're seeing currently seems to be moving in the same direction in September as well. So we were just curious about that guide-down in gross margin in the September quarter.
Yeah, we do see growth from Scorpio, but we also see good, solid growth in Taurus during the quarter. Taurus, as a module, is hardware, so it carries slightly lower gross margins than standalone silicon, and you'll see that dynamic play out to a small extent in the quarter. And as we move into 2026, we still want people to think of it as trending towards our longer-term model of 70%.
Your next question comes from the line of Suji Desilva with ROTH Capital. Your line is open.
Hi, Jitendra, Sanjay, Mike, congrats on the strong quarter. Maybe you can give us a framework on the retimer content per link for scale-out versus scale-up. Maybe it's similar, but maybe there are some differences. I'd be curious to understand what the unit opportunities might be and how they might differ.
Yeah, so when you look at the retimers, the contrast with the switches is the following: the switches get designed in right at the inception, at the architecture stage. Your customers will think about how they're going to connect either GPU to other GPUs in a scale-up, or the GPU to NICs or storage as part of that scale-out system. Once the switch is designed in and the rack starts to get put together, then we look at the question of reach. Sometimes you find that you need retimers in a link; other times you don't. Sometimes the retimers go on the board in a chip-down format, but other times they are better suited to be put in cables, in an AEC format. The good news with Astera is that we provide this full portfolio of devices for our customers to choose from, from switches to gearboxes to chip-down retimers to retimers in active electrical cables, so they can look to one company, one Astera, for all the solutions at the rack level.
Okay, and neither one would be higher than the other necessarily, just to be clear?
Can you repeat that?
Neither one will be higher than the other necessarily, scale-up versus scale-out?
Yeah, it really depends upon the system architecture. In scale-up there are many, many more links than there are in scale-out; however, it is prohibitive from a power standpoint to put retimers on all the links. So typically the links that are shorter, where you are able to go from the switch to the GPU over a shorter distance, will not use retimers, but the links that are longer potentially will. Sometimes we have scale-up domains that exceed one rack, so you might have two racks side by side that are part of a scale-up domain, in which case you end up with a cable solution and you need retimers in the scale-up in those scenarios.
Helpful, thanks. And then my follow-up is on Scorpio X. You talked about 10 customer engagements; I'm wondering if that implies multiple programs per customer, whether they're thinking about standardizing on you across their platforms. Any color on how those are shaping up, programs versus customers, would be helpful.
Yes, the ten-plus we noted are unique customers. Now, within each customer there are multiple opportunities that we're tracking. Some of them are design wins, and some of those are ramping to production; some are design wins going through qualification; some are early engagements. In general, we are very pleased with the amount of traction that we're seeing for our Scorpio family.
Excellent, thanks. Thanks, everybody.
Thank you.
There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.
Thank you, everyone, for your participation and questions today. Please refer to our Investor Relations website for information regarding upcoming financial conferences and events. Thanks so much.
This concludes today's conference call. You may now disconnect.