2/10/2025

speaker
JL
Conference Operator

I will now turn the conference over to Leslie Green, investor relations of Astera Labs. Leslie, you may begin.

speaker
Leslie Green
Director of Investor Relations

Thank you, JL. Good afternoon, everyone, and welcome to the Astera Labs fourth quarter 2024 earnings conference call. Joining us today on the call are Jitendra Mohan, chief executive officer and co-founder; Sanjay Gajendra, president and chief operating officer and co-founder; and Mike Tate, chief financial officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO and our upcoming filing on Form 10-K. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events, or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website and will also be included in our filings with the SEC, which will likewise be accessible through the investor relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

Thank you, Leslie. Good afternoon, everyone, and thanks for joining our fourth quarter conference call for fiscal year 2024. Today, I'll provide an overview of our Q4 and full year 2024 results, followed by a discussion around the key secular trends and the company-specific drivers that will help Astera Labs deliver above-industry growth in 2025 and beyond. I will then turn the call over to Sanjay to discuss our medium- and long-term growth strategy in more detail. Finally, Mike will provide details of our Q4 2024 financial results in addition to our financial guidance for Q1 of 2025. Astera Labs delivered strong Q4 results and set our sixth consecutive record for quarterly revenue at $141 million, which was up 25% from last quarter and up 179% versus Q4 of the prior year. Revenue growth during Q4 was primarily driven by our Aries PCIe Retimer and Taurus Ethernet Smart Cable Module product families. Within these product families, we saw additional diversification driven by strong demand for both AI scale-up and scale-out connectivity. Leo and Scorpio momentum also continued, with both products shipping in pre-production volumes during the quarter to support our customers' qualifications across multiple platforms for a variety of use cases. Looking at the full year, our strong Q4 finish culminated in an outstanding 2024, which saw full-year sales increase by 242% year-over-year to $396 million. The value of our expanding product portfolio across both hardware and software was reflected in our robust fiscal 2024 non-GAAP gross margin of 76.6%. Over the past 12 months, we have aggressively broadened our technology capabilities by investing in our R&D organization to solve next-generation connectivity infrastructure challenges. We successfully increased our headcount in 2024 by nearly 80% to 440 full-time employees.
In Q4, we also closed a small but strategic acquisition that included a talented group of architects and engineers to help accelerate our product development, strengthen our foundational IP capability, and provide holistic connectivity solutions for our hyperscaler customers at a rack scale. Our revenue growth in 2024 was largely driven by Aries products, along with a strong ramp of Taurus in the fourth quarter. We expect 2025 to be a breakout year as we enter a new phase of growth driven by production revenue from all four of our product families to support a diverse set of customers and platforms. In 2025, our Aries and Taurus retimers are on track to continue their strong growth trajectory. Also, Astera Labs is poised to be a key enabler of CXL proliferation over the next several years, with the volume ramp of our Leo family expected to start in the second half of 2025. Finally, our Scorpio smart fabric switches will begin ramping this year with new and broadening engagements for scale-up with our X-Series and scale-out with our P-Series switches. In time, we expect Scorpio fabric switches to become our largest product line, given the size and growth of the market opportunity for AI fabrics. Across our industry, hyperscalers are pushing the boundaries of scale-up compute to support large language models that continue to grow in capacity and complexity. Recent algorithmic improvements have shown the potential to deliver AI applications with better return on investment for AI infrastructure providers. These innovations enable increased adoption and broader use cases for AI across the industry. The secular trends underlying our business are projected to be robust in 2025, driven by growing CapEx investments by hyperscalers in AI and cloud infrastructure. Hyperscalers are deploying internal ASIC-based rack-scale AI servers that use end-to-end scale-up networks to deliver larger, higher-performance, and more efficient clusters.
These scale-up networks require every accelerator to connect with every other accelerator with fully non-blocking, high-throughput, and low-latency data pipes. This drives the need for more and faster interconnect, and the homogeneity of such a system allows for many optimizations and innovations. For PCIe-based scale-up clusters, our innovative Scorpio X-Series and Aries Retimer families are perfectly suited to provide a custom-built interconnect solution. In addition to PCIe-based scale-up opportunities, we are excited about the next, potentially broader opportunity with Ultra Accelerator Link, or UALink. This is an impactful initiative by the AI industry to develop an open scale-up interconnect fabric for the entire market. Early in 2025, significant progress has been made advancing the development of UALink to provide the industry with a high-speed scale-up interconnect for next-generation AI clusters. The UALink Consortium recently expanded its board of directors to include several more technology leaders, including Alibaba Cloud and Apple. Given our intimate involvement within this open standard, we are seeing overall engagement accelerate for Astera Labs' next-generation high-speed connectivity solutions. In summary, we anticipate the market opportunity for high-speed connectivity to increase at a faster rate than underlying AI accelerator shipments. We look to take full advantage of these robust trends by broadening our existing product portfolio with differentiated hardware and software solutions across multiple protocols and interconnect media. We are accelerating our pace and level of development, driven by our customers, to deliver new products that address the rapidly growing market opportunity ahead of us. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss our growth strategy in more detail.

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Thanks, Jitendra, and good afternoon, everyone. 2024 was a significant year for Astera Labs as we diversified our business across multiple vectors. We launched our Scorpio fabric switches, generated revenue from all four of our product lines, and transitioned into high-volume production for Aries and Taurus smart cable modules. We also started ramping multiple new AI platforms based on internally developed AI accelerators at multiple customers, to go along with continued momentum with third-party GPU-based AI platforms. This expansion to internal accelerator-based platforms took off in the third quarter and helped us establish a new revenue baseline for our business, with continued growth in the fourth quarter. As we look into 2025, we see strong secular trends across the industry supported by higher capex spend by our customers, broadening deployment of AI infrastructure driven by more efficient AI models, and company-specific catalysts that should drive above-market growth rates for Astera Labs. Specifically for 2025, we expect three key business drivers. One is the continued deployment of internally developed AI accelerator platforms that incorporate multiple Astera Labs product families, including Aries, Taurus, and Scorpio. As a result, we'll continue to benefit from increased dollar content per accelerator in these next-generation AI infrastructure systems. Based on known design wins, backlog, and forecasts from multiple customers, we see strong continued growth of our Aries PCIe Gen 5 products in 2025. As AI accelerator cluster sizes scale within the rack and rack-to-rack, we see meaningful opportunities to drive reach extension with our Aries retimer solutions in both chip-on-board and smart cable module formats. Our Taurus product family has demonstrated strong growth for the past several quarters, paving the way for solid revenue contributions in 2025.
We continue to see good demand for Taurus 400-gig Ethernet solutions utilizing our smart cable modules, both for AI and general-purpose compute infrastructure applications. Looking ahead, we view the transition to 800-gig Ethernet as a late-2025 event, with a broader market opportunity in 2026. Additionally, design activity for Scorpio X-Series products across next-generation scale-up architectures in AI accelerator platforms is also showing exciting momentum as we continue to broaden our customer engagements. The Scorpio X-Series is built upon a software-defined architecture, leverages our Cosmos software suite, and supports a variety of platform-specific customizations, which enables valuable flexibility for our customers. We are pleased to report that we have received the first pre-production orders for our Scorpio X-Series product family. The second driver for our 2025 business is the expected production ramp of custom AI racks based on industry-leading third-party GPUs. We are shipping pre-production quantities to support qualification of designs utilizing our Scorpio P-Series and Aries PCIe Gen 6 solutions to maximize GPU throughput while leveraging our customers' own internal networking hardware and software capabilities. These programs are driving higher dollar-content opportunities for Astera Labs on a per-rack and per-accelerator basis, and we expect volume deployments to begin in the second half of this year. At the DesignCon 2025 trade show, we demonstrated the better-together combination of PCIe fabric switch, PCIe retimer, and 100-gig-per-lane Ethernet retimer solutions utilizing our Cosmos software suite. This provides deeper levels of telemetry to pinpoint connectivity issues in complex topologies while enabling tighter integration of Cosmos APIs into our customers' operating stacks. We also showcased the first public demonstration of end-to-end interoperability between Scorpio fabric switches, Aries retimers, and Micron's PCIe Gen 6 SSDs.
This demonstration highlighted the maturity of our PCIe Gen 6 solutions, the growing PCIe Gen 6 ecosystem, and our performance leadership by doubling the maximum storage throughput possible today and setting a new industry benchmark. The third driver for our 2025 business is general compute in the data center. We expect revenue growth from general compute-based platform opportunities featuring new CPUs, new network cards, and SSDs with our Aries PCIe retimers, Taurus Ethernet SCMs, and the Leo CXL product families. Though general compute is a smaller portion of our business compared to AI servers, we benefit from the diversity, with multiple layers of growth. Overall, we're excited by the many opportunities and secular trends in front of us to drive 2025 revenues. We're also encouraged by our customers and partners increasing their trust in us and opening new opportunities for new product lines to support their platform roadmaps. As a result, we'll continue to aggressively invest in R&D to further expand our product and technology portfolio as we work to increase our total addressable market. We'll build upon our semiconductor, software, and hardware capabilities to address comprehensive connectivity solutions at a rack scale to ensure robust performance, maximum system utilization, and capital efficiency. As we look to 2026 and beyond, our playbook remains the same. One, stay closely aligned with our customers and partners. Two, innovate exponentially in everything we do. And three, continue to be laser-focused on product and technology execution. Our long-term growth strategy is to aggressively attack the large and growing high-speed connectivity market. We estimate our portfolio of hardware and software solutions across retimers, controllers, and fabric switches will address a $12 billion market by 2028. Significant portions of this market opportunity, such as AI fabric solutions for backend scale-up applications, are greenfield in nature.
With a diverse and broad set of technology capabilities, we are partnering with key AI ecosystem players to help solve the increasingly difficult system-level interconnect challenges of tomorrow. By helping to eliminate data, networking, and memory connectivity bottlenecks, our value proposition expands and will drive our dollar-content opportunity higher. In conclusion, we are motivated by the meaningful opportunity that lies before us and will continue to passionately support our customers by strengthening our technology capabilities and investing in the future. Before I turn the call over to our CFO, Mike Tate, to discuss Q4 financial results and our Q1 outlook, I want to take a quick moment to thank our customers, our partners, and, most importantly, our team and their families for a great 2024. With that, Mike.

speaker
Mike Tate
Chief Financial Officer

Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q4 financial results and Q1 2025 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the investor relations section of our website, for more details on both our GAAP and non-GAAP Q1 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q4 of 2024, Astera Labs delivered record quarterly revenue of $141.1 million, which was up 25% versus the previous quarter and 179% higher than the revenue in Q4 of 2023. During the quarter, we enjoyed strong revenue growth of both our Aries and Taurus smart cable module products, supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations. For Leo CXL and Scorpio smart fabric switches, we shipped pre-production volumes as our customers work to qualify their products for production deployments later in 2025. Q4 non-GAAP gross margin was 74.1%, down from September-quarter levels due to a product mix shift towards hardware-based solutions with both our Aries and Taurus smart cable modules. Non-GAAP operating expenses for Q4 were $56.2 million, up from $51.3 million in the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. As previously mentioned on this call, we closed a small acquisition toward the latter half of the quarter, which also contributed to slightly higher spending during the period. Within Q4 non-GAAP operating expenses, R&D expense was $37.8 million, sales and marketing expense was $8.1 million, and general and administrative expense was $10.4 million.
Non-GAAP operating margin for Q4 was 34.3%, up from 32.4% in Q3, which demonstrated strong operating leverage as revenue growth outpaced increased operating expenses. Interest income in Q4 was $10.6 million. On a non-GAAP basis, given our cumulative history of non-GAAP profitability, starting in Q4 we will no longer be accounting for a full valuation allowance on our deferred tax assets. As a result, in the fourth quarter, we realized an income tax benefit for this change, resulting in a Q4 tax benefit of $7.6 million and an income tax benefit rate of 13%, which compares to our previous guidance of an income tax expense rate of 10%. Non-GAAP fully diluted share count for Q4 was 177.6 million shares, and our non-GAAP diluted earnings per share for the quarter was 37 cents. Excluding the impact of the Q4 tax benefit just noted, and based on a 10% non-GAAP income tax rate during the quarter, as previously guided, non-GAAP EPS would have been 30 cents. Cash flow from operating activities for Q4 was $39.7 million, and we ended the quarter with cash equivalents and marketable securities of $914 million. Now turning to our guidance for Q1 of fiscal 2025. We expect Q1 revenues to increase to within a range of $151 million to $155 million, up roughly 7% to 10% from the prior quarter. For Q1, we expect continued growth from our Aries product family across multiple customers over a broad range of AI platforms. We look for our Taurus SCM revenue for 400-gig applications to also provide strong contribution to the top line in Q1. Our Leo CXL controller family will continue shipping in pre-production quantities to support ongoing qualification ahead of volume ramp in the second half of 2025. Finally, we expect our Scorpio product revenue to grow sequentially in Q1, driven by growing pre-production volumes of designs for rack-scale systems. We continue to expect Scorpio revenue to comprise at least 10% of our total revenue for 2025, with acceleration exiting the year.
We expect non-GAAP gross margins to be approximately 74%, as the mix between our silicon and hardware modules remains consistent with Q4. We expect first quarter non-GAAP operating expenses to be in a range of approximately $66 million to $67 million. Operating expense growth in Q1 is largely driven by three factors. One, continued momentum in expanding our R&D resource pool across headcount and intellectual property. Two, seasonal labor expense step-ups associated with annual performance merit increases and payroll tax resets. And three, a full-quarter contribution of the strategic acquisition we executed in the latter part of Q4. We continue to be committed to driving operating leverage over the long term via strong revenue growth while reinvesting into our business to support the new market opportunities associated with next-generation AI and cloud infrastructure projects. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 10%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in a range of approximately $0.28 to $0.29. This concludes our prepared remarks. Once again, we appreciate everyone joining the call, and now we will open the line for questions. Operator?

speaker
JL
Conference Operator

Thank you. At this time, I would like to remind everyone that in order to ask a question, press star followed by the number one on your telephone keypad. We also ask that for this session you please restrict yourself to one question and one follow-up. Just a moment to compile the Q&A roster. Your first question comes from the line of Harlan Sur of JP Morgan. Your line is open.

speaker
Harlan Sur
Analyst at JP Morgan

Good afternoon, and congratulations on the strong results and execution. You know, the one big inflection in accelerated computing and AI, as you mentioned, is the ramp of numerous AI ASIC XPU programs, right? You've got TPU at Google, Trainium at Amazon, MTIA at Meta, and also multiple new programs in the not-too-distant future. It looks like these custom AI program ramps are actually growing faster than the overall merchant GPU market. So the question is, what percentage of your business last year came from merchant GPU AI systems versus ASIC-based systems, and what do you expect that mix to be, say, exiting this year? I mean, ASIC programs seem to be on a faster growth trajectory, and you have a very, very strong attach here across all your products, as you guys mentioned. I'm just wondering if the team agrees with me on this view.

speaker
Mike Tate
Chief Financial Officer

Yeah, we're very excited about the addition of these internal AI accelerator programs. In particular, on those programs, to the extent you're doing the scale-up connectivity, the unit volume steps up in a meaningful way compared to the merchant GPU designs that we see right now. If you look into 2024, the first half of the year was predominantly merchant GPUs, as that was the first to really adopt our product lines. And then Q3 was the first quarter it inflected up with the internal AI accelerators; you especially see that with our Taurus and our Aries SCM business inflecting up. Q3 was a partial quarter, and then Q4 was a full quarter, so it really set up a nice baseline of revenues. Now, if you look into 2025, we see both contributing growth. The first half of the year will be more predominantly the internal AI accelerator programs, but beginning in the back half of the year, the transition on the merchant GPUs will also be very strong for us. This is where you'll see the custom rack configurations start to deploy, and that's where we see a big dollar increase in our content per GPU, with Scorpio starting to ramp.

speaker
Harlan Sur
Analyst at JP Morgan

I appreciate that. And on the balance sheet, inventories were up almost 80% sequentially in the December quarter. That's an all-time high for the team, and I think it's up 60% versus the average inventory level over the past four quarters. Is the significant step-up reflective of a strong multi-quarter shipment profile across the overall portfolio, or maybe reflective of a step-up in more of your board-level solutions, or is it a combination of both?

speaker
Mike Tate
Chief Financial Officer

Well, if you remember in Q3, our revenues were very strong. We were up 47% sequentially. A lot of that strength developed during the quarter, so we drew down our inventories pretty significantly in Q3. Now in Q4, we had time to build back to our more normalized level. So this level of inventory is actually where we feel most comfortable. We always want to be in a position to support upsides from our customers because most of our programs are sole sourced. But this level reflects the growth in our business now.

speaker
Harlan Sur
Analyst at JP Morgan

Oh, great. Thank you.

speaker
JL
Conference Operator

Your next question comes from the line of Blaine Curtis of Jefferies. Your line is open.

speaker
Blaine Curtis
Jefferies Analyst

Hey, good afternoon. Thanks, Tim. Two questions I had. I was just kind of curious, you mentioned the strength in Taurus, and I didn't know if you were going to dial us in as to maybe how big that was in December. And then, Mike, I wanted to ask on gross margin. I know it's a mix of hardware, and you're seeing strength from at least your ASIC customer there with multiple products. I'm just kind of curious, as you think about 2025 as Scorpio ramps, how that mix shifts and what the impact is on the shape of gross margin for this year.

speaker
Mike Tate
Chief Financial Officer

Yeah, so for Q4, you see the margins ticked down. We did highlight that Taurus and Aries, which are in the module form factors, grew as a percentage of our total revenues, and the upside in the quarter was in particular from Taurus as well. So that is the margin going down to 74.1%, which is reasonably close to what we had expected. Now, as you go into 2025, we still could see good contribution from Taurus and the Aries SCM modules. But as we make it through the year, the Aries chip-on-board, as well as Leo and Scorpio, are a positive for us as well. So we think Q1 and Q2 we should have a consistent margin profile of around 74%. And as we've highlighted, margins will be trending down closer to the longer-term model of 70%, but it all depends on the mix of our hardware versus silicon.

speaker
Blaine Curtis
Jefferies Analyst

Thanks. And then I was just kind of curious, now that you're a quarter removed from launching Scorpio, if you could comment on the design momentum you've had in terms of the number of engagements. And from a competitive landscape standpoint, for scale-up outside of, you know, NVLink, what else is out there that you're seeing that you're competing against with those products?

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Let me help answer that. So for us, as you know, Scorpio has got two series: the P-Series for PCIe and head-node use cases, which tend to be pretty broad, and the X-Series for GPU clustering on the backend side. Overall, since we launched, we continue to pick up multiple design opportunities, at this point both for the P-Series as well as for the X-Series. The X-Series tends to be a longer design and qualification cycle just because it's going into the backend GPU side, but the front end, the P-Series, is what we expect to start contributing meaningful revenue starting in the second half of this year as the production volumes take off. We have been shipping pre-production for the P-Series already, and for the X-Series we have started receiving our first pre-production orders. So from that standpoint, what I want to share is that the momentum on Scorpio has definitely been more than what we expected, largely driven by the feature set that we have implemented, where Scorpio is the first set of fabric devices for PCIe and backend connectivity that's been developed ground-up for AI use cases. To that standpoint, the customers see the value in the features we have and the performance that we're delivering.

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

And, Blaine, this is Jitendra. To your question on what else is out there: of course, clearly NVLink is the one that is most widely deployed within the NVIDIA ecosystem, and we don't play there. Other than that, there is, of course, the PCIe-based scale-up network that Sanjay just talked about. And the other alternative is Ethernet-based scale-up networks. The difference between the two is really that Ethernet is a very commonly used standard but was not designed for scale-up, so the latencies are quite high, and you don't quite get the performance out of Ethernet, which is why we see many of our customers gravitate towards PCIe-based systems, which are inherently better suited for this application. Now, what we see happening in the future is the industry might try to get behind UALink, which is a developing standard that we are very excited about. With UALink, you get the benefit of both Ethernet speeds as well as the lower latency and the memory-based I/O of PCIe-like protocols. Now, over time, we do expect Scorpio to become our largest product line, just because the market for scale-up interconnect is so large. So we're very excited about what's coming in this space.

speaker
JL
Conference Operator

Thank you. Your next question comes from the line of Joe Moore of Morgan Stanley. Your line is open.

speaker
Joe Moore
Morgan Stanley Analyst

Great. Thank you. I know there's a lot of attention on the DeepSeek innovations that we saw a couple weeks ago. Can you talk about what you've seen, and how you would position that with regard to other innovations that we've seen? Do you see it as deflationary to the long-term opportunity in AI? I just would love to get your perspective on this.

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

Joe, this is Jitendra again. So let me start on that. First of all, you're right, there is a lot of discussion and a lot of articles that have been written about DeepSeek, so I'm not sure exactly what we will be able to add. But what I do want to point out is that what matters most is what our customers, the hyperscalers, think about DeepSeek. And in the face of that announcement, they have all gone and increased their CapEx spending. So that really shows that the hyperscalers believe in the future of AI, and the continued demand for GPUs and accelerators is likely to continue. And I can give you my perspective on why that is. If you break it down into two, first, if you look at inference, DeepSeek has shown that algorithmic improvements will drive the cost of inference lower. And we have seen time and again, when the cost goes down, the adoption goes up. It happened with PCs. It happened with phones. It happened even with servers when virtualization kicked in. And we do think that AI will follow a similar trajectory, so it gets more adoption. Then if you look at training, consumers like you and me are all looking for better results from these models. And by embracing some of the innovations that the DeepSeek team has put forward, the quality of results from these models will go up. That, again, is beneficial for the overall AI ecosystem. So our focus has always been on enabling AI infrastructure with both third-party GPUs as well as ASIC platforms, and to the extent that any of the dynamics change, we stand to benefit from the trends.

speaker
Joe Moore
Morgan Stanley Analyst

That's very helpful. Thank you. And then for my follow-up, you talked about Leo ramp in the second half of the year. I know you're seeing quite a bit of interest in kind of memory bandwidth boosting kind of capabilities there. Can you help us size how important that could be in the second half?

speaker
Mike Tate
Chief Financial Officer

Yeah, so we've been working with our customers as the next-generation CPUs that support CXL come to market. These are going to initially be very high-memory, data-intensive applications, high-performance compute type applications. So we'll see those start to deploy in the back half of the year. Ultimately, longer term, we do expect the CXL technology to be very beneficial for more mainstream general compute, so we hope to see that play out in 2026 and 2027.

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Yeah, to add to what Mike said, Joe, just to give you a little bit broader picture: based on the fact that we have been working closely with the hyperscalers and the CPU vendors for quite some time now, it's become pretty clear that there are three or four applications that are driving the ROI, or the use cases, for CXL. At this point, we understand that the first one is getting deployed in the second half of this year, and we do expect additional use cases and the associated opportunities to come along, probably in 2026 and beyond.

speaker
JL
Conference Operator

Thank you. Your next question comes from the line of Tore Svanberg of Stifel. Your line is open.

speaker
Tore Svanberg
Stifel Analyst

Yes, thank you, and congrats on the strong results. I had a question on Scorpio. I think you said you still expect it to be more than 10% of revenues in calendar '25. Obviously, that number is now higher. Is that going to be predominantly the P-Series, or are you going to get some contribution already from the X-Series in calendar '25?

speaker
Mike Tate
Chief Financial Officer

We expect contributions from both: the P-Series will be first to launch, and then the X-Series. We do expect contributions from both over a good part of the year.

speaker
Tore Svanberg
Stifel Analyst

Great. And as a follow-up on Taurus, could you just talk a little bit about the revenue profile there? How diversified is it by customer and use case? And how do you think about that business first half versus second half? Because obviously, if it is more diversified in nature, then maybe second half will be even greater than first half. Just some profile of that revenue base right now.

speaker
Mike Tate
Chief Financial Officer

Yeah, so we're shipping both 200-gig and 400-gig. 400-gig is what really launched here in Q3 and Q4, and we have multiple designs across different types of configurations, and we also support different cable providers and different form factors. The 400-gig opportunity is still a relatively limited opportunity set out there, so we've been focusing primarily on our lead customer. This should continue to provide good, strong growth in 2025, driven by both AI and general-purpose compute.

speaker
Mike Tate
Chief Financial Officer

Then, in the latter part of the year, we see the market start to transition to 800-gig.

speaker
Tore Svanberg
Stifel Analyst

Great. Thank you.

speaker
JL
Conference Operator

Your next question comes from the line of Tom O'Malley of Barclays. Your line is open.

speaker
Tom O'Malley
Barclays Analyst

Hey, guys. Thanks for taking my questions. My first was, in your prepared remarks, you mentioned that over time, Scorpio would become your biggest product line. I don't think you've mentioned that before. Perhaps you could talk about the timeframe that you're thinking about for that product line taking over. Is this something that we should be seeing potentially as early as 2026? And is that a function of just Scorpio growing faster than you had originally thought, or perhaps Aries coming down to some extent? Just understanding why you made that comment in the preamble.

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Yes, to add some color to that: the ASP profile of a retimer-class device and a switch device tends to be very different, meaning on the switch side, we do get a significantly higher ASP. And if you look, at least for the customized AI racks that are being deployed, we are essentially adding a Scorpio socket to go along with the retimer socket. Given that attach rate and configuration, what we also see is that the dollar content per GPU will go up. But in general, the switch is a much bigger TAM out there. And then we get to play both in the front end with the P-Series and in the back end. The back end tends to be a lot more fertile in many ways, because you have many GPUs talking to each other, and we benefit from having a high-ASP device like the X-Series switches being deployed at a scale that's much more significant compared to any of the products that we have released so far. It does not mean that the retimers or CXL controllers are going to go away. It simply means that the TAM that we're able to address with Scorpio tends to be larger. And given the market momentum and opportunities that we're seeing, including some of the roadmap products we're developing, we feel confident that going forward it will continue to evolve and become a flagship product, both from a technology and a revenue standpoint.

speaker
Tom O'Malley
Barclays Analyst

Super helpful. And then my follow-up was just, I think it was Mike's commentary on one of the first questions here on the call. You talked about the year 2025 on the Aries side, about how in the first half of the year you would see more internal AI efforts, followed by the second half of the year being more merchant GPU. That comment was a bit surprising to me, given we're going through a big product transition now at that large customer of yours. So is there any change in the way that you see the ramp of 2025 versus where you did before? I would have anticipated maybe the merchant GPU being a bit stronger earlier in the year. Just any reason behind those comments; it caught me a little off guard. Thank you.

speaker
Mike Tate
Chief Financial Officer

Sure. First of all, the merchant GPUs drive both Scorpio and Aries. The big incremental piece of the merchant GPUs is the Scorpio content, which is all new for us. The designs that we have are complex in nature; they're all new. To get them productized and ramped, we're looking at that to start in the back half of the year. Right now, in the first half of the year, it's pre-production. These are all for custom rack configurations, so the customization adds a little bit of lead time to the volume ramps.

speaker
JL
Conference Operator

Your next question comes from the line of Ross Seymour of Deutsche Bank. Your line is open.

speaker
Ross Seymour
Deutsche Bank Analyst

Hi, guys. Thanks for asking a couple questions. The first one is a little bit higher level, and it's on diversification. You guys talked from a product diversification point of view with Scorpio being over 10% of your revenues going forward. But there's also an admittedly concentrated batch of hyperscalers. There's also the customer concentration. So whether it be on the customer side or the product side or both, any way you can give us a little bit of framework of how 2024 ended and how you think 2025 will differ from a diversification lens?

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Yeah, so overall in 2024, and in fact in 2025 as well, we are shipping to all the hyperscalers across the multiple different product sets that we have, so there should not be any doubt or question about that. But having said that, there are some nuances that are important to keep in mind when you're dealing with the data center market. The first thing, like we always say, is that customer concentration is an occupational hazard in the data center market, just because there are only a handful of hyperscalers. The second thing to keep in mind is that the hyperscalers differ in terms of their maturity when it comes to internal accelerator chip development; some are more advanced than others. For us, when we are designing in and counting revenue from internal accelerator programs, obviously there will be a difference between the hyperscalers where we get to play on both the merchant and internal accelerator programs, compared to hyperscalers where we only have the merchant silicon opportunity. So there is that nuance. The other one that is also true is that the appetite for new-technology deployment differs from hyperscaler to hyperscaler. Some hyperscalers are pretty aggressive in terms of deploying new technology; others take some time. So you can expect that in a given window of time, there will be a situation where our revenue comes more from a given hyperscaler versus another. It does not mean that the second hyperscaler is not a potential customer for us; it simply means that they take more time to deploy something, given their own workloads and other things that they're tracking. But overall, our goal is to make sure that the revenue contribution we get reflects the share that each hyperscaler has, meaning if a given hyperscaler has a certain percentage of the market in terms of cloud services, then we expect to see similar numbers in our share.

speaker
Ross Seymour
Deutsche Bank Analyst

That's very helpful. Thank you for that. Mike, one for you. You mentioned earlier why gross margin was down a little bit below your guide in the fourth quarter and stays flat in the first; it sounds like you said it stays flat in the second as well. Does that hardware-slash-module mix shift in the second half? It sounded from what you said like it does go back away from the hardware side, a little bit more toward the chip level. Was I hearing that correctly? Any update on that would be helpful.

speaker
Mike Tate
Chief Financial Officer

Yeah, we do see growth in the hardware, but it should stay at a similar level as the rest of the business, so it's not growing as a percentage of the revenue in the first half of the year.

speaker
JL
Conference Operator

Your next question comes from the line of Quinn Bolton of Needham. Your line is open.

speaker
Quinn Bolton
Needham Analyst

Hey, guys. I wanted to come back to Joe's question on DeepSeek. Obviously, one of the benefits is greater deployment of AI models, which probably means a shift toward inferencing. Do you guys see, on the inferencing side, the need for clusters that are as large as we've historically seen on the training side? And if we don't, if we see greater adoption of more inferencing clusters but of smaller size, is that a positive or a negative for the connectivity TAM?

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

Let me take a crack at that. First of all, at a high level, I can say that our business is not strongly dependent on inferencing versus training; we tend to benefit from both. Now, the point that you made is valid, in that you don't need as large a cluster for inferencing as you need for training. Having said that, if you look at the DeepSeek announcement, or even, for that matter, at other folks that are making announcements as well, these chain-of-thought models actually require far more compute than models have required historically. So over time, we do expect that the unit of compute will become a rack-level AI server. What typically used to be a 2U or 4U server will now be at a rack level. And when you go to a rack level, you come up with different connectivity challenges that we are very well positioned to address, as we are doing today with many of our different product lines. So all in all, as this unit of compute goes to rack level, we will see higher opportunity. We already participate in other form factors with our Scorpio P-Series and Aries Retimer-type products. So overall, we don't see this as a headwind or a tailwind; we just tend to benefit from both.

speaker
Quinn Bolton
Needham Analyst

That's very helpful. And then a second question, just on the Scorpio P family. You mentioned that that growth is really driven by the custom versions of merchant-GPU-based platforms. Wondering, do you guys have engagements for Scorpio P on the ASIC platforms as well? Thank you.

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Yeah, absolutely. For the customers that are engaged right now, we do see opportunities for both the P-Series and the X-Series as it relates to ASIC platforms: the P-Series more for head-node connectivity, to interconnect GPUs with CPUs, storage, and networking, and the X-Series for the back end, to interconnect the GPUs themselves.

speaker
JL
Conference Operator

Your next question comes from the line of Atif Malik of Citi. Your line is open.

speaker
Atif Malik
Citi Analyst

Hi, thank you for taking my question. My first question is on co-packaged optics, for either Jitendra or Sanjay. There's a bit of discussion in terms of its timing. If you can share your thoughts on the volume of that, and what impact it could have on your retimer and PCIe opportunities.

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

Let me start. First of all, at a high level, we don't expect CPO to negatively impact our business in the near future, whether for the current products that we have or even the next generation of products. The way we look at it, at 200 gigabits per second per lane, the connectivity at the rack level will remain largely copper. As a matter of fact, I think the industry will work very hard to keep it at copper even at 400 gigabits per second. The reason for that is our customers really prefer copper, because it is easier to deploy. When you go to optical, specifically CPO-type solutions, you introduce a lot of additional components into what used to be a purely silicon-based package, which has its own challenges for reliability as well as serviceability. So in general, what we have seen from our customers is: if they can stay with copper, they will stay with copper. If they cannot stay with copper because the bandwidths are too high, they will go to pluggable optics. And only when pluggable optics are not feasible do you go to co-packaged optics. As a result, we see the first instances of co-packaged optics happening where the data rates are the highest, the line speeds are the highest, and the density of interconnect is the highest, which typically happens in Ethernet switches. So our view is that that's where CPO will get deployed first, and that's not an area that we play in today. So it's not likely to impact our near- to medium-term revenues. And for the longer term, we will continue to explore different media types and keep watching the space for the solutions that our customers might like from us.

speaker
Atif Malik
Citi Analyst

Great, thank you. And, Mike, can you talk about how we should think about OPEX for the year?

speaker
Mike Tate
Chief Financial Officer

Yeah, as we highlighted, we're going to continue to invest aggressively in the business. Q1 is a bigger step-up than typical, for the reasons we outlined, including the small acquisition we did, so the rate of growth should normalize a little bit from there. But right now, we really believe it's the time to press our advantage and invest in the business. We do have a goal of hitting a long-term operating model of 40% operating margins, and that would be driven more by inflections in revenue growth than by controlling our investment in R&D.

speaker
JL
Conference Operator

Thank you. Your next question comes from the line of Suji Da Silva of Roth Capital. Your line is open.

speaker
Suji Da Silva
Roth Capital Analyst

Hi, Jitendra, Sanjay, Mike. First question, just to be clear on the customer focus: I know it's almost entirely hyperscale. I'm just curious whether you've been approached by, or are trying to engage, the emerging AI companies versus the large established players, or whether that doesn't fit your business model.

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

It absolutely fits. And just to be very clear, we talk about hyperscalers because they are the ones deploying the big infrastructure right now in terms of AI training and so on. But as a company, we are tracking both the OEM space for enterprise applications as well as the emerging AI players in various different formats. What we also believe is that the first generation of systems being rolled out right now take something that's directly available from a company providing GPUs, or from folks integrating that into OEM-level boxes. That is the first generation. Probably that will continue, with a little bit more customization in Gen 2, before we start seeing more hardware-level decisions being made in future generations. So overall, we are tracking it, but the fact of the matter right now is that the bulk of the TAM that's available is through the hyperscalers, and that's what we talked about. At the same time, every OEM out there that's building AI servers is buying components from us, either directly or through baseboards that they procure from GPU platform providers.

speaker
Suji Da Silva
Roth Capital Analyst

Okay, appreciate all the color there. And then my other question is on UALink. I'm just wondering what you think the biggest challenges or timing impacts are of cutting over from Ethernet and PCIe to UALink. And most importantly for Astera, what are the content-uplift implications if UALink gains traction?

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

No, it's a great question. UALink promises to be a very good initiative for the industry, to try to bring everybody together around a standardized scale-up for clusters. We see a lot of benefit for Astera Labs because of our prominent position on the board. Over time, we will develop a full UALink portfolio to address the rack-level connectivity requirements that our customers will have. In terms of the timing, the standards group is working to release the final specification; that is supposed to happen at the end of Q1. So we expect the earliest products to hit the market sometime in 2026, which is when we will start to see the first instances of UALink.

speaker
JL
Conference Operator

Your next question comes from the line of Richard Shannon of Craig Hallam. Your line is open.

speaker
Richard Shannon
Craig Hallam Analyst

Well, hi, guys. Thanks for taking my question as well. I'll ask a question on the competitive dynamics here, between your retimer and P-Series switch products. Obviously, you have a really strong position in retimers. Coming into the market with the P-Series, you've got another competitor who's very strong there. And while you've got Cosmos and the software barriers to switching, switches are generally thought of as a stickier kind of chip. So I'm wondering how the competitive dynamics play out here. Do we see an attach rate in the P-Series similar to what you're seeing with Aries anytime soon?

speaker
Sanjay Gajendra
President and Chief Operating Officer and Co-Founder

Yeah, so first of all, this is a big market; let's keep that in mind when you start thinking about the switch type of socket. There are many sockets, Gen 6, Gen 5, and so on, so there are several different areas that we can go after. Surely the market is seeing that, the competitors are seeing that, and you have folks that are playing there. The angle that we have taken: there is the software and Cosmos point that you made, which brings in a lot of diagnostic, telemetry, and fleet-management-type features that scale across all of our products, our retimers and everything else that we do. But the bigger reason Scorpio is gaining traction is how it's architected. The incumbent switches were originally designed for storage applications, so obviously the feature sets that were incorporated were more tuned for addressing and attaching to SSD drives. What we have done is essentially created, for the first time, a device where the data flows are designed for GPU-to-GPU traffic, with the bandwidth optimization and other capabilities that are needed. So in general, the functionality itself is a lot better, and that's being appreciated by our customers. Based on that, we continue to build on the portfolio, and we do expect that we will play a significant role in that market. And in terms of the timing itself, there is a timing aspect here, obviously. The Gen 6 window is now; as far as we know, we are the only ones in the market providing PCIe Gen 6 switches. Many times in connectivity, being the first vendor to provide a solid product that is scaling and getting qualified goes a long way toward building and maintaining a competitive barrier.

speaker
Richard Shannon
Craig Hallam Analyst

Okay, excellent. Thanks for those thoughts. My second question is on the Scorpio X-Series. Wondering if you can give us a picture, looking forward, of the size of the scale-up domains that hyperscalers are looking to do. I can't remember the exact number of what NVIDIA does for theirs today, but I imagine it's going to go up quite a bit here. And to what degree does that size play into your commentary about Scorpio eventually being your largest product line over time? Thank you.

speaker
Jitendra Mohan
Chief Executive Officer and Co-Founder

Can I actually take a crack at that? This is Jitendra. We mentioned previously that, not counting NVLink or NVIDIA, we expect the TAM for the Scorpio X family to be $2.5 billion or more by 2028. And if you look at what it is today, it's effectively nearly zero. So it's a very rapidly growing TAM, and it's the largest TAM that we have, which is why we are bullish on the prospects of Scorpio and on it becoming our largest product line over time.

speaker
JL
Conference Operator

Thank you. There are no further questions. I turn the call back over to Leslie Green for closing remarks.

speaker
Leslie Green
Director of Investor Relations

Thank you, JL, and thank you everyone for your participation and questions. We look forward to updating you on our progress.

speaker
JL
Conference Operator

This concludes today's conference call. You may now disconnect.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.