This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
Operator
At this time, I would like to welcome everyone to the Astera Labs first quarter 2024 earnings conference call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you'd like to withdraw your question, press star one again. We do ask that you please limit your questions to two. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.
Leslie
Thank you, Regina. Good afternoon, everyone, and welcome to the Astera Labs first quarter 2024 earnings call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements, or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events, or changes in our expectations, except as required by law. Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the investor relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra.
Jitendra
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a great start, with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March. First and foremost, I would like to thank our investors, customers, partners, suppliers, and employees for their steadfast support over the past six years. We have built Astera Labs from the ground up to address the connectivity bottlenecks that must be solved to unlock the full potential of AI in the cloud. With your help, we've been able to scale the company and deliver innovative technology solutions to the leading hyperscalers and AI platform providers worldwide. But our work is only just beginning. We are supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories while also exploring new market segments. Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure that is needed to support their AI roadmaps. According to recent earnings reports, on a consolidated basis, CapEx spent during the first quarter by the four largest U.S. hyperscalers grew by roughly 45% year-on-year to nearly $50 billion. Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year. This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well-positioned to benefit from these growing investment trends. Against this strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin, and positive operating cash flows, while also introducing two new products. Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share. I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy. Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance. AI model sizes continue doubling about every six months, fueling the demand for high-performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI accelerator market has been moving forward at an incredible pace, and the number and variety of architectures continues to expand to handle trillion-parameter models while improving AI infrastructure utilization. We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center-scale AI infrastructure. However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on their specific cloud infrastructure requirements, from power and cooling to connectivity. We are working alongside our customers to ensure these complex and different architectures achieve maximum performance and operate reliably even as data rates continue to double.
As these systems continue to move data faster and grow in complexity, we expect to see our average dollar content per AI platform increase, and even more so with the new products we have in development. Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive intelligent connectivity platform and our deep customer partnerships. The foundation of our platform consists of semiconductor-based, software-defined connectivity ICs, modules, and boards, which all support our Cosmos software suite. We provide customers with a complete customizable solution, chips, hardware, and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities, and matches pace with the ever-quickening product introduction cycles of our customers. Not only does Cosmos software run on our entire product portfolio, but it is also integrated within our customers' operating stack to deliver seamless customization, optimization, and monitoring. Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet, and Compute Express Link. We ship three separate product families supporting these different connectivity protocols, all generating revenue and in various stages of adoption and deployment. Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions. First, PCI Express. PCIe is the native interface on all AI accelerators, CPUs, and GPUs, and is the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. These AI servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges. To help solve these problems, our hyperscaler and AI accelerator customers utilize our PCIe smart DSP retimers to extend the reach of PCIe Gen 5 between various components within a heterogeneous compute architecture. Our Aries product family represents the gold standard in the industry for performance, robustness, and flexibility, and is the most widely deployed solution in the market today. Our leadership position, with millions of critical data links running through our Aries retimers and our Cosmos software, enables us to do something more: become the eyes and ears that monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at peak utilization. Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our Cosmos software, which are deployed together in our customers' fleets, have become a material differentiator for us. Our Cosmos software also provides the easiest and fastest path to deploy the next generation of our devices. We see AI workloads and newer GPUs driving the transition from PCIe Gen 5, running at 32 gigabits per second per lane, to PCIe Gen 6, running at 64 gigabits per second per lane. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next six to nine months. In addition, while we see our Aries devices being heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in backend fabrics, interconnecting AI accelerators to each other in AI clusters. Next, let's talk about Ethernet.
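For context, here is a rough back-of-the-envelope sketch (ours, not from the call) of the link-bandwidth arithmetic behind the Gen 5 to Gen 6 transition Jitendra describes; the encoding-overhead figures are simplifying assumptions.

```python
# Approximate usable PCIe bandwidth per direction for an x16 link.
# Gen 5 uses 128b/130b encoding; Gen 6 moves to PAM4 signaling with
# FLIT-based framing, approximated here as ~1% overhead.

def pcie_bandwidth_gb_s(gt_per_s: float, lanes: int, efficiency: float) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe link."""
    return gt_per_s * lanes * efficiency / 8  # bits -> bytes

gen5 = pcie_bandwidth_gb_s(32, 16, 128 / 130)  # ~63 GB/s per direction
gen6 = pcie_bandwidth_gb_s(64, 16, 0.99)       # ~127 GB/s per direction
print(f"PCIe Gen 5 x16: ~{gen5:.0f} GB/s per direction")
print(f"PCIe Gen 6 x16: ~{gen6:.0f} GB/s per direction")
```

Doubling the per-lane rate doubles link bandwidth at a fixed lane count, which is also why each generation roughly doubles the signal-integrity burden that retimers exist to absorb.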
Ethernet protocol is extensively deployed to build large-scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top-of-rack switches. Driven by AI workloads' insatiable need for speed, Ethernet data rates are doubling roughly every two years, and we expect the transition from 400 gig Ethernet to 800 gig Ethernet to take place later in 2025. 800 gig Ethernet is based on a 100 gigabits per second per lane signaling rate, which puts tremendous pressure on conventional passive cabling solutions. Like our PCIe retimers, our portfolio of Taurus Ethernet retimers helps relieve these connectivity bottlenecks by overcoming reach, signal integrity, and bandwidth issues, enabling robust 100 gig per lane connectivity over copper. And unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners. This approach allows us to focus on our strengths and fully leverage our Cosmos software suite to offer customization, easy qualification, deep telemetry, and field upgrades to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers. We expect 400 gig deployments based on our Taurus smart cable modules to begin to ramp in the back half of 2024. We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure, and strong growth for our Taurus Ethernet smart cable module portfolio over the coming years. Last is Compute Express Link, or CXL. CXL is a low-latency, cache-coherent protocol which runs on top of the PCIe physical layer. CXL provides an open standard for disaggregating memory from compute. CXL allows you to scale memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure. Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion. While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next-generation CXL-capable data center server CPUs such as Granite Rapids, Turin, and others. Our first-to-market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits without any application-level software changes. Furthermore, we have used our Cosmos software to incorporate the significant learnings we have had over the last 18 months and to customize our Leo memory expansion solution to the differing requirements of each hyperscaler. We anticipate memory expansion will be the first high-volume use case that will drive designs into volume production in the 2025 timeframe. We remain very excited about the potential of CXL in data center applications, and believe that most new CPUs will support CXL and hyperscalers will increasingly deploy innovative solutions based on CXL.
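As a quick illustration (ours, not from the call) of the lane arithmetic behind the Ethernet speed transitions described above: each generation roughly doubles the per-lane signaling rate, so a cable with a fixed lane count doubles in aggregate speed.

```python
# Common lane configurations behind the 400G -> 800G Ethernet transition.
# Illustrative only; actual cable configurations vary by deployment.
configs = [
    ("400G, 50G/lane (today's AECs)", 8, 50),
    ("400G, 100G/lane",               4, 100),
    ("800G, 100G/lane (2025 ramp)",   8, 100),
]
for name, lanes, gbps in configs:
    print(f"{name}: {lanes} lanes x {gbps}G = {lanes * gbps}G aggregate")
```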
With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.
Sanjay Gajendra
Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well-positioned to deliver long-term growth through a combination of three factors. One, strong secular tailwinds from increased AI infrastructure investment. Two, the next generation of products within existing product lines gaining traction. And three, the introduction of new product lines. Over the past three months, we announced two new and significant products that play an important role in enabling next-generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024. First, we expanded our widely deployed, field-proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe retimer, which delivers robust, low-power PCIe Gen 6 and CXL 3 connectivity between next-generation GPUs, AI accelerators, CPUs, NICs, and CXL memory controllers. Aries 6 is the third generation of our PCIe smart retimer portfolio and provides the bandwidth required to support data-intensive AI workloads while maximizing utilization of next-generation GPUs operating at 64 gigabits per second per lane. Fully compatible with our field-deployed Cosmos software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past four years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud. Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next-generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry's lowest power at 11 watts at Gen 6 in a full 16-lane configuration running at 64 gigabits per second per lane, significantly lower than our competitors and even lower than our own Aries Gen 5 retimer. Through collaboration with leading providers of GPUs and CPUs such as AMD, ARM, Intel, and NVIDIA, Aries 6 is being rigorously tested at Astera's Cloud-Scale Interop Lab and in customers' platforms to minimize interoperability risk, lower system development cost, and reduce time to market. Aries 6 was demonstrated at NVIDIA's GTC event during the week of March 18th. Aries 6 is currently sampling to leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025. We also announced the introduction and sampling of our Aries PCIe and CXL smart cable modules for active electrical cables, or AECs, to support robust, long-reach copper cable connectivity of up to seven meters. This is 3x the standard reach defined in the PCIe spec. Our new PCIe AEC solution is designed for GPU clustering applications by extending PCIe backend fabric deployments to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries smart cable modules support our Cosmos software suite to deliver a powerful yet familiar array of link monitoring, fleet management, and RAS tools, which are customizable for the diverse needs of our hyperscaler customers. We leveraged our expertise in silicon, hardware, and software to deliver a complete solution in record time, and we expect initial shipments of the PCIe AECs to begin later this year. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership and an intelligent connectivity platform that includes silicon chips, hardware modules, and the Cosmos software suite.
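As a rough sanity check (our arithmetic, not the company's), the 11-watt figure quoted above for a full x16 Gen 6 retimer works out to roughly 10 picojoules per bit; counting one direction of traffic is our simplifying assumption.

```python
# Energy-per-bit estimate for an 11 W x16 retimer at 64 Gb/s per lane.
# One direction of traffic only; a simplifying assumption.
power_w = 11.0
throughput_bps = 16 * 64e9                     # 16 lanes x 64 Gb/s
pj_per_bit = power_w / throughput_bps * 1e12   # ~10.7 pJ/bit
print(f"~{pj_per_bit:.1f} pJ/bit at the full x16 Gen 6 rate")
```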
Over the coming quarters, we anticipate ongoing generational product upgrades to existing product lines and the introduction of new product categories developed from the ground up to support the performance and productivity requirements of generative AI. In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure at scale. We have gained the trust and support of our world-class customer base by executing, innovating, and delivering on our commitments. These tight relationships are resulting in new product developments and an enhanced technology roadmap for Astera Labs. We look forward to continued collaboration with our partners as a new era unfolds, driven by AI applications. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.
Mike Tate
Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and the related income tax effects. Please refer to today's press release, available on the investor relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook, as well as the reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than the revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers. We recognized revenues across all three of our product families during the quarter, with Aries products being the largest contributor. Aries enjoyed solid momentum in AI-based platforms as customers continued to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth from the industry's growing investment in generative AI. Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early phases of revenue contribution. In Q1, Taurus revenues were primarily from shipments into 200-gig Ethernet-based systems, and we expect Taurus revenues to track sequentially higher as we progress through 2024 and begin to ship into 400-gig Ethernet-based systems. Q1 Leo revenues were largely from customers purchasing pre-production volumes for the development of their next-generation CXL-capable compute platforms, expected to launch late this year with the next server CPU refresh cycle. Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 of 2023. The positive gross margin performance during the quarter was driven by healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter. Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million, and general and administrative expenses were $6.3 million. Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes, and to a lesser extent, our normal quarterly stock-based compensation expense. Non-GAAP operating margin for Q1 was 24.3%, as revenues scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was 10 cents. The pro forma non-GAAP diluted share count includes the assumed conversion of our preferred stock for the entire quarter, while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO. Going forward, given that all the preferred stock was converted to common stock upon our IPO, those shares will be fully included in the share count for both GAAP and non-GAAP.
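For readers following along, the quoted Q1 figures tie out with some simple arithmetic (a sketch using only numbers stated on the call; dollars in millions):

```python
# Q1 FY2024 non-GAAP figures as stated on the call (dollars in millions).
revenue      = 65.3
gross_margin = 0.782
opex         = 35.2     # R&D 22.9 + sales & marketing 6.0 + G&A 6.3
interest     = 2.6
tax          = 4.1      # ~22% non-GAAP tax rate
shares_m     = 147.5    # pro forma fully diluted shares, millions

gross_profit = revenue * gross_margin      # ~51.1
op_income    = gross_profit - opex         # ~15.9
op_margin    = op_income / revenue         # ~24.3%, as stated
net_income   = op_income + interest - tax  # ~14.4
eps          = net_income / shares_m       # ~$0.10, as stated
print(f"op margin {op_margin:.1%}, EPS ${eps:.2f}")
```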
Cash flow from operating activities for Q1 was $3.7 million, and we ended the quarter with cash, cash equivalents, and marketable securities of just over $800 million. Now turning to our guidance for Q2 of fiscal 2024. We expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our ARIES product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2. Within the ARIES product family, we expect the growth to be driven by increased unit demand for AI servers, as well as a ramp of new product designs with our customers. We expect non-GAAP gross margins to be approximately 77% given a modest increase in hardware shipments relative to standalone ICs. We believe as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend towards our long-term gross margin model of 70%. We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property, while also scaling our back office functions. Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately 11 cents. This concludes our prepared remarks. Once again, we very much appreciate everyone joining the call, and now we open the line for questions. Operator?
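Putting the guided Q2 figures together (midpoint arithmetic only; a sketch, not company-provided math):

```python
# Implied Q2 FY2024 midpoint math from the guidance above (dollars in millions).
q2_revenue   = 65.3 * 1.11          # midpoint of +10% to +12% sequential, ~72.5
gross_profit = q2_revenue * 0.77    # ~77% non-GAAP gross margin
op_income    = gross_profit - 40.0  # ~$40M non-GAAP operating expenses
pretax       = op_income + 9.0      # ~$9M interest income
net_income   = pretax * (1 - 0.23)  # ~23% non-GAAP tax rate
eps          = net_income / 180.0   # ~180M fully diluted shares
print(f"implied Q2 EPS ~${eps:.2f}")  # ~$0.11, matching the guidance
```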
Operator
At this time, I would like to remind everyone that, in order to ask a question, press star, then the number one on your telephone keypad. Again, we ask that you please limit your questions to two. Our first question will come from the line of Harlan Sur with JP Morgan. Please go ahead.
Harlan Sur
Good afternoon, and congratulations on the strong results and guidance post your first quarter as a public company. As you guys mentioned, you know, there are many new AI XPU programs coming to the market, GPU and ASIC AI chip programs, accelerators. In terms of total XPU shipments this year, I think only half is going to be NVIDIA-based, so it is starting to broaden out. The good news is, obviously, the Astera team has exposure to all of these XPU programs. It does seem that the pace of deploying these XPU platforms has accelerated even over the past few months. So how much of the strong results and guidance is due to this acceleration and broadening in customer deployments? How much is more just kind of higher content of retimers versus your prior expectations? And then do you guys see the strong momentum continuing into the second half of this year?
Mike Tate
Thanks, Harlan. This is Mike. We started shipping into AI servers really in Q3 of last year, so it's just the early innings. A lot of our customers have not fully deployed their AI systems. So we're seeing incremental growth just from adding on the different platforms that we have design wins in. But it's in a backdrop where there's clearly growing investment in AI as well, so overall unit growth is also playing out. As we look out to the balance of this year, there are still a lot of programs that have not ramped yet. So we have high confidence that the Gen 5 Aries platform has a lot of growth ahead of it. And that continues into 2025 as well.
Harlan Sur
No, I appreciate that. And as you mentioned, there's been a lot of focus on next-gen PCIe Gen 6 platforms, obviously with the rollout of NVIDIA's Blackwell-based platform. And obviously, you know, with any market that is viewed as fast-growing, you are going to attract competitors. We have seen, you know, some announcements by competitors. We know most of the Gen 5 design wins have already been locked up by the Astera team. You've been working with customers, as you mentioned, on Gen 6 for some time now. Maybe how do you compare the customer engagement momentum on Gen 6 versus the same period back when you were working with customers on Gen 5?
Sanjay Gajendra
Good question, Harlan. This is Sanjay here. Let me take that. So like you correctly said, Gen 5 still has a lot of legs on it. Let's be very clear on that. Like Mike noted, we do have, you know, platforms that are still ramping and still to come. So to that standpoint, we do expect Gen 5 to be with us for some time. And in terms of Gen 6, again, it's driven by the pace of innovation that's happening on the AI side. As you probably know, the GPUs are not fully utilized. Some reports put it at around 50%. So there's still a lot of growth in terms of connectivity, which is essentially holding it back, right? Meaning there's a pace and a need to adopt faster speeds and links. So with NVIDIA announcing their Blackwell platform, those are the first set of GPUs that have Gen 6 on them. So to that standpoint, we do expect some of those deployments to happen in 2025. But in general, others are not far behind, based upon public information that's out there. So we do expect the cycle time for Gen 6 adoption to perhaps be a little bit shorter than Gen 5, especially in the AI server application, more so than general-purpose compute, which is still going to be lagging when it comes to PCIe Gen 6 adoption.

Operator

Your next question will come from the line of Joe Moore with Morgan Stanley. Please go ahead.

Joe Moore

Great, thank you. Following on from that, can you talk about PCIe Gen 5 in general-purpose servers? It seems like, if I look at the CPU penetration of Gen 5, we're still at a pretty early stage. Do you see growth from general purpose, and what are the applications driving that?

Sanjay Gajendra

Absolutely. Primarily on general-purpose compute, the main place where the PCIe retimer gets used tends to be storage connectivity, where you have SSDs that are on the back of the server. So to that standpoint, again, there are two things that have been holding it back, or three things perhaps. One is just the focus on AI. I mean, most of the dollars are going to the AI server application compared to general compute. The second thing is just the ecosystem readiness for Gen 5, primarily on the SSD side, which is starting to evolve with many of the major SSD NVMe players providing or ramping up on Gen 5-based NVMe drives. The third one really has been the CPU platforms. If you think about it, both Intel and AMD are on the cusp of introducing their next significant platform, whether it is Granite Rapids from Intel or Turin from AMD. So that is expected to drive the introduction of new platforms. And if you combine that with the SSDs being ready for Gen 5, and based on the design wins that we already have, you can expect those things to be a contributing factor as dollars start flowing back into the general-purpose compute side.
Joe Moore
Great. Thank you. And for my follow-up, you just mentioned Granite Rapids and Turin, which are the first kind of volume platforms supporting CXL 2.0. What are you hearing in terms of, you know, the CPUs will be out, but what will be the initial adoption? And how quickly do you think that technology can roll out in 2025?
Sanjay Gajendra
Let me start off by saying, you know, every hyperscaler is in some shape or form evaluating and working with CXL technology, so it's alive and well. I think where the focus really has been in terms of CXL is on the memory expansion use case, specifically for CPUs. And the expansion could be for reasons like adding more capacity memory for large database applications. And the second use case, of course, is more memory bandwidth, which is for HPC types of applications. So the thing that's been holding it back is the availability of CPUs that support CXL at a production-quality level, which will change with Granite Rapids and Turin being available. So at this point, what we can say is that we've been providing chips for quite some time. We've been in pre-production and supported the various evaluation and POC types of activities that have happened with our hyperscaler customers. So to that standpoint, we do expect revenue to start coming in in 2025 from the memory expansion use case for CXL.
Operator
Your next question will come from the line of Tore Svanberg with Stifel. Please go ahead.
Tore Svanberg
Yes, thank you, and let me add my congratulations. My first question is on Gen 6 PCIe. So, Sanjay, you just mentioned that the design-in cycle is going to be shorter than Gen 5. Now, since it's backwards compatible with your Gen 5, and especially given the Cosmos software platform, should we assume that you will basically retain most of those sockets that you already had in Gen 5, and then obviously some new ones as well for Gen 6?
Sanjay Gajendra
So that's the goal for the company. We have the Cosmos software, and like I noted, PCI Express is one of those protocols which, unlike Ethernet, tends to be a little messy, meaning it's something that's been around for a long time. It's a great technology, but it also requires a lot of handholding. And for us, what has happened is that being in the customers' platforms, bringing up systems that ramp up to millions of devices, has allowed us to understand the nuances: what works, what doesn't work, how do you make the link perform at the highest rate? So that tribal knowledge is something that we have captured within the Cosmos software that we built, running both on our chips as well as on customers' platforms. So we do expect that as Gen 6 starts to materialize, a lot of those learnings will be carried over. Now, you're right that there's been a lot of competition that has come in as well, but we believe that, you know, when it comes to competition, they could have a similar product to ours, but no matter what, there is a maturation time that's essential when it comes to connectivity types of chips, just given the interoperation and getting the kinks out and so on, meaning you could have a perfect chip yet have a failing system. The reason for that is the complexity of the system and how the PCI Express standard is defined. So to that standpoint, I agree with what you said in the sense that we have the leading position now in the retimer market for PCIe, and we expect to build on that, both with the new features we have added in PCIe Gen 6 with the Aries 6 product line, and also the tribal knowledge that we have built by working with our partners over the last three, four years.
Tore Svanberg
That's a great perspective. And as my follow-up, I had a question on AEC. It sounds like that business is going to start ramping late this year. First of all, is that with multiple cable partners? And then related to that, are you the only company today that has an AEC at seven meters?
Sanjay Gajendra
I don't know about being the only one; I would probably request that you do some research on where the competition is. But from a retimer standpoint, which goes into these cables, we do have a leading position. So based on that, I would imagine that we are the main provider here, both based on that and the customer traction that we're seeing. So this one is an interesting use case. So far, PCI Express, as you know, was defined to be inside the server. But what is happening now, and this is why we're excited about PCIe AECs, is that we are opening up a new front in terms of clustering GPUs, meaning interconnecting accelerators. That is where the AECs would play, and that is a new opportunity that goes along with the Ethernet AECs that we already provide, which are also used for interconnecting GPUs on the backend network. So overall, we do believe that combining our PCIe AEC solution and Ethernet AEC solution, we're well set for some of these evolving trends, and we expect revenue to start coming in in the latter half of this year. And on PCIe, again, we do believe we are the only one, just to clarify what I initially said; I just don't know if there is someone else talking about it that's not yet in the public domain.
Operator
Your next question will come from the line of Blaine Curtis with Jefferies. Please go ahead.
Blaine Curtis
Hey, good afternoon. Thanks for taking my questions. Maybe first one for you, Jitendra. I'm just curious, you know, you mentioned a variety of architectures, and I think Harlan asked on this. Obviously you have a lead customer, and it's a lot of CPU-to-GPU connections; that's just the nature of the market and who has the volume. But you mentioned the backend fabrics a bunch, and I'm kind of curious, is that still conceptual, or are you seeing designs for it? And maybe just talk about the widening out of applications for what the retimers are being used for.
Jitendra
Great question. So there are many applications where our retimers are used. Of course, we are most known for the connectivity from the GPU to the head node. That is where a lot of the deployments are happening. But these new applications also speak to how rapidly the AI systems are evolving. Every few months, we see a new AI platform come up, and that opens up additional opportunities for us. And one of those is to cluster GPUs together. There are two main protocols used to cluster GPUs, in addition to NVLink, of course: PCI Express and Ethernet. And as Sanjay just mentioned, we now have solutions available to interconnect GPUs, whether over PCI Express or Ethernet. Specifically in the case of PCI Express, some of our customers who want to use PCI Express for clustering GPUs together are now able to do so using our PCI Express retimers, which are offered in the form of an active electrical cable. So this business is going to be in addition to the sustaining business that we have today in connecting GPUs to head nodes. Now we are connecting GPUs together in a cluster. And as you know, these are very intense, very dense mesh connections, so they can grow very, very rapidly. So we are very excited about where this will grow, starting with some revenue contributions late this year.
Blaine Curtis
Thanks. And then maybe a question for Mike. The gross margin remained quite high. You said it was mix. I mean, maybe you were just being kind of conservative with the IPO, but I was just kind of curious how the mix came in. I mean, I think it's mostly retimers. I know as the other products start to ramp, that'll be the headwind. So I'm just kind of curious how you think about the rest of the year. Should we just have it come down with mix gradually as those new products ramp, off this 77 that you're guiding to?
Mike Tate
Yeah, so just to remind everybody, our standalone ICs carry, you know, a pretty high margin relative to our hardware solutions. So when the mix gets a little more balanced with hardware versus standalone ICs, we're expecting our long-term gross margins to trend to 70%. In Q1, we were heavily weighted to standalone ICs, so a very favorable mix, and that's how we enjoyed the strong gross margins. As we go through the balance of this year and into next year, we will see an increasing mix of our modules and also add-in cards for CXL as well. So we think we'll have a gradual trend down towards a long-term model over time as that mix changes.
Operator
Your next question will come from the line of Thomas O'Malley with Barclays. Please go ahead.
Thomas O'Malley
Hey guys, thanks for taking my question. Mike, I just wanted to ask, I know you may not be giving segment details specifically, but could you talk about what contributed to the revenue in the quarter? And then looking out into June, could you talk about, from a revenue mix perspective, maybe some sequential help on what's growing? Obviously, the non-IC business is growing, just given the fact that gross margins are pressured a bit, but just any color on the segments would be helpful to start.
Mike Tate
Sure. So as I mentioned, we started shipping into AI server platforms in volume in Q3, and a lot of our customers are still in a ramp mode to the extent we've been shipping for the past couple of quarters. But there are still a lot of designs that haven't even begun to ramp. So we're still in the early phases, and if you look out in time, we see the Gen 5 piece of it in AI continuing to grow into next year as well. So as you look into Q2, the growth that we're guiding to is still largely driven by the Aries Gen 5 deployment in AI servers, both for existing platforms with increased unit volumes, but also new customers beginning their ramps as well.
Thomas O'Malley
Helpful. And then just a broader one. In talking with NVIDIA, they're referencing their GB200 architecture becoming a bigger percent of the mix, NVL72 being more of the deployment that hyperscalers are taking. When you look at the Hopper architecture versus the Blackwell architecture and their NVL72 platform, where they're using NVLink amongst their GPUs, can you talk about the puts and takes when it comes to your retiming product? Do you see an attach rate that's any different than the current generation?
Jitendra
Let me take that. Great question. First, let me say that we are just at the beginning phases of AI. We will continue to see new architectures being produced by AI platform providers at a very rapid pace, just to match up with the growth in AI models. And on top of that, we'll see innovative ways that hyperscalers will deploy these platforms in their cloud. So as these architectures evolve, so do the connectivity challenges. Some challenges are going to be incremental and some are going to be completely new. And so what we believe is, given the increasing speeds and increasing complexities of these new platforms, we do expect our dollar content per AI platform to increase over time. We see these developments providing us good tailwinds going into the future. So now to your question about the GB200 specifically. Well, you know, first of all, we cannot speak about specific customer architectures. But here is something that is very clear to see. As the AI platform providers produce these new architectures, the hyperscalers will choose different form factors to deploy them. And in that way, no two clouds are the same. Each hyperscaler has unique requirements and unique constraints to deploy AI platforms, and we are working with all of them to enable these deployments. This combination of new platforms and very cloud-specific deployment strategies presents great opportunities for our PCIe connectivity portfolio. And to that point, as Sanjay mentioned, we announced the sampling of our Gen 6 retimer during GTC. If you look at our press release, you will see that it had broad support from AI platform providers. And to this day, to the best of our knowledge, we are still the only one sampling a Gen 6 solution. So on the whole, given the fact that speeds are increasing, complexity is increasing, and, in fact, the pace of innovation is going up as well, these all play to our strengths. And we have customers coming to us for new approaches to solve these problems. So we feel very good about the potential to grow our PCIe connectivity business.
Operator
Your next question will come from the line of Quinn Bolton with Needham. Please go ahead.
Quinn Bolton
Hey, guys. Let me offer my congratulations on the nice results and outlook. I just want to follow up on the use of PCI Express in the GPU-to-GPU backend networks. I think that's something, you know, historically you had excluded from your TAM, but it looks like it's becoming an opportunity here and starts to ramp in the second half of this year. Wondering if you could just talk about the breadth of some of the custom AI accelerators that are choosing PCI Express as their interconnect over, say, Ethernet. And then I've got a follow-up.
Jitendra
Again, great question. So just to follow up on the response that we provided before, there are two or three dominant protocols that are used to cluster GPUs together. The one that's most well-known, of course, is NVLink, which is what NVIDIA uses and is a proprietary interface. The other two are Ethernet and PCI Express. We do see some of our customers using PCI Express, and I think it would not be appropriate to say who, but certainly PCI Express is a fairly common protocol. It is the one that's natively found on all GPUs and CPUs and other data center components. Ethernet is also very popular, and to the extent that a particular customer chooses to use Ethernet or PCI Express, we are able to support them both with our solutions, the Aries PCIe retimer family as well as the Taurus Ethernet retimer family. We do expect these to make meaningful contributions to our revenue, as I mentioned, starting at the end of this year and then, of course, continuing into next year.
Quinn Bolton
Perfect. And my second question is, you guys have talked about the introduction of new products as, you know, new TAM expansion activity. And I'm not going to ask you to introduce them today, but just, you know, in terms of timing as we think out, are these new products on a timeline of sort of introduction later this year or in 2025, with revenue ramp in 2026? Is that the general framework investors should be thinking about for the new products that you've discussed? Again, I think we...
Sanjay Gajendra
As a company, we don't talk about unreleased products or their timing, but what I can share with you is the following. First, we've been very fortunate to have a front-row seat to AI deployment and enjoy great relationships with the hyperscalers and AI platform providers. So we get to see a lot, and we get to hear a lot in terms of requirements. So clearly, we are going to be developing products that address the bottlenecks, whether on the data side, the network side, or the memory side. So we are working on several products, as you can imagine, that would all be developed ground-up for AI infrastructure and enable connectivity solutions that let AI applications be deployed sooner. There is a lot going on, a lot of new infrastructure, a lot of new GPU announcements, CPU announcements. So you can imagine, given the pace of this market and the changes that are upcoming, we do anticipate that these will all start having a meaningful, incremental revenue impact on our business.
Operator
Your next question will come from the line of Ross Seymour with Deutsche Bank. Please go ahead.
Ross Seymour
Hi, guys. Thanks for taking the question. I wanted to go into the ASIC versus GPU side of things. As ASICs start to penetrate this market to certain degrees, how does that change, if at all, the retimer TAM that you would have, and I guess even the competitive dynamic in that equation, considering one of the biggest ASIC suppliers is also an aspiring competitor of yours?
Jitendra
So, great question again. You know, let me just refer back to what I said, which is that we will see more and more different solutions come to the market to address the evolving AI requirements. Some of them are going to be GPUs from the known AI providers like NVIDIA, AMD, and others. And some others will be custom-built ASICs that are typically built by hyperscalers, whether they are AWS or Microsoft or Google and others. And the requirements for these two types of systems are common in some ways, but they do differ. You know, for example, in what particular type of backend connectivity they use, and exactly what I/Os are going into each of these chips. The good news is, with the breadth of portfolio that we have and the close engagement with the several ASIC providers as well as the GPU providers, we understand the challenges of these systems very well. And not only are we providing solutions that address those today with the current generation, we are engaged with them very closely on the next generation, on the upcoming platforms, whether they are GPU-based or ASIC-based, to provide these solutions. You know, a great example was the Aries SCM, where, using our trusted PCI Express retimer solution, we enabled a new way of connecting some of these ASICs on the backend network.
Sanjay Gajendra
Maybe if I can add to that. You know, one way to visualize the connectivity market, or subsystem, is as the nervous system within the human anatomy, right? It's one of those things where you don't want to mess with it. Yes, there will be ASIC vendors, and there are options off the shelf. But once the nervous system is built and tested, especially like what we have developed, where the nervous system we have built is done specifically for AI applications, and there's a lot of qualification and a lot of software investment that hyperscalers have done, they want to reuse that across different kinds of system topologies, whether ASIC-based or merchant silicon-based. And we do see that trend happening when we look at the customers that we're engaged with today. And protocols like PCI Express, Ethernet, and CXL are standards-based. So to that standpoint, whichever architecture is being used, we believe that we will stand to gain from that.
Ross Seymour
Thanks for that. I guess as my follow-up, one quick one for Mike. How should we think about OpEx beyond the second quarter? I know there's a bigger step up there with a full quarter of being a publicly traded company, et cetera, but just walk us through your OpEx plans for the rest of the year, or even out to the target model.
Mike Tate
Yeah, I mean, thanks, Ross. We are continuing to invest quite a bit in headcount, particularly in R&D. There are so many opportunities ahead of us that we'd love to get a jump on those products and also improve the time to market. That being said, we're pretty selective about who we bring into the company, so that will just meter our growth. And we believe our OpEx, although it's going to be increasing, will probably not increase at the rate of revenue over the near and long term. And that's why we feel good about a long-term operating margin model of 40%. So over time, we do, you know, feel confident we can trend in that direction even with increasing investment in OpEx.
Operator
Your next question will come from the line of Suji De Silva with Roth MKM. Please go ahead.
Suji De Silva
Hi, Jitendra, Sanjay, Mike. Congrats on the first quarter here. On the backend, the addressable market here that's non-NVLink, I'm trying to understand if the PCIe and Ethernet opportunities there will be adopted at a similar pace out of the gate, or whether PCIe would lead that adoption in the non-NVLink backend opportunity.
Sanjay Gajendra
It's hard to say at this point, just because there is so much development going on here. I mean, you can imagine the non-NVIDIA ecosystem will rely on standards-based technologies, whether it is PCI Express or Ethernet. And the advantage of PCI Express is that it's low latency, significantly lower latency compared to Ethernet. So there are some benefits to that. And there are certain extensions that people consider adding on top of PCI Express when it comes to proprietary implementations. So overall, from a technology standpoint, we do see that PCI Express will have that advantage. Now, Ethernet also has been around, so we'll have to wait and see how all of this develops over the next, let's say, six to 18 months.
Jitendra
Just to add to what Sanjay said, I think the good news for us in some ways is that we don't have to pick. We don't have to decide which one. We have chips, we have hardware, and we have software. So we have customers that come to us and say, hey, I need this for my new AI platform, can you build me that? And that's what we've been doing.
Suji De Silva
Okay, great. Another question perhaps for Mike. The initial AEC programs ramping, maybe a few customers this year, a few customers next year, or maybe perhaps all of them this year. But do you perceive that those will be larger, lumpier program-based ramps, Mike, or will those be steady kind of build-outs as servers grow?
Mike Tate
I think the product ramps will mirror, you know, our other product ramps. They'll gradually build over a few quarters till they hit steady state. And if you layer them on top of each other, it just continues to build a nice, growing revenue profile. So as you look at Taurus in 2024, we're shipping 200 gig right now, and then in the back half, we start to ship 400 gig. And if you look into 2025, 800 gig, which is ultimately the biggest opportunity with a much broader set of customers, will be when the market really becomes very large.
Operator
Your next question will come from the line of Richard Shannon with Craig-Hallum. Please go ahead.
Richard Shannon
Well, hi, guys. Thanks for taking my questions. Well, congratulations on coming public here. I guess I want to follow up on a couple of topics here that have been hit, including Suji's question here about the PCI Express AEC opportunity. Are these design wins or are these kind of pre-design-win ramps you're talking about this year? And I guess ultimately my question on this topic is, can this opportunity, these PCI Express AECs, become as big as your Taurus family in the foreseeable future?
Sanjay Gajendra
Yeah, so these are design wins going through qualification. We have been shipping this. We announced this. We demonstrated this at public forums. So to that standpoint, it's an opportunity that we're excited about. And like we noted early on, we do expect it to start contributing revenue in the latter half of this year.
Richard Shannon
Okay, perfect. Thank you. And the second question is on CXL. I think you've mentioned a couple of applications here. Maybe if you can kind of express the breadth of interest here across hyperscalers and other customers for the ones you mentioned, and also for the next ones that are a little bit more expansive in nature, how do you see the testing and speccing out of those? Are those coming to market at the time you're hoping for, or is there a little bit more development required to get those to market?
Sanjay Gajendra
Yeah, so there are two questions. Let me take the first one, which is the CXL side. For CXL, there are four main use cases to keep in mind: memory expansion; memory tiering, where you're trying to go for a TCO type of angle; memory pooling; and what are called memory drives, which Samsung and others are providing. We believe memory drives are more suitable for enterprise customers, whereas the first three are more suitable for cloud-scale deployment. And there again, memory pooling is something that's further out in time, is our belief, just because it requires software changes. So the ones that are more, sort of, short-term to medium-term are memory expansion and memory tiering. And like I noted early on, all the major hyperscalers, at least in the U.S., are engaged on CXL technology, but it is going to be a matter of time, with both CPUs being available and dollars being available from a general-purpose compute standpoint. And then your second question, was that more on new products? Was that the context for it?
Richard Shannon
Yes.
Sanjay Gajendra
Yeah. Again, we don't talk about exact timeframes, but you can imagine, our last product announcement was a little over a year ago, so our engineers have not been quiet. They've been working hard. So to that standpoint, we are working very diligently and hard based upon a lot of interest and engagement from customers that we've already been working with.
Operator
There are no further questions at this time. I'll turn the call back over to Leslie Green for closing remarks.
Leslie
Thank you, everyone, for your participation and questions. We look forward to seeing many of you at various financial conferences this summer and updating you on our progress on our Q2 earnings conference call. Thank you.
Blaine Curtis
Thank you, guys.
Operator
Thank you. This concludes today's conference call. You may now disconnect.