This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Rambus, Inc.
4/27/2026
Welcome to the Rambus first quarter fiscal year 2026 earnings conference call. At this time, all participants are in a listen-only mode. At the conclusion of our prepared remarks, we will conduct a question and answer session. If you would like to ask a question, you may press star 1 on your touchtone phone at any time. If anyone should require assistance during the conference, please press star 0 at any time. As a reminder, this conference call is being recorded. I would now like to turn the conference over to John Allen, interim chief financial officer. You may begin your conference.
Thank you, operator, and welcome to the Rambus first quarter 2026 results conference call. I am John Allen, interim chief financial officer at Rambus, and on the call with me today is Luc Seraphin, our CEO. The press release for the results that we will be discussing today has been filed with the SEC on Form 8-K. We are webcasting this call along with the slides that we will reference during portions of today's call. A replay of this call can be accessed on our website beginning today at 5 p.m. Pacific time. Our discussions today will contain forward-looking statements, including our expectations regarding projected financial results, financial prospects, market growth, demand for our solutions, and other market factors, including reflections of the geopolitical and macroeconomic environment, and the effects of ASC 606 on reported revenue, among other items. These statements are subject to risks and uncertainties that may be discussed during this call and are more fully described in the documents we file with the SEC, including our 8-Ks, 10-Qs, and 10-Ks. These forward-looking statements may differ materially from our actual results, and we are under no obligation to update these statements. In an effort to provide greater clarity in the financials, we are using both GAAP and non-GAAP financial presentations in both our press release and on this call. A reconciliation of these non-GAAP financials to the most directly comparable GAAP measures has been included in our press release, in our slide presentation, and on our website at rambus.com on the investor relations page under financial releases. In addition, we will continue to provide operational metrics such as licensing billings to give our investors better insight into our operational performance. The order of our call today will be as follows. Luc will start with an overview of the business. I will discuss our financial results, and then we will end with Q&A.
I will now turn the call over to Luc to provide an overview of the quarter. Luc?
Good afternoon, everyone, and thank you for joining us. We opened 2026 with a strong first quarter, meeting our financial targets and broadening our portfolio to address the accelerating demands of AI. The quarter reflects solid momentum as we execute against our roadmap to support long-term profitable growth for the company. This is an exciting time for Rambus, and we are well positioned to capitalize on the market trends in the data center and AI. For decades, we have developed foundational technologies and solutions across a wide range of memory and interconnects. That heritage positions us well as systems become more diverse, memory dependent, and performance driven. To give more context, there are several market and technology trends playing out across the data center and AI that continue to work in our favor. As AI adoption accelerates and inference use cases expand, workloads are becoming more persistent and context-rich, and performance is increasingly defined by how efficiently data can be stored, accessed, moved, and secured. To support these workloads, AI infrastructure is becoming more complex and heterogeneous, combining a mix of traditional and AI server platforms to support orchestration, data management, and real-time execution at scale. At the same time, the expansion of inference, and particularly agentic AI with continuous reasoning and multi-step workflows, is driving more always-on activity and placing even greater demands on memory capacity, bandwidth, latency, and power efficiency. Together, these trends are driving new memory and connectivity architectures to support purpose-built solutions across a wider range of use cases and form factors. This increases our opportunities for richer chip content and broader adoption of our industry-leading IP, reinforcing our position for sustainable long-term growth. Now let me turn to our quarterly results, starting with our chip business.
Our performance reflects strong execution and ongoing leadership in our core DDR5 RCD chips. We delivered product revenue of $88 million in Q1, in line with our guidance, and up 15% year-over-year. Looking ahead, we expect to deliver double-digit product revenue growth in the second quarter. We continue to see increasing customer adoption of new products and remain well-positioned to support the ramp of next-generation platforms as they enter the market. We continue to execute on our strategy of delivering comprehensive, industry-leading chip solutions to address growing customer and market requirements. As I mentioned in my opening remarks, we recently expanded our product portfolio with the introduction of our chipset for JEDEC-standard LPDDR5X SOCAM2 modules, building on the same signal and power integrity expertise we have applied across multiple generations of DDR. This chipset is the first offering in our roadmap of LPDDR-based server module solutions and includes new voltage regulators as well as the SPD hub to support reliable, power-efficient server-class operation. As part of that roadmap, we are actively working with industry partners on the definition and development of LPDDR6-based SOCAM2 solutions, which would offer a natural upgrade path for future-generation AI platforms. As AI server architectures diversify to address varying performance, power efficiency, and form factor requirements, some platforms are now leveraging LPDDR-based memory. While LP memory offers attractive power characteristics, it was originally designed for mobile environments with very short signal paths and tight power margins, making reliable deployment in server systems inherently challenging. The SOCAM2 addresses these limitations through a compact, CPU-proximate module architecture with optimized signal routing and localized power management to enable LPDDR modules to operate in server environments.
The Rambus SOCAM2 chipset enables power-efficient, reliable operation at up to 9.6 gigabits per second in a compact module form factor. As LP-based server modules scale to higher speeds and bandwidth in future generations, they will require increasingly sophisticated interface, power, and control functionality. This progression is similar to what we have seen in DDR-based server modules and reinforces our opportunity to extend our roadmap of high-value chip content across memory types in the future. As I mentioned previously, the ongoing expansion of AI is driving demand for a broader range of memory types and form factors. To meet these needs, we continue to build on our leadership solutions in DDR5, including chipsets for RDIMM and MRDIMM, and selectively expand our roadmap of novel solutions as they begin to play a complementary role in heterogeneous systems. With active engagement across customers and ecosystem partners, we're helping shape next-generation server modules, reinforcing the opportunity for richer chip content and sustained growth. Turning now to Silicon IP, we saw strong customer traction in the first quarter with continued design wins at tier-one companies and growing engagement across our portfolio. We remain focused on delivering industry-leading premium IP that enables differentiated solutions for AI in the data center, including accelerators and networking chips across a wide range of architectures. There's increasing momentum for custom silicon in AI, especially among hyperscalers, as they tailor hardware to their own software stacks and deployment needs, optimizing for performance, power efficiency, and total cost at scale. This is driving an accelerating pace of design and expanding demand for value-added IP to support memory bandwidth, advanced connectivity, and security. During the quarter, we saw growing traction for our value-added PCIe retimer and switch IP to support increasingly complex AI systems across scale-up and scale-out environments.
We also expanded our memory IP portfolio with the introduction of the industry's fastest HBM4E controller, setting a new benchmark for AI accelerator memory throughput. In addition, we launched a new network security engine designed for Ultra Ethernet to protect distributed AI clusters. All of these IP offerings are in great demand and further strengthen our position as a critical enabler of next-generation compute and connectivity solutions for AI infrastructure. In summary, we executed well in the first quarter. We delivered solid results and expanded our offerings for both chips and IP to extend our leadership in our core markets. As we look ahead, Rambus is well positioned to capitalize on the megatrends in data center and AI. Our sustained technology leadership, disciplined execution, and increasing traction across our portfolio of leadership products will continue to fuel our results. With that, we expect strong growth in 2026, and I'm confident in our long-term trajectory. As always, I want to thank our customers, partners, and employees for their continued trust and support. Now, I turn the call over to John to walk through the financials. John?
Thank you, Luc. I'd like to begin with a summary of our financial results for the first quarter on slide three. We delivered first quarter revenue and earnings in line with our guidance, with solid contributions from each of our diversified businesses. We also continued our strong track record of cash generation. This performance reflects the continued strength in our business model. Our strong balance sheet and disciplined capital allocation enable us to invest in growth initiatives while returning value to shareholders. Let me now provide you a summary of our non-GAAP income statement on slide five. Revenue for the first quarter was $180.2 million, which was in line with our expectations. Royalty revenue was $69.6 million, while licensing billings were $70.8 million. The difference between licensing billings and royalty revenue mainly relates to timing, as we do not always recognize revenue in the same quarter as we bill our customers. Product revenue was $88 million, representing 15% year-over-year growth, driven by continued strength in DDR5 products and ramping new product contributions. Contract and other revenue was $22.6 million, consisting predominantly of Silicon IP. As a reminder, only a portion of our Silicon IP revenue is reflected in contract and other revenue, and the remaining portion is reported in royalty revenue, as well as in licensing billings. Total operating costs, including cost of goods sold for the quarter, were $104.6 million. Operating expenses of $69.9 million were up sequentially due to seasonal payroll-related taxes in connection with equity vesting. Interest and other income for the quarter was $6.9 million. Using an assumed flat tax rate of 16% for non-GAAP pre-tax income, non-GAAP net income for the quarter was $69.3 million. Now, let me turn to the balance sheet details on slide six. We ended the quarter with cash, cash equivalents, and marketable securities totaling $786 million.
That is up $24 million from Q4 2025, with strong operating cash flow of $83 million, partially offset by $38 million in taxes paid on equity vesting and $17 million in capital expenditures. We increased our inventory balance by $14 million during the quarter and expect to continue building inventory strategically in the second quarter. Our strong balance sheet gives us the flexibility to increase inventory, to support our product revenue growth, and to manage through potential supply chain constraints. First quarter depreciation expense was $8.5 million. Free cash flow in the quarter was $66.3 million. Let me now review our non-GAAP outlook for the second quarter on slide seven. As a reminder, the forward-looking guidance reflects our best estimates at this time, and our actual results could differ materially from what I'm about to review. In addition to the non-GAAP financial outlook under ASC 606, we also provide information on licensing billings, which is an operational metric that reflects amounts invoiced to our licensing customers during the period, adjusted for certain differences. We expect revenue in the second quarter to be between $192 and $198 million. We expect product revenue to be between $95 and $101 million, a sequential increase of 11% at the midpoint of guidance. We expect royalty revenue to be between $72 and $78 million, and licensing billings between $76 and $82 million. We expect Q2 non-GAAP total operating costs, which includes cost of sales, to be between $110 and $114 million. We expect Q2 capital expenditures to be approximately $14 million. Non-GAAP operating results for the second quarter are expected to be between a profit of $78 and $88 million. For non-GAAP interest and other income and expense, we expect $7 million of interest income. We expect non-GAAP tax expenses to be between $13.6 and $15.2 million in Q2. We expect Q2 share count to be 110 million diluted shares outstanding.
Overall, we anticipate Q2 non-GAAP earnings per share to range between 65 and 73 cents. Let me finish with a summary on slide eight. In closing, we delivered solid results in line with our objectives, driving ongoing profitability and cash generation. Our diversified portfolio remains a core strength, with each of the businesses contributing meaningfully to our performance. Our patent licensing business continues to deliver consistent, predictable performance, supported by the long-term agreements we have in place. Our silicon IP business is well positioned, driven by critical interconnect and security technologies, addressing the accelerated demand for AI solutions. Our product business grew 15% year over year, and is poised for sequential growth in the second quarter. We remain focused on delivering long-term shareholder value with year-over-year revenue growth in 2026. Before I open the call up to Q&A, I would like to thank our employees for their continued teamwork and execution. With that, I'll turn the call back to our operator to begin Q&A. Could we have our first question?
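[Editor's note: the Q2 guidance figures above are internally consistent. As a minimal illustrative cross-check, a sketch using only the numbers stated on the call (the assumed flat 16% non-GAAP tax rate, $7 million of interest income, and 110 million diluted shares), the guided EPS range of 65 to 73 cents follows from the guided operating-profit range of $78 to $88 million:]

```python
# Cross-check of the Q2 non-GAAP guidance arithmetic stated on the call.
# Inputs are the figures given in the prepared remarks; this is an
# illustrative sketch, not company-provided math.

TAX_RATE = 0.16          # assumed flat non-GAAP tax rate
SHARES_M = 110.0         # guided diluted share count, in millions
INTEREST_INCOME_M = 7.0  # guided non-GAAP interest and other income, millions

def eps_cents(operating_profit_m: float) -> float:
    """Non-GAAP EPS in cents implied by an operating profit in millions."""
    pretax_m = operating_profit_m + INTEREST_INCOME_M
    net_income_m = pretax_m * (1 - TAX_RATE)
    return net_income_m / SHARES_M * 100

low, high = eps_cents(78), eps_cents(88)
print(round(low), round(high))  # 65 73, matching the guided EPS range

# The guided tax expense also falls out of the same inputs:
# pre-tax income of $85M to $95M times 16% gives $13.6M to $15.2M.
```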
Thank you. Ladies and gentlemen, if you have a question, please press star 1 on your touchtone phone. Your first question comes from the line of Kevin Gerrigan with Jefferies. Please go ahead.
Yeah. Hey, team. Thanks for taking my questions. You know, can you just help us think about your product revenue into the June quarter? So, you know, last quarter you discussed the low double-digit revenue impact from the one-time OSAT issue. And I think we may have been expecting a larger sequential increase for June, just kind of given how strong demand has been. So can you just walk us through the drivers for the June quarter product revenue and why the recovery might be a little bit more measured?
Thank you, Kevin. Yes, sure. So the first thing I would say is that the issue we talked about on the prior call is behind us. Everything has been resolved, and it's a question now for us of re-stabilizing the supply chain, which we are doing, and we see a normalization of that supply chain. So, you know, it is behind us, and the revenue for Q2 is guided at 11% over Q1, so that's the right trajectory. And we continue to expect to grow sequentially after that in an environment where our footprint continues to be very strong. I mentioned on an earlier call that the issue was on an older generation of DDR5. The market is transitioning from Gen 2 to Gen 3, which is a good catalyst for us. I would say we guided to double-digit growth in the second quarter, we met what we said we would meet despite the operational strain in Q1, and we would continue to grow sequentially in the quarters after that. We don't see any issue with the demand, and we don't see any more issues with the quality issue that we had in Q1. So we feel quite confident for the rest of the year as the market moves from Gen 2 to Gen 3.
Okay, great. And then just as a follow-up on your LPDDR5 SOCAM2 server module chipset, when would you expect to start seeing revenue from this chipset, and what kind of milestones should we watch to gauge traction?
I would see this as having a very good strategic impact at this point in time. The financial impact in the short run this year is going to be very minimal, just because the volumes are very small for this type of solution. As a reminder, it only addresses a very small portion of the AI workloads. The volumes are small. The content is small as well. So I wouldn't put it in the model for 2026. But it's strategically very, very important, because there is a trend to look at LPDDR in the server environment in the long run. LPDDR still has issues to address the server requirements, but it also has attractive benefits. So we see this as a stepping stone for us. It builds on the fact that, you know, over the last few years, we have developed our product line as chipsets. So we have the whole chipset for the SOCAM2. We have our own teams for power management development, and these are the two new chips that we are proposing for this project, you know, for these solutions. So we see this as a stepping stone. It allows us to engage with other, you know, AI players in the industry. And we're working on next generations as well. But I don't think that the financial impact is going to be significant this year, just given the volumes.
Okay, great. I appreciate the color, Luc.
Thank you.
Your next question comes from the line of Tristan Gerra with Baird. Please go ahead.
Hi, good afternoon. A quarter ago you highlighted shortages and sounded a little bit, maybe not cautious, but muted on the growth opportunity, and you provided a fairly muted data center unit forecast. How are shortages for components potentially impacting your revenue this year? What are you seeing that's different now than a quarter ago? And given the outlook for DRAM to remain very tight next year, how should we look at your product revenue growth, and specifically your RCD growth, excluding the new products that you'll be adding on to that, from a year-over-year growth standpoint? So in other words, would you expect, you know, the same type of growth next year, year-over-year, versus this year? And I understand you're not guiding for next year, but just wanted to get a bit more color on what you see in the market that potentially could put constraints on your growth. And clearly that's an issue for a lot of other companies as well.
Yeah, thank you, Tristan. First of all, let me say a few words about the demand. You know, we do see demand continuing to grow for standard servers, which is good for us, with agentic AI in particular. We expect the server market to grow faster this year than last year. We model it at, you know, low double-digit growth, because despite the excitement around AI, there's also a large portion of the server market that is not AI-related. But we do see demand growing on the server side, which is really a good catalyst for us. But as we said last quarter, we're watching the situation with supply, especially on the back end. Certainly, since last quarter, the situation has not improved. We're working with our suppliers, but the lead times are long, and there is tension on the back end. So we take this into account when we forecast our business. This is one factor. The other factor that comes into play when we forecast is the timing of launch of new platforms in the market. As you know, it's been the case in the past for us: the launch of our new products depends on the launch of new platforms in the market, and that's a dependency that we have. So we don't see a situation that is materially different from what we saw in Q1, but from a supply standpoint, things have not improved, and we expect the supply situation to be tight going into 2027 as well, from what we hear when we talk to industry players.
Okay, that's useful.
As my follow-up question, any additional color on the MRDIMM opportunity? I know you've talked in the past about some very initial shipments late this year, specifically with inferencing. Any additional color as to where it could be in terms of revenue in '27? I think you've talked in the past about your expectation that you'd probably fully realize that $600 million TAM for MRDIMM by '28. So, you know, what should we be looking at for next year, and what's really driving that? What's going to be driving the demand? Is it going to be mostly inferencing? And any additional color you may have, you know, beyond what you've said in the past on, you know, customer interest, you know, for this technology and where it's going to ramp?
Thank you, Tristan. First, we continue to make progress in the launch of these products and the interaction with our customers on MRDIMM. We're excited by the opportunity for the reasons we've always talked about: larger capacity and larger bandwidth in the same ecosystem, so the adoption is easier. The main factor affecting the ramp of our MRDIMM is going to be the timing of the launch of the platforms from Intel and AMD in particular, where they do have this capability attached in the next-generation platforms. So we continue to see the ramp starting in earnest in 2027. And the SAM at this point in time we still value at about $600 million. As I keep saying, once the products are in the market and the market gives us feedback, we're going to have a much better view of that SAM. But at this point in time, this is the right number to keep in mind.
Great. Thanks again.
Thanks, Tristan.
Your next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Yeah, thanks for taking the questions. I guess kind of just building off that last question first, you know, when you think about the $600 million incremental opportunity around MRDIMMs, I can appreciate that, you know, there are a lot of unknown variables at this point. But I'm just curious, as you rolled up that expectation, what assumption are you making in terms of attach rate on AMD Venice and Diamond Rapids at this point? And, you know, how might that evolve? I mean, I would assume that you're being rather conservative on that attach rate at this point. And then also on that, how do you see CXL starting to play out?
You know, at this point in time, we model, you know, a lower attach rate, as I said. You know, my experience is that until a product is in the market, it's hard to make those models, you know, more significant. There are a lot of variables coming into play. As we just said, the most important one is the timing of the rollout of these platforms in the market. There's also, you know, the whole situation with, you know, DRAM pricing and the prices of modules, and how, you know, our customers' customers are going to make the decisions between the combination of modules they want to have in the current, you know, memory cycle environment. So we model a, I would say, conservative percentage, you know, for MRDIMM at this point in time. But, you know, the ramp will start when the platforms ramp in the market, and that's when we're going to have a better view.
And any thoughts on CXL?
Oh, sorry, I missed the second part of your question. Sorry, Aaron. CXL, you know, we do have very good traction in our IP business. We are not planning to, you know, launch a semiconductor product at this point in time. You know, we do have this on our shelves, if you wish, as we designed one a couple of years ago. But with agentic AI, we do see demand for, you know, standard DIMMs and MRDIMMs, you know, as being the main beneficiaries of that, and that's where we will continue to focus our attention.
And then one final quick one. When you guys talk about the opportunity to grow sequentially in product revenue, you know, into the back half of the calendar year, I'm curious, on seasonality in the second half versus the first half, if there's anything that changes your views, maybe relative to the last couple of years. You know, I think you've seen some decent growth second half versus first half. Thank you.
Yes, thanks, Aaron. That's a good observation. We actually do see the second half shaping up slightly differently than the first half, with better growth in the second half. A lot of it has to do with the launch of new platforms. They typically hit the market, if they are on time, in the second half of the year, and that's where you have more products there. But even if you look at the first half of this year at the midpoint of our guidance for Q2, and you look at the first half of last year, you know, we're still growing, you know, close to 18%. So, you know, the first half, despite our issue in Q1, is still much higher than the first half of last year. And we believe the second half is going to show growth. We do see some seasonality, and typically our second half is stronger than our first half. Thank you, Aaron.
Your next question comes from the line of Gary Mobley with Loop Capital. Please go ahead.
Good afternoon, gentlemen. Thanks for taking my question. If I take the sum of your licensing billings and your contract and other revenue in the first half of this year, from the results and the guide, and compare that to the same period last year, it looks like you're generating some abnormally strong growth. Is that due to any sort of variance in the patent licensing, or should I take this to mean that your silicon IP business might actually be running north of $150 million annually right now?
So, thanks, Gary. We can see some quarter-to-quarter variations in these two categories, just given the nature of the business. I would say that, underlying this, we see very good traction in our silicon IP business. Actually, AI has an impact on our silicon IP business, which is also very positive, as people who develop custom solutions for AI are looking for new interfaces and new security solutions like the ones I mentioned in the prepared remarks. So we do have very good traction in the silicon IP business, and we continue to expect this business to grow 10% to 15% a year based on that. Our other business, our patent licensing business, can also change from quarter to quarter. We do renew agreements on a regular basis, and sometimes these agreements are structured in different ways depending on the customers and what they want to do. So we have some strong quarters and some quarters that are not as good. But on average, you know, this business continues to be stable at $200 to $220 million. So I would say I would not, you know, pay too much attention to the quarterly split, you know, of these revenues. But the fundamentals are really, really good. What I would add to this is that if you look at our patent licensing business, our silicon IP business, or our product business, they all benefit from what's happening in the memory subsystem area. They all benefit from AI and the move from AI training to AI inference. And that gives strength to our results. And when we have a challenge, like we had last quarter on the product line, then we have these two other product lines that allow us to meet our numbers.
Okay, thank you, Luc. As my follow-up, I wanted to ask about CPU roles in AI-optimized servers. There's been a lot more noise recently indicating a higher ratio of CPUs to GPUs in AI-optimized servers, driven by agentic workloads, and you sort of hinted at that. To put this into a question, I'm curious if, you know, we've moved to a point in time where we might see a one-to-one ratio of CPUs to GPUs. Does this alter your view on the growth rate of your SAM for your product revenue, or the size of it?
So we are excited with where the market is evolving with agentic AI and inference. If you look at the types of architectures, software architectures and hardware architectures, that inference requires, then you clearly see that the ratio between CPUs and GPUs is changing, and is changing in favor of CPUs. So overall, that's a very good thing for us. It's just coming from the nature of what inference, or what agentic AI, is. So that's a good thing for us. Is it going to be one-to-one? Very difficult to say at this point in time. Everyone is trying to optimize the memory subsystems now. You know, everyone is trying to use HBM where it's really good, use LPDDR where it's really good, and use DDR and MRDIMMs where they're really good. And I would say that DDR and MRDIMMs will continue to be, you know, the workhorse of these, you know, inference AI solutions. But the fact that all of these systems start to coexist, you know, HBM, DDR, LPDDR, is really good. You know, they all try to resolve a different part of the AI workload, and this plays to our strengths, because this is what we've been doing, you know, forever at Rambus. But I would say that the move to AI inference and the move to agentic AI will change the ratio in favor of CPUs. And that's good for us.
Thank you. Appreciate it. Thank you.
Your next question comes from the line of Sebastian Nagy with William Blair. Please go ahead.
Thank you. Maybe my first question, I wanted to ask about the new SOCAM products that you announced last week. Could you maybe just comment on what Rambus' dollar content looks like for each SOCAM module, just across the different voltage regulators and the SPD hub? Any unit economics you can give us?
Given the current competitive environment, we stay away from giving pricing on these things. But I would say that the content on, you know, a SOCAM from the standpoint of Rambus, you know, we have three voltage regulators and an SPD hub, so the content is minimal. This is what I was saying, you know, earlier on one of the questions. I do believe that this is strategically important for us because, in the long run, LPDDR may play a larger role, especially in next-generation LPDDR solutions in the data center. But from the content standpoint, it stays minimal, and the volume stays minimal. I would leave it there.
Okay. Okay. That's fair. And maybe just turning back to the RDIMMs, could we get an update on the progress you're seeing with companion chips? How much revenue came from those companion chips in Q1? And then maybe just relatedly, how important is it for your silicon customers that they have all of these DIMM components bundled together, coming from one provider, versus having to put these together from different providers?
Yes, thank you. John, go ahead.
Sure. The newer products, Sebastian, they're contributing low double-digit percent of our total product revenue during the first quarter. We would expect it to be roughly the same in the second quarter as we see some growth in the overall revenue contribution from that part of our business.
Yeah, and what I would add to this is that this is steady growth quarter over quarter. You saw this in 2025: every quarter we had a slightly higher percentage. We continue to do that, and we expect to continue to do that for the second half of the year. And we expect maybe to exit the year with mid-double digits of product revenue coming from our new chips. Now, to your other question, it is becoming more and more important for customers to have the whole chipset from one supplier, especially as the performance requirements increase. And the reason has to do with interoperability. Making sure that all of these chips on a module work well together at very, very high speed in very harsh environments is becoming more and more difficult to achieve. And that's why our customers request, you know, the whole solution from us, and ask us to help them go through these generational changes.
Makes a lot of sense. Thank you, Luc. Thank you, John.
Thank you.
Your next question comes from the line of Kevin Cassidy with Rosenblatt Securities. Please go ahead.
Yeah, thanks for taking my question. During the quarter, as you were building inventory, were there any orders that you had to leave on the table, that you weren't able to book because you didn't have the inventory, perhaps from some upside surprise?
No, we've not been in that situation. But there are a few market dynamics that we have to anticipate. One is, as I said earlier, we do see supply tightening, especially on the back end, so we want to make sure that, if that situation continues, we have enough supply for our customers. The second thing that is happening is fast transitions between generations. You remember we were talking about Gen 1 moving to Gen 2, and we indicated in the last call that Gen 3 is ramping very fast. So we want to make sure that on these new generations of products we also have enough inventory, because the ramps on the customer side can be quite steep, and we just don't want to miss them.
Okay, I understand. Maybe on using your balance sheet to build more inventory: when Intel reported, they said they were even able to ship some previously written-down inventory. It seems like the demand for CPUs, and also DRAM, is so strong that maybe older generations will get a bit of a revival. Is that at all possible? Or it sounds like you're saying everything's shifting to Gen 3 very quickly.
From a demand standpoint, the bulk of the demand for DDR products is certainly shifting to Gen 3. But what you're describing, using inventory of older products to serve demand, is something that we continuously do and look at. That's part of our inventory management processes.
Okay, great. Thank you. Thank you.
Your next question comes from the line of Mehdi Hosseini with SGI. Please go ahead.
Hi there. This is for Mehdi. My first question is on the LPDDR SOCAMM 2 chipset. Would you mind clarifying the content of the chipset? It seems that the solution consists of one SPD hub and three voltage regulators. Do you expect to add any PMIC content there? And what does the pricing of the SPD hub and voltage regulators look like relative to the DDR DIMM chipset? And I have a follow-up.
Sure. So, yes, on the SOCAMM solution, we have one SPD hub and three voltage regulators in total, of two types: one 12-amp regulator and two 3-amp regulators. So that's the content, and as I said, it is minimal. You're asking about the PMIC: there's no power management IC per se; that function is done by the voltage regulators in this generation of product. But that's why we think it's very strategic for us. The way we look at it is that when LPDDR6 is available, LP memory will offer even more speed and even more power capability, and it will then possibly require more complex chips for power management, and we will work on those. One can imagine as well that, as the market evolves over the longer run, it will probably also need the equivalent of RCDs. This is all exactly in our strategy, and that's why I'm talking about a stepping stone. We want to make sure that we are early in these new technologies. They do not cannibalize the old technologies; they are complementary to them. In the long run, they have the potential to grow quite nicely, and they build on strengths that we have, which have to do with signal integrity and power integrity. Now, in the short run, for SOCAMM 2 and LPDDR5X, as I said, the volumes and the dollar content are going to be very low. But it's a very interesting and strategic stepping stone for us in that area.
Thanks, Luc. That's really helpful. And I guess my second question is on DDR5. How should we think about the timing of the ramps of Gen 4 and Gen 5 as they go to high-volume manufacturing?
So Gen 4 is going to start to ramp this year. But Gen 4 is a kind of niche generation, if you wish; it doesn't have the same traction as Gen 1, Gen 2, Gen 3, or Gen 5. I think everyone is now waiting for Gen 5. We're going to start shipping products that correspond to Gen 5 towards the end of the year. But, just like for the MRDIMM, Gen 5 is completely dependent on the timing of the ramps of the next-generation platforms from Intel and AMD. That is where they're going to be adopted, and that's why we do see initial volumes this year, but the bulk of the volume, just like for the MRDIMM, is going to start in 2027.
Got it. That's very helpful. Thank you, Luc.
Thank you.
Your next question comes from the line of Mark Lipacis with Evercore ISI. Please go ahead.
Great, thanks for taking my question.
A question on the DIMM attach rate. Is it different for CPUs used to perform orchestration in agentic AI, versus CPUs used in standard servers, versus CPUs that might be put next to the GPUs, the XPUs, and the custom ASICs? Should we think about the attach rates differently?
It's a very good question, and a very difficult question also, Mark. I would say that the way we look at it is, if you look at inference and agentic AI, the functions that have to be performed by these CPUs are closer to those of standard CPUs. I think the highest attach rate that you would find is really close to the GPU-HBM platforms; that's where you have the heaviest loads, if you wish, for these CPUs. So that's how I would compare it at this point. If you take a DGX-type box with GPUs and HBM, the CPUs there are the CPUs that use the most memory in terms of capacity and bandwidth. I would say that when you go to inference, it's probably a little less, but it's difficult for us at this point in time to model that.
Mark, your line is open.
Hi. Sorry, I guess my phone dropped, and I don't know if my question came through. But, Luc, I was wondering: should we think about the DIMM attach rate differently for CPUs used in orchestration for agentic AI, versus CPUs used in standard servers, versus CPUs used for inferencing that get put next to the GPUs, the ASICs, and the XPUs? Is there a different density there for the DIMMs?
So it's a very good question, Mark, but a very difficult question to answer. I would say the way we look at it at this point in time is that the highest need for memory capacity and bandwidth really resides close to the GPUs, in these GPU-HBM clusters, if you wish. That's where you have the most need for very high capacity and very high bandwidth, which on average could be higher than what we find in inference and other solutions. But we have not modeled that at this point in time; it's hard to model. We do see in aggregate, though, that inference being added to training is very good traction for the use of standard DIMMs, or MRDIMMs in general. The attach rate is difficult to model at this point in time.
Gotcha. Okay, that's fair enough. And then the tightness in the back end that you're noticing: do you know, or can you explain, what the cause of that is? Is it because a lot of the back end happens in Southeast Asia and procures a lot of energy from the Mideast? Or is it capacity? Is it more that the whole industry is in a great recovery and capacity utilization rates are really ticking up? Do you have a sense of the cause of the tightness in the back end?
There are a couple of reasons. One is that demand, especially in the data center, has recently become very high, so there's increased demand there. The second reason is that a lot of semiconductor suppliers have moved their back-end supply chains away from China to other countries in Asia, and that has put a strain on the total capacity of these back-end suppliers. So it's the combination of the two. We've not seen an effect of the war yet. There are discussions about some basic inputs, like gas, that are going to be affected, but we don't see this yet. The main reason at this point in time is increased demand, especially in the data center, combined with semiconductor companies moving their supply chains outside of China.
Okay, that's really helpful. And the last question, if I may: as you think about your market share this year, is your view that you are a share gainer, or that you keep share flattish or down? What is your view on your ability to gain share? Thank you.
Yeah, so we continued to gain share from 2024 to 2025. We exited 2025 at a mid-40-percent share, and there's no indication that we're not going to continue on that trajectory. This year, the market at a high level is transitioning from Gen 2 to Gen 3, and our footprint in Gen 3 is really good as well. So there's no sign of any erosion of share. If we add the other components, then we'll grow faster than the market, because we add content to what we ship as well. So, again, we're very pleased with where we were in 2025. As you know, Mark, we tend to talk about share on a yearly basis; it can fluctuate from quarter to quarter, but we don't see any sign of erosion of our share going into 2026.
Gotcha. Very helpful. Thank you.
Thank you, Mark.
At this time, there are no further questions. This concludes the question and answer session. I would now like to turn the conference back over to the company.
Thank you, everyone who has joined us today, for your continued interest and time. We look forward to speaking with you again soon. Have a good day.
Thank you. This now concludes today's conference.