This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Micron Technology, Inc.
3/18/2026
Ladies and gentlemen, thank you for joining us and welcome to Micron Technology's Fiscal Second Quarter 2026 Financial Conference Call. After today's prepared remarks, we will host a question-and-answer session. I will now hand the conference over to Satya Kumar, Investor Relations. Satya, please go ahead.
Thank you, and welcome to Micron Technology's Fiscal Second Quarter 2026 Financial Conference Call. On the call with me today are Sanjay Mehrotra, our chairman, president, and CEO, and Mark Murphy, our CFO. Today's call is being webcast from our investor relations site at investors.micron.com, including audio and slides. In addition, the press release detailing our quarterly results has been posted on the website, along with the prepared remarks for this call. Today's discussion contains forward-looking statements that are subject to risks and uncertainties. These forward-looking statements include statements regarding our future financial and operating performance, as well as trends and expectations in our business, customers, market, industry, products, and regulatory and other matters. These statements are based on our current assumptions, and we assume no obligation to update them. Please refer to our most recent financial reports on Form 10-K and Form 10-Q and our other filings with the SEC for more information on the risks and uncertainties that could cause actual results to differ materially from expectations. Today's discussion of financial results is presented on a non-GAAP basis unless otherwise specified. A reconciliation of GAAP to non-GAAP financial measures can be found on our website. I'll now turn the call over to Sanjay.
Thank you, Satya. Micron delivered an exceptional fiscal Q2 with stellar records in revenue, gross margin, EPS, and free cash flow. Quarterly revenue nearly tripled versus one year ago, and revenue for DRAM, NAND, HBM, and each business unit reached new highs. Our fiscal Q3 single-quarter revenue guidance exceeds the full-year revenue of every year in our company's history through fiscal 2024. For fiscal Q3, we anticipate exceptional records across revenue, gross margin, EPS, and free cash flow. Reflecting confidence in the sustained strength of our business, I'm pleased to announce that our board has approved a 30% increase in our quarterly dividend. The step-up in our results and outlook is the outcome of an increase in memory demand driven by AI, structural supply constraints, and Micron's strong execution across the board. Our memory and storage solutions are at the heart of this AI revolution. Memory makes AI smarter and more capable, enabling longer context windows, deeper reasoning chains, and multi-agent orchestration. As AI evolves, we expect compute architectures to become more memory intensive. This is why we strongly believe that Micron is one of the biggest beneficiaries and enablers of AI. AI hasn't just increased demand for memory, it has fundamentally recast memory as a defining strategic asset in the AI era. We continue to work with customers on strategic customer agreements, or SCAs, which differ from prior LTAs and carry specific commitments over a multi-year time horizon, providing improved visibility and stability in our business model. These SCAs also provide customers greater certainty to plan their businesses while reinforcing long-term engagement across our broad product portfolio. We are excited to have signed our first five-year SCA. We are making excellent progress ramping our industry-leading 1-gamma DRAM and G9 NAND technology nodes. We expect 1-gamma to become the highest volume node in Micron's history.
Our 1-gamma node was already the fastest ramp to mature yields, is ramping volumes faster than all prior nodes in our history, and is on track to become a majority of our DRAM bit mix by mid-calendar 2026. We plan to increase EUV adoption at the 1-delta DRAM node, utilizing the latest generation EUV tools. These more advanced EUV tools will help us optimize both clean room space efficiency and patterning when scaling to 1-delta and beyond. In NAND, our G9 node also remains on track to constitute a majority of bits by mid-calendar 2026. We also achieved a record mix of QLC bits in the quarter. Looking ahead, we expect co-location of R&D and high-volume manufacturing at our Boise and Singapore sites to speed up time to market for our leading-edge products. We see an unprecedented set of opportunities for memory and storage to enable the AI era across market segments, and we expect to meaningfully increase our R&D investments in fiscal 2027. Micron's technology leadership, product excellence, and manufacturing execution are being recognized in quality scores from our customers. I'm pleased to report that a clear majority of our customers rank Micron number one in quality. Turning to our end markets, AI demand is driving the DRAM and NAND data center bit TAM to exceed 50% of the industry TAM for the first time in calendar 2026. Traditional server demand is robust, driven by a combination of demand from workloads initiated by agentic AI as well as broad-based server refresh. AI server demand continues to be strong. Both AI and traditional server demand are constrained by lack of adequate DRAM and NAND supply. We expect server units to grow in the low-teens percentage range in calendar 2026, driven by growth in both AI and traditional servers. We expect server DRAM content to continue to grow in calendar 2026 with the introduction of new platforms.
At NVIDIA's GTC, we announced that Micron began volume shipments of its 36-gigabyte HBM4 12-high in the first quarter of calendar 2026; the product is designed in for NVIDIA Vera Rubin. With our HBM4 production ramp and volume shipments underway, we expect to reach mature yields faster than HBM3E. We have also sampled our HBM4 16-high product, which provides 48 gigabytes of HBM capacity in each HBM cube, a 33% increase in HBM capacity compared to HBM4 12-high. Development of HBM4E, our next-generation HBM product, is well underway, and we expect to ramp volume in calendar 2027. Our HBM4E will leverage Micron's production-proven, industry-leading 1-gamma DRAM technology node and is set to deliver another step-function improvement in performance, enabling a whole new generation of AI compute platforms across the industry. Additionally, HBM4E customization options offer us further differentiation opportunities and even deeper R&D engagement with customers. Micron pioneered the development of LPDDR for the data center, which consumes one-third the power of DDR5 DRAM server modules. Building on this leadership, we sampled the industry's first 256-gigabyte SOCAMM2 product, which is built using our 1-gamma node and enables a massive two terabytes of capacity per CPU, quadrupling the content from just a year ago. We see expanding use of LPDDR in the data center in the years ahead, and we are excited to maintain an industry-leading, innovative product roadmap in this market. Rapid growth in AI inference is driving the emergence of new architectures optimized for the token economics of specific workloads. Micron's broad portfolio of HBM, LPDDR, DDR, and SSDs is a critical enabler across these architectures. At GTC, NVIDIA announced the GROK 3 LPX, which implements up to 12 terabytes of DDR5 in a rack-scale architecture.
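The HBM stack capacities quoted above are mutually consistent, which can be verified with simple arithmetic. The sketch below assumes every DRAM die in a stack has equal capacity (an assumption for illustration, not something stated on the call):

```python
# Back-of-the-envelope check of the HBM4 stack capacities quoted on the call.
# Assumes equal-capacity dies within a stack (an assumption, not stated).

GB_PER_STACK_12HI = 36  # HBM4 12-high, as stated
DIES_12HI = 12
DIES_16HI = 16

gb_per_die = GB_PER_STACK_12HI / DIES_12HI            # 3 GB (24 Gb) per die
gb_per_stack_16hi = gb_per_die * DIES_16HI            # 48 GB per 16-high cube
increase = gb_per_stack_16hi / GB_PER_STACK_12HI - 1  # ~33% more capacity

print(gb_per_stack_16hi, round(increase * 100))  # → 48.0 33
```

Both figures match the remarks: 48 gigabytes per 16-high cube, a 33% increase over the 36-gigabyte 12-high.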
We are seeing an acceleration in NAND demand in the data center due to AI use cases such as vector databases and KV cache offload, and due to the growing share of SSDs in capacity storage tiers. Micron's data center SSD product portfolio, enabled by our technology leadership and vertical integration, covers the spectrum from highest performance to highest capacity. We are now in high-volume production of our G9 NAND-based PCIe Gen 6 high-performance data center SSDs. Our 122-terabyte high-capacity SSD is seeing strong adoption and delivers 16 times the sequential read throughput per watt of a capacity-matched HDD configuration. Our strategy and execution are delivering results. Our data center SSD market share increased for the fourth consecutive calendar year in 2025 to a new record. In fiscal Q2, data center NAND revenue more than doubled sequentially, reaching a substantial new record, and we expect further growth in the quarter ahead. Micron's data center SSD portfolio is industry-leading, and we have secured a robust set of design wins across our customer base. We are now seeing NAND demand significantly in excess of our available supply for the foreseeable future. In calendar 2026, a number of factors, including DRAM and NAND supply constraints, could cause PC and smartphone units to decline in the low-double-digit percentage range. Over time, we expect the value of on-device AI to drive strong memory content growth in PCs and smartphones. In PCs, there has been exciting innovation recently with agentic AI applications, such as OpenClaw, where AI agents can perform tasks independently on the host PC and also initiate workloads in the cloud. PCs with on-device agentic AI capabilities have recommended memory configurations of at least 32 gigabytes, twice as much as the average PC.
Additionally, the fast-growing new category of personal AI workstations, such as NVIDIA DGX Spark and AMD Ryzen AI Halo, come in 128-gigabyte configurations, ideal for running large language models on device. Likewise, in smartphones, OEMs have recently announced new flagship devices, such as the Samsung Galaxy S26 and Google Pixel 10, with agentic AI integrated into their mobile operating systems. The mix of flagship smartphones shipping with 12 gigabytes or more of DRAM increased to nearly 80% in calendar Q4, up from under 20% a year ago. Micron is well positioned to capture the opportunities in these markets with our industry-leading portfolio of products. In PC, Micron completed qualifications for LPCAMM2 at a major OEM. At CES, we launched the industry's first Gen 5 QLC client SSD based on G9 NAND. Micron's LPDDR5X is now designed into leading personal AI workstations, expanding our addressable market, with high volumes shipped to key customers. In smartphones, Micron continues to receive strong interest and feedback from OEM and ecosystem partners on our 1-gamma-based LPDDR6 samples. We built momentum with additional qualifications and mass production of our 10.7-gigabit-per-second 1-gamma LPDDR5X 16-gigabit product. We saw continued pricing improvement across automotive, industrial, and embedded markets. Total AEBU revenue reached a record, with automotive and industrial revenue together exceeding $2 billion in the quarter. In automotive, OEMs are deploying Level 2+ ADAS across their fleets at an accelerating pace. The average car today has less than L2 ADAS capability and contains approximately 16 gigabytes of DRAM, while vehicles with L4 autonomy require over 300 gigabytes. As more advanced ADAS and smart cabin adoption scales, we expect robust long-term growth in automotive memory demand.
We have shared samples of the industry's first automotive-grade 1-gamma LPDDR5 DRAM, and in NAND, we were first in the industry with a G9-based UFS 4.1 automotive solution, further reinforcing our technology leadership in this market. Rapid improvements in AI are supercharging the capabilities of robots. We believe we are on the cusp of a 20-year growth vector in robotics and expect robotics to become one of the largest product categories in the technology world. Humanoid robots will be AI-enabled and will be powered by a compute platform that rivals that of a high-end L4-capable automobile, thus requiring significant memory and storage capacity. We expect this exciting new category of growth to further underpin the long-term favorable dynamics that shape our industry environment. Micron is very well positioned to leverage this opportunity in close partnership with our customers, enabled by our industry-leading technology, product solutions, and operational capabilities. Now turning to our market outlook. We expect both DRAM and NAND industry bit demand in calendar 2026 to be constrained by supply. We continue to expect supply-demand conditions for both DRAM and NAND to remain tight beyond calendar 2026. We expect industry DRAM bit shipments in calendar 2026 to grow in the low-20s percentage range, slightly above our prior outlook. In DRAM, clean room constraints and long construction lead times, a higher HBM trade ratio, higher HBM growth rates, and declining bits-per-wafer growth from node migrations constrain bit supply growth. We expect industry NAND bit shipments in calendar 2026 to grow approximately 20%. In NAND, some industry suppliers redirecting clean room space to DRAM, as well as overall limited clean room space, constrain bit supply growth. We expect Micron DRAM and NAND supply to grow approximately in line with the industry in calendar 2026.
Micron is working to address the unprecedented gap between supply and demand, and we achieved several important milestones in expanding our global manufacturing footprint this past quarter. In DRAM, earlier this week we announced the successful closing of the acquisition of the Tongluo site from Powerchip Semiconductor, completing the transaction ahead of schedule. We expect this site to support meaningful product shipments from the existing fab beginning in fiscal 2028. In addition to the existing fab, we plan to begin construction of a similar-sized second clean room at this site by the end of fiscal 2026. We continue to expect initial wafer output at our first Idaho fab in mid-calendar 2027, and ground preparation has begun for our second Idaho fab. We broke ground on our first fab at the New York site, and initial ground preparation activities are ahead of plan. In Japan, we are making good progress on ground preparation for our clean room expansion to enable future technology transitions at our Hiroshima site. In NAND, the combination of our higher demand outlook and our decision to co-locate our R&D clean room with our manufacturing fab underpins our decision to break ground on a new NAND fab at our Singapore site. We expect initial wafer output from this fab in the second half of calendar 2028. In assembly and test, we commenced commercial shipments from our new facility in India. The state-of-the-art facility will be among the largest single-floor assembly and test clean rooms in the world. Our Singapore advanced packaging facility for HBM is on track to contribute meaningfully to Micron's HBM supply in calendar 2027. We expect fiscal 2026 CapEx to be above $25 billion. From our last earnings call estimate, the majority of the increase is driven by clean room facility-related CapEx, of which the largest factor is Tongluo, followed by increased construction spend in our U.S. fab projects.
We project our fiscal 2027 CapEx to step up meaningfully to support HBM and DRAM-related investments. We expect construction-related CapEx to increase by over $10 billion year over year in fiscal 2027 as we build out our global manufacturing sites to address long-term demand opportunities. In addition, we expect higher equipment spend year over year in fiscal 2027. As we make these investments, we will continue to be responsive to the market environment and our customer demand to appropriately align our supply plans. I will now turn it over to Mark for our fiscal Q2 financial results and outlook.
Thank you, Sanjay, and good afternoon, everyone. Micron delivered strong financial results for the fiscal second quarter, with revenue, gross margin, and EPS all exceeding the high end of our guidance. In fiscal Q2, we generated record free cash flow, reduced our debt, and closed the quarter with the highest net cash position in our history. Total fiscal Q2 revenue was $23.9 billion, up 75% sequentially and up 196% year over year, representing our fourth consecutive quarterly revenue record. The $10.2 billion sequential increase is the largest in our history. Fiscal Q2 DRAM revenue was a record $18.8 billion, up 207% year over year, and represented 79% of total revenue. Sequentially, DRAM revenue increased 74%. Bit shipments were up mid-single digits. Prices increased in the mid-60s percentage range, driven by tight industry conditions, and included favorable mix. Fiscal Q2 NAND revenue was a record $5 billion, up 169% year over year, and represented 21% of Micron's total revenue. Sequentially, NAND revenue increased 82%. NAND bit shipments increased in the low-single-digit percentage range. Prices increased in the high-70s percentage range, driven by tight NAND industry conditions, and included favorable mix. The consolidated gross margin for fiscal Q2 was 75%, up 18 percentage points sequentially. This improvement was driven primarily by higher pricing and also included favorable mix and cost performance. Fiscal Q2 gross margin nearly doubled from a year ago and was a company record. Turning to quarterly financial performance by business unit. Cloud memory business unit revenue was a record $7.7 billion and represented 32% of total company revenue. CMBU revenue was up 47% sequentially, driven by an increase in prices and favorable mix. CMBU gross margins were 74%, higher by 9 percentage points sequentially, driven by higher pricing and cost execution. Core data center business unit revenue was a record $5.7 billion and represented 24% of total company revenue.
CDBU gross margins were 74%, up 23 percentage points sequentially, driven by higher pricing and favorable mix. Mobile and client business unit revenue was a record $7.7 billion and represented 32% of total company revenue. MCBU revenue was up 81% sequentially, driven by higher pricing, partially offset by lower bit shipments. MCBU gross margins were 79%, up 25 percentage points sequentially, driven primarily by higher pricing and favorable mix. Automotive and embedded business unit revenue was a record $2.7 billion and represented 11% of total company revenue. AEBU revenue was up 57% sequentially, driven by higher pricing, partially offset by lower bit shipments. AEBU gross margins were 68%, up 23 percentage points sequentially, driven primarily by higher pricing. Operating expenses in fiscal Q2 were $1.4 billion, up $87 million quarter over quarter. The sequential increase was driven by higher R&D expenses. We generated operating income of $16.5 billion in fiscal Q2, resulting in an operating margin of 69%, up 22 percentage points sequentially and 44 percentage points year over year. Fiscal Q2 taxes were $2.5 billion on an effective tax rate of 15.1%. Non-GAAP diluted earnings per share in fiscal Q2 was $12.20, with 155% sequential growth and 682% growth versus the year-ago quarter. Turning to cash flow and capital expenditures, in fiscal Q2, operating cash flows were $11.9 billion. Capital expenditures were $5 billion, resulting in free cash flow of $6.9 billion. Fiscal Q2 free cash flow was a quarterly record for the company, exceeding our prior record in fiscal Q1 2026 by 77%. Ending inventory for fiscal Q2 was $8.3 billion, up $62 million sequentially, with days of inventory at 123. DRAM inventory days remain especially tight and below 120 days. We reached record levels of cash and investments of $16.7 billion at quarter end and had liquidity over $20 billion when including our untapped credit facility.
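The revenue and cash flow figures in the prepared remarks tie out with simple arithmetic. A quick sketch, using only numbers stated on the call (all in billions of dollars; the prior-quarter values are implied here, not stated, and small differences versus the stated 75% sequential growth come from rounding):

```python
# Tying out the stated fiscal Q2 revenue and cash flow figures ($B).
# "Implied" values are derived here, not stated on the call.

q2_revenue = 23.9
seq_increase = 10.2                               # stated record sequential increase
q1_revenue_implied = q2_revenue - seq_increase    # ~13.7
seq_growth = q2_revenue / q1_revenue_implied - 1  # ~75% (74% before rounding)

ocf, capex = 11.9, 5.0
fcf = ocf - capex             # 6.9, the stated quarterly record
q1_fcf_implied = fcf / 1.77   # ~3.9, since Q2 beat the prior record by 77%

print(round(q1_revenue_implied, 1), round(seq_growth * 100),
      round(fcf, 1), round(q1_fcf_implied, 1))  # → 13.7 74 6.9 3.9
```

The implied prior-quarter revenue (~$13.7 billion) and the implied prior free cash flow record (~$3.9 billion) are internally consistent with every figure quoted above.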
In fiscal Q2, we repurchased $350 million of shares as permitted by the terms of the CHIPS agreement. During the quarter, we also reduced debt by $1.6 billion, including redemption of senior notes maturing in 2029 and 2030. The weighted average maturity on our outstanding debt is August 2034. We closed the quarter with $10.1 billion of debt and a net cash balance of $6.5 billion. Reinvesting in the profitable growth of our business across R&D, CapEx, and other strategic investments remains our top priority for capital allocation. We are committed to maintaining a strong balance sheet, have reduced our total debt by over $5 billion in the last three quarters, and are at our strongest net cash position ever. Reflecting the sustained strength of our technology leadership and cash generation, as Sanjay mentioned, the Board has approved a 30% increase in our quarterly dividend, to 15 cents per share. Now turning to our guidance. We expect fiscal Q3 revenue to be a record $33.5 billion, plus or minus $750 million, gross margin to be approximately 81%, and operating expenses to be approximately $1.4 billion. Based on a share count of approximately 1.15 billion shares, we expect EPS to be a record $19.15 per share, plus or minus 40 cents. We expect higher prices, lower costs, and favorable mix to all contribute to gross margin expansion in Q3. As mentioned last quarter, Micron's fiscal Q4 2026 OpEx will also reflect the effect of an additional work week in this 53-week fiscal year. We expect to increase our fiscal 2027 OpEx as we ramp R&D investments in support of an unprecedented set of long-term opportunities in memory and storage. We expect a fiscal Q3 and fiscal year 2026 tax rate of around 15.1%. Micron continues to invest in a disciplined manner across our global footprint. To address customer demand, as mentioned earlier, we now project our capital spending in fiscal 2026 to be above $25 billion.
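The fiscal Q3 EPS guide can be roughly reproduced from the other guided figures. The sketch below is an editorial back-of-the-envelope check, not company math; it ignores interest income and expense, which is presumably why it lands slightly below the $19.15 guide:

```python
# Rough sanity check on the fiscal Q3 guidance, using only guided figures.
# Interest income/expense is ignored (an assumption), so the result
# lands a touch below the $19.15 per share guide.

revenue = 33.5       # $B, guidance midpoint
gross_margin = 0.81  # guided gross margin
opex = 1.4           # $B, guided operating expenses
tax_rate = 0.151     # guided tax rate
shares = 1.15        # billions, guided share count

op_income = revenue * gross_margin - opex  # ~$25.7B
net_income = op_income * (1 - tax_rate)    # ~$21.8B
eps = net_income / shares                  # ~$19.00 vs. $19.15 guided

print(round(eps, 2))  # → 19.0
```

The guided components hang together: gross margin, OpEx, tax rate, and share count imply roughly $19 per share before non-operating items.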
In fiscal Q3, we project capex of approximately $7 billion, while delivering significantly higher free cash flow on stronger operating cash flow. Due to the need for clean room capacity, we expect our construction spend growth rate to outpace equipment spend growth in both fiscal 2026 and fiscal 2027. Any impacts that may occur due to trade or geopolitical developments are not included in our guidance. I'll now turn it over to Sanjay to close.
Thank you, Mark. Decades of investment in innovation and execution have established Micron as a technology leader in memory and storage and as one of the semiconductor industry's biggest beneficiaries and enablers of AI. As the only US-based manufacturer of advanced memory products, Micron is uniquely positioned to capitalize on the unprecedented opportunities ahead. I want to thank our team members worldwide whose execution made this outstanding quarter possible. As strong as these results are, I'm even more excited about what's ahead for Micron. We will now open for questions. Thank you.
We will now begin the question-and-answer session. Please limit yourself to one question and one follow-up. If you would like to ask a question, please press star one to raise your hand and star six to unmute. Your first question comes from the line of Krish Sankar from TD Cowen. Please go ahead.
The 81% gross margin guide is very impressive. I'm kind of curious how to think about the sustainability of gross margins, especially as you bring more HBM4 into the mix. If you can give some thoughts on how to think about gross margins in the August quarter and beyond, that'd be very helpful. And I have a follow-up for Sanjay.
So, Krish, this is Mark. We provided a strong guide, up 600 basis points sequentially into the third quarter. We're not going to provide fourth quarter gross margin guidance. However, we have indicated that we expect market conditions to remain tight beyond calendar 2026, clearly beyond the fourth quarter. What you're seeing reflected in our gross margin is the benefit of AI driving a multi-year investment cycle, most of which is ahead of us. AI requires more memory and more high-performance memory, and that's reflected in the margins. We've also talked about supply factors, and those are going to continue beyond 2026. The 81% contemplates growth in HBM4, but we expect, as I mentioned, market conditions to be strong. Now, keep in mind that at these gross margin levels, an incremental increase in price is going to have less of an effect on gross margin. But beyond that, we're not providing a fourth quarter gross margin.
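Mark's point that price increases move the margin percentage less at high starting margins can be illustrated with a toy calculation. The numbers below are illustrative only, not company figures: hold unit cost fixed and raise price 10% from two different starting margins.

```python
# Illustrating why a given price increase lifts gross margin percentage
# less when the starting margin is already high. Numbers are illustrative,
# not Micron figures; unit cost is held fixed while price rises.

def new_margin(start_margin, price_uplift):
    cost = 1 - start_margin  # unit cost at a normalized price of 1
    return 1 - cost / (1 + price_uplift)

low = new_margin(0.50, 0.10)   # 50% margin -> ~54.5%: +4.5 points
high = new_margin(0.75, 0.10)  # 75% margin -> ~77.3%: +2.3 points

print(round(low, 3), round(high, 3))  # → 0.545 0.773
```

The same 10% price uplift adds about 4.5 margin points at a 50% starting margin but only about 2.3 points at 75%, because the cost share being compressed is already small.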
Got it. Thanks for that, Mark. And then a quick question for Sanjay on the SCA. Congrats on your first five-year SCA. How different is it from an LTA? Is this a multi-year volume and price commitment, or does the price get negotiated every year? And also, how should we think about cancellation terms in the SCA in case the cycle slows down during that timeframe? Thanks, Sanjay.
So, thank you for recognizing us for the first SCA that we have completed here. And, as you noted, the SCA is a multi-year agreement, and we noted that in our remarks as well. LTAs have tended to be typically one-year agreements. And, of course, in this environment of an extremely tight supply outlook in the foreseeable timeframe as well, our customers are very motivated, for their own planning purposes and for better predictability, to have these strategic agreements with us. And of course, these agreements are really meant to bring stability and greater visibility into our business model as well. We have completed one SCA, so we are not going to be getting into the specifics here. I'm sure you can appreciate that these SCAs are confidential in nature. But of course, these SCAs are meant to achieve the objectives for the customers in terms of their ability to plan and to count on the supply commitments that are in the agreements, but also for us to be able to count on specific commitments from the customers. And these are meant to span the periods when the industry is very tight as well as other parts of the industry environment. That's why they are long-term agreements, and they have robust terms in them for us as well as for our customers.
Got it. Thank you very much.
Your next question comes from the line of Joseph Moore from Morgan Stanley. Your line is open. Please go ahead.
I wanted to ask some allocation questions by end market. Obviously, AI is the area that has the most urgency, but do you worry about demand destruction for things like PCs and smartphones? Are you trying to balance big customers versus small customers? Just how are you thinking about that allocation process?
I mean, clearly supply is extremely tight, and supply is tight across all end markets. Demand trends are strong across the end markets, while price-sensitive markets, such as the consumer examples that you gave, may have some demand that is getting impacted due to the higher prices. But overall demand in those markets also stays pretty strong. And our goal and strategy always is to be a diversified supplier to our various end markets. I think that is very important for us. Of course, the data center is becoming a bigger and bigger part of the industry TAM, so a bigger portion of the supply goes there, and that's the main driver of growth for the industry as well as for Micron ourselves. But other parts of the market are important to us, such as PC, smartphone, automotive, and, of course, industrial. And we want to maintain that well-diversified mix for our end markets. I would just like to point out that overall, whether in the data center or in the consumer parts of the market, such as smartphones or PCs, the AI trend is continuing to drive greater and greater requirements for memory content. Of course, customers are working in this tight supply environment to manage the mix of their products, but overall, we are very much working with customers across our end markets.
Great, thank you. And I think in the past you've said that some customers are getting 70% of what they're asking for. Is that still kind of the ballpark of what you're dealing with, or are those numbers lower than they were three months ago?
Yeah, what we said on the last earnings call is that for some of our key customers, we are able to fulfill only 50 percent to two-thirds of their demand in the medium term, and yes, that still remains the case.
Great, thank you. Great quarter.
Thank you. Your next question comes from the line of Timothy Arcuri. Your line is open. Please go ahead.
Thanks a lot. Sanjay, I also wanted to ask about the SCAs. I think we're all trying to sort of think to the other side of the cycle and hope that these SCAs provide some mechanism that will kind of limit your gross margin on the downside to a certain number. So I know you don't want to give too many details, but is it fair to say that there is a mechanism in these SCAs that would limit your gross margin on the downside when things do finally roll back over?
So, certainly not getting into the specifics of these SCAs, for the obvious reasons of the confidentiality of these agreements. We have successfully completed one SCA, and we are in discussions with multiple other customers. If and when we complete these agreements, as appropriate, we will of course share further details with you. But what I want to highlight is that these SCAs are multi-year, they have specific commitments in them, and these are robust agreements. And of course, they are meant to give us visibility and stability in our business model. Beyond that, really, I cannot get into any specifics at this point.
Okay, thanks. And then, Mark, I mean, I just had a question about cash. So you're going to generate, I don't know, $35, $40 billion in free cash flow this fiscal year. You're going to probably have more than $50 billion in cash by the end of the calendar year. So what do you do with this? Are you planning to set aside a bunch of it to sort of buy back a bunch of stock on the other side? And I guess with respect to that, you do have restrictions on the repo from the money you took from the CHIPS Act. Is there any way to get that reworked? Thanks.
Yeah, Tim, so we're thrilled with the performance of the business and the improvement in the balance sheet, and in the second quarter with record net cash and record free cash flow, beating the previous quarter's record by 77%. On our third quarter guide, when you take those numbers and consider the CapEx we gave, we could see free cash flow roughly double sequentially. We're going to continue to build on the balance sheet strength and improve our net cash position. We're continuing to de-lever and pay down debt, and it's noteworthy that we received two credit upgrades in the quarter, so we're now a solid triple-B. So we're getting stronger, while, as you can see, we've talked about increasing our CapEx investment and increasing our R&D investment. Now, to your specific question on capital allocation: the balance sheet is always going to be a priority, along with organic investment in our business to advance technology and to put in capacity for value-added bits, which we certainly see now. We're generating a return on capital of over 30% at this point, headed toward 50%, and we're going to remain disciplined there. And then you saw today that we're pleased to announce a dividend increase of 30%, which reflects the confidence we have in our business outlook, the stability of the business, and cash returns in the future. And then, as you said, we believe we will have significant capacity for returning cash to shareholders through repurchase, a combination of offsetting dilution from stock comp and opportunistically repurchasing.
Thanks, Mark.
Your next question comes from the line of CJ Muse from Cantor Fitzgerald. Your line is open. Please go ahead.
Yeah, good afternoon. Thanks for taking the question. Wanted to follow up again on the SCA question. So you've had an evolution here: LTAs, binding LTAs, now SCAs. I'm curious if you could discuss the breadth of the different customers that you're speaking with. Is it only hyperscale, or are there others that are interested? And I know you don't want to go into specific details on the contracts, but just to follow up on the last question, are there any forward CapEx requirements tied to these agreements? Is pricing tied to an ROIC on those investments? Any help there would be helpful. Thank you.
So we'll share with you that the SCA we just signed is with a large customer. And, of course, these agreements are very much focused on allowing us to invest with confidence in our future supply plans, and also on having specific terms that enable us to have overall better visibility into future demand and, as I said earlier, enable stability around the business model as well. And CJ, beyond that, we are really not commenting on the SCAs, other than to say that, as I mentioned earlier, these SCA discussions are proceeding with multiple customers, and yes, they are across multiple markets as well.
Very helpful. And then I guess as a quick follow-up on HBM: I think you guided last quarter to a 40% growth CAGR, which would suggest roughly $50 billion in revenues for the market this year. Has that number changed? And are you seeing any sort of preference for perhaps moving to DDR5 over HBM by any industry players, given the higher margins that we see there today? Thanks so much.
So, yes, it is correct that the margins for non-HBM today are higher than HBM margins. Demand for HBM, of course, continues to be strong. We have not updated the numbers that we last provided in terms of the outlook for the HBM TAM. Of course, demand for DDR5, LP, and HBM all continues to be strong in the data center, and we continue to manage the mix of the business as data center AI demand continues to grow. And as I said earlier, outside of the data center we are very much focused on making sure that we maintain relevant share in our other key market segments as well. So overall, in this environment of strong AI demand trends from the data center to the edge, we are very much focused on continuing to manage our portfolio, and we see a strong growth opportunity for Micron's full portfolio in the data center. And I'll just point out that that portfolio is about HBM, it's about LP, it's about SOCAMM, it's about DDR5, as well as our data center SSDs, which have made tremendous strides in terms of market share over the course of the last few years. Thank you.
Your next question comes from the line of Harlan Sir from JP Morgan. Your line is open. Please go ahead.
Good afternoon and congratulations on the solid results and strong quarterly execution. Maybe, Sanjay, to carry on from where you left off on your commentary: in the November quarter of last year, I estimated that your enterprise business was almost half of your total flash business, right? I think it was up like 60% sequentially, obviously a very favorable mix shift from a margin perspective for the Micron team. And as you mentioned, you remain a strong top-three global supplier of eSSD. Off of that strong number, it looks like your eSSD business doubled sequentially in the February quarter, still 50% of the NAND mix. Looking forward, with the G9 nodes continuing to ramp, your next-generation eSSDs, performance-optimized, capacity-optimized, mainstream, are all on G9. Does this give the team a runway to continue to drive sequential growth in eSSD through the remainder of this calendar year and into next year? And then I just wanted to get your thoughts on this new proposed memory tier of high bandwidth flash. Is this an area where the Micron team might start to focus R&D resources?
So regarding your question on data center SSDs: of course, this is an area of strong growth ahead. NAND supply is very tight and demand for NAND stays strong, and data center SSDs are a big driver of NAND growth here as well. Micron is well positioned with our portfolio of SSDs, really going across the requirements in terms of capacity as well as performance across the various customers, using TLC as well as QLC in our data center mix. So we are very well positioned with this. And this is part of our strategy of continuing to shift our portfolio, our revenue mix, toward the higher profit pools of the industry and the higher-value parts of the market. We will, of course, continue to address opportunities for growing our SSD business, and we feel really good about the trajectory that we have been on with data center SSDs and the trajectory that is planned ahead for us as well. And regarding your question on HBF, high bandwidth flash: of course, high bandwidth flash has some positive attributes, such as capacity, but it has the limitations that NAND has, such as write speed as well as power and retention. Therefore, there will potentially be some workloads where this may be a possible solution, but it is really early. What is needed, of course, is engagement with the customers to really understand the business value proposition of HBF. But we, of course, continue to study this.
No, I appreciate that. And then, how much of these multi-year SCA agreements is due to the sort of inherent requirements for earlier and longer-term engagement with your GPU and XPU chip customers, just due to the customization of their next-generation HBM architectures, right? Especially around the base die: given the 12-to-18-month design cycle times for these custom base dies, the sharing of IP between you and your chip customers, and optimizing the base die process flow, it does imply that they have to engage with you much earlier in the design phase for their GPUs and XPUs. Is this another factor driving these multi-year SCAs?
Again, we are not really getting into the specifics, and not getting into the specific types of customers here either. But what I will definitely tell you is that, yes, these SCAs really bring us closer to the customer in terms of our partnership. And that partnership, of course, extends into bringing us closer in terms of R&D, collaboration, and roadmap planning, both ours as well as the customers'. So that's definitely one of the benefits of these SCAs as well.
Yeah, thank you.
Your next question comes from the line of Tom O'Malley from Barclays. Your line is open. Please go ahead.
Hey, guys. Thanks for taking the questions, and really nice results. So with GTC and OSC this week, I think there's a lot of conversation just around the LPU architecture and the increased use of SRAM. Could you talk about your view on the memory market longer term as you see more workloads relying on other types of memory outside of the HBM that you're already using? And then, just as a broader question: with so much of the demand coming in these longer-term agreements being associated with the data center, and just a small number of customers that can actually acquire and build these products, how are you benchmarking when you're adding capacity? Do you have an internal forecast for accelerators? Are you talking customer by customer and building bottom-up forecasts, just so that you know in year three, year four, year five that you're offering enough supply to the industry and not getting into a situation in which we're in oversupply? Thank you very much.
So first of all, on your question on SRAM and LPU-based architectures: I would just like to point out that, of course, these kinds of architectures make AI infrastructure more efficient, and any architecture that makes AI infrastructure more efficient is good for all of AI. Basically, they help the pie grow faster. Keep in mind that this LPU architecture works in conjunction with Vera Rubin, which utilizes a tremendous amount of HBM as well as DRAM. And the NVIDIA Grok LPX, this LPU-based architecture, actually uses 12 terabytes of DRAM per rack as well. So all of this is addressing the workloads in a more efficient manner. This helps with the token economics, the token speed, and the scale-up of AI across inference; it helps with power, and every bit of efficiency overall is good for further scaling up and accelerating AI demand as well. So we look at this as complementing what already exists with respect to HBM and DRAM, and, of course, continuing to grow the pie and accelerate the deployment of AI. Just keep in mind that today in the enterprises, AI deployment as a percentage is still very, very low, and across all verticals, all industries, and all economies there is a lot of opportunity ahead. So we are excited about all of these opportunities for our full portfolio of HBM, LP, DRAM, SOCAMM, and SSD in addressing these future market requirements. And ultimately, all of this just points to how strategic an asset memory is for AI. Without more memory, without faster memory, AI just cannot scale up; AI just cannot deliver the capabilities, whether in training or in inference. Just look at it from last year to this year: the DRAM requirement in advanced AI accelerators has now doubled. These are some of the factors that are contributing to the supply shortage.
And of course, these trends in the deployment of AI apply to the edge devices, smartphones and PCs, as well. So we are excited about the opportunities ahead and absolutely continue to see strong opportunities for our full portfolio.
Your next question comes from the line of Vivek Arya from Bank of America Securities. Please go ahead.
Thanks for taking my questions. Sanjay, on HBM4, do you expect your share to be in this target 20-25% range right off the bat, or do you think you will kind of build towards it over time? Just conceptually, how do you see the puts and takes in terms of whether you can actually expand your HBM share in this upcoming Vera Rubin generation?
You know, we have shared before that in calendar Q3 of last year we reached our HBM target, which we had targeted for calendar 26: to bring our HBM share in line with our DRAM share. And we had also said that, going forward, we are going to manage HBM as part of the mix of our total portfolio and are not going to break out the share quarter by quarter. But what I can tell you is that we feel very good about our HBM product positioning and about our overall HBM products. Of course, the market in calendar 2026 is there for both HBM4 as well as HBM3E, and we will be supplying both of these products. We feel good about our overall position here and our ability to fully manage the mix of the business.
And for my follow-up, Mark, I wanted to revisit this 81% gross margin guidance. I appreciate you're not giving a specific forward view, but what has happened at prior historical peaks? Micron's margins, I believe, peaked in the low 60s. So what is the difference between those prior situations and now? What have those historical precedents indicated to you about how the trajectory of gross margins could go over the next several quarters? And do customers start to react differently when they see these levels of gross margins on what is a very, very important input into their AI silicon? Thank you.
And before Mark answers that question, can I just point out that I accidentally said that we targeted to reach our HBM share in Q3 26. I said it wrong by mistake; I meant that we had targeted to get to our HBM share in 25, and we achieved our HBM share in line with our DRAM share in Q3 of 25. And beyond Q3 25, we had said we are not going to be providing any further mix of HBM share. So I just wanted to correct what I accidentally said: 26 instead of 25.
I would say, keep in mind that the industry is supply constrained, and conditions will remain very tight, and that's beyond 26. So that certainly supports near-term and medium-term pricing. And we've discussed how we're working with customers to allocate as best we can to their businesses, and to work with them on adding capacity, on supply assurance, on new products, and so forth. On your question about reverting to some historical mean, I think maybe that's the thing that should be revisited, because we have a situation where AI is a transformational secular driver. As Sanjay mentioned, AI requires more and higher-performance memory, and this memory helps drive the token cost down. It helps lower the energy cost per token; it increases the number of tokens; it increases the overall intelligence of AI, which enables harder problem sets and agent use, which in turn drives more tokens and needs more memory. So the margins are reflecting a recognition that memory is a lot more valuable and an efficient way to monetize AI, and that's from the data center to the edge. And then on top of that, we've been clear for a year or more that there are supply constraints on a number of fronts that will take time to resolve. There are low inventory levels; there are declining bits per wafer on node advances; the HBM trade ratio is increasing; and any new capacity really needs to be greenfield, which is a physical constraint that takes a lot of time. So these are both durable factors: the value of memory, and the structural challenge of bringing on supply. And we're working both of those issues; we're investing in capacity, and we're also increasing R&D to continue to advance the technology and improve the value of memory. And we believe these will help with margins over time.
And I think customers are recognizing that and entering into these agreements.
This concludes today's call. Thank you for attending. You may now disconnect.