1/28/2026

speaker
Operator

Greetings, and welcome to the Microsoft fiscal year 2026 second quarter earnings conference call. At this time, all participants are in a listen-only mode. A question-and-answer session will follow the formal presentation.

speaker
Operator

If anyone should require operator assistance, please press star zero on your telephone keypad. As a reminder, this conference is being recorded.

speaker
Jonathan Price
Vice President of Investor Relations

Good afternoon, and thank you for joining us today. On the Microsoft Investor Relations website, you can find our earnings press release and financial summary slide deck, which is intended to supplement our prepared remarks during today's call and provides the reconciliation of differences between GAAP and non-GAAP financial measures. More detailed outlook slides will be available on the Microsoft Investor Relations website when we provide outlook commentary on today's call. On this call, we will discuss certain non-GAAP items. The non-GAAP financial measures provided should not be considered as a substitute for or superior to the measures of financial performance prepared in accordance with GAAP. They are included as additional clarifying items to aid investors in further understanding the company's second quarter performance in addition to the impact these items and events have on the financial results. All growth comparisons we make on the call today relate to the corresponding period of last year unless otherwise noted. We will also provide growth rates in constant currency when available as a framework for assessing how our underlying businesses performed, excluding the effect of foreign currency rate fluctuations. Where growth rates are the same in constant currency, we will refer to the growth rate only. We will post our prepared remarks to our website immediately following the call until the complete transcript is available. Today's call is being webcast live and recorded. If you ask a question, it will be included in our live transmission, in the transcript, and in any future use of the recording. You can replay the call and view the transcript on the Microsoft Investor Relations website. During this call, we will be making forward-looking statements, which are predictions, projections, or other statements about future events. These statements are based on current expectations and assumptions that are subject to risks and uncertainties. Actual results could materially differ because of factors discussed in today's earnings press release, in the comments made during this conference call, and in the risk factors section of our Form 10-K, Forms 10-Q, and other reports and filings with the Securities and Exchange Commission. We do not undertake any duty to update any forward-looking statement. And with that, I'll turn the call over to Satya.

speaker
Satya Nadella
Chief Executive Officer

Thank you very much, Jonathan. This quarter, the Microsoft Cloud surpassed $50 billion in revenue for the first time, up 26% year over year, reflecting the strength of our platform and accelerating demand. We are in the beginning phases of AI diffusion and its broad GDP impact. Our TAM will grow substantially across every layer of the tech stack as this diffusion accelerates and spreads. In fact, even in these early innings, we have built an AI business that is larger than some of our biggest franchises that took decades to build. Today, I'll focus my remarks across the three layers of our stack: cloud and token factory, agent platform, and high-value agentic experiences. When it comes to our cloud and token factory, the key to long-term competitiveness is shaping our infrastructure to support new high-scale workloads. We're building this infrastructure out for the heterogeneous and distributed nature of these workloads, ensuring the right fit with the geographic and segment-specific needs for all customers, including the long tail. The key metric we're optimizing for is tokens per watt per dollar, which comes down to increasing utilization and decreasing TCO using silicon, systems, and software. A good example of this is the 50% increase in throughput we were able to achieve in one of our highest volume workloads, OpenAI inferencing, powering our Copilots. And another example was the unlocking of new capabilities and efficiencies for our Fairwater data centers. In this instance, we connected both our Atlanta and Wisconsin sites through an AI WAN to build a first-of-its-kind AI superfactory. Fairwater's two-story design and liquid cooling allow us to run higher GPU densities and thereby improve both performance and latencies for high-scale training. All up, we added nearly one gigawatt of total capacity this quarter alone. At the silicon layer, we have NVIDIA and AMD and our own Maia chips delivering the best all-up fleet performance, cost, and supply across multiple generations of hardware. Earlier this week, we brought online our Maia 200 accelerator. Maia 200 delivers 10-plus petaflops at FP4 precision with over 30% improved TCO compared to the latest generation hardware in our fleet. We will be scaling this starting with inferencing and synthetic data generation for our superintelligence team, as well as inferencing for Copilot and Foundry. And given AI workloads are not just about AI accelerators but also consume large amounts of compute, we are pleased with the progress we are making on the CPU side as well. Cobalt 200 is another big leap forward, delivering over 50% higher performance compared to our first custom-built processor for cloud-native workloads. Sovereignty is increasingly top of mind for customers, and we are expanding our solutions and global footprint to match. We announced data center investments in seven countries this quarter alone, supporting local data residency needs. And we offer the most comprehensive set of sovereignty solutions across public, private, and national partner clouds so customers can choose the right approach for each workload with the local control they require. Next, I want to talk about the agent platform. Like in every platform shift, all software is being rewritten. A new app platform is being born. You can think of agents as the new apps. And to build, deploy, and manage agents, customers will need a model catalog, tuning services, a harness for orchestration, services for context engineering, AI safety, management, observability, and security.
It starts with having broad model choice. Our customers expect to use multiple models as part of any workload that they can fine-tune and optimize based on cost, latency, and performance requirements. And we offer the broadest selection of models of any hyperscaler. This quarter, we added support for GPT-5.2 as well as Claude 4.5. Already, over 1,500 customers have used both Anthropic and OpenAI models on Foundry. We are seeing increasing demand for region-specific models, including Mistral and Cohere, as more customers look for sovereign AI choices. And we continue to invest in our first-party models, which are optimized to address the highest-value customer scenarios, such as productivity, coding, and security. As part of Foundry, we also give customers the ability to customize and fine-tune models. Increasingly, customers want to be able to capture the tacit knowledge they possess inside of model weights as their core IP. This is probably the most important sovereign consideration for firms as AI diffuses more broadly across our GDP and every firm needs to protect their enterprise value. For agents to be effective, they need to be grounded in enterprise data and knowledge. That means connecting their agents to systems of record and operational data, analytical data, as well as semi-structured and unstructured productivity and communications data. This is what we are doing with our unified IQ layer spanning Fabric, Foundry, and the data powering Microsoft 365. In the world of context engineering, Foundry knowledge and Fabric are gaining momentum. Foundry knowledge delivers better context with automated source routing and advanced agentic retrieval while respecting user permissions. Fabric brings together end-to-end operational, real-time, and analytical data. Two years since it became broadly available, Fabric's annual revenue run rate is now over $2 billion with over 31,000 customers. And it continues to be the fastest growing analytics platform on the market, with revenue up 60% year over year. All up, the number of customers spending $1 million plus per quarter on Foundry grew nearly 80%, driven by strong growth in every industry. And over 250 customers are on track to process over 1 trillion tokens on Foundry this year. There are many great examples of customers using all of this capability on Foundry to build their own agentic systems. Alaska Airlines is creating natural language flight search. BMW is speeding up design cycles. Land O'Lakes is enabling precision farming for co-op members. And SymphonyAI is addressing bottlenecks in the CPG industry. And of course, Foundry remains a powerful on-ramp for the entire cloud. The vast majority of Foundry customers use additional Azure solutions like developer services, app services, and databases as they scale. Beyond Fabric and Foundry, we're also addressing agent building by knowledge workers with Copilot Studio and Agent Builder. Over 80% of the Fortune 500 have active agents built using these low-code, no-code tools. As agents proliferate, every customer will need new ways to deploy, manage, and protect them. We believe this creates a major new category and significant growth opportunity for us. This quarter, we introduced Agent 365, which makes it easy for organizations to extend their existing governance, identity, security, and management to agents. That means the same controls they already use across Microsoft 365 and Azure now extend to agents they build and deploy on our cloud or any other cloud.
And partners like Adobe, Databricks, GenSpark, Glean, NVIDIA, SAP, ServiceNow, and Workday are already integrating Agent 365. We are the first provider to offer this type of agent control plane across clouds. Now let's turn to the high-value agentic experiences we are building. AI experiences are intent-driven and are beginning to work at task scope. We are entering an age of macro delegation and micro steering across domains. Intelligence using multiple models is built into multiple form factors. You see this in chat, in new agent inbox apps, coworker scaffolding, agent workflows embedded in applications and IDEs that are used every day, or even in our command line with file system access and skills. That's the approach we're taking with our first-party family of Copilots spanning key domains. In consumer, for example, Copilot experiences span chat, news feed, search, creation, browsing, shopping, and integrations into the operating system. And it's gaining momentum. Daily users of our Copilot app increased nearly 3x year over year. And with Copilot Checkout, we have partnered with PayPal, Shopify, and Stripe so customers can make purchases directly within the app. With Microsoft 365 Copilot, we are focused on organization-wide productivity. WorkIQ takes the data underneath Microsoft 365 and creates the most valuable stateful agent for every organization. It delivers powerful reasoning capabilities over people, their roles, their artifacts, their communications, and their history and memory, all within an organization's security boundary. Microsoft 365 Copilot's accuracy and latency, powered by WorkIQ, is unmatched, delivering faster and more accurate work-grounded results than the competition. And we have seen our biggest quarter-over-quarter improvement in response quality to date. This has driven record usage intensity, with the average number of conversations per user doubling year over year. Microsoft 365 Copilot is also becoming a true daily habit, with daily active users increasing 10x year over year. We're also seeing strong momentum with Researcher Agent, which supports both OpenAI and Claude, as well as Agent Mode in Excel, PowerPoint, and Word. All up, it was a record quarter for Microsoft 365 Copilot seat adds, up over 160 percent year over year. We saw accelerating seat growth quarter over quarter and now have 15 million paid Microsoft 365 Copilot seats and multiples more enterprise chat users. And we are seeing larger commercial deployments. The number of customers with over 35,000 seats tripled year over year. Pfizer, ING, NASA, University of Kentucky, University of Manchester, U.S. Department of the Interior, and Westpac all purchased over 35,000 seats. Publicis alone purchased over 95,000 seats for nearly all its employees. We're also taking share in Dynamics 365 with built-in agents across the entire suite. A great example of this is how Visa is turning customer conversation data into knowledge articles with our customer knowledge management agent in Dynamics, and how Sandvik is using our sales qualification agent to automate lead qualification across tens of thousands of potential customers. In coding, we are seeing strong growth across all of paid GitHub Copilot. Copilot Pro+ subscriptions for individual developers increased 77% quarter over quarter. And all up, we now have 4.7 million paid Copilot subscribers, up 75% year over year.
Siemens, for example, is going all in on GitHub, adopting the full platform to increase developer productivity after a successful Copilot rollout to 30,000-plus developers. GitHub Agent HQ is the organizing layer for all coding agents like Anthropic, OpenAI, Google, Cognition, and xAI in the context of customers' GitHub repos. With Copilot CLI and VS Code, we offer developers the full spectrum of form factors and models they need for AI-first coding workflows. And when you add WorkIQ as a skill or an MCP to our developer workflow, it's a game changer, surfacing more context like emails, meetings, docs, projects, messages, and more. You can simply ask the agent to plan and execute changes to your code base based on an update to a spec in SharePoint or using the transcript of your last engineering and design meeting in Teams. And we're going beyond that with the GitHub Copilot SDK. Developers can now embed the same runtime behind Copilot CLI (multi-model, multi-step planning, tools, MCP integration, auth, streaming) directly into their applications. In security, we added a dozen new and updated Security Copilot agents across Defender, Entra, Intune, and Purview. For example, Icertis' SOC team used a Security Copilot agent to reduce manual triage time by 75%, which is a real game changer in an industry facing a severe talent shortage. To make it easier for security teams to onboard, we are rolling out Security Copilot to all our E5 customers, and our security solutions are also becoming essential to manage organizations' AI deployments. 24 billion Copilot interactions were audited by Purview this quarter, up 9x year over year. Finally, I want to talk about two additional high-impact agentic experiences. First, in healthcare, Dragon Copilot is the leader in its category, helping over 100,000 medical providers automate their workflows. Mount Sinai Health is now moving to a system-wide Dragon Copilot deployment for providers after a successful trial with its primary care physicians. All up, we helped document 21 million patient encounters this quarter, up 3x year over year. And second, when it comes to science and engineering, companies like Unilever in consumer goods and Synopsys in EDA are using Microsoft Discovery to orchestrate specialized agents for R&D end-to-end. They're able to reason over scientific literature and internal knowledge, formulate hypotheses, spin up simulations, and continuously iterate to drive new discoveries. Beyond AI, we continue to invest in all our core franchises and meet the needs of our customers and partners, and we are seeing strong progress. For example, when it comes to cloud migrations, our new SQL Server has over 2x the IaaS adoption of the previous version. In security, we now have 1.6 million security customers, including over a million who use four or more of our workloads. Windows reached a big milestone: 1 billion Windows 11 users, up over 45% year over year. And we had share gains this quarter across Windows, Edge, and Bing. LinkedIn saw double-digit member growth, with 30% growth in paid video ads. And in gaming, we are committed to delivering great games across Xbox, PC, cloud, and every other device. And we saw record PC players and paid streaming hours on Xbox. In closing, we feel very good about how we are delivering for customers today and building the full stack to capture the opportunity ahead. With that, let me turn it over to Amy to walk through our financial results and outlook, and I look forward to rejoining for your questions.

speaker
Amy Hood
Chief Financial Officer

Thank you, Satya, and good afternoon, everyone. With growing demand for our offerings and focused execution by our sales teams, we again exceeded expectations across revenue, operating income, and earnings per share while investing to fuel long-term growth. This quarter, revenue was $81.3 billion, up 17% and 15% in constant currency. Gross margin dollars increased 16% and 14% in constant currency, while operating income increased 21% and 19% in constant currency. Earnings per share was $4.14, an increase of 24% and 21% in constant currency, when adjusted for the impact from our investment in OpenAI. And FX increased reported results slightly less than expected, particularly in Intelligent Cloud revenue. Company gross margin percentage was 68%, down slightly year over year, primarily driven by continued investments in AI infrastructure and growing AI product usage that was partially offset by ongoing efficiency gains, particularly in Azure and M365 commercial cloud, as well as sales mix shift to higher margin businesses. Operating expenses increased 5% and 4% in constant currency, driven by R&D investments in compute capacity and AI talent, as well as impairment charges in our gaming business. Operating margins increased year over year to 47%, ahead of expectations. As a reminder, we still account for our investment in OpenAI under the equity method. And, as a result of OpenAI's recapitalization, we now record gains or losses based on our share of the change in their net assets on their balance sheet, as opposed to our share of their operating profit or losses from their income statement. Therefore, we recorded a gain which drove other income and expense to $10 billion in our GAAP results. When adjusted for the OpenAI impact, other income and expense was slightly negative and lower than expected, driven by net losses on investments. Capital expenditures were $37.5 billion, and this quarter, roughly two-thirds of our CapEx was on short-lived assets, primarily GPUs and CPUs. Our customer demand continues to exceed our supply. Therefore, we must balance the need to have our incoming supply better meet growing Azure demand with expanding first-party AI usage across services like M365 Copilot and GitHub Copilot, increasing allocations to R&D teams to accelerate product innovation, and continued replacement of end-of-life server and networking equipment. The remaining spend was for long-lived assets that will support monetization for the next 15 years and beyond. This quarter, total finance leases were $6.7 billion and were primarily for large data center sites. And cash paid for PP&E was $29.9 billion. Cash flow from operations was $35.8 billion, up 60%, driven by strong cloud billings and collections. And free cash flow was $5.9 billion and decreased sequentially, reflecting the higher cash capital expenditures from a lower mix of finance leases. And finally, we returned $12.7 billion to shareholders through dividends and share repurchases, an increase of 32% year over year. Now to our commercial results. Commercial bookings increased 230% and 228% in constant currency, driven by the previously announced large Azure commitment from OpenAI that reflects multi-year demand needs, as well as the previously announced Anthropic commitment from November, and healthy growth across our core annuity sales motions.
Commercial remaining performance obligation, which continues to be reported net of reserves, increased to $625 billion and was up 110% year over year with a weighted average duration of approximately two and a half years. Roughly 25% will be recognized in revenue in the next 12 months, up 39% year over year. The remaining portion, recognized beyond the next 12 months, increased 156%. Approximately 45% of our commercial RPO balance is from OpenAI. The significant remaining balance grew 28% and reflects ongoing broad customer demand across the portfolio. Microsoft Cloud revenue was $51.5 billion and grew 26% and 24% in constant currency. Microsoft Cloud gross margin percentage was slightly better than expected at 67% and down year over year due to continued investments in AI that were partially offset by ongoing efficiency gains noted earlier. Now to our segment results. Revenue from productivity and business processes was $34.1 billion and grew 16% and 14% in constant currency. M365 commercial cloud revenue increased 17% and 14% in constant currency, with consistent execution in the core business and increasing contribution from strong Copilot results. ARPU growth was again led by E5 and M365 Copilot, and paid M365 commercial seats grew 6% year over year to over 450 million, with installed base expansion across all customer segments, though primarily in our small and medium business and frontline worker offerings. M365 commercial products revenue increased 13% and 10% in constant currency, ahead of expectations due to higher-than-expected Office 2024 transactional purchasing. M365 consumer cloud revenue increased 29% and 27% in constant currency, again driven by ARPU growth. M365 consumer subscriptions grew 6%. LinkedIn revenue increased 11% and 10% in constant currency, driven by marketing solutions. Dynamics 365 revenue increased 19% and 17% in constant currency, with continued growth across all workloads. Segment gross margin dollars increased 17% and 15% in constant currency, and gross margin percentage increased, again driven by efficiency gains at M365 commercial cloud that were partially offset by continued investments in AI, including the impact of growing Copilot usage. Operating expenses increased 6% and 5% in constant currency, and operating income increased 22% and 19% in constant currency. Operating margins increased year over year to 60%, driven by improved operating leverage, as well as the higher gross margins noted earlier. Next, the Intelligent Cloud segment. Revenue was $32.9 billion and grew 29% and 28% in constant currency. In Azure and other cloud services, revenue grew 39% and 38% in constant currency, slightly ahead of expectations, with ongoing efficiency gains across our fungible fleet enabling us to reallocate some capacity to Azure that was monetized in the quarter. As mentioned earlier, we continue to see strong demand across workloads, customer segments, and geographic regions, and demand continues to exceed available supply. In our on-premises server business, revenue increased 2% and 1% in constant currency, ahead of expectations, driven by demand for our hybrid solutions, including a benefit from the launch of SQL Server 2025, as well as higher transactional purchasing ahead of memory price increases. Segment gross margin dollars increased 20% and 19% in constant currency. Gross margin percentage decreased year over year, driven by continued investments in AI and sales mix shift to Azure, partially offset by efficiency gains in Azure.
Operating expenses increased 3% and 2% in constant currency, and operating income grew 28% and 27% in constant currency. Operating margins were 42%, down slightly year over year, as increased investments in AI were mostly offset by improved operating leverage. Now to More Personal Computing. Revenue was $14.3 billion and declined 3%. Windows OEM and devices revenue increased 1% and was relatively unchanged in constant currency. Windows OEM grew 5% with strong execution, as well as a continued benefit from Windows 10 end of support. Results were ahead of expectations as inventory levels remained elevated, with increased purchasing ahead of memory price increases. Search and news advertising revenue ex-TAC increased 10% and 9% in constant currency, slightly below expectations, driven by some execution challenges. As expected, the sequential growth rate moderated as the benefit from third-party partnerships normalized. And in gaming, revenue decreased 9% and 10% in constant currency. Xbox content and services revenue decreased 5% and 6% in constant currency and was below expectations, driven by first-party content, with impact across the platform. Segment gross margin dollars increased 2% and 1% in constant currency, and gross margin percentage increased year over year, driven by sales mix shift to higher margin businesses. Operating expenses increased 6% and 5% in constant currency, driven by the impairment charges in our gaming business noted earlier, as well as R&D investments in compute capacity and AI talent. Operating income decreased 3% and 4% in constant currency, and operating margins were relatively unchanged year over year at 27%, as higher operating expenses were mostly offset by higher gross margins. Now, moving to our Q3 outlook, which, unless specifically noted otherwise, is on a U.S. dollar basis. Based on current rates, we expect FX to increase total revenue growth by three points. Within the segments, we expect FX to increase revenue growth by four points in productivity and business processes and two points in intelligent cloud and more personal computing. We expect FX to increase COGS and operating expense growth by two points. As a reminder, this impact is due to the exchange rates a year ago. Starting with the total company, we expect revenue of 80.65 to 81.75 billion U.S. dollars, or growth of 15 to 17 percent, with continued strong growth across our commercial businesses, partially offset by our consumer businesses. We expect COGS of 26.65 to 26.85 billion U.S. dollars, or growth of 22 to 23 percent, and operating expense of 17.8 to 17.9 billion U.S. dollars, or growth of 10 to 11 percent, driven by continued investment in R&D, AI compute capacity, and talent against a low prior year comparable. Operating margins should be down slightly year over year. Excluding any impact from our investments in OpenAI, other income and expense is expected to be roughly $700 million, driven by a fair market gain in our equity portfolio and interest income, partially offset by interest expense, which includes the interest payments related to data center finance leases. And we expect our adjusted Q3 effective tax rate to be approximately 19%. Next, we expect capital expenditures to decrease on a sequential basis due to the normal variability from cloud infrastructure build-outs and the timing of delivery of finance leases. As we work to close the gap between demand and supply, we expect the mix of short-lived assets to remain similar to Q2. Now, our commercial business.
In commercial bookings, we expect healthy growth in the core business on a growing expiry base when adjusted for the OpenAI contracts in the prior year. As a reminder, the significant OpenAI contract signed in Q2 represents multi-year demand needs from them, which will result in some quarterly volatility in both bookings and RPO growth rates going forward. Microsoft Cloud gross margin percentage should be roughly 65%, down year over year, driven by continued investments in AI. Now to segment guidance. In productivity and business processes, we expect revenue of 34.25 to 34.55 billion U.S. dollars, or growth of 14 to 15 percent. In M365 commercial cloud, we expect revenue growth to be between 13 and 14 percent in constant currency, with continued stability in year-over-year growth rates on a large and expanding base. Accelerating Copilot momentum and ongoing E5 adoption will again drive ARPU growth. M365 commercial products revenue should decline in the low single digits, down sequentially, assuming Office 2024 transactional purchasing trends normalize. As a reminder, M365 commercial products include components that can be variable due to in-period revenue recognition dynamics. M365 consumer cloud revenue growth should be in the mid to high 20% range, driven by ARPU growth, as well as continued subscription volume growth. For LinkedIn, we expect revenue growth to be in the low double digits. And in Dynamics 365, we expect revenue growth to be in the high teens with continued growth across all workloads. For Intelligent Cloud, we expect revenue of 34.1 to 34.4 billion U.S. dollars, or growth of 27 to 29 percent. In Azure, we expect Q3 revenue growth to be between 37 and 38% in constant currency against a prior year comparable that included significantly accelerating growth rates in both Q3 and Q4. As mentioned earlier, demand continues to exceed supply, and we will need to continue to balance the incoming supply we can allocate here against other priorities. As a reminder, there can be quarterly variability in year-on-year growth rates depending on the timing of capacity delivery and when it comes online, as well as from in-period revenue recognition depending on the mix of contracts. In our on-premises server business, we expect revenue to decline in the low single digits as growth rates normalize following the launch of SQL Server 2025, though increased memory pricing could create additional volatility in transactional purchasing. In More Personal Computing, we expect revenue to be $12.3 to $12.8 billion. Windows OEM and devices revenue should decline in the low teens. Growth rates will be impacted as the benefit from Windows 10 end of support normalizes and as elevated inventory levels come down through the quarter. Therefore, Windows OEM revenue should decline roughly 10%. The range of potential outcomes remains wider than normal, in part due to the potential impact on the PC market from increased memory pricing. Search and news advertising ex-TAC revenue growth should be in the high single digits. Even as we work to improve execution, we expect continued share gains across Bing and Edge, with growth driven by volume. And we expect sequential growth moderation as the contribution from third-party partnerships continues to normalize. And in Xbox content and services, we expect revenue to decline in the mid-single digits against a prior year comparable that benefited from strong content performance, partially offset by growth in Xbox Game Pass. And hardware revenue should decline year over year.
Now, some additional thoughts on the rest of the fiscal year and beyond. First, FX. Based on current rates, we expect FX to increase Q4 total revenue and COGS growth by less than one point, with no impact to operating expense growth. Within the segments, we expect FX to increase revenue growth by roughly one point in productivity and business processes and more personal computing, and less than one point in intelligent cloud. With the strong work delivered in H1 to prioritize investment in key growth areas and the favorable impact from a higher mix of revenue in our Windows OEM and commercial on-prem businesses, we now expect FY26 operating margins to be up slightly. We mentioned the potential impact on Windows OEM and on-premises server markets from increased memory pricing earlier. In addition, rising memory prices would impact capital expenditures, though the impact on Microsoft Cloud gross margins will build more gradually as these assets depreciate over six years. In closing, we delivered strong top-line growth in H1 and are investing across every layer of the stack to continue to deliver high-value solutions and tools to our customers. With that, let's go to Q&A, Jonathan.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Amy. We'll now move over to Q&A. Out of respect for others on the call, we request that participants please only ask one question. Operator, can you please repeat your instructions?

speaker
Operator

Thank you. Ladies and gentlemen, if you would like to ask a question, please press star 1 on your telephone keypad, and a confirmation tone will indicate your line is in the question queue. You may press star 2 if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. And our first question comes from the line of Keith Weiss with Morgan Stanley. Please proceed.

speaker
Keith Weiss
Analyst at Morgan Stanley

Thank you guys for taking the question. I'm looking at a Microsoft print where earnings are growing 24% year on year, which is a spectacular result. Great execution on your part. Top line growing well, margins expanding. But I'm looking at after-hours trading and the stock is still down. And I think one of the core issues that is weighing on investors is CapEx is growing faster than we expected. And maybe Azure is growing a little bit slower than we expected. And I think that fundamentally comes down to a concern on the ROI on this CapEx spend over time. So I was hoping you guys could help us fill in some of the blanks a little bit in terms of how should we think about capacity expansion and what that can yield in terms of Azure growth going forward. More to the point, how should we think about the ROI on this investment as it comes to fruition? Thanks, guys.

speaker
Amy Hood
Chief Financial Officer

Thanks, Keith. Let me start, and Satya can add some broader comments, I'm sure. I think the first thing is that you really asked about a very direct correlation that I do think many investors are drawing, which is between the CapEx spend and the Azure revenue number. And, you know, we tried last quarter, and again this quarter, to talk more specifically about all the places that the CapEx spend goes, especially the short-lived CapEx spend across CPUs and GPUs, and where that will show up. Sometimes I think it's probably better to think about the Azure guidance that we give as an allocated capacity guide about what we can deliver in Azure revenue. Because as we spend the capital and put in GPUs specifically, and it applies to CPUs as well, we're really making long-term decisions. The first thing we're doing is solving for the increased usage and sales and the accelerating pace of M365 Copilot, as well as GitHub Copilot, our first-party apps. Then we make sure we're investing in the long-term nature of R&D and product innovation. And much of the acceleration that I think you've seen from us in products over the past bit is coming because we are allocating GPUs and capacity to many of the talented AI people we've been hiring over the past years. Then you end up with the remainder going toward serving the Azure capacity that continues to grow in terms of demand. And a way to think about it, because I think I get asked this question sometimes: if I had taken the GPUs that just came online in Q1 and Q2 and allocated them all to Azure, that KPI would have been over 40%. And I think the most important thing to realize is that this is about investing in all the layers of the stack that benefit customers. Hopefully that's helpful in terms of thinking about capital growth; it shows up in every piece. It shows up in revenue growth across the business and shows up as OpEx growth as we invest in our people.

speaker
Satya Nadella
Chief Executive Officer

Yeah, I think you, Amy, covered it. But basically, as an investor, when you think about our capital and you think about the GM profile of our portfolio, you should obviously think about Azure. But you should also think about M365 Copilot, and you should think about GitHub Copilot. You should think about Dragon Copilot, Security Copilot. All of those have a GM profile and lifetime value. I mean, if you think about it, acquiring an Azure customer is super important to us, but so is acquiring an M365 or a GitHub or a Dragon Copilot customer, which are all, by the way, incremental businesses and TAMs for us. And so we don't want to maximize just one business of ours. We want to be able to allocate capacity, while we're sort of supply constrained, in a way that allows us to essentially build the best LTV portfolio. That's on one side. And the other one that Amy mentioned is also R&D. I mean, you've got to remember that compute is also R&D, and that's sort of the second element of it. And so we're using all of that, obviously, to optimize for the long term.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Keith. Operator, next question, please.

speaker
Operator

The next question comes from the line of Mark Moerdler with Bernstein Research. Please proceed.

speaker
Mark Moerdler
Analyst at Bernstein Research

Thank you very much for taking my question, and congrats on the quarter. One of the other questions we believe investors want to understand is how to think about your line of sight from hardware CapEx investment to revenue and margins. You capitalize servers over six years, but the average duration of your RPO is two and a half years, up from two years last quarter. How do investors get comfortable that, since a lot of this CapEx is AI-centric, you'll be able to capture sufficient revenue over the six-year useful life of the hardware to deliver solid revenue and gross profit dollar growth, hopefully similar to what you've seen with CPU revenue? Thank you.

speaker
Amy Hood
Chief Financial Officer

Thanks, Mark. Let me start at a high level, and, Satya, you can add as well. When you think about average duration, what you're getting to, and we need to remember, is that average duration is a combination of a broad set of contract arrangements that we have. A lot of them, around things like M365 or the BizApps portfolio, are shorter dated, right? Three-year contracts. And so they have, quite frankly, a short duration. The majority of what remains are Azure contracts that are longer duration. And you saw that this quarter when we saw the extension of that duration from around two years to two and a half. And the way to think about that is, the majority of the capital that we're spending today and a lot of the GPUs that we're buying are already contracted for most of their useful life. So much of that risk that I think you're pointing to isn't there, because they're already sold for the entirety of their useful life. And so part of it exists because you have this shorter-dated RPO because of some of the M365 contracts. If you look at the Azure-only RPO, it's a little bit more extended. A lot of that is CPU-based; it's not just GPU. And on the GPU contracts that we've talked about, including for some of our largest customers, those are sold for the entire useful life of the GPU. And so there's not the risk to which I think you may be referring. Hopefully that's helpful.

speaker
Satya Nadella
Chief Executive Officer

Yeah, and just one other thing I would add, in addition to what Amy mentioned about it already being contracted for its useful life, is that we do use software to continuously run even the latest models on the fleet that is aging, if you will. So that's sort of what gives us that duration. At the end of the day, that's why we think about aging the fleet constantly, right? It's not about buying a whole lot of gear in one year. It's that each year you ride Moore's Law, you add, you use software, and then you optimize across all of it.

speaker
Amy Hood
Chief Financial Officer

And, Mark, maybe to state this in case it's not obvious: as you go through the useful life, you actually get more and more efficient at its delivery. So where you've sold the entirety of its life, the margins actually improve with time. And I think that may be a good reminder to people, as we see that, obviously, in the CPU fleet all the time.

speaker
Mark Moerdler
Analyst at Bernstein Research

That's a great answer. I really appreciate it. Thank you.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Mark. Operator, next question, please.

speaker
Operator

The next question comes from the line of Brent Thill with Jefferies. Please proceed.

speaker
Brent Thill
Analyst at Jefferies

Thanks. Amy, on the 45% of the backlog being related to OpenAI, I'm just curious if you can comment. There's obviously concern about the durability, and I know maybe there's not much you can say on this, but I think everyone's concerned about the exposure, and if you could maybe talk through your perspective and what both you and Satya are seeing.

speaker
Amy Hood
Chief Financial Officer

I think maybe I would have thought about the question quite differently, Brent. The first thing to focus on, and the reason we talked about that number, is that the other 55%, or roughly $350 billion, is related to the breadth of our portfolio: a breadth of customers across solutions, across Azure, across industries, across geographies. That is a significant RPO balance, larger than most peers, more diversified than most peers, and frankly, I think we have super high confidence in it. And when you think about that portion alone growing 28%, it's really impressive work on the breadth as well as the adoption curve that we're seeing, which is, I think, what I get asked most frequently. It's grown by customer segment, by industry, and by geo. And so it's very consistent. And then if you're asking me how I feel about OpenAI and the contract and the health, listen, it's a great partnership. We continue to be their provider of scale. We're excited to do that. We sit underneath one of the most successful businesses ever built, and we continue to feel quite good about that. It's allowed us to remain a leader in terms of what we're building and being on the cutting edge of app innovation.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Brent. Operator, next question, please.

speaker
Operator

The next question comes from the line of Karl Keirstead with UBS. Please proceed.

speaker
Karl Keirstead
Analyst at UBS

Okay. Thank you very much. Satya and Amy, regardless of how you allocate the capacity between first party and third party, can you comment qualitatively on the amount of capacity that you have coming on? I think the one gigawatt added in the December quarter was extraordinary and hints that the capacity adds are accelerating. But I think a lot of investors have their eyes on Fairwater Atlanta, Fairwater Wisconsin, and would love some comments about the magnitude of the capacity adds, regardless of how they're allocated, in the coming quarters. Thank you.

speaker
Amy Hood
Chief Financial Officer

Yeah, Karl, I think we've said a couple of things. We're working as hard as we can to add capacity as quickly as we can. You've mentioned specific sites like Atlanta or Wisconsin. Those are multi-year deliveries, so I wouldn't focus necessarily on specific locations. The real thing we've got to do, and we're working incredibly hard at doing it, is adding capacity globally. A lot of that will be added in the United States; you see the locations you've mentioned. But it also needs to be added across the globe to meet the customer demand that we're seeing and the increased usage. You know, we'll continue to add both long-lived and short-lived assets. The way to think about that is we need to make sure we've got power and land and facilities available, and we'll continue to put GPUs and CPUs in them when they're done, as quickly as we can. And then finally, we'll try to make sure we can get as efficient as we possibly can on the pace at which we do that and how we operate them, so that they can have the highest possible utility. And so I think it's not really about, you know, two places. Karl, I would definitely abstract away from that. Those are multi-year delivery timelines. But really we just need to get it done in every location where we're currently in a build or starting one. We're working as quickly as we can.

speaker
Karl Keirstead
Analyst at UBS

Okay, got it. Thank you.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Karl. Operator, next question, please.

speaker
Operator

The next question comes from the line of Mark Murphy with JP Morgan. Please proceed.

speaker
Mark Murphy
Analyst at JP Morgan

Thank you so much. Satya, the performance achievements of the Maia 200 accelerator, for instance, looked quite remarkable, especially in comparison to TPUs and Trainium and Blackwell, which have just been around a lot longer. Can you put that accomplishment in perspective in terms of how much of a core competency you think silicon might become for Microsoft? And, Amy, are there any ramifications worth mentioning there in terms of supporting your gross margin profile for inference costs going forward?

speaker
Satya Nadella
Chief Executive Officer

Yeah, no, thanks for the question. So a couple of things. One is we've been at this in a variety of different forms for a long, long time in terms of building our own silicon infrastructure. And so we're very, very thrilled about the progress with Maia 200. And, you know, especially when we think about running GPT-5.2 and the performance we're able to get on the GEMMs at FP4, it just proves the point that when you have a new workload, a new shape of a workload, you can start innovating end-to-end between the model and the silicon. And the entire system is not even just about the silicon; it's the way the networking works at rack scale, optimized with memory for this particular workload. And the other thing is we're obviously round-tripping and working very closely with our own superintelligence team with all of our models. As you can imagine, whatever we build will be all optimized for Maia. So we feel great about it. And I think the way to think about it all up is we're in such early innings. I mean, just look at the amount of silicon innovation and systems innovation. Even since December, I think the new thing is everybody's talking about low-latency inference, right? And so one of the things we want to make sure is that we're not locked into any one thing. If anything, we have great partnerships with NVIDIA, with AMD. They're innovating. We're innovating. We want our fleet at any given point in time to have access to the best TCO. And it's not a one-generation game. I think a lot of folks just talk about who's ahead. Just remember, you have to be ahead for all time to come. And that means you really want to think about having a lot of the innovation that happens out there in your fleet so that your fleet is fundamentally advantaged at the TCO level. So that's kind of how I look at it, which is we are excited about Maia. We're excited about Cobalt. We're excited about our DPU, our NICs. So we have a lot of systems capability. That means we can vertically integrate. But just because we can vertically integrate doesn't mean we only vertically integrate. And so we want to be able to have the flexibility here, and that's what you see us do.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Mark. Operator, next question, please.

speaker
Operator

The next question comes from the line of Brad Zelnick with Deutsche Bank. Please proceed.

speaker
Brad Zelnick
Analyst at Deutsche Bank

Great, thank you very much. Satya, we heard a lot about frontier transformations from Judson at Ignite, and we've seen customers realize breakthrough benefits when they adopt the Microsoft AI stack. Can you help frame for us the momentum in enterprises embarking on these journeys and any expectation for how much their spend with Microsoft can expand in becoming frontier firms? Thanks.

speaker
Satya Nadella
Chief Executive Officer

Yeah, thank you for that. So I think one of the things that we are seeing is the adoption across the three major suites of ours: M365, what's happening with security, and GitHub. In fact, it's fascinating. These three things have had effectively compounding effects for our customers in the past; something like Entra as the identity system, or Defender as the protection system across all three, was sort of super helpful. But what you're seeing now is something like WorkIQ, right? So, just to give you a flavor for it, the most important database underneath for any company that uses Microsoft today is the data underneath Microsoft 365. And the reason is because it has all this tacit information, right? Who are your people? What are their relationships? What are the projects they're working on? What are their artifacts, their communications? So that's a super important asset for any business process or business workflow context. In fact, the scenario I even had in my remarks was around how you can now take WorkIQ as an MCP server in a GitHub repo and say, hey, please look at my design meetings for the last month in Teams and tell me if my repo reflects it. I mean, that's a pretty high-level way to think about how what was happening previously, perhaps with our tools business and our GitHub business, is suddenly now being transformative, right? That agent control plane is really transforming companies in some sense. That's, I think, the most magical thing, which is you deploy these things, and suddenly the agents are helping you coordinate and bring more leverage to your enterprise. Then on top of it, of course, there is the transformation, which is what businesses are doing: how should we think about customer service? How should we think about marketing? How should we think about finance? How should we think about that and build our own agents? That's where all the services in Fabric and Foundry and, of course, the GitHub tooling are helping them, or even the low-code, no-code tools. I had some stats on how much that's being used. But one of the more exciting things for me is these new agent systems, with M365 Copilot, GitHub Copilot, and Security Copilot all coming together to compound the benefits of all the data and all the deployment. I think that is probably the most transformative effect right now.

speaker
Brad Zelnick
Analyst at Deutsche Bank

Thank you. Very helpful.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Brad. Operator, we have time for one last question.

speaker
Operator

And the last question will come from the line of Raimo Lenschow with Barclays. Please proceed.

speaker
Raimo Lenschow
Analyst at Barclays

Perfect. Thanks for squeezing me in. The last few quarters, besides the GPU side, we talked about the CPU side of Azure as well, and you had some operational changes at the beginning of January last year. Can you speak to what you saw there and maybe put it in a bigger picture in terms of clients realizing that their move to the cloud is important if they want to deliver proper AI? So what are we seeing in terms of cloud transitions? Thank you.

speaker
Jonathan Price
Vice President of Investor Relations

I didn't quite catch that. Sorry, Raimo, you were asking about the CPU side, or can you just repeat the question, please?

speaker
Raimo Lenschow
Analyst at Barclays

Yeah, sorry. So I was wondering about the CPU side of Azure because we had some operational changes there. And we also hear from the field a lot that people are realizing they need to be in the cloud if they want to do proper AI and if that's kind of driving momentum. Thank you.

speaker
Satya Nadella
Chief Executive Officer

Yeah, I think I get it. So first of all, I had mentioned in my remarks that when you think about AI workloads, you shouldn't think of AI workloads as just AI accelerator compute, right? Because in some sense, take any agent. The agent will then spawn, through tool use, maybe a container, which obviously runs on compute. In fact, whenever we think about building out the fleet, we think of it in ratios. Even a training job, by the way, an AI training job, requires a bunch of compute and a bunch of storage very close to the compute. And the same thing applies in inferencing as well. So inferencing with agent mode would require you to essentially provision a computer or computing resources to the agent. It's not just GPUs; they're running on GPUs, but they also need computers, which are compute and storage. So that's what's happening even in the new workloads. The other thing you mentioned is that cloud migrations are still going on. In fact, one of the stats I had was our latest SQL Server growing as an IaaS service in Azure. And so that's one of the reasons why we have to think about our commercial cloud and keep it balanced with the rest of our AI cloud, because when clients bring their workloads and build new workloads, they need all of these infrastructure elements in the region in which they're deploying.

speaker
Raimo Lenschow
Analyst at Barclays

Okay, perfect. Thank you.

speaker
Jonathan Price
Vice President of Investor Relations

Thanks, Raimo. That wraps up the Q&A portion of today's earnings call. Thank you for joining us today, and we look forward to speaking with you all soon. Thank you all. Thank you.

speaker
Operator

Thank you. This concludes today's conference. You may disconnect your lines at this time, and we thank you for your participation. Have a great night.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
