2/24/2026

speaker
John
Conference Operator

Good morning and thank you for standing by. My name is John and I will be your conference operator today. At this time, I would like to welcome everyone to the DigitalOcean fourth quarter earnings conference call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. To withdraw your question, simply press star one again. I would now like to turn the conference over to Melanie Strait, Head of Investor Relations. Please go ahead.

speaker
Melanie Strait
Head of Investor Relations

Thank you and good morning. Thank you all for joining us today to review DigitalOcean's fourth quarter and full year 2025 financial results and an investor update. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer, and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC, as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release, as well as in our investor presentation that outlines the discussion on today's call. A webcast of today's call is also available in the IR section of our website. And with that, I will turn the call over to Paddy.

speaker
Paddy Srinivasan
Chief Executive Officer

Thank you, Melanie. Good morning, everyone, and thank you for joining us. We had a fantastic quarter and a very strong finish to the year, and I'm excited to share the details with all of you. We ended the year with 18% revenue growth in Q4, reaching $901 million for the full year. We delivered $51 million in incremental organic ARR, the highest in the company's history. Our million-dollar customers reached $133 million in ARR, growing at 123% year-over-year. We maintained financial discipline and strong profitability with 42% adjusted EBITDA margins and 19% adjusted free cash flow margins for the year. There is a lot to be excited about. And given the momentum we are seeing and the progress we are making against our long-term strategy, we wanted to provide a more comprehensive update today rather than wait for a separate investor day. Our prepared remarks will be slightly longer than usual. We'll advance slides from our earnings presentation on the webcast as we go, and we'll leave plenty of time for questions. AI is reshaping entire industries, and we are built for this shift. Software is being disrupted, not by incremental AI features, but by a structural shift to agentic systems operating at scale. Cloud and AI native disruptors are moving beyond AI experimentation at breakneck speed. They're deploying agents that reason, act, retain memory, and run continuously. In this structural shift, we see a secular, hyperscale-sized opportunity in serving the AI and cloud-native companies driving this disruption. When markets are disrupted like this, there's typically a short window to take advantage of the opportunity, and let me tell you how we are seizing it. First, our top customers are now our growth engine. We have turned what was once viewed as a weakness into a competitive strength.
Our top digital native enterprises, or DNEs, which include cloud and AI native companies, are now our fastest growing cohort, and in fact are growing significantly faster than the market on DigitalOcean. In a nutshell, scaling our top customers was once a constraint. Today, it's our growth engine. Second, we're on the right side of software disruption driven by AI. Modern cloud and AI native companies are going after large markets with disruptive AI-centric software innovation. They're increasingly choosing DigitalOcean as their natural platform to build and scale their agentic AI software. And when these companies disrupt and scale at unprecedented rates on our platform, we win. Third, we put the cloud in NeoCloud. These AI natives need more than just GPU rentals or inference APIs. They need access to optimized AI models, both closed and open source, production grade inferencing, and a full stack cloud for their software, all working together at global scale. We deliver all of it in one integrated agentic inference cloud. And finally, we are building a durable and profitable growth engine. We are investing responsibly while driving balanced growth. Without chasing the GPU training arms race, we expect to deliver 21% revenue growth in 2026, reaching 25% plus growth by Q4 2026, and 30% growth in 2027. We are on a path to being a weighted rule of 50 company next year on the back of our existing committed data center capacity alone. Put simply, we are accelerating growth the DigitalOcean way. In December, we crossed a major milestone, surpassing a billion dollar revenue run rate. This is a remarkable achievement for a company that was founded through Techstars in 2012. This success is a testament to our passionate team and the vision of our original founders. I also extend my deepest gratitude to all our incredible customers who have supported us throughout this journey. But what matters more than this milestone is where we are going.
We exited 2025 at 18% year-over-year growth, and we are on a path to deliver 21% growth in 2026 with an exit growth rate of 25% plus in Q4 of 2026. We are picking up momentum, and we have outgrown the old narrative. Let me elaborate. Our top customers are now our growth engine. For our first decade, we built an iconic developer cloud. That foundation still matters, and we have over 4 million active developers on our platform who absolutely love us. Over the last several quarters, we have deliberately shifted focus towards serving our top DNEs and eliminating any reason for them to leave DigitalOcean as they scale. And that focus is working. In Q4, we delivered record organic incremental ARR of $51 million, and $150 million on a trailing 12-month basis, both surpassing even our peak COVID-era quarters. This record trailing 12-month incremental ARR was balanced across AI and cloud customers. ARR from DNEs reached $604 million in Q4, which is now 62% of total ARR, growing 30% year-over-year. And our DNE NDR reached 102%, continuing to outperform developer NDR. And like I've been reporting for a while now, our largest customers in the DNE cohort are accelerating the fastest. Our $100,000 customers are growing at 58%. Our $500,000 customers are growing at 97%. And our million-dollar customers, who reached $133 million in ARR, are growing at 123% year-over-year, all well ahead of market growth rates. And NDR also increases meaningfully as these customers scale. Q4 NDR was 102% for our $100,000 customers, 106% for our $500,000 customers, and 115% for our million-dollar customers. Churn for our $1 million customers was zero in Q4 and has averaged 0% over the last 12 months, which clearly shows that our top customers are now scaling with us and becoming our growth engine. This should also effectively debunk any misconception that our most successful customers will outgrow our platform.
Recapping this section, we are accelerating past the one billion dollar revenue run rate milestone, and our top customers are driving this acceleration. We are no longer defined just by entry-level developers experimenting on our platform. We are defined by high-growth cloud and AI-native companies running production workloads, scaling revenue, and building their businesses on DigitalOcean. Said simply, scaling our top customers was once a constraint. Today, it's our growth engine. On to the next point. We are on the right side of software disruption. There is a structural shift happening in software, and DigitalOcean is emerging as a preferred platform for the cloud and AI-native companies that are driving this disruption. The last generation of Software as a Service, or SaaS, monetized per user, per seat. Value scaled with headcount. This next generation of AI-centric software monetizes per token, per inference request. Value scales with intelligence delivered. As AI model capabilities accelerate, entire categories of horizontal and vertical software are being reinvented. Incumbents are reacting to transformational change by layering AI into their workflows, seeking to enhance their existing software. But AI native companies are starting from first principles. For them, AI isn't a feature. It is the very engine that defines their product. Every time they deliver value, inference runs, tokens are consumed, and intelligence is produced. DigitalOcean is uniquely positioned to serve these disruptors, and that is evident in the traction we are getting from leading AI-native companies. We have signed and expanded production workloads with scaled cloud and AI-native companies like Character AI, for Cato, and Hippocratic AI, companies with product market fit, real revenue, and rapidly scaling demand. Our work with Character AI demonstrates this clearly.
We delivered a 100% throughput increase and roughly 50% lower cost per token for Character AI on our production inference cloud, powered by AMD Instinct GPUs at production scale. This is not a lab benchmark. This is on live traffic across tens of millions of users. This demonstrates our ability to support production scale inferencing for leading AI companies with our differentiated performance, cost efficiency, and integrated AI and cloud platform built for inference-first production workloads. Another AI native with proven product market fit is Hippocratic AI, which builds healthcare-focused conversational AI designed to support clinical workflows and patient engagement. Hippocratic AI selected DigitalOcean's agentic inference cloud to power HIPAA-compliant clinical AI workloads. This validates not just our performance, but our enterprise-grade security and compliance. For Hippocratic AI, we optimized their multimodal deployment on NVIDIA hardware, reinforcing the importance of vertical innovation from GPUs to networking, kernel optimization, cloud integration, and inference software. These AI natives also scale very differently. While traditional cloud customers may take years to reach a million dollars in ARR, AI natives can cross that threshold in months or even weeks. When inference is your product, demand compounds quickly. DigitalOcean is purpose-built for these disruptors. As software becomes more intelligent and AI-centric, we are building the vertically integrated inferencing cloud designed to power the next generation of AI natives, putting us squarely on the right side of this AI-driven disruption. And our agentic inference cloud is catalyzing these disruptors. Next, let me explain how we are enabling this. We do this by putting the cloud in NeoCloud.
Over the last couple of years, a new category of NeoClouds has emerged that is largely optimized for one thing: large-scale AI model training, with dense GPU farms, high-performance networking, and frontier AI model training workloads. This is an important layer of the AI stack. But serving inference is different. As AI diffuses into every software company, workloads shift from training a handful of frontier models to running millions of real-world applications. And real-world AI-centric software needs more than GPU farms. It needs compute, storage, databases, networking, observability, and security, all working seamlessly together with predictable and transparent unit economics. Over the past four quarters, we have evolved our agentic inference cloud to meet that reality. We have combined specialized inference infrastructure with our full stack cloud platform, purpose-built for production AI, while staying true to what defines DigitalOcean: simplicity, open standards, enterprise-grade performance and SLAs, and predictable and transparent unit economics. A good recent example of this in action is OpenClaw, which recently took the world by storm by demonstrating the power of agentic software, giving us a glimpse into what the AI-centric software future will look like. OpenClaw is an open-source AI agent framework that allows developers to run real-world, task-driven agents. When customers deploy OpenClaw on DigitalOcean, they need more than just GPUs, because AI agents are stateful. They reason. They take action. They retain memory. They interact with third-party APIs. All this requires more than just a GPU farm. It takes a full cloud and AI stack working side by side. Customers increasingly understand this, as inference is the heartbeat of modern AI natives. It is their primary operating cost, their performance lever, and their competitive moat. Their production traction scales directly with model quality, inference performance, and unit economics.
As they grow, they don't build their products around a single closed source model, but rather orchestrate multiple models in real time, often leveraging open source and mixture-of-experts approaches to optimize both accuracy and unit economics. Our platform delivers flexibility at every layer, from serverless inference APIs to dedicated clusters and GPU droplets, allowing customers to precisely match performance and cost to their workload requirements. We pair that with performance optimized open source models delivering high accuracy, strong throughput, low latency, and compelling unit economics. And this isn't a standalone inference platform. It is deeply integrated with the full stack cloud that we have hardened over the last dozen years, so that customers can build, deploy, and scale their entire AI application in one integrated environment with enterprise SLAs. Our agent development platform takes them from experimentation to production with real-world AI agents. Underpinning all of this is a deep lineup of GPUs from NVIDIA and AMD, supported by a rapidly expanding global data center footprint, built and operated with years of operational expertise supporting mission-critical workloads. This integrated platform and flexibility of choice is precisely what makes DigitalOcean a natural platform for agentic software. Let me explain this again using OpenClaw as an example. Customers can build and deploy OpenClaw agents on DigitalOcean in two distinct ways, depending on their need for control, scale, and operational complexity. The first path optimizes for simplicity and speed. Customers can launch a pre-configured one-click GPU droplet and have an OpenClaw agent running in minutes. This model gives full control over the environment, ideal for experimentation, customization, performance tuning, and for teams that want direct access to the infrastructure layer. The second path optimizes for global scale.
Customers can deploy OpenClaw on DO's managed serverless platform, where DigitalOcean handles provisioning, scaling, security, container orchestration, and operational management. This approach is ideal for teams that are scaling a global application. Both approaches run on the same integrated cloud, with access to managed databases for agentic memory, object storage for artifacts, virtual private cloud networking, observability, and GPU-backed inference. That's what vertical integration looks like in the inference economy: not just providing bare metal GPUs or even just generating inference tokens, but providing a secure, scalable, and manageable foundation for intelligent stateful systems. Within days of launching OpenClaw, nearly 30,000 native DigitalOcean one-click OpenClaw droplets were created. And that was just the starting point. Thousands of other OpenClaw deployments were activated by customers, signaling the emergence of a new ecosystem almost overnight. The success of OpenClaw is an early view of how the AI market will continue to evolve, and it can serve as a blueprint for AI native businesses on how a new generation of software will be built around autonomous agents that orchestrate complex multi-step workflows across systems, continuously reason with data and context, and execute tasks end to end with minimal human involvement. As these AI native companies move from proof of concept to production agents, the richness of the underlying platform, the security posture, manageability, scalability, and predictable unit economics become mission critical. And that is exactly where DigitalOcean is fast emerging as the natural platform for building and scaling AI agentic software. The competitive landscape is crowded with companies speaking to their ability to address the inference market. But our differentiation from these competitors is very clear. NeoClouds rent out GPUs. Inference wrapper providers stop at inference APIs and model libraries.
We continue to effectively compete with hyperscalers, who bring scale but also come with complexity and cost structures that are aimed at traditional large enterprise companies. While each of these competitors addresses a component of the inference value chain, real-world agentic software requires a tightly integrated environment where inference, orchestration, persistence, networking, and security are designed to work together with simplicity, global scale, enterprise SLAs, and predictable unit economics. That is where DigitalOcean wins. This differentiation is clear to our customers, but it's also very clear in our financial profile. As a full-stack cloud provider that has operated mission-critical workloads for cloud and AI natives for over a decade, we look very different from a financial perspective than other players chasing the AI training market or components of the inference market. NeoClouds have very high revenue concentration, with just a few very large customers making up the vast majority of their revenue. DigitalOcean's top 25 customers represent only 10% of our revenue. While GPU rental providers earn bare metal revenue and margins on their infrastructure, DigitalOcean drives higher revenue and margin from our full-stack inference and cloud solutions. And while a growing number of NeoClouds are investing massive amounts of capital and burning near-term profits and cash for future returns, DigitalOcean is already profitable and generating cash. Our traction with cloud and AI natives is no accident. It is the result of relentless, focused investment and disciplined execution. We recently strengthened our executive team by adding Vinay Kumar as our chief product and technology officer. As a founding member of Oracle Cloud Infrastructure, or OCI, Vinay brings deep hyperscale expertise and leads our product, platform, infrastructure, and security teams.
Having built a hyperscaler from the ground up at OCI, he looks forward to scaling up another one at DigitalOcean, one that is purpose-built to meet the complex needs of cloud and AI native workloads globally. In the meantime, our R&D team has been very busy continuing to ship products and features that are helping our customers scale on our platform. On Core Cloud, we launched remote MCP support, embedding AI directly into the control plane and enabling secure, zero setup infrastructure management. On our AI platform, we introduced the agent development kit and enhanced agent evaluation tools to help customers move from experimentation to production with measurable performance and reliability. With GPU observability, managed NFS, and multi-node GPU support, we significantly expanded our ability to run large scale, mission critical inference in production. This is what vertical integration looks like: infrastructure, inference, observability, and agent tooling, all built to work and scale together seamlessly. And we're just getting started. We'll share the next wave of innovation on our agentic inference cloud at our next Deploy conference in San Francisco on April 28, as we continue building the platform purpose-built for the inference economy. Our differentiation is durable and will continue to grow as the market shifts from training to inference. To give investors clearer visibility into this momentum, we are introducing a new metric, AI customer revenue. AI customer revenue includes all revenue from customers leveraging our AI products, including both inference and core cloud services, because AI natives don't just buy GPUs. They build, operate, and scale applications, which need a full stack inference cloud. In fact, 70% of our AI customer ARR in Q4 2025 was already coming from inference services or general purpose cloud products rather than from bare metal GPU rentals.
And these customers are growing rapidly, with Q4 AI customer ARR reaching $120 million, growing 150% year-over-year and now making up 12% of total ARR. In summary, we don't just rent GPUs. We run production AI. We're not a GPU landlord. We are an AI cloud platform. We deliver hyperscaler-grade infrastructure and reliability, with purpose-built inference services co-located and integrated with a full-stack general-purpose cloud designed for the next generation of AI natives. Or put simply, DigitalOcean puts the cloud in NeoCloud. Now on to my final takeaway. We are building a durable and profitable growth engine. At our investor day last April, we laid out a plan to return the business to 18% to 20% growth by 2027. On our last earnings call, we pulled that growth projection forward by a full year, guiding that we would reach that 18% to 20% growth range in 2026. And just nine months after setting that original plan, we've already reached the bottom end of the target range at 18% growth in Q4 of 2025, achieving it two full years ahead of our original target. And the momentum we are seeing gives us even greater confidence. We now expect to deliver 21% revenue growth for the full year 2026, with an exit growth rate of 25% plus by Q4, and reaching 30% growth in 2027. As we ramp into our committed 31 megawatts of incremental capacity this year, there will be measured near-term pressure on gross margin and adjusted EBITDA, but we remain confident in our 18% to 20% unlevered adjusted free cash flow margin guide for the year. The near-term pressure is just a physics problem given the startup cost timing and revenue ramp characteristics of quickly adding new capacity. It is the natural result of pursuing high return growth opportunities, but we remain disciplined operators. Demand continues to far outstrip supply, and we will take advantage of opportunities to further accelerate growth when they present themselves.
We will do so responsibly, and we'll continue to pursue investments with attractive returns, match investments with revenue timing, maintain a strong balance sheet, and allocate capital with rigor, even as we accelerate. Growth and discipline are not trade-offs for us. They're both operating principles. With that, I will turn it over to Matt to walk through the quarter and the year in more detail and to provide additional color on our updated outlook. Matt, over to you.
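As a concrete illustration of the one-click GPU droplet path described above, here is a minimal sketch using DigitalOcean's public REST API. The `POST /v2/droplets` endpoint is real, but the GPU size slug, the OpenClaw one-click image slug, and the SSH key ID below are hypothetical placeholders for illustration, not confirmed product identifiers.

```python
import json
import urllib.request

API_URL = "https://api.digitalocean.com/v2/droplets"


def build_droplet_request(name, region, size, image, ssh_key_ids):
    """Assemble the JSON body for DigitalOcean's droplet-create endpoint."""
    return {
        "name": name,
        "region": region,
        "size": size,    # GPU droplet size slug (placeholder below)
        "image": image,  # one-click image slug (placeholder below)
        "ssh_keys": ssh_key_ids,
    }


def create_droplet(token, payload):
    """POST the payload to the API. Requires a real API token; not called here."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Hypothetical slugs for illustration only; real slugs come from the
# /v2/sizes and /v2/images endpoints.
payload = build_droplet_request(
    name="openclaw-agent-01",
    region="nyc3",
    size="gpu-h100x1-80gb",
    image="openclaw-one-click",
    ssh_key_ids=[12345],
)
print(json.dumps(payload, indent=2))
```

The second, serverless path described in the remarks would instead go through DigitalOcean's managed application platform rather than provisioning a droplet directly.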

speaker
Matt Steinfort
Chief Financial Officer

Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy just shared, we're a very different company today than we were just a few years ago. It's an exciting time at DigitalOcean. We are a rapidly growing and profitable company that is incredibly well positioned to take advantage of the hyperscale-sized inference market opportunity. This excitement is clearly evident in both our recent financial performance and in our higher near-term and long-term outlooks. Revenue growth has reaccelerated. We've reversed declines from our top customers, turning them into a key driver of our growth. We have scaled our AI customer ARR to $120 million, growing 150% year over year. And we've done this profitably, growing adjusted EBITDA and adjusted free cash flow on both an absolute and a margin basis. While we are pleased with our progress over the past several years, it is our recent momentum that gives us the confidence to further increase our near-term and long-term outlooks. Fourth quarter revenue was $242 million, up 18% year-over-year, and we closed 2025 with full-year revenue of $901 million. We delivered sustained acceleration through the back half of 2025, driving a 500 basis point increase in Q4 growth from the same period just a year ago. We delivered this accelerated revenue growth with strong margins and growing profits, even as we increased our investments. Fourth quarter gross profit was $142 million, up 13% year over year, with a gross margin of 59%. For the full year, gross profit was $540 million, up 16% year-over-year, with a gross margin of 60%. Adjusted EBITDA in the fourth quarter was $99 million, an adjusted EBITDA margin of 41%. Full-year adjusted EBITDA was $375 million, a 42% adjusted EBITDA margin. Trailing 12-month adjusted free cash flow was $168 million in Q4, or 19% of revenue. We maintained our attractive free cash flow margins in 2025 in part by expanding our financial toolkit to include equipment financing.
This better aligns infrastructure investment timing with the revenue that it supports. We will continue to utilize a combination of upfront asset purchases and equipment leasing as we invest to fuel our growth. We continue to be disciplined financial stewards for our investors. We prudently use stock-based compensation to attract and retain our critical talent while repurchasing shares to mitigate dilution. SBC declined to 9% of revenue in 2025, down from 12% in the prior year. To put that number in context, we have a 33% margin if you subtract SBC from adjusted EBITDA. At a 33% margin, we are just above the 80th percentile of a broad software comp set on an adjusted EBITDA less SBC basis, and we are well above the 13% median of that group. Non-GAAP weighted average shares outstanding increased slightly from 103 million to 105 million over the same period. To reduce dilution, we repurchased 2.4 million shares in 2025 for $82 million, at an average price of approximately $35. Note that we ended 2025 with our full $100 million buyback authorization in place, and that authorization continues through July 31st of 2027. While we continue to view share repurchases as an important long-term tool, our near-term capital allocation priorities are squarely focused on organic growth and balance sheet flexibility. GAAP diluted net income per share was 24 cents in the quarter and $2.52 for the full year, a 183% year-over-year increase. Non-GAAP diluted net income per share in the quarter was 44 cents. For the full year, non-GAAP diluted net income per share was $2.12, a 10% year-over-year increase. As a quick reminder, recall that our 2025 net income per share metrics were impacted by the actions we took in 2025 to strengthen our balance sheet. In 2025, we proactively addressed the upcoming maturity of our 2026 convertible notes. We did this through a series of successful financing transactions that have given us significant balance sheet flexibility.
These transactions included the establishment of an $800 million bank facility, the issuance of $625 million of 2030 convertible notes, and the repurchase of the majority of our then outstanding 2026 convertible notes. Excluding the effects of these financing transactions, non-GAAP diluted net income per share would have been $2.29 for the year and $0.53 for the quarter. With our 2026 notes largely addressed, we ended the year with a strong balance sheet. We have sufficient liquidity and projected cash generation to address the remaining $312 million balance of our outstanding 2026 convertible notes. Having drawn down the remaining $120 million on our Term Loan A in February, we will repurchase or redeem the remaining 2026 notes for cash at or before their maturity in December of 2026. Beyond this, we have no other material maturity until 2030, and we ended 2025 with approximately 3.2 times net leverage. Before I get into guidance, I want to highlight an action we are taking to further concentrate our investments on our key growth levers. We are sunsetting a small legacy dedicated bare metal CPU offering. We expect approximately $13 million of ARR to roll off by the end of Q1 2026. As this revenue is non-core, we've excluded this legacy product revenue from our customer-specific year-over-year growth metrics. Shifting back to guidance, we entered 2026 with tremendous momentum and confidence. Paddy spoke of the material demand we're seeing for our agentic inference cloud. We also continue to improve visibility on our near-term revenue growth. We increased RPO in Q4 to $134 million, up 121% sequentially and close to 500% year-over-year. With this growing demand and visibility, we are again increasing our near-term growth outlook. For the first quarter of 2026, we expect revenue in the range of $249 to $250 million, which is approximately 18% to 19% year-over-year growth. We expect first quarter adjusted EBITDA margins in the range of 36% to 37%.
We expect non-GAAP diluted net income per share of 22 to 27 cents, based on approximately 111 to 112 million weighted average fully diluted shares outstanding. For the full year 2026, we expect revenue growth between 19% and 23%. This is 21% at the midpoint, beyond the 18% to 20% growth outlook that we shared just last quarter. And it is important to highlight that this would be 21% to 24% projected growth if we exclude the impact of our discontinued legacy bare metal CPU offering. We will deliver this accelerated growth while maintaining attractive margins. We project full year 36% to 38% adjusted EBITDA margins and 18% to 20% unlevered adjusted free cash flow margins, which is $207 million at the midpoint. We expect non-GAAP diluted net income per share of 75 cents to a dollar, on 111 to 112 million weighted average fully diluted shares outstanding. This growth outlook is based on the incremental data center and GPU capacity investments that we have already committed and that will come online over the course of 2026. As we look at the quarterly progression within 2026, it is important to understand the timing of this incremental capacity and how that timing impacts our financials. We are bringing 31 megawatts of new data center capacity online in three new facilities in 2026. The smallest of our three new facilities will start ramping revenue in the second quarter. The remaining two start ramping revenue in the second half of 2026. Aligned with this capacity ramp, we expect second quarter revenue growth to remain around 18% to 19%, with revenue growth then ramping in Q3 before exiting the year at 25% plus in Q4. While there are always supply chain and implementation timing risks to manage, we believe our implementation timeline is realistic. Increased data center lease expense and equipment depreciation expense will both hit our financials several months before we generate our first revenue in these facilities.
Given this lag between expenses and revenue, cost of goods sold from higher GPU-related depreciation and operating expenses from new data center operating leases will both increase in the early part of the year as we ramp into the new capacity. These increased costs will cause the expected upfront drops in gross margin and net income that we've seen when we turned up previous data centers. The initial impact will just be larger, as we are turning up more capacity at one time than we've done in the past. Near-term adjusted EBITDA margins will also be impacted somewhat by these dynamics, although the impact is less, as adjusted EBITDA is only impacted by the higher data center operating lease expense. Net leverage is projected to be above four times in the short term as we add finance lease obligations to fund our GPU and CPU investments, and this increases net debt several months ahead of revenue and adjusted EBITDA ramping. We anticipate returning to below four times net leverage over the medium to long term as we increase utilization in these data centers and ramp revenue and adjusted EBITDA. We will achieve these growth targets by focusing on our two primary growth levers: scaling our top DNE customers and expanding our base of AI native customers. We will focus our investments on meeting the needs of our top DNE customers so that they can continue to scale on DigitalOcean as they grow their own businesses. We will continue to invest both in our differentiated agentic inference cloud and in the data center and GPU capacity required to support AI natives. While we are excited by our growth potential in 2026, we are just getting started. As we reach full utilization on our existing committed capacity, we expect to reach 30% revenue growth in 2027. We will drive this growth while delivering projected 20% plus unlevered adjusted free cash flow margins, which would make us a rule of 50 plus company in 2027.
We will achieve this while making smart investments, earning attractive margins, and maintaining a healthy balance sheet. We have both the tools and the discipline in place to continue to take advantage of opportunities as they arise. We will continue to share details on our leading indicators and our progress as we execute. We are increasingly confident in our ability to build a durable and profitable growth engine. With that, I'd like to turn it back over to Patty to close us out before we get to Q&A.
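The net leverage dynamic described in the prepared remarks (finance lease obligations raising net debt several months before the capacity they fund generates EBITDA) can be sketched as a toy calculation. Only the above-four-times and below-four-times thresholds come from the call; every dollar figure below is an illustrative placeholder, not a company number.

```python
# Toy sketch of the net leverage dynamic: net leverage = net debt / trailing
# adjusted EBITDA. Finance leases raise net debt ahead of the revenue and
# EBITDA ramp, pushing leverage above 4x before it falls back below.
# All dollar amounts are made-up placeholders for illustration.

def net_leverage(net_debt_m: float, adj_ebitda_m: float) -> float:
    """Net leverage ratio from net debt and trailing adjusted EBITDA ($M)."""
    return net_debt_m / adj_ebitda_m

ebitda = 350.0        # assumed trailing adjusted EBITDA, $M
debt_before = 1300.0  # assumed net debt before new finance leases, $M
new_leases = 250.0    # assumed finance leases for GPU/CPU gear, $M

print(round(net_leverage(debt_before, ebitda), 2))               # 3.71 (before)
print(round(net_leverage(debt_before + new_leases, ebitda), 2))  # 4.43 (during ramp)
# Once utilization and EBITDA catch up, leverage falls back below four times:
print(round(net_leverage(debt_before + new_leases, 450.0), 2))   # 3.44 (after)
```

The point of the sketch is only the shape of the curve: debt steps up first, the denominator catches up later.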

speaker
Patty Srinivasan
Chief Executive Officer

Thank you, Matt. Before we move to Q&A, let me leave you with a few thoughts. We crossed a billion dollar revenue run rate in December, but that milestone is not the headline. The headline is where we are heading. We are no longer a niche developer cloud. We're the platform that high-growth cloud and AI natives are increasingly choosing to run production AI workloads at scale. We are projecting to exit 2026 at 25% plus revenue growth, with a clear path to 30% growth in 2027 with the existing committed data center capacity alone. Our top customers are accelerating and are growing significantly faster than the market on DO. We have outgrown the old DigitalOcean narrative. Scaling our top customers was once a constraint. Today, it's our growth engine. Our million dollar customers are at $133 million ARR, growing at 123% year over year. The world of software is shifting from seats to tokens, from experimentation to production, from model training to inferencing at scale. And in that shift, the winners in inference will be more than just GPU landlords. They will be vertically integrated AI cloud platforms that deliver performance, great unit economics, and simplicity that embraces open source, exactly what we have and what we continue to build. Our AI customer ARR reached $120 million in Q4, growing 150% year over year, with 70% of that coming from inference and core cloud products, not from bare metal. And we're doing it without chasing the GPU training arms race, without sacrificing discipline, without compromising profitability. We're building something durable. AI is reshaping entire industries, and we are built for this shift. I'm incredibly excited to be part of DigitalOcean at this critical inflection point, where a new era of software is being ushered in. I take incredible pride in building a platform that AI pioneers are increasingly leveraging to disrupt software. I thank all of you for your partnership and support.
And I hope you will join us in San Francisco on April 28th to learn about our platform, our innovation, and our customers. With that, let's open it up for your questions.

speaker
John
Conference Operator

Thank you. Ladies and gentlemen, we will now begin the question and answer session. At this time, I would like to remind everyone, in order to ask a question, please press star followed by the number one on your telephone keypad. And if you would like to withdraw your question, please press star one again. We would like to ask everyone to limit themselves to one question and one follow-up only to accommodate all questions. Thank you. Our first question comes from the line of Raimo Lenschow with Barclays. Please go ahead.

speaker
Raimo Lenschow
Analyst, Barclays

Perfect. Thank you. Congrats from me. It's amazing how the company is transforming right in front of my eyes. Patty, can we talk a little bit about the customers that you're seeing? The talk in the market, a lot of that is just OpenAI, Anthropic, maybe Google, and they are basically doing everything, and nobody else really comes up. When you talk with, when you look at your customers, looking at the pipeline of customers out there, how do you see that inference market evolving in terms of how broad it will be? Is it just Anthropic doing everything, or what are you seeing out there in the field? And then I had one follow-up from that.

speaker
Patty Srinivasan
Chief Executive Officer

Yeah. Raimo, thank you for the question. It's a very thoughtful way to get started. Of course, OpenAI, Gemini, and Anthropic get all the headlines in the mainstream news coverage. But as we talk to AI native companies, including the examples that I was using in my script, and you will hear a lot more about this at our Deploy conference with very specific benchmarks and data, what we are hearing from these AI native companies is that while these closed source models are really, really good, the open source alternatives are extraordinarily important to manage the unit economics as these companies scale. The cost per token for the open source models is about 90% cheaper, with very comparable accuracy as these open source models mature. So we have many AI native customers that are using, as I mentioned, a variety of open source models in real time when they're doing inferencing. They want us to manage a multitude of open source models and even route requests intelligently to these open source models, and of course use closed source, expensive models on a case-by-case basis, for certain problems which are better served by these closed source models, and route everything else to these open source models so that they can have balanced unit economics. And if you look at data from OpenRouter, 30% of the traffic already today is served by open source. That is without a lot of optimization, without companies like DigitalOcean really stepping up and taking full ownership and guardianship of these open source models. So we are doing a lot of work in this regard over the next couple of months, and you will see it at our Deploy conference. But this 30% is only going to grow as these real-world AI native workloads explode. We are going to see a lot of open source adoption. Even in the OpenClaw deployments that we are seeing, there is very healthy adoption of open source models serving these OpenClaw agents.
So it is really interesting to see how this is evolving. And I want to say there is definitely a world beyond these closed source models. The open source ecosystem is thriving and it is only going to grow in strength from here on.

speaker
Raimo Lenschow
Analyst, Barclays

Yeah. Okay. Perfect. Thank you for that, Patty. And Matt, one question that comes up a lot at the moment is on the weighted rule of 50 numbers. There are a lot of questions about your weighting and about the free cash flow margins. If you think about 2027, can you maybe go a little bit deeper there? Because that comes up a lot at the moment.

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, thanks, Raimo. The weighted rule of 50 is pretty simple for us. We multiply revenue growth by 1.5 and add 0.5 times the free cash flow margin. That's effectively saying that you're counting a point of revenue growth as three times as valuable as a point of free cash flow margin. But the important thing to note is that while we talk about a weighted rule of 50, if you look at the growth projections we provided, we're actually a regular, unweighted rule of 50 as well, with projected 30% revenue growth in 2027 and 20% unlevered free cash flow margins. So that is, I think, a very big testament to the growth opportunity that we have in front of us, but also the financial discipline that we've been employing. The ability to accelerate revenue growth while still maintaining very attractive EBITDA margins and very attractive free cash flow margins is part of the model, and it's the benefit of us not chasing the GPU training arms race. We believe that we'll differentiate based on software and a differentiated platform, and we see a tremendous opportunity to drive really attractive margins as we expand and invest appropriately.
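The weighted rule of 50 arithmetic Matt describes can be written out directly. The 1.5x and 0.5x weights and the projected 2027 figures (30% growth, 20% unlevered free cash flow margin) come from the call; the code is just an illustrative sketch of the formula.

```python
# Weighted "rule of 50" as described on the call: revenue growth is weighted
# 1.5x and free cash flow margin 0.5x, so a point of growth counts three
# times as much as a point of margin (1.5 / 0.5 = 3).

def weighted_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Score = 1.5 * growth + 0.5 * free cash flow margin."""
    return 1.5 * revenue_growth_pct + 0.5 * fcf_margin_pct

def plain_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Unweighted version: growth + margin."""
    return revenue_growth_pct + fcf_margin_pct

# Projected 2027: 30% revenue growth, 20% unlevered free cash flow margin.
print(weighted_rule_of_50(30, 20))  # 55.0 -> clears the weighted bar of 50
print(plain_rule_of_50(30, 20))     # 50.0 -> also meets the regular rule of 50
```

Both checks land at or above 50, which is the point Matt makes: the 2027 projection clears the bar on either weighting.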

speaker
John
Conference Operator

Our next question comes from the line of Kingsley Crane with Canaccord Genuity. Please go ahead.

speaker
Kingsley Crane
Analyst, Canaccord Genuity

Hi, thanks for taking the question, and congrats to the whole team on the results. I think you've done an excellent job with the investor update. I actually want to circle back to the inference cloud dynamic with open source models. We've been looking at OpenRouter data as well, and some of these models come and go pretty quickly. How are you thinking about quickly providing support for those classes of models? Is there any operational tax to quickly providing support? And then just how to think about them driving growth, both from a revenue and profit standpoint? Could there be more of a Jevons paradox dynamic there with the lower-cost models? Thanks.

speaker
Patty Srinivasan
Chief Executive Officer

Yeah, thank you, Kingsley. That's a good question. You asked two different questions, one about operational overhead in terms of day-zero support for these models. Obviously, we've been extending day-zero support for a majority of these open source models as they come out. And there are a couple of things there. One is, obviously, there's a little bit of manual overhead in supporting these models. But a large portion of this test-and-readiness harness is automated, and it is only going to grow in automation. You will see a lot more details around this at our Deploy conference. The second part of your question was really around the Jevons paradox: as these open source models proliferate, how should we think about the growth profile of not just our platform, but also these companies? I think it is only going to aid in the deployment of AI native software in pretty much every segment of the market. And I think we should also not think about AI native workloads as open source or closed source. What we are seeing is a mixture of both for the same use case, even for the same inference call. For some parts of the application stack, based on the prompt, we do intelligent routing. Right now, it's fairly manual, but we are working on different types of algorithms to route it in a much more intelligent and smart fashion. So you will see a universe, going into the future, where prompts are going to get routed to different models, all working together at the same time, to deliver high throughput, low latency, and acceptable accuracy with great unit economics of token throughput. So this is coming. We're already seeing it from many of our AI native workloads. And that is how I see the market evolving as open source models continue to catch up with these closed source systems.
The closed source systems are really important to be on the bleeding edge of innovation, but a vast majority of these long-running agentic software like OpenClaw can very materially run on these open source systems.

speaker
Kingsley Crane
Analyst, Canaccord Genuity

Thanks, Patty. That's really helpful. For Matt: obviously, $22 million of ARR per megawatt is a clear differentiator. I'm curious, now that Atlanta is close to full utilization, any insights you have on just what a fully utilized megawatt can look like from a revenue efficiency standpoint for AI? Thanks.

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, that's a great question, Kingsley. If you look at the public data that's available for a neocloud, which is more of a bare metal model, they show something like 9 to 12 million, I think, in ARR per megawatt. Clearly, we believe we can deliver more than that. And if you look at the guidance that we've given, what you'll see is that while it's 22 now, that's with a small share, less than 10%, or right around 10%, of our ARR in AI. So as we grow AI, it'll come down. We'll add incremental ARR per megawatt greater than what you're seeing from the neoclouds, but the drop from a bigger mix of AI by the end of 2027, once we're fully ramped with the incremental 31 megawatts, will only be a couple million. It'll be around 20 million. And so if you think of us as having not separate AI investments and core cloud investments, but more of an overall AI cloud platform that has GPUs, CPUs, core compute, bandwidth, and all the capabilities that you need, we still expect to deliver materially higher ARR per megawatt than what you're seeing in the neocloud space. So we feel really good about the returns that we're getting and the margin that we're able to drive. And this is only going to increase. I mean, you saw the chart in the deck about how much of the AI customer revenue is coming from non-bare metal. That's 70%, and that's only going to increase, and that smaller sliver of core cloud is only going to increase as customers become entrenched on our platform and they start putting in database and storage and some of the other higher-margin capabilities that are sticky. We're very excited about our ability to serve the full addressable wallet of the AI natives.
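The blended ARR-per-megawatt math Matt sketches can be illustrated with a small calculation. The $22M/MW current figure, the $9-12M/MW neocloud comparison, and the roughly $20M/MW fully ramped estimate are from the call; the segment-level split below is an illustrative assumption, not a disclosed number.

```python
# Hypothetical sketch of how blended ARR per megawatt moves as AI capacity
# is added at a lower (but still above-neocloud) rate than core cloud.
# Segment figures are illustrative assumptions, not company disclosures.

def blended_arr_per_mw(segments):
    """segments: list of (arr_millions, megawatts) tuples -> blended $M per MW."""
    total_arr = sum(arr for arr, _ in segments)
    total_mw = sum(mw for _, mw in segments)
    return total_arr / total_mw

# Assumed core cloud: $900M ARR on 36 MW  -> 25.0 $M/MW.
# Assumed AI on the incremental 31 MW: $465M ARR -> 15.0 $M/MW, above the
# ~9-12 $M/MW cited for bare-metal neoclouds but below core cloud.
core = (900, 36)
ai = (465, 31)
print(round(blended_arr_per_mw([core, ai]), 1))  # 20.4 -> blended, near the ~20 cited
```

Under these assumed numbers, adding lower-density AI megawatts pulls the blend down from 25 toward 20, matching the "it'll only drop by a couple million" shape of the answer.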

speaker
John
Conference Operator

Our next question comes from the line of Josh Beyer with Morgan Stanley. Please go ahead.

speaker
Josh Beyer
Analyst, Morgan Stanley

Great. Thanks for the question, and congrats on the strong results and impressive targets. I just wanted to clarify the incremental 31 megawatts, that all comes online by the end of 26, driving that 25% revenue growth exiting the year, but then as utilization increases, the capacity is enough to reach the full 30% growth in 2027 revenue?

speaker
Matt Steinfurt
Chief Financial Officer

That's absolutely right, Josh. You nailed it. As we said in the call, the smallest of the three facilities, which is six megawatts, is going to start ramping revenue in the second quarter, but the other two start ramping in the second half. And with what we believe are appropriate assumptions around the timing and the ramp of that, we'll hit 25% plus in Q4 as an exit growth rate. And then if all we did was continue to fill those up, we'd hit 30% for the full year in 2027. And we feel very good about the returns that we would generate there and the growth trajectory that we would be on at that point.

speaker
Josh Beyer
Analyst, Morgan Stanley

Okay, that's helpful. We were just hoping you could review some of what Vinay Kumar's top priorities are at this point. There have been so many positive changes from a product and innovation perspective over the last couple of years. What are his priorities, and what changes should we expect going forward?

speaker
Patty Srinivasan
Chief Executive Officer

Yeah, thanks, Josh. As I was mentioning in my prepared remarks, given his background at Oracle Cloud, he has really hit the ground running. His top one or two priorities are going to be continuing to build out the inference cloud, and you will see a lot of very detailed announcements on April 28th at our Deploy conference on what the next generation of these inference cloud capabilities is going to look like. The team is super heads-down and busy working on it now. We also will continue to raise the bar on our core cloud capabilities, because our cloud-native, digital native enterprise companies are also scaling tremendously on our platform, and they require continuous innovation from our side on advanced things like different types of databases, different scalability aspects of our database-as-a-service, and various parts of our core cloud infrastructure like high-performance storage and network file systems. So one of the things that Vinay is working on is delivering innovation in our core infrastructure that is applicable to both AI natives and cloud natives. There's a huge intersection: when you look at companies like the AI natives that we are rapidly scaling up on our platform, they require very similar things from, say, high-performance storage, as an example. I don't want to pre-announce stuff that we are working on, which we will come out with on April 28th, but a lot of those things are very similar to what our cloud-native companies can also benefit from. So there's quite a robust lineup of capabilities that we are working on, for both the inference cloud as well as some of the underlying infrastructure enhancements that will be applicable to digital native enterprise companies. That's what he's focused on delivering. And as I mentioned, given his background, he has hit the ground running in terms of ramping up the innovation on the core inference cloud.

speaker
John
Conference Operator

Our next question comes from the line of Vamsi Mohan with Bank of America. Please go ahead.

speaker
Vamsi Mohan
Analyst, Bank of America

Yes, thank you so much, and great to see this growth acceleration here. First, maybe, Patty, just on visibility around the 30% growth. How should we think about that? I mean, historically, and obviously DigitalOcean is a very different company today, but historically, you really did not have long-term contracts or long-term visibility, and you're talking about a very meaningful acceleration as you go to 30% plus. Maybe you could dissect some of the underlying drivers of what you're looking at, which give you the confidence, and maybe just split that between infrastructure as a service and platform as a service. That would maybe be a different way to slice it and give people a view there. And I have a quick follow-up.

speaker
Patty Srinivasan
Chief Executive Officer

Thank you, Vamsi. I think Matt broke down some of the physics of the acceleration, right? We have new capacity that is ramping up throughout this year and going into next year as well, so that gives us a lot of visibility. But first, maybe I should take a step back and talk about the fact that the demand that we are seeing now is very, very robust, and it far exceeds the supply that we currently have from an infrastructure point of view. So we are being super responsible in ramping up our capacity and super aggressive on the timelines. We are working very closely with the data center providers and the OEMs to get this capacity online as fast as we possibly can. Given the schedule that we are currently working on, we feel very confident that as we bring this capacity online, we have enough demand in the pipeline to be able to fill it up with very responsible unit economics. That's what is giving us the confidence to provide the outlook of 25% plus exiting this year and 30% for next year. Also, our RPO has been going up steadily, and that is one leading indicator. But I should add that inferencing is very different, right? These are real-world workloads, as opposed to training, where a company can just raise venture capital money and commit to a two-year or three-year contract to burn dollars to build a frontier model. Inferencing workloads are typically paid for by end customers. So for us, that is super exciting, because we are typically working with post-product-market-fit companies that have real revenue, working with real consumers or business to business, like Hippocratic AI. They're deploying in some of the world's largest healthcare providers. So we know that as their demand picks up, they're going to need more and more inference capabilities.
So our confidence really stems from the visibility we are getting into our customers and the real-world inference demand. So I feel if you look at it from a customer perspective or you look at it from a capacity point of view, those are the data points that we use to triangulate our guidance for exiting this year and next year.

speaker
Vamsi Mohan
Analyst, Bank of America

Okay, thanks, Patty. And then maybe one quick one for Matt. Can you just talk a little bit about the margin progression? I guess you mentioned some near-term margin compression given your capacity ramp. Should we expect that to persist through all of 2026, given the timing of the ramp? And then as you ramp into 2027, should we be back to 2025 levels? Thanks so much.

speaker
Matt Steinfurt
Chief Financial Officer

Thanks, Vamsi. Yeah, there's certainly going to be some near-term pressure, as we said, on gross margin, for example. But the metrics that we think are the best indicators of profitability for us continue to be adjusted EBITDA margin and free cash flow margin, both on an unlevered basis and a levered basis. And if you look at the margin guidance that we provided for the full year 2026, and the ranges for 2027, you see exactly what you just described: we'll have a little bit more pressure this year as we ramp, but then, as we grow into that and utilization increases, that catches back up, and you should see an upward trajectory on the margins. The mix of AI services versus core cloud is certainly a longer-duration impact, because as we add more AI capabilities and more AI revenue, the AI margins are lower than the core cloud margins. So you'll have a little bit of a mix impact in addition to the timing impact. But all of that nets out in the very strong adjusted EBITDA margins that we're projecting and the very strong adjusted free cash flow margins and unlevered adjusted free cash flow margins.

speaker
John
Conference Operator

Our next question comes from the line of Gabriela Borges with Goldman Sachs. Please go ahead.

speaker
Gabriela Borges
Analyst, Goldman Sachs

Hey, good morning. Congratulations to the DigitalOcean team. Patty, I have a little bit of a long-term question for you. If I think about DigitalOcean's core value proposition of democratizing access to cloud, that has been true for many years now. My question for you is, what do you think is structurally different about the AI compute cycle that will allow DigitalOcean to essentially capture and hold on to a higher share of wallet in AI inference compute relative to the cloud cycle? And the reason I'm asking is because there are 32 companies that show up in this SemiAnalysis ClusterMAX benchmarking report. We know that demand is early. We know that the AI inference cycle is early. How do you think about DigitalOcean's ability to durably capture a higher share of wallet relative to those 31 other competitors over the long term? Thank you.

speaker
Patty Srinivasan
Chief Executive Officer

Thank you, Gabriela. And I'm sure if SemiAnalysis had been around in 2011 or 2012 when cloud was taking off, there would have been 32 VPS providers as well. And we went from that to a billion-dollar run rate in 12 or 13 years. If I take a step back and think about how durable our mission is in the world of AI, I hit on a few different things. I fundamentally believe that inference workloads are real-world applications as well. As the application scales, you need a variety of different things all working together. AI natives don't want to use one provider for token generation, go to another provider for database, go to a third provider for their application experience, and go to a fourth provider for some of the other core storage and other artifacts. They want an integrated cloud where all of these primitives are co-located and work hand in hand, so that they can focus on building their business and not mess around with infrastructure. The other part that I feel very confident about is something that we are going to be talking about a lot at our Deploy conference on April 28th, which is this emergence of a mixture of AI models that is required to run efficient unit economics in inferencing mode. Open source models are roughly 90% more cost effective compared to closed source models, and open source already has 30% market share with just a handful of open source models on the market. So I feel this is only going to go from strength to strength. And that has been a big differentiator for DigitalOcean throughout the years as well. We talk about 32 companies showing up in some of these market landscapes, but when OpenClaw went viral a couple of weeks ago, or a month ago, we were one of the natural places where developers started deploying it.
As I mentioned, we have more than 30,000 of these agents running, and we barely did anything from a marketing point of view. In fact, we did no marketing. All we did was scramble our jets to make sure that developers have a first-class experience deploying these agents on our platform. We were such a natural choice for running this long-running agentic software because it needs a lot more than just access to GPUs or access to inference tokens. I feel very good that our product strategy is working and we are able to serve the needs of inference workloads running in production. We're already starting to see the proof points where different parts of our inference cloud are getting lit up. And as the slide that I walked through on our AI customer revenue shows, 70% of that revenue already is from non-bare metal. That should give us a lot of confidence that our platform services, the higher margin services, are resonating with our customers. They're increasingly coming to us as they recognize that bare metal is not going to be sufficient for them.

speaker
Gabriela Borges
Analyst, Goldman Sachs

Yeah, really good color. Thank you. I'll stay on this 70% non-bare-metal data point, and I'll ask the question to Matt. Payback period on GPUs: the last time we talked about this, I think you told us it was around three years. But that was before you all had focused on maximizing or improving the ARR per megawatt capacity. So my question to you is, how are payback periods on GPUs changing? Thank you.

speaker
Matt Steinfurt
Chief Financial Officer

Well, that's a great question, Gabriela. One of the things that I want to make sure everybody understands is, if you think about why we lease gear, why we are doing equipment leasing, it's to address exactly this challenge. If you said, okay, you're going to spend hundreds of millions of dollars on GPUs, and you're going to have to wait three or four years to pay them back, that's a model, but that's not the model that we're pursuing. Our model is, we're leasing the gear, which means we're earning more ARR per megawatt, and per the associated GPU investment, than what a neocloud would earn. But we're also earning cash on that within months of actually deploying it, right? As soon as we deploy it and we start earning revenue and it ramps, we're paying on a monthly basis for that gear over four or five years, and we're earning more than two times that in revenue. So from a payback period standpoint, we still have the same kind of payback hurdles that we've had before. You'd like to see three-year paybacks on most of your investments, and you might be willing to extend that to win some early customers. But if you actually think about the mechanics, that's a little bit of an intellectual exercise, because we're already paying our gear back within a month or two, since we're earning more cash than we've spent on that gear. And that's the reason you align your investment with revenue.
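The leased-gear cash dynamic Matt describes (monthly lease payments spread over four or five years, revenue above two times the lease rate, cumulative cash turning positive within months) can be sketched as a toy simulation. Every number below is an illustrative assumption, not a company figure.

```python
# Toy model of the leasing-vs-buying cash dynamic: a flat monthly lease
# payment against revenue that ramps linearly to its full run rate. The
# question is how many months until cumulative cash turns positive.
# All dollar amounts and the ramp length are illustrative assumptions.

def months_to_cash_positive(monthly_lease: float, monthly_revenue: float,
                            ramp_months: int) -> int:
    """Return the first month cumulative cash exceeds zero, or -1 if never
    within a five-year lease term. Revenue ramps linearly over ramp_months."""
    cash = 0.0
    for month in range(1, 61):  # five-year (60-month) lease term
        rev = monthly_revenue * min(1.0, month / ramp_months)
        cash += rev - monthly_lease
        if cash > 0:
            return month
    return -1

# Assumed: $1.0M/month lease, $2.2M/month revenue at full ramp (just over
# the "more than two times" cited on the call), three-month ramp.
print(months_to_cash_positive(1.0, 2.2, 3))  # 2 -> cash positive in month two
```

Contrast with buying outright: the same revenue against a large upfront outlay would take years to recover, which is the "intellectual exercise" distinction Matt draws.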

speaker
John
Conference Operator

Our next question comes from the line of Param Singh with Oppenheimer. Please go ahead.

speaker
spk11

Yeah. Hi. Thank you for taking my question. First of all, Patty, I wanted to get a sense of your Gradient AI platform. Obviously, that's driving a lot of growth. But where do you think some of the missing pieces are in terms of your technology, given that the neoclouds are starting to get a little bit more aggressive? Do you think you have a sustainable competitive advantage, and how do you plan to sustain it?

speaker
Patty Srinivasan
Chief Executive Officer

Yeah. Not only do I think we have an advantage now, I think our lead is increasing compared to other neoclouds, because they are coming from a training world, which is totally different, right? The needs, all the way from the way GPUs are networked to the cluster sizes, everything is so different. Inferencing is very different, as I explained. And if you look at the slide that shows the richness of our inference cloud stack, each layer has taken us years and years to perfect. And as we work very closely with AI native companies, we are understanding and getting an appreciation for their real challenges, like the example I was talking about where customers need orchestration across different AI models in real time when they are trying to parse a prompt and answer that query or make real-time decisions. We are getting so much intelligence just working hand in hand with our customers that I feel our lead is only going to increase from here on. It's not to say that we won't have competition, but I feel very confident in our ability to out-invent these other companies in terms of our inference cloud. And as for durability, we have 0% churn in our million-dollar-plus customers. So something is working, and that is our agentic inference cloud.

speaker
spk11

Thanks. And as my follow-up, do you feel you're constrained by the availability of power and physical locations at this point? Or, put conversely, given the opportunity to invest even heavier and grow faster, given the demand from the AI natives, what would you prefer at this time? Or would you rather have a slower pace of investment? Patty, if you could give me some insight, I'd really appreciate it.

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, as Patty said, we have more demand than we have supply. But we're also making, I think, very prudent and appropriate investment decisions. We don't want to go all in with a single customer. We don't want to go all in on a single generation of GPU technology. We believe that building a diverse set of customers that are very heavy in inferencing workloads, and not chasing training, will build a durable model for us. And so we'll continue to evaluate opportunities to accelerate our growth, and we'll make appropriate financial decisions, doing it in a very balanced way across a diverse set of customers. But we're very highly concentrated on what we're good at, where we're differentiated, and where we can earn a good return. And that's what's driving our investment decisions.

speaker
John
Conference Operator

Our next question comes from the line of Freddie Sultan with UBS. Please go ahead.

speaker
Freddie Sultan
Analyst, UBS

Yeah, awesome. Thanks for the questions. First one for Patty, kind of on a similar line of questioning, just sort of that longer-term capacity-add framework. As you guys think about sort of how much capacity you want to procure, maybe, you know, stretch it out over the next several years, like, what are you looking at specifically to inform that decision, and then what gives you confidence in being able to fill that capacity over the next several years?

speaker
Patty Srinivasan
Chief Executive Officer

Yeah, thank you. We look at many, many factors, but the dominant one is customer demand: we look at what our customers are dealing with and how they're projecting their needs. That is a big, big input factor for us. The second one is we look at the footprint from the perspective that, for inferencing, we obviously need a really good geographic spread. And for all of our new data centers, we have both core cloud and AI capacity running on the same server stack, so it's important for us to have all of these things co-located. The third thing we always look at is how we are going to keep up with the generational leapfrogs from GPU vendors, including AMD and NVIDIA, and perhaps others in the future. These are all important factors that we take into account as we consider what our footprint is going to look like over the next several years. We are always making this evaluation, and we are looking at various options as we build out our long-term plans. As I said, the primary driver is always our customer needs and customer demands, and what kind of workloads they are ramping up. The demand for their applications is a big driver for us. So those are some of the input factors that we use to plan our capacity.

speaker
Freddie Sultan
Analyst, UBS

Got it. Just a quick follow-up for Matt. Does the '27 EBITDA margin and free cash flow guidance contemplate any additional capacity investments next year, or is it just reflective of the 31 megawatts you're bringing online this year?

speaker
Matt Steinfurt
Chief Financial Officer

It's just reflective of the 31 megawatts that we're bringing on this year.

speaker
John
Conference Operator

Our next question comes from the line of James Fish with Piper Sandler. Please go ahead.

speaker
James Fish
Analyst, Piper Sandler

Hey, guys. Maybe just following up on that. If AI is growing as fast as it is, and you need to bring on capacity now to meet all this demand, aren't you going to need more capacity then? And Matt, additionally, it looks like you're excluding finance leases in your free cash flow metric. Why treat it like that? If it weren't financed, you'd still have capex. And it does seem to imply, I'm getting a lot of this question pre-market here, about 10% reported free cash flow in '27, so can you walk us through that? And I know this is a loaded question, but a lot of those that are providing leased servers are implementing memory cost increases. So how are you thinking about what commitments you actually have from them, and the potential pass-through of memory costs?

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, I'll take that in reverse. We've seen increased component costs, the same as others in the industry, and that's all reflected in our guidance. And again, it hasn't changed our return expectations or the economics; it just means there's more cost associated with some of the servers we're bringing on. But I'm glad you brought this up, because you've got to think about our free cash flow in tiers, right? You've got unlevered free cash flow, which, again, is what people should be using from a valuation standpoint, and that we're talking about being in the 18% to 20% range. When you add the interest expense, you get the levered free cash flow, which is what we've historically called our adjusted free cash flow margin, and you're only giving up a couple of percentage points there. That interest right now is half the TLA and half equipment leasing. And then, as you point out, you have the principal payments, which are more of a financing transaction. That's why they're not captured in either the adjusted free cash flow or the levered adjusted free cash flow. But if you're going to lump everything in, then you'd say, okay, what about the mandatory prepayment of $25 million a year on your term loan? We'll throw that in there too. If you take all of the cash payments, including the principal payments and the prepayment of the term loan A, that's all financing stuff, so you're mixing metaphors here. But if you throw that all in, we're still generating cash. So you're saying, hey, it's 10%. I'm like, hey, it's 10%, and we're generating cash while we're accelerating the growth of this business into the 30s. And on an unlevered free cash flow basis, it's 18% to 20%.
So it's a testament to our ability to dramatically accelerate growth. We've taken growth from 11%, 12%, 13% to guiding to 30%. We're generating incredibly strong unlevered free cash flow, very strong levered free cash flow, and if you throw in the kitchen sink and all the payments we have to make, we're still generating cash. That's an incredibly strong position to be in, and we have a very flexible balance sheet. So we feel very good about the cash generation that we're setting up while we're delivering this growth.

speaker
James Fish
Analyst, Piper Sandler

Yeah, I mean, the growth acceleration looks good. And Patty, for you: this got asked a couple of questions ago to a degree, but on slide 20 you point out the difference between you guys and the neoclouds and inference wrappers. And, maybe being humble about it, you point out that you're about 75% of the way there in the first three categories. Is this something we should be expecting to hear about at the April event, or what do you guys need to do to get to that full hundred percent difference?

speaker
Patty Srinivasan
Chief Executive Officer

Yeah. Fish, I don't know if I will ever call myself a hundred percent on those things, because that market is changing so fast. If we asked five of our customers today what they want versus what they thought they wanted three months ago, it's meaningfully different, right? Because as they go into their customer base and deploy their solutions, new things come up all the time, and the capabilities of AI models evolve all the time. So this is going to be a moving target for the next couple of years. But on the first part of your question, absolutely, that is where our R&D team is super heads down, inventing new technologies and new parts of the stack. You will hear a lot more about this on April 28th. But I would say this is where I feel very confident that we already have a lead, and that lead is only going to grow over the next few quarters.

speaker
James Fish
Analyst, Piper Sandler

Thanks, guys.

speaker
John
Conference Operator

Next question comes from the line of Thomas Blakey with Cantor Fitzgerald. Please go ahead.

speaker
Thomas Blakey
Analyst, Cantor Fitzgerald

Hey, guys. Congratulations on a great quarter and a great outlook here. Maybe some follow-ups to my peers' questions. Patty, you mentioned, I think in response to a previous question, demand outstripping supply and giving you the great visibility you've alluded to on this call. I'm not expecting you to give calendar '28 commentary, but if you wanted to, since you looked out two years on the April '25 call, that'd be great. In addition to that, I'm interested in what you're seeing in pricing dynamics if demand is outstripping supply, as you line up these new AI natives. Some commentary on pricing from this cohort would be helpful.

speaker
Patty Srinivasan
Chief Executive Officer

Yeah, thanks, Tom. I think we have already talked about what we are going to talk about for 2027. But in terms of demand, the demand is clearly there, and we are moving as fast as we can to first deliver on these three data centers that Matt talked about. From a pricing point of view, we have competition from all kinds of different players, and pricing is holding; in some cases it has gone up. We are very attuned to what is going on in the market, and there is a lot of scarcity of supply across the board. We also work very closely with our customers to ensure that we are calibrating our prices, both on-demand and contractual, to keep pace with the market dynamics at this point. I would say nothing has materially changed. Pricing is also a function of the generation of GPUs we are talking about. At the lowest level, if a customer wants access to GPUs, it is priced in GPU dollars per hour, and at that layer it really depends on the generation of the GPU, whether it is Blackwell or the Hopper series from NVIDIA, or the 350 or 355 from AMD, or the 300 or 325. There are also other dependencies, like the cluster sizes, the cluster configuration, what kind of networking they want, and so forth. And as you move up the stack, if you look at my slide 19, the one thing I did not mention there is that customers can enter our stack at pretty much any layer, right? The higher up you go in the stack, you are no longer pricing by dollars per GPU hour but per token. And there we have lots more degrees of freedom in how we price versus competition, because you are doing dollars per token, but you also have the flexibility of running it on different types of hardware, and you can change up the AI model that is servicing the token request.
So we have more degrees of freedom, and some customers need that flexibility and are willing to live with the higher orders of the stack rather than dictating which generation of hardware they want to run on.

speaker
Thomas Blakey
Analyst, Cantor Fitzgerald

Right. That's super helpful, Patty. And maybe, as an extension of that flexibility, it was impressive to hear about the 0% churn in the large $1 million-plus cohort with 115% NRR. I'd love to know what the overlap there is with regard to the AI-native exposure, if you could maybe talk about those customers and how much of that is from AI. And for Matt, relatedly, on your improving NDR: are we finally including AI/ML revenue there? If not, when can we expect that? Thank you, guys.

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, thanks, Tom. On a customer count basis, about half of the million-dollar customers are AI customers and half are core cloud, or general-purpose cloud, only. On a revenue or ARR basis it's a little bit more AI, but not a lot; it's not too far off 50-50. And as you saw in the materials, 48% of the trailing-12-month incremental ARR is coming from those AI customers. So that's how the split looks. In terms of the NDR, no, it's not in there yet. The reason we disclosed the AI customer revenue, and we will continue to disclose that as a metric, along with its growth rate and the RPO, a decent chunk of which, not all, but a decent chunk, is also AI, is that we're trying to give you better leading indicators of the performance of the AI customer base. On the NDR, if you look at some of the charts we showed with some of the bigger inferencing providers, those customers just got started on the platform in the June-July timeframe. And there's a big difference in the size and caliber of the customers we've been winning on the AI side in the last six, well, now seven or eight months. We think those will have more of your traditional NDR-like characteristics, where they grow and expand on the platform using inferencing, which is more of a production workload. A lot of our earlier AI customers were smaller customers doing experimentation and projects. Revenue was growing like crazy because we'd be adding a ton of those customers, but if you looked at any individual customer, it was hard to see a pattern. And that is what a SaaS metric like NDR is:
it looks for patterns where you bring on a customer and you can expect them to do X, Y, Z over the next 12 months. We just didn't see that; there was a lot of noise and lumpiness in our AI customer revenue early on. We see that changing, so we'll continue to evaluate it every quarter, and at the appropriate time we'll contemplate rolling it in. But it's probably still 12 months away.

speaker
John
Conference Operator

Our next question comes from the line of Patrick Woolravens. Please go ahead.

speaker
Patrick Woolravens
Analyst

Oh, great. Thank you. Congratulations on the quarter, and I have to say, congratulations on the slide deck. It's fantastic, and I'm sure all of your investors are going to appreciate it. So, Patty, I was looking back at my note from two years ago when you joined. At the time, one of the things you said was that our durable competitive differentiator long-term is going to be in the software layer, and you said you were focused on bringing simple, easy-to-use AI/ML capabilities, on both hardware and software, to developers. So what I'm wondering is this: you were growing 11% and decelerating when you joined, and now we're talking about 30%. As you look back, which of the growth drivers that caused you to accelerate did you anticipate, and which were, well, fortuitous is probably the wrong word, luck favors the prepared, but which were sort of unexpected?

speaker
Patty Srinivasan
Chief Executive Officer

Yeah, thank you, Patrick. I would say, and maybe I'll take some creative liberty in answering your question, that what took a few quarters for us to get right was, as I mentioned several times during this call, a constraint in keeping up with customers that were scaling rapidly and scaling big on our platform when I joined. So it took us a few quarters to really get to the bottom of their needs, and there was a lot of work that had to be done for us to get to the 0% churn that I was so proud to share with all of you this morning. That took a lot of engineering effort, and I'm super proud of my team; it's a lot of very complex technology work, all the way from advanced networking to fortifying our storage to inventing new things in our database offering and so forth. That took a tremendous amount of heavy lifting, and that job is not done yet. We started with $100K customers, then we focused on $500K customers, and now we are focused on million-dollar customers. Who knows, in the next couple of years we'll be talking about $5 million and $10 million customers. So that bar-raising is an ongoing endeavor for us. And on the more fun side of things, it's literally participating from the starting point with the AI-native ecosystem. We are learning as they're learning, and we are inventing alongside them, and that is a great luxury to have, because we feel like we can ride their growth curve. As their needs increase and they learn the right way to do this from a workload perspective, we are just trying to keep pace, and they're super appreciative of us inventing on their behalf to make their lives easier so that they can focus on their domain and invent new things for their customers. So we'll share a lot more of this on April 28th, but that's how I would answer your question, Patrick.

speaker
Patrick Woolravens
Analyst

That's great. Thanks, Patty.

speaker
John
Conference Operator

Our next question comes from the line of Mike Cikos with Needham & Company. Please go ahead.

speaker
Mike Cikos
Analyst, Needham & Company

Hey, thanks for taking the questions here, guys, and congrats on the strong growth guardrails you're providing us. Matt, if I could just come back, and I know the free cash flow topic has come up a couple of times here, but you can see as well as anybody just how sensitive investors are in this market to the AI capex investments that are required, or the different financing vehicles that are out there. Just to be clear: when we look at the calendar '26 versus the calendar '27 guide, the delta between unlevered and adjusted free cash flow, about three points, is expected to widen to about 10 points in calendar '27. And if we take that one step further, and I know your guardrails for '27 currently don't contemplate additional capacity coming online, it seems fair that we should be assuming more capacity. If that's the case, would that delta between the unlevered and the levered free cash flow margins widen further from there? Is that fair?

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, the way I think you've got to think about it is, again, the levered free cash flow has other stuff in it besides equipment leases. It's got TLA interest, it's got other things. And as Fish was pointing out, if you look at the other cash payments, there are mandatory prepayments of the term loan A. So you've got to be real careful about what you're using for what purpose, right? If you ask what the steady-state cash flow generation capability of this business is: again, because we lease equipment, we don't have an upfront capital requirement that makes it super lumpy. We can make that smoother and we can grow. However, when you're growing a business, even with that model, and you're adding data center capacity, you have a couple of months where you're actually taking data center lease expense and you haven't generated any revenue. When you lease gear, unlike buying gear, putting it in your warehouse, and not expensing it until you actually deploy it, you start incurring the lease expense as soon as it's shipped. So you have front-loaded costs that don't catch up to revenue right away. But because you didn't lay out a big giant slug of capital, as soon as the revenue starts generating, you're immediately generating cash and improving your margins with utilization. That's why we've been very crisp about what's included in the numbers: to give you a sense of what the margins look like on a steady-state basis. We can't just continually assume we're going to add incremental capacity, and I can't tell you how much incremental capacity we're going to add, because we haven't contracted it and we haven't committed anything to incremental capacity.
So what we're showing is that when we add 31 megawatts, as an example, and you roll that forward a year, you have incredibly strong cash flow characteristics. There's going to be a short-term impact on gross margins and net income because of the timing dynamic I described, but that works itself through relatively quickly. And so you would expect that as we saw other opportunities to accelerate our business with similar economics, we would make similarly good decisions, and that engine will keep going. So I view it in a very different way than what you're describing. If we're going to commit to more capacity, it's because we have more growth opportunities and the returns are incredibly compelling, and we're doing it in a way where we match the revenue and the costs. We're not getting out over our skis and making massive commitments chasing the data center and GPU arms race. We're doing it methodically, where we have an advantage and where we earn a good return, and we're able to do it while taking 11% to 13% revenue growth to 30% while still maintaining really good margins. So we're really excited about the potential we have and the economics that we're delivering.

speaker
Mike Cikos
Analyst, Needham & Company

Thanks for that, Matt. Maybe a quick follow-up here. Understood on the accelerating growth you guys are looking at throughout calendar '26, just based on the megawatts coming online. One thing I wanted to ask, and I'm sure you have your own models as you're looking at the AI customers ramping: to drive that 25% growth exiting calendar '26, can you provide any additional color on what you're assuming in terms of ARR directly from those AI customers, if I'm thinking about the $120 million that we see today exiting calendar '25?

speaker
Matt Steinfurt
Chief Financial Officer

The only thing I would say is, as we said, the AI customer ARR in Q4 was $120 million, growing 150%. We have more demand than we have supply, we're bringing on supply, and you should expect that it doesn't slow down.

speaker
John
Conference Operator

Our next question comes from the line of Mark Zang with Citi. Please go ahead.

speaker
Mark Zang
Analyst, Citi

Hey, great. Thanks for taking my question. Just given the strong demand environment, shouldn't we see more capacity commitments coming, announced today, perhaps? And if that's not the case, is there enough incremental megawatt capacity in your current footprint to support continued growth? Any insights there would be appreciated. Thanks.

speaker
Matt Steinfurt
Chief Financial Officer

Sure. So, Mark, as we said, there's enough growth potential in the committed capacity to get us to 30% growth in 2027. Clearly, we're very cognizant of the data center market and very active in evaluating it. We haven't made any commitments at this juncture to share with the market, and if we get to a point where we make a commitment, we'll certainly share that. But at this point, again, we thought it was incredibly important for people to understand how to digest capacity as we bring it on, and that's why we've guided based solely on the 31 megawatts we've already committed. It gives you a good sense of how it ramps and what the economics are, and should we bring on incremental capacity, you'll have a good model to add onto the growth ramps we've already articulated.

speaker
Mark Zang
Analyst, Citi

Okay, great. And then maybe related to that, can you give us a sense of utilization of your current estate? We know the current capacity, but maybe any sense of the contracted capacity you have on the books?

speaker
Matt Steinfurt
Chief Financial Officer

Yeah, so from a contracted capacity standpoint, if you're talking about data centers, we've got 31 megawatts that we're adding to our roughly 43 or 44, which will put us at just about 75 megawatts when we're done. We're sitting at 43 today, and we're adding six megawatts that will come online and start generating revenue in the second quarter. The balance of the incremental 31, which is about 25 megawatts, will come on and start ramping revenue in the second half. Whether we're at full utilization right away is a function of whether we decide to fill them all with GPUs immediately or do it over time, because we like to stripe across generations of GPUs; we don't like to go all in on one generation. But we'll be at a very healthy utilization at some point in '27, which is what enables us to get to that 30% growth.

speaker
John
Conference Operator

Thank you. At this time, we have no further questions. That concludes our Q&A session and today's conference call. We would like to thank you for your participation. You may now disconnect your lines. Have a pleasant day.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
