This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
8/5/2025
All lines have been placed on mute to prevent any background noise. After the speaker's remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. And if you would like to withdraw your question, press star one again. Thank you. And I would now like to turn the conference over to Melanie Strait, Head of Investor Relations. Melanie, you may begin.
Thank you and good morning. Thank you all for joining us today to review DigitalOcean's second quarter 2025 financial results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer, and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release, as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available in the IR section of our website. And with that, I will turn the call over to Paddy.
Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our second quarter 2025 results. We continue to make meaningful progress on the strategy we laid out at our investor day back in April. This is evidenced by our strong second quarter results and supported by the fact that we are raising our full year guidance on both revenue and profitability metrics. My comments today will include a recap of our Q2 financial results and an update on both our progress in product innovation and our enhanced go-to-market strategy across both core cloud and AI, which are enabling over 174,000 digital native enterprise customers to scale on our platform. Let me start with the second quarter financial results highlighted on slide 10 of our earnings deck. The growth momentum from Q1 continued into the second quarter, with revenue of $219 million growing 14% year over year. We saw excellent strength in our AI/ML business, with revenue growing north of 100% year over year. Revenue from our Scalers Plus customers, or customers who were at a $100,000-plus annual run rate during the quarter, continued to see strong growth at 35% year over year and increased to 24% of total revenue. Finally, we achieved incremental ARR in the second quarter of $32 million, our highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR in over three years. Given our strong top-line performance in the first half of the year and our confidence in the second half outlook, we are raising our full-year revenue guidance range to $888 million to $892 million. We are also excited about the traction we are getting with larger customers and the increase in committed contracts.
I spoke last quarter about a multi-year $20 million plus committed deal, and this was a contributor to the material growth in our remaining performance obligation balance as we continue to seek and secure large multi-year deals with our higher spend customers and key strategic partners. Not only did our momentum carry over to the second quarter, but the growth continues to come with healthy profitability, including adjusted free cash flow of $57 million, which is 26% of revenue. As a result of this performance, we are raising our full year free cash flow guide to 17% to 19% of revenue, demonstrating our ability to accelerate revenue while maintaining attractive free cash flow margins. Turning to the balance sheet, we continue to make progress on our capital allocation priorities and remain on track to address the outstanding 2026 convertible debt prior to the end of this calendar year. Matt will go into further detail on this front in his prepared remarks. Now let me give you some updates on the product innovation that we continue to deliver for our digital native enterprise customers, which you can see highlighted on slides 11 and 12 in the earnings presentation. During the quarter, we released more than 60 new products and features addressing the needs of our higher spend customers, which includes builders, scalers, and Scalers Plus customers, who now drive 89% of our revenue. Notably, 64 of our top 100 customers have adopted a product or a feature released within the last year, and 26 of the top 100 customers have adopted a new capability released within the last quarter, both clear proof points of the impact product innovation is having on our digital native enterprise customers. Let me now provide a few product highlights from the quarter, starting with core cloud. This past quarter, we officially announced our Atlanta data center, and its resources are now available to all customers.
As a reminder, this is our newest and largest data center, and it is purpose-built to deliver high-density GPU infrastructure optimized for AI inferencing, which requires a lot more than just GPUs. This data center has our core cloud stack, including compute, storage, and other cloud features that are critical to enabling AI-native customers to run full-stack applications powered by AI, and not just the training or inference part of their software. This agentic cloud data center infrastructure is a key differentiating factor for us over other neoclouds, as it provides a complete stack for running sophisticated AI applications that have comprehensive needs beyond GPUs. More on that a little later. During the quarter, we continued to build capabilities for larger digital native enterprises. These customers typically require high quality storage, especially for AI workloads. To support that requirement, we enabled NFS, or the Network File System, for GPUs so that customers can run the most demanding GPU applications with access to higher performance storage to meet the demands of enterprise workloads such as video streaming and data lakes. We also introduced two advanced networking features in public preview: bring your own IP address, or BYOIP, and network address translation gateways, or NAT gateways. These are critical capabilities that will enable more and larger digital native enterprise workloads to migrate to DigitalOcean. BYOIP allows customers to use their existing publicly routable IP addresses on DO rather than having to acquire new DigitalOcean-specific IP addresses. This makes it easy for customers to lift and shift their workloads to our platform without requiring extensive changes to their applications, while the NAT gateway allows the customer's resources to securely access the internet from within their virtual private cloud on the DO platform.
These innovations on the core cloud platform are enabling us to scale and win more workloads from our digital native enterprise customer base. To leverage that traction, we are complementing our industry-leading product-led growth motion with a small dedicated migrations team to support customers moving existing workloads from hyperscalers and other clouds to DigitalOcean's platform, and we facilitated 76 of these migrations during the quarter. One example of this is a company called Exitium, a next-generation cybersecurity provider delivering innovative, no-cost incident response as part of its fully managed Security Operations Center, or SOC, offering. Designed for businesses and managed service providers, or MSPs, Exitium's managed SOC provides real-time threat detection, threat hunting, and incident response, all without the high cost typically associated with legacy solutions. Exitium signed an 18-month contract with DigitalOcean, selecting the platform to migrate from other cloud providers due to our compelling total cost of ownership, performance, and ease of use, enabling Exitium to deliver its cutting-edge cybersecurity solutions more efficiently and at scale. Servebee.host, a Scalers Plus customer that offers managed hosting specifically tailored for the Craft content management system, has already adopted our newly released network address translation gateway, enabling their customers to securely access the internet within their DigitalOcean virtual private cloud. We're also very excited about the progress we're making on our AI/ML platform, which we now call the DigitalOcean Gradient AI Agentic Cloud, which complements our full-stack general-purpose cloud. Slide 8 in the earnings presentation shows the power of having these two platforms side by side, enabling our customers to take full advantage of the integrated stack that is required to build and run AI-powered applications in the future.
The Gradient AI Agentic Cloud has three components: Gradient AI Infrastructure, Gradient AI Platform, and Gradient AI Agents. Let me start with Gradient AI Infrastructure, where we expanded our GPU Droplets lineup significantly to now include eight major types, including the H, L, and RTX series GPUs from NVIDIA and the latest Instinct series GPUs from AMD. Another major update that makes Gradient AI Infrastructure great for inferencing is a new inference-optimized GPU Droplet, which simplifies the setup and deployment of LLMs by leveraging Docker. This new GPU Droplet comes pre-configured with vLLM and includes built-in optimizations like multi-GPU parallelism, smart batching, faster and higher token generation, built-in support for Hugging Face model downloads, speculative decoding, prompt caching, and multi-model concurrency, so that customers can go from deployment to serving tokens in minutes on any GPU Droplet without having to do all these steps manually. We recently announced a collaboration with AMD that provides DO customers with access to AMD Instinct MI325X GPU Droplets in addition to MI300X Droplets. These GPUs deliver high performance at lower TCO and are ideal for large scale AI inferencing workloads. Another example of this growing collaboration between the two companies is the Gradient AI Infrastructure powering the recently announced AMD Developer Cloud, which enables developers and open source contributors to test drive AMD Instinct GPUs instantly in a fully managed environment managed by our Gradient AI Infrastructure. This enables developers to start AI development with zero hardware investment and accelerate the time to value in tasks like benchmarking and inference scaling. This further advances our mission of democratizing access to AI while maintaining the quality, performance, and flexibility our customers have come to expect from DO. Let's look at how customers are taking advantage of our Gradient AI Infrastructure.
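For context on the inference-optimized GPU Droplet described above: vLLM serves models through an OpenAI-compatible HTTP API, so a client just assembles a standard chat-completions request. The sketch below is a minimal, hypothetical illustration; the endpoint URL and model name are placeholder assumptions, not actual DigitalOcean or customer values.

```python
# Hypothetical sketch of calling a vLLM server's OpenAI-compatible chat
# endpoint. The URL and model name are placeholders, not real values.
import json

VLLM_URL = "http://my-gpu-droplet:8000/v1/chat/completions"  # placeholder host

def build_chat_request(model: str, prompt: str, max_tokens: int = 128) -> dict:
    """Assemble the JSON body vLLM expects on its chat completions route."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("meta-llama/Llama-3.1-8B-Instruct",
                          "Explain what a NAT gateway does in one sentence.")
payload = json.dumps(body)
# In practice you would POST `payload` to VLLM_URL with any HTTP client;
# the batching, caching, and multi-GPU parallelism mentioned on the call
# are handled server-side by vLLM, not by the client.
print(len(payload) > 0)
```

The point of the pre-configured Droplet, per the remarks above, is that the server side of this exchange comes already set up, so the customer only writes the client.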
Featherless.ai is a serverless AI inference platform offering API access to an expansive and growing catalog of open-weight models, primarily Hugging Face models like Llama, Mistral, Qwen, DeepSeek, RWKV, and more. Featherless AI leverages DigitalOcean for its simplicity and price performance, and they were an early adopter of our AMD MI300X GPU Droplets, which offer industry-leading price performance and ease of use for inference workloads. Another GPU Droplet customer is Scribe AI, a digital native enterprise specializing in AI-generated documentation, which is used by 94% of the Fortune 500 companies. Scribe AI migrated their AI/ML training workloads to DigitalOcean from competitive cloud providers, and it's now leveraging DO's GPU Droplets to build and train their process documentation and knowledge-sharing platform. Moving on to the next layer of our Gradient AI Agentic Cloud, we recently announced the general availability of the DigitalOcean Gradient AI Platform, which provides the industry's easiest and most cost-effective platform for developing production-grade AI agents with automated safety and security guardrails. The Gradient AI Platform, as shown on the right side of slide 8 of the earnings deck, is a one-of-a-kind platform that caters to the end-to-end agent development lifecycle, or ADLC for short, enabling AI native, SaaS, and any software application customer to build, test, deploy, monitor, and operate agentic AI software. Customers can use a rich set of proprietary and open source foundation models, including OpenAI, Anthropic, Mistral, DeepSeek, and Llama, as high performance serverless endpoints. These serverless endpoints automatically scale to meet real-time application demands, thus freeing customers from having to manage compute resources on their own. The Gradient AI Platform provides built-in guardrails that verify AI behavior and a new best-in-class agent evaluation framework to drive high accuracy and relevance of AI results.
and a robust experimentation capability to deliver optimal AI performance. Over 14,000 agents have been created since announcing this platform, which is almost double the number of agents last quarter. More than 6,000 customers have leveraged this platform since January, with 30% of these customers being new to DigitalOcean. One of the customers leveraging our new Gradient AI Platform is Quickest, with a Q, a leading AI-powered collaborative workspace product that helps product, marketing, and sales teams generate strategy documents, campaigns, and playbooks using shared AI personas. Quickest leverages the Gradient AI Platform to create persona-generating agents, enabling model comparisons and orchestrating tasks on the Gradient AI Platform to fetch and summarize the markdown content. Quickest chose DigitalOcean because they needed a flexible and scalable infrastructure to support complex AI workflows, and they valued the simplicity of deploying agents and integrating them into the Quickest product line with very little coding involved. Moving on to the Gradient AI Agents layer, our first commercial AI agent is the Cloudways Copilot, which continuously monitors critical server components like the web stack, disk space, inodes, and host health to detect issues in real time, diagnose root causes, and deliver actionable recommendations faster than traditional alerting systems. An example of a customer leveraging this product is Mint Media, a full-service media and marketing company specializing in video production and digital marketing. Mint Media uses our Cloudways Copilot GenAI agents to automatically detect and remediate web hosting issues. Mint Media manages over 180 websites and saw significant time savings by leveraging Cloudways Copilot and the associated AI-powered insights and automated issue resolution. What previously required hours of manual debugging is now handled in minutes through the agent's detailed, actionable recommendations.
In addition to the product innovations we delivered, we also made material progress on the go-to-market front during this quarter. From a new customer acquisition perspective, we saw meaningful progress at the top of the funnel from our product-led growth enhancements, with revenue from core cloud customers in their first 12 months significantly outpacing growth of prior years, which is a great leading indicator of future growth potential. Our direct sales motion and the strong ecosystem partnerships are driving more AI-native customers with large-scale inferencing requirements than we have ever seen in the past. Our growing success with these marquee customers is evident in the increased RPO that I mentioned earlier in my comments, and we anticipate this trend to continue as we scale out our AI capabilities. In closing, I am pleased both by the results of the second quarter and by the progress we are making on the strategy that we articulated at our investor day back in April. We maintained our top-line growth momentum from Q1 to Q2 while maintaining healthy profitability metrics, enabling us to raise our guidance across both revenue and profitability metrics for the fiscal year 2025. We delivered continued product innovation, and we both drove improved performance in our industry-leading product-led growth engine and continued to get traction with our direct sales go-to-market motion, especially for AI. We recently launched the Gradient AI Platform into full general availability, a significant step in offering our customers a twin stack of cloud capabilities, as outlined in slide 8 of the earnings slide deck. On one stack, we provide a mature, complete general purpose cloud, and on the other, a modern agentic AI cloud.
These integrated stacks enable AI native customers to run inferencing at scale while taking advantage of the core cloud modules, and digital native customers to build AI directly into their software applications without having to do the heavy lifting of dealing with AI infrastructure. With these unique twin cloud and AI stacks, we are getting increasing momentum with AI native companies with larger scale inferencing workloads, and we are expanding our partnerships with key ecosystem players in the AI domain. We are also making good progress on our balance sheet and refinancing priorities, positioning us for a strong 2026. Thank you, and I'll now turn it over to Matt.
Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy discussed, we are very pleased with our Q2 2025 performance, and we are confident in our ability to sustain and build on this momentum in the latter half of the year. In my comments, I'll walk through our Q2 results in detail, provide an update on our balance sheet and capital allocation strategy, and share our third quarter and full year 2025 financial outlook. Starting with the top line, revenue in the second quarter was $219 million, up 14% year over year. Our annual run-rate revenue, or ARR, was $875 million, which was $32 million above Q1. This incremental ARR of $32 million was the highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR achieved in over three years. We continue to build and strengthen our relationships with our higher spend customers and key strategic partners. This is evidenced by the material increase in our remaining performance obligation balance as we continue to secure large multi-year deals with our digital native enterprise customers, which is an early but promising new go-to-market motion for the company. Our product innovation and go-to-market enhancements are resonating with this target customer base. In Q2, revenue from our Scalers Plus customers, or customers whose annualized run-rate revenue in the quarter was greater than $100,000 and who represent 24% of overall revenue, grew 35% year over year, with a 23% increase in customer count. This is clear evidence of the increasing traction that we are getting with our largest customers as they expand their use of our core cloud products and adopt our new AI offering. Q2 revenue growth was primarily driven by improvements in customer acquisition across both core cloud and AI, as well as strong customer adoption of our AI/ML products.
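For readers following along, the run-rate arithmetic behind these figures can be sketched in a few lines. The ARR inputs come from the call itself; the net dollar retention cohort at the end is purely illustrative, not DigitalOcean data.

```python
# Back-of-envelope check of the metrics quoted in the prepared remarks.

def arr_from_quarter(quarterly_revenue_m: float) -> float:
    """Annual run-rate revenue: the quarter's revenue annualized (x4)."""
    return quarterly_revenue_m * 4

q2_arr = arr_from_quarter(219)   # 876, consistent with the ~$875M ARR cited
incremental_arr = 32             # Q2 ARR minus Q1 ARR, as stated on the call
q1_arr = q2_arr - incremental_arr

def net_dollar_retention(cohort_rev_now: float, cohort_rev_year_ago: float) -> float:
    """NDR: revenue today from customers at least 13 months old,
    divided by that same cohort's revenue a year ago, as a percentage."""
    return 100 * cohort_rev_now / cohort_rev_year_ago

# Illustrative only: a cohort spending $99 today vs. $100 a year ago -> 99% NDR
example_ndr = net_dollar_retention(99, 100)
print(q2_arr, q1_arr, example_ndr)
```

The small rounding gap between the annualized 876 and the 875 quoted on the call reflects the reported revenue being rounded to the nearest million.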
As Paddy mentioned, revenue from core cloud customers in their first 12 months significantly outpaced growth in prior years, which is a great leading indicator of future growth, as these stronger recent cohorts not only drive up revenue from customer acquisition, but should also contribute positively to net dollar retention when they reach their 13th month and become part of our NDR cohort. Our Q2 net dollar retention was 99%, up from 97% in the same quarter last year and within the expected range that we communicated on the prior quarter's call. We also delivered strong AI/ML revenue growth in Q2 as we continue to see a robust demand environment, particularly for inference workloads, with AI revenue growing north of 100% year over year. Turning to the P&L, we delivered strong performance on all of our key profitability metrics. Gross margin for the second quarter was 60%, which was 100 basis points higher than the prior year. Adjusted EBITDA was $89 million, an increase of 10% year over year. Adjusted EBITDA margin was 41% in the second quarter, approximately 100 basis points lower than the prior year. Non-GAAP diluted net income per share was 59 cents, a 23% increase year over year. This increase is a direct result of expanding per share profitability by driving durable revenue growth while exercising ongoing cost discipline. GAAP diluted net income per share was 39 cents, a 95% increase year over year, as we continue to grow revenue, drive operating leverage, and prudently manage stock-based compensation. Q2 adjusted free cash flow was $57 million, or 26% of revenue, up significantly from our front-loaded Q1, which included a large portion of the upfront investment required to bring the Atlanta data center online. As I'll detail later in my comments, we remain confident in our ability to deliver attractive adjusted free cash flow margins for the full year,
although the timing of capital investment payments will continue to create quarter-to-quarter variations in adjusted free cash flow margins, hence our highlighting of the trailing 12-month adjusted free cash flow margins on slide 15. Our balance sheet continues to be strong, as we continue to maintain material cash and cash equivalents and ended the quarter with $388 million in cash. We also continued to execute our share repurchase program in the quarter, with $20 million of repurchases in Q2, buying back approximately 691,000 shares. This brings our cumulative share repurchases since IPO to $1.6 billion and 34.8 million shares through June 30, 2025. At the end of Q2, we had 3.4 million remaining on our current share repurchase authorization. On the debt front, we continue to actively evaluate the market and our financing alternatives and remain committed to fully addressing the 2026 convert over the balance of this calendar year. We have multiple attractive financing options available to us, including convertible debt, bank debt, and bonds, and we plan to tap into these markets as needed to optimize our long-term cost of capital. Before we move on to guidance, I'll highlight one non-cash item related to both the balance sheet and the P&L. We continue to evaluate the necessity of our valuation allowance on certain existing deferred tax assets each quarter in accordance with U.S. GAAP. While the valuation allowance is still necessary for Q2, in the latter half of fiscal 2025, we may release all or a portion of our valuation allowance of $109 million, which was discussed in our most recent 10-K as well as in our most recent 10-Q. When released, we estimate this would have the financial impact of decreasing our non-cash tax expense by the amount of the release, resulting in a corresponding increase in net income. When this occurs, it will be a positive non-cash event and will have no impact on non-GAAP financial metrics.
Moving on to guidance, for the third quarter of 2025, we expect revenue to be in the range of $226 to $227 million, representing approximately 14.1% year-over-year growth at the midpoint. For the full year 2025, we are raising our annual revenue guidance to the range of $888 to $892 million, representing approximately 14% year-over-year growth at the midpoint. Given our strong Q2 performance, visibility into our customers' usage, and the strength of the AI/ML demand environment, we are able to raise our full year guide with confidence. For the third quarter of 2025, we expect our adjusted EBITDA margins to be in the range of 39% to 40%. For the full year, we are raising our adjusted EBITDA margin guide to the range of 39% to 40%. For the third quarter of 2025, we expect non-GAAP diluted earnings per share to be 45 to 50 cents, based on approximately 102 to 103 million in weighted average fully diluted shares outstanding. For the full year 2025, we expect non-GAAP diluted earnings per share to be $2.05 to $2.10, based on approximately 103 to 104 million in weighted average fully diluted shares outstanding. Turning to adjusted free cash flow, we are raising our guided adjusted free cash flow margins for the full year to 17% to 19%. Increasing our projected cash flow margins at the same time as we are accelerating our revenue growth outlook speaks to the confidence we have in our ability to maintain attractive free cash flow margins while we accelerate our top-line growth. Consistent with our historical guidance practice, we are not providing adjusted free cash flow guidance on a quarter-by-quarter basis, given it is heavily influenced by working capital timing, as you saw in our year-to-date results. That concludes our prepared remarks, and we'll now open the call to Q&A.
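As a quick sanity check on the guidance arithmetic in these remarks: all inputs below are taken from the call, while the implied prior-year quarter is a derived estimate, not a figure stated on the call.

```python
# Midpoint math for the Q3 and full-year 2025 revenue guidance ranges.

def midpoint(low: float, high: float) -> float:
    return (low + high) / 2

q3_mid = midpoint(226, 227)    # $226.5M guided Q3 revenue at the midpoint
fy_mid = midpoint(888, 892)    # $890M guided full-year revenue at the midpoint

# ~14.1% year-over-year growth at the Q3 midpoint implies a prior-year
# quarter of roughly 226.5 / 1.141, i.e. about $198.5M (derived, not stated).
implied_q3_2024 = round(q3_mid / 1.141, 1)
print(q3_mid, fy_mid, implied_q3_2024)
```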
Thank you. We will now begin the question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad to raise your hand and join the queue. And if you'd like to withdraw your question, simply press star 1 again. We also ask that you limit yourself to one question and one follow-up. Your first question comes from Patrick Walravens with Citizens. Please go ahead.
Oh, great. Thank you very much, and congratulations. Paddy, could you talk a little bit more about the AI/ML revenue and the over 100% increase there, and maybe walk us through a little bit the history of this offering and why the current version is really starting to kick in?
Yeah. Thank you, Patrick. Good morning. Good way to get started. So the AI/ML revenue, as I mentioned in the call, grew more than 100% year over year. If you remember, last Q2 is when we brought a lot of H100 NVIDIA gear online, so more than doubling that this quarter was a significant step for us. And what is different is, as I explained, we have a three-layer AI stack. At the foundational level is our Gradient AI Infrastructure stack, which is a network of GPUs from both AMD and NVIDIA. In the middle layer is our Gradient AI Platform, which we just took from private and public preview all the way to general availability. And at the topmost layer is agents. The types of customers that use these three layers are slightly different at this point. The AI infrastructure is typically consumed by AI native companies that have their own model, or have taken an open source model and are doing some tweaks to it, and are hosting those models and scaling them, especially in inferencing mode. A majority of our revenue comes from the Gradient AI Infrastructure stack, and that's not very dissimilar from the rest of the industry. The Gradient AI Platform that we recently pushed out to GA is where any software application, like a SaaS provider, for example, can start consuming AI into their own applications without having to do the heavy lifting of building and managing their own GPU infrastructure. We have serverless endpoints for these LLMs, for example, and we have a bunch of other tools and modules that are critical building blocks for consuming AI into your own application. It becomes very easy to build AI into your existing application. So the growth of our AI revenue is powered predominantly by the infrastructure side, but we are driving a lot of adoption and mindshare with developers with the AI platform.
And on the agentic layer, the first commercial application of that is the Cloudways Copilot, which is typically adopted by end customers as a way to automate some of the manual tasks that they're seeing in managing and operating Cloudways applications.
That's very helpful. Thank you.
Your next question comes from the line of Mike Sikos with Needham & Company. Please go ahead.
Hey, guys. Thanks for taking the questions here. Just to further the conversation on the AI/ML, good to see the north of 100% revenue growth reflecting some of the more recent trends you guys have seen on the ARR front. We just wanted to see, I know historically you guys have given us more color on the underlying components for that net new ARR. I think last quarter you guys had cited north of 160% year-on-year. Maybe I missed the data point, but just wanted to see how that net new is growing on the AI/ML front in the June quarter.
Mike, I think what we said is that our ARR was growing. The AI/ML ARR was growing north of 160% in prior quarters. That wasn't referring to the incremental ARR; it was the actual ARR. And the north of 100 still reflects very strong growth. In fact, if you look at the incremental ARR for this quarter at $32 million, it was a good balance across both AI and core cloud. But it was our highest incremental ARR in the company's history. And the reason that it dropped, if this is where you were going with the question, from 160 to north of 100 is, just as Paddy had said, we lapped the Q2 when we launched all of our AI capabilities and we had a bunch of pent-up demand. Q2 growth in the AI business in particular from last year was high, so it was just a difficult comp. But if you look at the incremental ARR that we're adding in that business on a go-forward basis, we're accelerating. It's an accelerating business.
Got it. And for the NDR, I know that the 99% here is in keeping with that commentary you guys provided last quarter. Can you just explain what actually acted against that? Because I would have thought there would have been at least some benefit from you guys lapping that Cloudways price increase in April.
I think that when we look at the NDR, and this is the reason that we signaled it'll likely bounce around the current range this quarter and probably for the next couple of quarters, is that we haven't really seen any change in the market since the April timeframe. But as we look at some of our larger customers and the long tail, I'd say there's a mixed impact on customers. It's very individual. Some customers we see are maybe on edge, and they're optimizing or they're a little bit hesitant to expand their business. But in the same industry, or in the same size of customer, we also see a number of customers that are accelerating. They're doing really well, and they're expanding their business with us and growing their workloads. And you see that in the growth of the Scalers Plus customers at 35%. So we're seeing really strong growth in parts of our customer base, but we're also seeing others that are being cautious and aren't scaling as fast. And so we think that we're likely to stay at this level. I'd say the good news is, despite the fact that the NDR was just a hair lower at 99%, we were able to raise our guidance. We're delivering the best incremental ARR that we've delivered in a very long time, and so we're very encouraged by the trends. I think that NDR is such a laggy metric that it's going to be a little stubborn to improve, but that's not going to slow us down from a revenue growth standpoint. We're doing enough with new customer acquisition on the core cloud, which is doing really, really well; we're getting really good cohorts, and they're coming in. We've got the migration motion, which is a relatively new motion and doesn't always impact NDR. And then we've got the growth and acceleration in the AI business.
So we're very bullish on the growth prospects, and that was what enabled us to raise the guidance for the year. Great.
Thank you, guys.
Your next question comes from the line of Gabriela Borges with Goldman Sachs. Please go ahead.
Hey, good morning. Thank you. I wanted to touch on the unit economics of the AI business. Matt, I know in the past you've talked about the three payback periods, but you've also been very consistent in saying that as you move from bare-metal GPUs to more differentiated services — exactly as you've illustrated in the graphic in the slides — you should be able to command more gross margin, essentially. So maybe give us an update on how those efforts are tracking. How do you feel about the gross margin and the LTV to CAC of the AI business relative to the core business?
Yeah, we are very encouraged and comfortable with the margins that we're getting in the AI business. As you said, Gabriela, the higher layers of the stack — the three-layer stack that Paddy describes — have better margins than pure infrastructure. But even at the pure infrastructure level, we're very comfortable with the returns, particularly given the long-term value that we believe we'll generate from those customers. You talked about the LTV. As Paddy has said multiple times, inferencing customers, which is what we're seeing more and more of, even at the infrastructure layer, will pull other cloud services through. They need databases, they need storage, they need bandwidth, they need standard CPU compute. So we're still investing ahead: if it's a bunch of infrastructure, the margins on that are lower than the margins at the higher layers of the stack. But you need that baseline infrastructure capability to get to the higher-layer services. And so we think it's a very good investment and a very good use of our capital. We're very encouraged by the returns that we're getting and the promise of higher returns as that business matures, we get more pull-through revenue, and more of the revenue shifts to the higher layers of the AI stack.
And just to add to what Matt just said, Gabriela — this is Paddy. That's why we're also forward-investing in making our Gradient AI agentic cloud very, very optimized for inferencing. I talked about our inference-optimized Droplet. If you look at the right side of slide eight, you will also see that we are investing in model optimization, and we are investing in optimization at the infrastructure level. Everything is aimed at scaling inferencing workloads on our platform, which tend to have very long tails. And as Matt mentioned, they also drag through some of the other cloud primitives — they drag the left side along with them as the inferencing workloads scale globally. So we feel very good about where we are, and about some of the early success we are seeing with very marquee customers that are starting to scale up their inferencing footprint on us.
Yeah, that makes sense. Thank you. And Paddy and Matt, the follow-up I have here is on these comments on highest incremental ARR — highest organic ARR in over three years in terms of the net new that you're adding. Can we think of this as the new high-water mark? I'm looking at what's implied in guidance. Talk to us about your ability to consistently deliver growth off that metric, and whether there's any unevenness — whether because of seasonality or company-specific factors, like the timing of new AI capacity coming online — that we should be aware of as we think about the forward model.
I can start, Matt, and you can fill in. So we did not have anything unnatural this last quarter. We didn't bring a bunch of capacity online, and there was no seasonality associated with it. As we mentioned in our prepared remarks, we are honing our product-led growth motion for our core cloud customers, and that is starting to really produce results on one hand. Our migration motion is bringing in a new type of customer — typically digital-native enterprise customers — and we are starting to grow them. And on the AI side, we're just starting to see some scaled-up inferencing customers. So it's a combination of all of those. It's not one big contract or one spike in GPU capacity or anything like that. It's a very secular and durable type of momentum that we are seeing on the new customer acquisition side. Matt?
I agree with all that, Paddy. I think that, again, the reminder on ARR is that it's not based on a booking. It's not based on a sale. It's based on actual customer revenue and customer utilization. And so we hope that it's a steady predictor of the exit trajectory that we're on and a good indicator going forward. It's certainly a critical metric for us, and as Paddy said, we're encouraged by our ability to increase it. Certainly it'll, like any metric, vary quarter to quarter — I don't know that it'll always be up and to the right — but we have enough motions going that we're very confident in our ability to improve that metric.
Really nice progress. Thank you for the detail.
Your next question comes from the line of Raimo Lenschow with Barclays. Please go ahead.
Perfect. Thank you. Staying on that AI motion and inferencing — Paddy, you talked about how you try to differentiate, et cetera. Where is the industry at the moment in terms of capacity constraints? Is that still a factor that's helping you, or is it really now all about differentiation? Thank you, and then I have one follow-up on that.
Thank you, Raimo. Capacity constraints are a way of life in AI, as we are scaling like everyone else. So we are trying to stay a little bit ahead of it, but there are just so many factors there in terms of the real estate footprint and the power and the cooling and the actual gear. But I think for us, it all boils down to why some of these marquee AI-native customers are starting to choose us over the other alternatives that they have. And it is really the twin-stack cloud that we have laid out in slide eight. I don't think there are too many cloud providers that can claim to have both sides of that equation. And we certainly feel like we are driving home that point: not only are we offering world-class AI infrastructure, but increasingly those same customers are also starting to leverage some of the guardrails and the agent evaluation framework and the agent observability and things like that — going up the stack on the right side, on the agentic cloud. But also, as Matt mentioned, they have very sophisticated storage, data processing, and CPU compute requirements as well, because at the end of the day these are very sophisticated applications that require the might of a full-stack, general-purpose cloud. So I think that is the differentiator we are leaning on, and we feel really confident. I've been talking about this for about four quarters, and finally we have the twin stacks that we have described on slide eight of the earnings deck. We feel really good — we're just getting started. And some of the RPO and the large contracts that we have been talking about have not even hit their full stride as we are scaling those customers. So we feel really good about the forward momentum that we are building.
And that kind of leads into my next question, for Matt. If I think about the second half — I already got a good few questions from people saying, well, actually, you're raising the full year by more than you actually beat in Q1 and Q2. So there's obviously a lot of confidence in the second half. Should we think about it as more RPO giving you more visibility, which drives some of that guidance? Because we know you're normally a conservative person. Thank you.
Thanks very much. I wish it was all the RPO that was giving us full confidence, but if you look at the RPO, while we're really encouraged by the increase, it's still a very, very small portion of our business. So that's certainly encouraging. But I'd say, when we look at the performance that we had in the first half, we look at the visibility that we have into customer usage patterns, we look at the migrations that we're seeing and that motion coming on, and we look at the traction we're getting with AI through some of the direct sales and partnerships and some of the conversations Paddy articulated we're having with large AI-native companies — we just have enough irons in the fire that we're confident increasing the revenue guide. And what to me is most encouraging — because you do know I am a relatively conservative guy — is that we're able to increase our free cash flow margin at the same time. To me, that demonstrates that we can accelerate revenue while maintaining attractive free cash flow margins, and that's incredibly encouraging as we think about what's in front of us in the second half and how that sets us up for 2026.
Okay, perfect. Thank you. Congrats.
Your next question comes from the line of Jason Ader with William Blair. Please go ahead.
Yeah, thank you. Good morning, guys. I just wanted to see if you could give us a little bit of a breakdown of the business right now, when we think about the AI side versus the non-AI side. I know you've given the growth rates. Can you tell us, just sort of ballpark — is it in the neighborhood of like 5% to 10% of revenue now from AI? I don't know if there's any specificity you can give on that, but that would be really helpful.
Jason, we don't break this out, and part of it is because we believe that a lot of the AI capabilities are going to be pulling through other capabilities, so the impact of the growth is beyond what's representative if you just wrote down the SKUs that we consider AI. But you're in the ballpark. I'd say it's increasingly becoming a material chunk of the business. It's still small, because it's a business we just launched a year ago and we're accelerating, but that's a reasonable ballpark for percentage of revenue, and we expect that to increase. It will become an increasingly meaningful portion of our business in 2026, but it will still be a small portion. The core cloud is still a very healthy and growing portion of our business, and the AI business is a great complement to that — it is accelerating our growth and also opening up entirely different channels and new customers to bring in that will drive that core cloud growth up as well.
Okay, great. And then just as a quick follow-up, is it fair to assume that the core cloud business grew at a similar rate in Q2 versus Q1, in that kind of low double digits? Is that accurate?
Yeah, we still see momentum in the core cloud business. And while the NDR was a little bit lower in Q2 than it was in Q1, the revenue that we're getting from new customers is ahead of our plan and our expectations — we're doing a really good job there. And again, you've got to remember NDR is a wonky, laggy metric, because the change in revenue from a year ago has as much impact as the change in revenue this year. So the core cloud business continues to accelerate. It's in that low-double-digit growth rate and is improving.
So most of the upside then was from new customers, it sounds like.
Yeah, correct. With NDR coming down a little bit, the new customer acquisition plus the growth in AI offset the slight headwind from the NDR. But again, if you look at the incremental ARR — if you look at it from an exit run-rate standpoint — there was a very good balance between the core business and AI. So you saw AI at its highest point, but there was still very good core cloud growth on an incremental ARR basis as well.
Okay, awesome. Thanks, guys.
Your next question comes from the line of Josh Baer with Morgan Stanley. Please go ahead.
Great. Thanks for the question. Just wanted to confirm that in the net dollar retention rate, AI and ML revenue is not in that metric. Is that right?
Yes, that's right, Josh. That is still the case and will likely be the case for a while. As we've talked about previously, it will eventually contribute to the NDR, and we still believe it will. It will likely be from inferencing workloads, where they're steady production workloads — not projects where someone comes in, tests something for a month, and then scales it back. And if you think about the time lag of someone being in NDR: even in our core cloud, a customer doesn't count until their 13th month. So if you're turning up inferencing workloads now with marquee customers, it'll be a year before they would even hit NDR. So we'll incorporate at least the inferencing portion of AI at some point, but it's certainly not going to be in the next couple of quarters. NDR continues to not include AI/ML.
Okay, got it. Yeah, I would think so, especially now as it's scaling — and also you have more than 12 months; you talked about 100% growth off of Q2 last year, where there was AI revenue, and it's all organic. So it's kind of a missing piece to that NDR percentage, just around that expansion from existing customers. I did want to ask you about the large deals — how we should be expecting the potential for large deals in the future. And then also for you, Matt, how you're thinking about it from a guidance perspective, assuming that would be a little bit lumpier or have longer sales cycles, or it's just a new motion for you guys. How do you incorporate the potential for large deals in guidance? Thank you.
Do you want to start and just talk about the nature of large deals? And I can answer Josh's question about the guidance.
Yeah, so the nature of large deals is a very new muscle for us — sales, business development, forecasting, all of the above. What we are driven by is: can we make these customers successful, and do we have enough of a technology edge to attract, retain, and get these customers to scale? The number one thing that I'm focused on, and that Bratin and Larry are focused on, is making sure that we can articulate our technology differentiation in a durable fashion and have the right engineering expertise on the ground to make these customers successful. So I feel fairly encouraged by the couple of early successes that we have had, and we see enough in the pipeline to be quite encouraged about these kinds of deals. Now, with inferencing, it just takes time to go from winning a customer deal to actually scaling that up with real-world traffic. We are in the process of doing that with some of our customers, and, extrapolating into the future, we'll see how we can do a more predictable job of forecasting how these things fall. But I expect this to be lumpy and spiky in the beginning before it starts normalizing, because our customers are also new to this, and they get sudden spikes based on new updates to their models or their software. Some of them are in the consumer AI space; some are in the B2B AI space. So we're learning along with them, and they're learning with us, in terms of their business model and how it is scaling out. I'll let Matt answer how we will start reflecting these things in our financials.
With that context, Josh, as you would expect based on our track record and our history, we'll be conservative in forecasting those. The good news is, as Paddy said, we book revenue when we get that revenue — it's not like we're signing massive deals that just turn on right away. So we have visibility into the ramps and how those customers are going. But given it's such a new motion, and given the newness of it for both us and the customer, as Paddy described, we'll be conservative in terms of including any projected revenue from large deals until we're very comfortable that things are on the right track, we're growing, and we have good visibility into that growth. So I would expect that you would continue to see us be conservative as it relates to any large deals reflected in our forecast. Great. Thank you.
Your next question comes from the line of James Fish with Piper Sandler. Please go ahead.
Hey, guys. You keep using the word conservative here, but on the guide side, we haven't seen this level of second-half step-up in some time — really going back to the pandemic — and you guys deserve credit here, doing $32 million of net new organic ARR. But can you just walk us through the linearity you are seeing, what you're expecting from some of the newer solutions in the second half to raise the guide by this much, and any of the other moving parts that help you bridge this larger-than-normal step-up? Because if I look at this and say you book similar or just slightly better net new ARR, in that sort of $30 to $35 million range over the next few quarters, it really doesn't leave much wiggle room, based on how you guys are defining ARR versus revenue now.
I think, Jim, it's a good question. Recall that last quarter we didn't raise guidance. We beat Q1, but we didn't raise the guide, and we did that intentionally because the market had changed pretty dramatically and we just didn't know what was going to happen from a macro standpoint. We've now got a full quarter under our belt on that front. We feel good about the visibility we have with the core customers. We've got a bit of the beat from the first quarter and the beat from the second quarter to pass through. But as I said, we have enough levers at the moment that we're confident in. We've got the revenue from new customers — the month-one-to-month-twelve revenue — that's doing very well, and that's relatively stable and predictable. We're seeing increased volume, we're seeing increased conversion, and we're seeing better customers in that cohort, and that's a fairly durable improvement that we've made, so we're really confident in that. We've got the migration motion that we've turned up, which Paddy talked about — 70-something migrations during the quarter. That's a very new motion for us, but we clearly have a pipeline of those, because those aren't things where somebody comes in one day and you just turn on a migration; you have to be talking to the customer for a period of time, so we're managing a pipeline around that. We also have very good visibility into our AI pipeline and are getting increasing traction there. So we've got enough things going that give us confidence we can deliver on that. And as I said in the prior answer, we haven't fully reflected the large-deal potential in the guide, and that certainly gives us upside potential beyond what we're talking about. So we feel confident in the base — confident enough to raise the guide — and there are still other things we can be doing and progress we can be making over the balance of this year to give us further room.
And then, Paddy, maybe for you — can you talk about what you're seeing on the GPU pricing dynamic? Does it seem like pricing came down a little bit across the space? And how are you thinking about the ability to repurpose any GPUs that migrate from customer to customer, or what are you seeing in terms of utilization at this point across the GPU side? Thanks, guys.
Okay. Thank you, Jim. The utilization is very robust. We are running very lean on our GPU fleets, regardless of the generation of GPUs we are talking about. As we become more and more heavy on the inferencing side, it gives us a lot of degrees of freedom in terms of how we allocate the machines. And typically what we're seeing with our inferencing customers is, yes, they do care about the generation of GPUs, but they care more about the price performance rather than just the raw throughput of any given generation of technology. So let's say you have 100 units of GPU on the current generation; if we can deliver the same price performance with 90 units of GPU on the next generation, the customer really doesn't care, as long as it's in the same family of GPUs and they don't have to re-engineer anything. So we are getting to a point where it's more about the price performance, rather than the price alone or the performance alone. That gives us a lot of degrees of freedom in terms of how we allocate which family of GPUs across our inference-workload customers. And I think this is going to get even more important as we start scaling up many of our customers across geographies and start doing this in multiple data centers. There are a lot of new things to be figured out there, but the pricing dynamics in training workloads are quite a bit different from the ones we are experiencing in a stack that is predominantly driving inferencing.
We have time for one more question, and that question comes from the line of Brad Reback with Stifel. Please go ahead.
Great. Thanks very much. Matt, as we think about gross margin for the back half of the year, as the revenue mix maybe shifts a little bit and you continue to invest in the CapEx, how should we think about the trajectory? And then heading into next year as you lap the change in useful life, what type of impact should we expect then? Thanks.
Thanks, Brad. We expect gross margins to be relatively consistent at the current levels over the balance of this year. Again, as you said, the AI business is growing fast, but it's still a small part of the business, so it's not going to have a material impact on gross margins. If you roll that out to next year — clearly we're not at a point where we're ready to give guidance — we would expect it to be a modest headwind to gross margins, but the vast majority of our business is still going to be at the same high margins that we have. And we continue to drive efficiencies in the core business: bandwidth optimization, the longer-term data center optimization strategy that we have. So we're confident that we can maintain healthy gross margins in the realm that we have right now. If AI becomes a much, much bigger portion of our business, you'll clearly have visibility into that as we do, and at that point you would see a little bit of margin pressure. But at this point, we expect the gross margin to stay right around where it is over the balance of the year. That's great. Thanks very much.
Your next question comes from the line of Mark Zhang with Citi. Please go ahead.
Hey, good morning, guys. I just want to dig a little bit more into the RPO performance — very nice to see. Can you give us a sense of the characteristics here? What are the average deal sizes and contract durations? And I just wanted to confirm whether AI was the leading contributor here, or whether you saw a good contribution from core cloud as well. Thanks.
Go ahead, Matt.
I would say, starting in reverse: the increase in RPO was from both core cloud and AI, so it wasn't just AI. Clearly, there are some AI deals in there. I think the average duration — and I might be quoting Q1, so I apologize if it's slightly off — is something like 19 months. So you can get the average length of the deals: call it two years on the outside, sometimes one year — somewhere between one and two years is typical for us, because this is a relatively new motion for us. And it's great that we're getting customers that are used to, and value, the ability to do straight consumption with us making commitments for a minimum level of revenue over some period of time. That's very encouraging, and it speaks to the product innovation and the improvements we've made in the core cloud, and to customers' confidence in our ability to continue to meet their needs. I don't know, Paddy, if you wanted to add something to that.
No, I think you nailed it, Matt. Yeah, it is definitely a combination of both our core cloud as well as AI. So this is not just reflective of one giant deal or anything like that.
Got it. Thank you. And then just maybe a quick follow-up on capital allocation. It seems like you guys have been stepping up on share repurchases since, I guess, the end of last year. But now, with the authorization dwindling down to about $3 million, what's the process around capital allocation going forward? Thanks.
Yeah, on capital allocation — we actually reduced the amount of repurchases that we've been doing over the last two years. We did almost $500 million in 2023; across 2024 and into 2025, it was only $140 million. Our primary objective at the moment — and we articulated this at Investor Day — is organic growth and investing to drive organic growth. But secondly, and as important, we're committed to making sure that we've taken care of the balance sheet and addressed the outstanding convert, and we've said that we're going to do that by the end of this year. We started that process with our $800 million bank facility, $500 million of that as a term loan. So we're dialing back the share repurchases just so we can make sure we take care of those first two objectives. And as soon as we take care of them — the first one will be ongoing, the second being taking care of the outstanding convert — then we'll go back to, I'd say, a reasonable level of share repurchases targeted at offsetting dilution. So priority one is organic growth, priority two is take care of the convert, and priority three is use the repurchases to offset dilution. And right now, priorities one and two are the bigger focus for the next quarter.
Your next question comes from the line of Thomas Blakey with Cantor. Please go ahead.
Hey, guys. Congratulations on the results, and thanks for squeezing me in here. I had a point of clarification first. I think it was Jason Ader's question earlier — Matt, did you say that the core cloud accelerated in 2Q? And then, from a question perspective, I know the core AI is organic now, growing over 100%. What kind of derivative impact did it have on NDR, if any? Paddy or Matt — you would think there'd be some kind of flow-through from these customers buying more services on the platform, and I would just be curious what kind of impact that had on that metric. Thank you.
So, on the second part of your question: a lot of the AI customers that are coming to us are new customers, particularly on the infrastructure side of AI. So they're not yet buying a tremendous amount of products on the core cloud side, and even if they did, they haven't been in the cohort long enough to count towards NDR. So there's basically not much impact from that — that's a future benefit, which I think you're appropriately pointing out. And I'm sorry, could you repeat the first part of your question?
Yeah, I think you said earlier on the call, in response to a question, that core cloud — excluding AI/ML — accelerated. I just wanted to make sure I heard that correctly.
Yeah, the year-over-year growth rates in the core cloud continue to improve. Again, when you look at a metric like NDR, it's a function of the change in revenue last year compared to the change in revenue this year, so it's got a lot of laggy components to it. On the core cloud, in terms of the incremental ARR and the overall ARR growth, the core business continues to accelerate.
Thanks a lot.
Your next question comes from the line of Wamsi Mohan with Bank of America. Please go ahead.
Yes, thanks for taking my question here. I guess, firstly, on your AI customers: are you seeing higher volatility or churn in that customer base? And just to clarify the penetration of these customers — how would you categorize them between, maybe, learners, builders, and scalers, in your traditional way of thinking about customers? Where are these customers in their journey, and any thoughts around graduation rates on them?
Yeah, great question, Wamsi. It's good to hear from you. It's a completely different customer acquisition motion, so we don't think of them as testers, learners, builders, scalers, because they typically don't go through that journey on our platform. A lot of these customers are in the initial stages — there are a lot of very early-stage startups. But as we see a lot of traction on the inferencing side, these customers, in their own evolution or progression, have crossed some of the chasms in terms of both funding and finding product-market fit and customer traction. They're coming to us with inferencing needs that are scaling, which by definition means that they have found product-market fit and now have a captive audience that is willing to pay for their inferencing needs. There was a lot of the test-and-leave phenomenon on the fine-tuning and training side last year, but now, as we have flipped more and more towards the inferencing side, these customers come, they stay, they expand, and they start leveraging different parts of our stack described in my diagram. So it's a very different life cycle that we're seeing on this side.
Okay, great. Thanks, Paddy. And if I could follow up quickly with Matt on the growth CapEx side — any incremental thoughts here? I know you said organic investments and driving organic growth are the highest priority. So, relative to the comments you made last quarter, how should we be thinking about the growth CapEx profile over the next few quarters or into next year? Thank you so much.
Thanks, Wamsi. A couple of things. One, I would point, again, to the fact that we've increased the free cash flow margin guidance, and we feel good about that relative to the growth rates that we're articulating. And what we said last quarter, and will say again, is that if we see the opportunity to accelerate growth beyond what we communicated at Investor Day — 18% to 20% by 2027 — we'd certainly do that, and we have a lot of tools in our toolkit to do that in a capital-efficient and cash-flow-efficient way. So we remain very confident that we can grow revenue while maintaining attractive free cash flow margins.
Thanks.
And ladies and gentlemen, that does conclude our question and answer session, and it does conclude today's conference call. Thank you for your participation, and you may now disconnect.