2/25/2025

speaker
Conference Call Moderator
Moderator

Thank you. I would now like to turn the conference over to Melanie Strait, Head of Investor Relations. Melanie, you may begin.

speaker
Melanie Strait
Head of Investor Relations

Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's fourth quarter and full year 2024 financial results. Joining me on the call today are Patty Trini-Vaughan, our Chief Executive Officer, and Matt Steinport, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC, including our most recent annual report on Form 10-K filed today, as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available in the IR section of our website. And with that, I will turn the call over to Patty.

speaker
Patty Trini-Vaughan
Chief Executive Officer

Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our fourth quarter and full year 2024 results. We concluded the year with strong momentum and continued to successfully execute on the initiatives we laid out at the beginning of the year. Our accomplishments included building out our executive and senior leadership teams, significantly improving the pace of product innovation, augmenting our product-led sales motion with new strategic go-to-market enhancements, and continuing to accelerate the early success of our AI/ML platform, all of which together positioned us with momentum heading into 2025. In my comments today, I will briefly recap our fourth quarter and full year results, reiterate our strategy and priorities, and share several product innovations and customer use cases across both core cloud and AI/ML that demonstrate the progress we're making against our priorities. First, I will briefly summarize our fourth quarter and full year 2024 financial results. Revenue growth accelerated in the fourth quarter to 13% year over year, reaching $205 million, with one of our biggest growth levers, net dollar retention, improving to 99% from 96% in Q4 of the prior year. Our efforts to improve growth and NDR in 2024 are evident in our Q4 results, as NDR for our traditional cloud services reached 100% in Q4 for the first time since June of 2023, on the back of our rapid product roadmap execution and our investments in several strategic go-to-market motions. From these efforts, we saw increased expansion from our higher-spend customers as we continue to focus both our product and go-to-market efforts on these top customers. Our higher-spend customers, which have traditionally included our builder and scaler cohorts, now represent 88% of total revenue and grew 16% year-over-year in Q4.
We have now further disaggregated our scalers and are disclosing our highest-spend customer cohort, Scalers Plus, which are customers who were at a $100,000-plus annual run rate during the quarter. These Scalers Plus customers, who are critical to our growth trajectory, increased in count by 17% year-over-year and were 22% of total company revenue in Q4. We reached over 500 of these customers for the first time in the company's history, and more importantly, we saw a 37% year-over-year increase in revenue from Scalers Plus customers, which is clear evidence of both the wallet share opportunity we have with these customers and our demonstrated ability to scale with them. We also made material progress on our other major growth lever, our AI/ML platform, and closed the year with continued momentum, exceeding the three points of overall growth contribution from our AI/ML platform that we had guided for 2024, with Q4 just north of 160% ARR growth, while staying true to our AI strategy and pursuing durable AI revenue. We are very encouraged by the rapid growth and customer adoption of our newly launched AI products, and I'll talk about them later in my comments. On top of the increasing growth signals, profitability remains strong, as we delivered healthy 42% adjusted EBITDA margins both in Q4 and for the full year, maintaining our cost discipline while we continue to invest to fuel future growth. Looking forward, our 2025 guidance reflects this ongoing momentum, with full-year revenue growth in the low to mid-teens and high-teens free cash flow margins, above our preliminary 15% to 17% indication. We continue to prioritize and rebalance our investments, driving improved operational efficiencies while shifting resources towards our top growth initiatives. Our upcoming Atlanta data center is a good example of both these priorities,
as the upfront investment in that facility, which will come online in Q1, not only provides us with incremental capacity for both AI and our core cloud offerings, but also gives us a lower-cost facility and is part of our longer-term data center optimization strategy. Matt will walk you through more detail on our financial results and guidance later in the call. In my first year at DigitalOcean, we had several very clear priorities as we sought to accelerate growth. We needed to double down on product innovation to address key gaps in our core cloud platform, better address the needs of our larger customers, return net dollar retention to a tailwind rather than a headwind, and build the foundation for our longer-term AI growth strategy. We made material progress on each of these objectives and continue to deliver on our promise of making complex cloud and AI technologies simple. We also made substantial progress on making our platform even more scalable, enhancing our ability to meet the needs of larger customers. And finally, we doubled down on our heritage of being the most approachable public cloud provider by continuing to invest in support of open source AI models and even hosting our developer conference, Deploy, in January. Let me now give you some updates on our core cloud computing platform. In Q4, we continued to accelerate the pace of innovation as we released 49 new products and features throughout the quarter, which is more than four times what we released in Q4 of the previous year. Most of these products and feature enhancements directly address the needs of our largest-spend customers as we continue to remove blockers and implement the capabilities they need to further scale on our platform. I will highlight several of the product releases that we have made to help our customers grow their businesses using DigitalOcean.
Given that our larger customers run significant global workloads, they need the ability to securely connect different parts of their network so that their systems and applications in separate environments, in various data centers in different countries, can securely communicate without using the public internet, improving speed and efficiency while keeping their data secure. To address this need, in Q4 we announced Virtual Private Cloud Peering, or VPC Peering for short, which is now generally available to all our customers. VPC Peering enables customers to connect their separate private clouds and establish seamless communication between resources hosted in these clouds using private IP addresses, keeping their information safe by traversing the DigitalOcean backbone rather than the public internet. Our larger customers also need the ability to distribute traffic across resources while still keeping it within a secure private network. To support this, we introduced a new feature called Internal Load Balancer, which enhances security by ensuring that internal workloads remain isolated from the public internet, making it ideal for applications that require highly scalable private communications. We also have several large customers with volatile traffic patterns that need mechanisms to handle massive spikes in volume smoothly while still optimizing cost by scaling down when demand is lower. To address this, we announced the general availability of Droplet autoscale pools, which ensure that the right resources are available to handle application workloads during surges in traffic, scaling up automatically to meet demand while also helping minimize cost by scaling back down when the traffic surge is over.
We also introduced flexible management capabilities to our App Platform, which is our platform-as-a-service offering, for more granular lifecycle management, including archive and restore functionality and maintenance mode during the application's full lifecycle. Next, customers of Spaces, which is our fast-growing S3-compatible object storage service, have long asked for the ability to grant granular permissions to different users or teams without exposing full account-wide credentials. In response, we launched per-bucket access keys for Spaces. This highly sought-after feature provides customers with identity-based, bucket-level control over access permissions, helping enhance data security and ultimately simplifying management overhead. Complementing this accelerated pace of delivery of sophisticated product capabilities was one of our new go-to-market motions, in which we bolstered our engagement with our top 1,500 customers. By helping take these new innovations to our customers and tightly orchestrating a closed loop between the various DigitalOcean teams and our top customers, this motion increased awareness and adoption of our new product capabilities, facilitated migration of cloud workloads from other clouds to DO, and served as a catalyst for both our improved NDR and the faster growth rate of Scalers Plus customers. Our higher-spend customers have quickly started adopting many of the features I just talked about that we released over the back half of 2024. Over 50% of our top 100 customers have adopted at least one of the features we released in Q3, and we anticipate similar adoption levels for our Q4 features over time. Together, the breadth of these new features and the pace at which we are executing our product roadmap are enabling our higher-spend customers to grow on DigitalOcean and enabling us to win more of their workloads that today reside on other hyperscaler clouds.
As we discussed last quarter, we've been focused on helping our customers seamlessly migrate more workloads to us and scale efficiently on DO. One example is a customer called Digital Platform, a strategic software solutions development company that was experiencing high cost and latency issues with the cloud they were leveraging, which was impacting their application performance. Through our customer-facing teams, we were able to fully migrate their workloads to DigitalOcean, leveraging our optimized database infrastructure to improve their performance while providing them with substantial cost savings. Another example is Hudu, a provider of enhanced IT documentation with features and tools made for assisting managed service providers and IT departments. Hudu has been a DO customer since 2019, and they continue to scale and grow on our platform. Hudu was an early adopter of our Kubernetes, managed databases, SnapShooter, and premium support products. As a result of the ease of use of our platform, they've been able to focus on their own scalability and have grown as a business over 870% since 2021. Another example of our customers' ability to scale with DigitalOcean is Moments, a fitness and wellness platform that manages bookings, communications, and memberships. Moments needed a larger instance to house their database to meet the requirements of their rapidly growing customer base and continue leveraging our platform. The DigitalOcean team was able to provide architectural guidance by crafting a solution with existing DO products while delivering a new product they requested, a 48 vCPU storage-optimized Droplet with scalable storage. Let me now switch gears and give you a quick update on our AI initiatives. We remain committed to, and are executing well against, the AI strategy that we articulated last year.
In that context, I'm very encouraged by the emerging innovations in this space, like DeepSeek, that drive down the cost of AI adoption and improve the quality of open source models, which will ultimately enable more customers to use AI. We see innovations such as DeepSeek, and even reports of some hyperscalers potentially moderating their data center commitments, as reinforcing our conviction that while a lot of the action to date has been at the infrastructure layer, innovation and value creation will occur at the platform and application layers, where we are highly differentiated and well positioned to grow as we democratize AI for our customers. Our prudent approach to AI investment also allows us to ramp investment where we see customer demand. As a case in point, we are increasing the allocation of our GPU capacity for our GPU Droplets, where we quickly ran out of capacity after launching at the beginning of Q4. Each of these three layers, infrastructure, platform, and AI applications, has its own purpose and very distinct customer targets. And although most of the action is still at the infrastructure layer, we are now starting to see more narratives in the market around the higher layers of the stack, in platforms and agentic applications, which is a good validation of the AI strategy we laid out last year. We've been making excellent progress enhancing our AI infrastructure offerings as well as innovating at the GenAI platform and agent application layers as we build towards our goal of democratizing AI by enabling our customers to quickly experiment and build AI into their real-world applications. On the infrastructure side, we are seeing strong adoption of GPU Droplets, which we made generally available to all our customers in October. As a reminder, GPU Droplets allow DigitalOcean customers to leverage on-demand and fractional access to GPUs in a self-service way in just a few minutes, vastly simplifying a very complex series of steps.
Let me now give you a couple of real-world examples. Prodia, a company specializing in integrating generative AI into their own applications, leverages DigitalOcean's GPU infrastructure globally to efficiently manage their own products. Prodia accelerates generation speeds, offering an easy-to-use API for AI-powered image generation. Another example of an AI/ML infrastructure customer is Commodity Weather Group, a company that provides advanced weather model forecasts to their clients and runs AI-based weather models to enhance decision-making with additional insights. They also leverage DigitalOcean's AI infrastructure for its scalability and ease of use. These are just a few examples of how our customers are leveraging our infrastructure to develop and sustain data-intensive software to meet the needs of their own customers, all while leveraging the simplicity of DigitalOcean's AI/ML infrastructure. Moving up the stack, I'm very excited about our GenAI platform, which is now in public beta. The DigitalOcean GenAI platform is one of the simplest platforms to create, deploy, and integrate AI agents into real-world applications. Stepping back for a second, AI agents are software applications designed to autonomously perform multi-step tasks that involve reasoning and decision-making, leveraging AI and ML. Our new GenAI platform gives customers everything they need to build AI into their own applications without the need for advanced expertise in AI or machine learning. Customers can easily and quickly build AI agents leveraging DO's infrastructure by adding their data to pre-trained third-party GenAI models, and they can seamlessly integrate those agents into their own applications via secure endpoints or chatbot plug-ins. In the four weeks since we made the GenAI platform beta public, we have seen well over a thousand agents created on the platform, with the most encouraging fact being that roughly 90% of these agents were created by existing DO customers,
which is further validation of our belief that our typical DO customers want to leverage AI in their software stack and are willing to do it if we make it very, very simple and integrated with the rest of our cloud platform. In the latest edition of DigitalOcean Currents, a customer trends report that we published earlier this month, we found that almost 80% of our target customers are interested in leveraging AI, but over 70% of them said that cost and lack of expertise are the two major impediments to AI adoption. Our GenAI platform makes it very simple by abstracting away most of the complexity associated with creating AI agents, with templatized agents, click-through wizards, and so on, and by providing easy access to a variety of open source models, including Llama, DeepSeek, and Mistral. At the application layer, we introduced Cloudways Co-Pilot in public beta, which is a suite of AI solutions designed to bring intelligent managed hosting to small and medium businesses, starting with AI-powered diagnostics that give customers recommendations and alerts to fix issues before they become problems. This helps our customers automate tasks, monitor performance, and provides them with insights to keep their websites up and running smoothly. One such example is Qlikit, a web design and development agency that is already leveraging the newly announced Cloudways Co-Pilot. Qlikit is seeing a 4x reduction in the time spent manually handling issues and taking care of web servers. We also started using GenAI agents for our own internal DigitalOcean cloud operations. For a variety of operational incidents, GenAI agents are invoked to analyze our service logs and determine root causes. This has resulted in a 39% improvement in our time to resolution and is one of the most sophisticated uses of GenAI agents in the industry today.
We're building these agents and using them not just to improve our own operations, but also to deeply understand the pain points and complexities of building AI agents so that we can incorporate these learnings into our GenAI platform and make it even simpler for our customers to use. Beyond our product and customer progress, another recent highlight was the Deploy conference we hosted in January in Austin, Texas. This event brought together customers, partners, and DigitalOcean employees, amplifying our presence in the developer and AI/ML space and building on our strong proposition as the most approachable public cloud. At Deploy, we introduced a slew of new product capabilities, launched an AI variant of our popular startup incubator program called Hatch, and hosted many of our technology and channel partners. We also launched a new migrations program designed to seamlessly transition cloud workloads from the hyperscalers to DigitalOcean. This program will eliminate migration-related complexity, deliver lower operational costs, and provide seamless technology assistance through our partner ecosystem and a newly formed DigitalOcean team of solution architects skilled at migrating cloud workloads. To recap, as I close my prepared remarks, we entered 2025 with increasing momentum. In Q4 alone, we released more than four times as many products and features as we did in Q4 of the previous year, increased net dollar retention to 99%, grew revenue 13% year over year, and delivered 18% adjusted free cash flow margins. Our focused efforts on our higher-spend customers and our continued traction in AI drove quarterly revenue from our more than 500 Scalers Plus customers, representing 22% of our total revenue, to grow 37% year over year. This shows clear progress on our strategy and builds on our leading position as the simple, scalable, and approachable cloud.
Before I turn the call over to Matt, I'm very excited for our upcoming Investor Day, which we will be hosting at the New York Stock Exchange on April 4th, starting at 9 a.m. Eastern Time. During this Investor Day, we will share more on our longer-term strategy, including more details on our progress and key metrics, and we'll share a view of our long-term financial outlook. I will now hand it over to Matt, who will provide some additional details on our recent financial results and our outlook for Q1 and full year 2025. Over to you, Matt.

speaker
Matt Steinport
Chief Financial Officer

Thanks, Patty. Good morning, everyone, and thanks for joining us today. As Patty discussed, we delivered a solid Q4 and full year 2024 on all financial metrics and made meaningful progress on the key initiatives and goals we set in place at the beginning of the year. In my comments, I will review our Q4 results in detail and cover the full year 2024 financial highlights before sharing our updated first quarter and full year 2025 financial outlook. Revenue in the fourth quarter was $205 million, up 13% year over year. Annual run-rate revenue, or ARR, in the fourth quarter was $820 million, as we added $26 million of incremental ARR in the quarter, up from $24 million in new ARR in Q3. Of note, we made a methodology change to how we report ARR: we now calculate ARR by multiplying quarterly revenue by four, rather than the final month of the quarter by 12. We decided to use aggregate quarterly revenue to calculate ARR to reduce potential volatility in this metric, given the project-based nature of some of the training workloads customers are running on our AI/ML platform. More details on this, as well as a reconciliation between the old and new methodologies, can be found in our Form 10-K. Revenue from our builders and scalers, which are our highest-spending customer cohorts and represent 88% of total revenue, grew 16% year-over-year, and customer count increased 6% year-over-year. This quarter, we also began to disclose further disaggregation within our scalers cohort and are now separately disclosing our largest-spending customer cohort, or Scalers Plus, which are customers whose spend is more than $8,333 per month during the quarter, or more than $100,000 on an annualized run-rate, or ARR, basis.
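[Editor's illustration] The ARR methodology change described above can be sketched in a few lines of code. The monthly split below is hypothetical (only the $205 million quarterly total comes from the call); it shows how a final-month spike from project-based workloads affects the two calculations differently.

```python
def arr_old(monthly_revenue):
    """Prior method: final month of the quarter times 12."""
    return monthly_revenue[-1] * 12

def arr_new(monthly_revenue):
    """New method: aggregate quarterly revenue times 4."""
    return sum(monthly_revenue) * 4

# Hypothetical monthly split ($M) of a $205M quarter, with a
# project-driven spike in the final month:
quarter = [66.0, 67.0, 72.0]

print(arr_old(quarter))  # 864.0 -- final-month spike inflates ARR
print(arr_new(quarter))  # 820.0 -- quarterly aggregate smooths volatility
```

Under the new method, the same quarter annualizes to the $820 million ARR figure cited above, while the old method would have overweighted the final month.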
In Q4, revenue from Scalers Plus, who represent 22% of overall revenue, grew 37% year-over-year, driven by a 17% year-over-year increase in Scalers Plus customer count, coupled with their expanded usage of our core cloud services and continued growth of our AI-related solutions. The net expansion increase we saw from our top customers drove our Q4 net dollar retention rate to 99%, up from 97% for the first three quarters of 2024. The increases we are seeing in our net expansion levels, coupled with churn that has remained stable for the last two years, are moving us closer to reaching and exceeding 100% NDR and shifting NDR from a growth headwind to a tailwind. Within this overall improvement in NDR, we saw the NDR rate of our traditional cloud services reach 100% in the fourth quarter for the first time since June of 2023. All of this progress comes despite the fact that we are still lapping a few headwinds from our managed hosting products that we've spoken about previously, and it gives us confidence in our baseline growth rate heading into 2025. Turning to the P&L, gross margin for the quarter was 62%, which was 300 basis points higher than the prior quarter and 500 basis points higher than the prior year. The increase in gross margin quarter over quarter and year over year is primarily driven by the increase in revenue as well as the change in useful life for our servers from five to six years, as we've been able to extend the utilization of our equipment. More details on this change in useful life can be found in our Form 10-K. Adjusted EBITDA was $86 million, an increase of 17% year over year. Adjusted EBITDA margin was 42% in the fourth quarter, approximately 200 basis points less than the prior quarter but 100 basis points higher than the prior year.
We feel confident in our ongoing ability to appropriately balance our growth investments with our efforts to improve operating efficiency, as evidenced by our continued delivery of healthy adjusted EBITDA margins. Q4 adjusted free cash flow was $37 million, or 18% of revenue. This is higher than Q3 by approximately 500 basis points due to the timing of capital investment payments, which will continue to create quarter-to-quarter variations in adjusted free cash flow margins. Finally, non-GAAP diluted net income per share was 49 cents, an 11% increase year-over-year. This increase is a direct result of our ability to increase our per-share profitability by continuing to drive operating leverage while mitigating dilution through share buybacks. For 2024, total revenue increased 13% year over year to $781 million. This growth came primarily from continued strength in our durable customer acquisition engine, which in 2024 was augmented by new customer revenue on our AI platform and by growth in our highest-spend customer cohorts. Given our 98% NDR in 2024, the vast majority of our revenue growth in 2024 came from customer acquisition and new customers, including those that consumed our newly launched AI services, which delivered north of 160% ARR growth in Q4. Despite the NDR headwind in 2024, our Scalers and Scalers Plus customer cohorts collectively contributed approximately $445 million, or 57% of total revenue, in 2024 and grew revenue 18% year-over-year. Turning to the P&L, our gross margin for the year was 60%, which was 300 basis points higher than the prior year and, as I noted earlier in my comments, is primarily driven by revenue growth that is faster than our growth in COGS as we delivered further operating leverage. On the profitability front, we delivered healthy adjusted EBITDA margins in 2024 at 42%, up 200 basis points from 2023.
We generated a 17% adjusted free cash flow margin in 2024, down 500 basis points from 2023 margins, as we had planned going into the year, as we ramped our investment in our compelling AI growth opportunity. On our customer metrics, as I mentioned previously, this quarter we are now disclosing further disaggregation of our Scalers customer cohort into Scalers and Scalers Plus, as these higher-spend customers are the focus of our investments and their performance is key to our growth strategy. In Q4, the number of Scalers Plus customers grew 17% year over year, and revenue from Scalers Plus grew 37% year-over-year. We are also seeing an increase in average spend within Scalers Plus, where average revenue per user, or ARPU, grew 18% year-over-year. The traction we are seeing with our highest-spending customers is the result of our continued efforts across product development, targeted customer success and account management motions, and new go-to-market investments that are all focused on these customers, and this is an encouraging sign of our ability to drive future growth. Our balance sheet remains very strong as we ended the quarter with $428 million of cash and cash equivalents. We continued to execute against our share repurchase program in the quarter with $28 million of repurchases in Q4, bringing total share repurchases to $57 million in fiscal year 2024 and bringing our cumulative share repurchases since IPO to $1.5 billion and 32.6 million shares through December 31, 2024. With our healthy cash position and ongoing free cash flow generation, we are well positioned to continue investment in both organic growth and share repurchases while maintaining appropriate flexibility to address our 2026 convertible note at the appropriate time, which is likely before it goes current at the end of this year. Moving on to guidance, I will now share our financial outlook for the first quarter of 2025 and for the full year.
For the first quarter of 2025, we expect revenue to be in the range of $207 to $209 million, representing approximately 13% year-over-year growth at the midpoint of our guidance range. For the full year 2025, we expect revenue to be in the range of $870 to $890 million, also representing approximately 13% year-over-year growth at the midpoint of our range. As was the case with our 2024 guidance, our 2025 guidance is underpinned by our baseline growth foundation as we entered the new year. Our primary 2025 growth levers are bolstering customer acquisition and continuing to drive customer expansion. Customer acquisition includes revenue from any new customer, including new AI/ML customers who are within their first 12 months on our platform. We anticipate customer acquisition and revenue from new customers to again contribute the majority of our growth in 2025. But by continuing to improve NDR in 2025 and by expanding usage by our existing AI customers, who will soon have been on our platform for more than a year, we expect customer expansion to improve and to have a neutral to slightly positive impact in 2025, where it was a headwind in 2024. For the first quarter of 2025, we expect our adjusted EBITDA margins to be in the range of 38% to 40%. For the full year, we expect adjusted EBITDA margins to be in the range of 37% to 40%. For the first quarter of 2025, we expect non-GAAP diluted earnings per share to be 41 to 46 cents based on approximately 103 to 104 million weighted average fully diluted shares outstanding. For the full year 2025, we expect non-GAAP diluted earnings per share to be $1.85 to $1.95 based on approximately 104 to 105 million weighted average fully diluted shares outstanding. On adjusted free cash flow, we expect adjusted free cash flow margins for the full year to be in the range of 16% to 18%, slightly ahead of the preliminary indication for 2025 that we provided last quarter.
Consistent with our historical guidance practice, we are not providing adjusted free cash flow guidance on a quarter-by-quarter basis, given that it is heavily influenced by working capital timing. However, we would like to highlight that our 2025 expenditures will be front-end loaded, driven by additional AI-related capital expense as we scale those services, one-time startup costs to get our Atlanta data center online, and the usual Q1 cash expense events, including bonus payments and higher payroll taxes. Given this front-end loaded investment and expense, Q1 adjusted free cash flow margin will decline from Q4 levels, but we will quickly return to higher adjusted free cash flow margins in Q2 and across the balance of the year, and we remain very confident in our ability to deliver the 16% to 18% full-year adjusted free cash flow margin in our guide. That concludes our prepared remarks, and we will now open the call to Q&A.

speaker
Conference Call Moderator
Moderator

Thank you. We will now begin the question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad to raise your hand and join the queue. And if you'd like to withdraw that question, again press star 1. We also ask that you limit yourself to one question. For any additional questions, please re-queue. And your first question comes from Josh Baer with Morgan Stanley. Please go ahead.

speaker
Josh Baer
Morgan Stanley Analyst

Great. Thanks for the question. At your recent Deploy conference, you talked about several customers that were disappointed with their experience at the hyperscalers and ended up migrating to DigitalOcean's platform. I was hoping you could expand on that and touch on what types of customers you're targeting, what types of workloads, and then have a follow-up.

speaker
Patty Trini-Vaughan
Chief Executive Officer

Yeah, thank you, Josh, for the question. What we highlighted during the conference was a handful of customers that are looking at alternatives for two primary reasons. One is simplicity: running a fairly sophisticated workload on a hyperscaler is not an easy undertaking. You need specialists for many of the nuances, like storage, networking, compute, and so forth. The other is total cost of ownership: especially if you have spiky workloads and you're unwilling or unable to commit to long-term contracts with a substantial minimum commitment, it becomes really onerous to keep running your workloads on some of these hyperscaler clouds. So what we are offering as part of the migrations program I was just talking about is a couple of things. One is the ability to get our partner ecosystem involved to facilitate a smooth transition of the workloads. But more importantly, we provide a super compelling, scalable platform on which most mainstream workloads are far easier to run and operate. So we're seeing all kinds of customers, Josh, and we are staying true to our target customer segments, which are tech-native, digital-native cloud application software companies running globally distributed workloads that are typically network- and bandwidth-intensive. Some of them have very bursty spikes in their traffic patterns, so that needs to be supported in a very elastic manner. And what attracts them to us is our core value proposition of being simple yet scalable, and, most importantly, a very approachable cloud that really cares about them.

speaker
Josh Baer
Morgan Stanley Analyst

Thanks, Patty. And just wondering on the EBITDA guidance: initial EBITDA guidance for this year was 36% to 38%, and you ended at 42%. I was hoping you could provide, just at a high level, the main drivers of that degree of outperformance, and then how we should think about that level of conservatism in the initial '25 guide for EBITDA. Thank you.

speaker
Matt Steinport
Chief Financial Officer

Great question, Josh. I think we said this publicly last quarter that, with all the new executives who had come on and with the acknowledgement that we needed to accelerate the product roadmap, we built some cushion into Q4 for the R&D team in particular to surge resources if they wanted to bring in contractors, et cetera. And what we were able to do was take a real hard look at the spend that we have, and we were able to reallocate resources, and Brett and his team did a phenomenal job of prioritizing the top initiatives and getting the key products out. And so we didn't need that surge. You'll note that, for the full year and even for the first quarter, the guide is still pretty wide on EBITDA. What we're signaling is: don't get super fixated on what EBITDA is one quarter versus the next, because we may be pulling expenses ahead, or maybe we find efficiencies and we don't need to spend. What I would focus on is our commitment. We raised the free cash flow guide from our preliminary indication, from 15% to 17% up to 16% to 18%, and that's what we're managing more aggressively towards. I view EBITDA as a metric that will move around a little bit more, but we're very committed to driving improvements in gross margin. We're very committed to driving operating efficiencies and improving the leverage we have in the business. But at the same time, if we see an opportunity to accelerate a product capability that drives revenue, we'll do that. And that might have a near-term impact on EBITDA margins, which is why we provided a wide guide in the fourth quarter and why we provided a wide guide for 2025.

speaker
Conference Call Moderator
Moderator

Your next question comes from the line of Gabriela Borges with Goldman Sachs. Please go ahead.

speaker
Gabriela Borges
Goldman Sachs Analyst

Hey, good morning. Great to see the NDR numbers. Thanks for taking my question. Patty and Matt, I wanted to follow up on the math you gave us in the prior quarter on how much ARR you're able to capture per dollar of GPU-related capex. Maybe just refresh us: as you move up the stack from IaaS to PaaS, are you able to generate more revenue per dollar of capex? And Matt, maybe you can comment on the gross margin profile of the AI services business as you move into more differentiated services. Thank you.

speaker
Matt Steinport
Chief Financial Officer

Yeah, both great questions, Gabriela. What we've found, in particular with the Gen AI product, has been a great learning for us and also very supportive of our strategy. When somebody comes in and wants to build a chatbot, they bring their knowledge base, they want to hook up with a model, and they can certainly take advantage of our Gen AI capabilities. And the Gen AI capabilities have far higher margins on their own than the bare metal or infrastructure layer does. But the thing that is probably most compelling to us is how much other cloud services revenue that pulls through. For every chatbot you have, you need somewhere to store your knowledge base, so you need storage. You need bandwidth to communicate with the models. And you need a lot of our other database infrastructure. So we think the amount of pull-through revenue we're going to get from the Gen AI services is orders of magnitude more than the actual Gen AI revenue itself, and that's very compelling. On the infrastructure layer, as we've said and as is very clear in the market, the gross margins on core GPU-as-a-service aren't spectacular. It's very price-transparent in the industry, and there's a lot of competition to get initial workloads. And the costs are super high: even though there are new entrants (AMD is out with new capabilities, and a lot of others are working on capabilities in addition to NVIDIA), it's still a fairly one-vendor-dominated industry. So we expect those costs to come down over time, and we expect to be able to leverage more of that infrastructure for inferencing over time, which will drive more consistent and higher-margin revenue.
But I think the path we're on is towards more of our revenue coming from the higher-layer platform services, and those higher-layer services not only have better margins on their own, but they pull through higher-margin cloud revenue as well.

speaker
Conference Call Moderator
Moderator

Thank you. Your next question comes from the line of Mike Sikos with Needham & Company. Please go ahead.

speaker
Mike Sikos
Needham & Company Analyst

Great. Thanks for taking the questions, guys, and great to see the improvement in the NDR you're talking to as well. The first question I had for you was related to AI/ML. Great to see the growth there still remaining triple digits, north of 160%. Is there any way to give us a little bit more on the size of that AI/ML ARR base today and what it represents as a percentage of total ARR? And then the second piece on that point: how do we think about the ARR composition today? Is the vast majority of that coming from the Scalers Plus cohort? I know we have these new disclosures and you guys point to broad-based adoption, but I'm just interested in where the revenue dollars specifically are coming from for that piece. Thank you.

speaker
Matt Steinport
Chief Financial Officer

Yeah, on the first question, Mike, and thanks for the questions: we're not disclosing the specific ARR for AI for a very clear reason on our side. The revenue is kind of intermingled: as I was saying to Gabriela, when we get a little bit of Gen AI revenue, we're getting a lot of pull-through revenue in other parts of the business. And we also don't manage the business as if there's an AI product group and there are other product groups; it's commingled, whether it's compute or some of the other capabilities. So we believe that, just as we have infrastructure-as-a-service, platform-as-a-service, and managed hosting, AI is just another one of our products, and we're not going to disaggregate at the product level. In terms of the amount of AI that's with the scalers, recall that when we acquired Paperspace several years ago, it came with a lot of customers, something like 15,000, and they had a run rate made up largely of small customers who looked a whole lot like the DigitalOcean customer base. So we have a pretty deep set of smaller customers on the AI platform, a lot of them leveraging the legacy Paperspace notebooks capability. And a lot of the new customers coming on look similar: as Patty talked about, 90% of the customers that adopted the Gen AI product out of the gate are existing customers, and their mix looks a lot like our existing customers. So I'd say there's a healthy chunk of the AI that's in the Scalers Plus, but it's not all of it, and the vast majority of it is not in that cohort yet. But there is a healthy chunk.

speaker
Mike Sikos
Needham & Company Analyst

Thank you for that. Can I just tack on maybe one more, on gross margin? I know we have the Atlanta data center coming online in Q1, which we're all looking forward to, but can you help us think about how gross margins are likely to move through the course of the year with that Atlanta data center? And then we obviously have the news that useful server lives are being extended a year. How do we put those two pieces together when we think about how calendar '25 looks? Thank you.

speaker
Matt Steinport
Chief Financial Officer

It's actually not going to move a ton. There will be a little bit of a dip in gross margin at the beginning of the year, and it'll pick back up, which is very consistent with what happened before when we did Sydney. There are some upfront expenses, and we haven't fully ramped it with revenue yet, so you'll see a little bit of a gross margin dip, but we don't see a fundamental shift in the range of gross margin that we're in right now. In fact, Brett and the teams are working aggressively to drive gross margin improvement over a multi-year period as we continue our data center optimization strategy and look for ways to better utilize the infrastructure that we have. So on gross margin, you'll see a little bit of a drop at the beginning of the year, and it'll come back. And I'll keep pointing back to this: we're very confident in our 16% to 18% free cash flow margins for the year. So it'll all kind of wash out to better free cash flow margins as we get through the year.

speaker
Conference Call Moderator
Moderator

Your next question comes from the line of Patrick Walravens with JMP Securities. Please go ahead.

speaker
Patrick Walravens
JMP Securities Analyst

Oh, great. Thank you. So, Patty, you've been here a year, right? What has gone better than you thought it would and what has proved a little more difficult?

speaker
Patty Trini-Vaughan
Chief Executive Officer

Hello, Patrick. Good morning, and thank you for the question. Yes, it's super early for you; I appreciate you dialing in. What has gone better than I expected? I think our product innovation, and our ability to really understand what our customers need at a deep level, has gone better than I expected, in the sense that we can see it in the adoption and the reception from our customers, and in our hypothesis that our customers really, really want to stay with us. We were saying that, and now we have green shoots to prove it is the case: if we take care of them, they're going to expand with us. So I think that has been really awesome. And the whole AI landscape, the way it is unfolding, has been really interesting. I'm super encouraged by the fact that we are staying very disciplined in our AI strategy, not getting caught up in deploying too much capacity and running after the GPU-farms business model and things like that. It has been a really good learning experience for us. And slowly but surely, some of our hypotheses in the AI space have started bearing out. The two biggest impediments, as I talked about in my remarks, were, one, it is too complex, and, two, it is too expensive for our customers. We can really help with the complexity by making it super simple, especially with open-source models, which give us a lot of degrees of freedom to make it even easier for our customers. And number two is making it super affordable, which is exactly what we have started doing with on-demand fractional access to GPUs and also our token-based serverless endpoints on the Gen AI platform, especially with the open-source models. So that has also started growing really well for us. I would say those were the two key learnings for me over the last year.

speaker
Conference Call Moderator
Moderator

Your next question comes from the line of Wamsi Mohan with Bank of America. Please go ahead.

speaker
Rupalu (filling in for Wamsi Mohan)
Bank of America Analyst

Hi, thanks for taking my questions. It's Rupalu filling in for Wamsi today. I have two. The first one is for Patty. You launched a lot of products in 2024. Patty, can you talk about the areas of investment in 2025? And thanks for disaggregating the Scalers Plus at 22% of revenue. How do you see that cohort growing in 2025? And to do that, are you happy with the go-to-market investments you've made, or do you need to hire more salespeople? And I have a follow-up for Matt.

speaker
Patty Trini-Vaughan
Chief Executive Officer

Okay, thanks. Great question. I'll try to go very fast. What are our product priorities for this year? I'll break it down into core cloud and AI. On core cloud, we will continue our journey of meeting our larger customers where they are in terms of their more sophisticated needs, whether that is more management capabilities, security capabilities, networking, or a richer variety of very specific, single-purpose droplets, and things like that. We still have some amount of work to do to meet the needs of our larger customers and pave the way for larger workloads to move to DigitalOcean from other clouds. From an AI perspective, we'll continue to fortify our infrastructure layer, which is honestly quite robust, very scalable, and highly appreciated by our customers now. The Gen AI platform is where we are going to have a lot of innovation, to make it super, super simple for everyday software application companies to consume and build agents into their applications. Finally, on the agentic layer, we have launched in the last couple of months the site reliability engineering copilot, through Cloudways first, and then we'll make it available more generally. It is our intention to keep pumping out more AI agents that solve real business problems. Rather than just doing things for R&D's sake, we want to solve real-world problems for our customers. In terms of go-to-market, as I mentioned in my prepared remarks, we have already bolstered the way we do customer engagement. As we go through the remainder of the year, I'll give you more updates on the additional go-to-market motions we are standing up. The good news for us is we have tens of thousands, if not hundreds of thousands, of customers to whom we can go and merchandise our expanding product platform capabilities and get a bigger chunk of the share of wallet.
So as much as it is important to keep hunting for new logos, for which we have a very efficient self-service, product-led growth funnel, a lot of our go-to-market efforts are aimed at expanding the share of wallet with our existing customers, which is a very different and significantly more efficient go-to-market motion than trying to find a bunch of new logos. So we'll do a combination of both, and I'll keep reporting our progress throughout the rest of the year.

speaker
Rupalu (filling in for Wamsi Mohan)
Bank of America Analyst

Thanks for all the details there, Patty. Matt, just a quick follow-up on payables. It looks like payables were up meaningfully sequentially. Is that because of the Atlanta data center investments, or did you get a new contract that is driving that up? And what would ARR be under the old method in fiscal 4Q? Thanks for taking my questions.

speaker
Matt Steinport
Chief Financial Officer

Sure. From a payables standpoint, it's definitely a function of us getting ready for Q1 and the Atlanta data center. Brett and the team did a phenomenal job of accelerating the readiness of some of the services, so we bought a bit more equipment earlier than we had expected, and we were able to do that within the confines of the free cash flow that we delivered. So it was all about getting Atlanta ready to go and off to a good start. From an ARR standpoint, I don't have the numbers specifically in front of me; they're in the reconciliations in the 10-K. ARR would have been higher in the fourth quarter had we reported December times 12, if we had not changed the approach. But we've got to remember we're a consumption-based business. In a lot of cases, even our AI is consumption-based; it's not committed contracts. And so if you measure the consumption at any given point in time and multiply it by a number, you're going to get oscillations and variations based on whatever's going on in that period. It's a steadier metric if you just average it over the quarter. So despite the fact that it made us look like we had lower ARR, I think that's the right thing to do for the market: to give you a metric that's steadier and less volatile. That's why we did it.
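The two ARR conventions Matt contrasts can be sketched with illustrative numbers. The monthly figures below are hypothetical, chosen only so the quarter totals $205 million; they are not DigitalOcean's actual monthly revenue.

```python
# Two ways to annualize consumption revenue into ARR, as discussed above.
# Monthly figures are hypothetical; the quarter sums to $205M for illustration.
monthly_revenue = [66.0, 68.0, 71.0]  # e.g. Oct, Nov, Dec revenue, in $M

# Old method: annualize the final month of the quarter (December x 12).
arr_exit_month = monthly_revenue[-1] * 12

# Revised method: annualize the average monthly revenue across the quarter,
# which smooths month-to-month swings in a consumption-based business.
arr_quarter_avg = sum(monthly_revenue) * 12 / len(monthly_revenue)

print(f"exit-month ARR:  ${arr_exit_month:.0f}M")   # $852M
print(f"quarter-avg ARR: ${arr_quarter_avg:.0f}M")  # $820M
```

With revenue ramping through the quarter, the exit-month method reads higher, consistent with Matt's comment that ARR would have been higher in Q4 under the old approach.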

speaker
Conference Call Moderator
Moderator

Ladies and gentlemen, as a reminder, please limit yourself to one question. Your next question comes from the line of Mark Zhang with Citigroup. Please go ahead.

speaker
Mark Zhang
Citigroup Analyst

Oh, great. Good morning, team. Thanks for taking our questions, and thanks for the additional disclosure on the scalers. How should we think about the behavior of the Scalers Plus versus the non-plus scalers in terms of the pace of upsell and cross-sell going forward? And why aren't we seeing a more accelerated pace of growth in the non-plus scaler cohort, given the product and go-to-market efforts you've already implemented? What can drive that cohort to at least match, or outpace, the growth of the builders? Thanks.

speaker
Matt Steinport
Chief Financial Officer

The single biggest question we got, whether from analysts or in direct conversations with investors, is: do we have a graduation problem? Do our largest customers leave us, so that you've got a leaky bucket that's leaking from the most valuable part of the customer base? And I'd say there was an element of truth to that, in that we certainly were struggling with those customers over the past several years, not meeting their needs as well as we could have or should have. And that was causing a lot of the depression in NDR, driving us below 100%. With Patty's arrival, with Brett's arrival, and with the pace of product innovation that we have, we are very, very squarely focused on those large customers. Again, a large customer for us is probably not a named account for the hyperscalers, but it's material for us. And what we've done is we said we're going to focus on that cohort, we're going to make sure we're listening to them, and we're going to rapidly give them the capabilities they need to scale on our platform, and we've done that. And the response has been very, very positive, where now we're talking about migrations for those customers, bringing workloads either back to us or to us for the first time from the hyperscalers. It's been a dramatic turnaround. We'll talk more about this at Investor Day in April, but if you look at the NDR improvement and the year-over-year revenue growth improvement, it's been dramatic within those top customers. And as Patty and I have said, we think we have a tremendous wallet share opportunity with our customers. And clearly, as you pointed out, the scalers, the customers below that $100K-plus level, aren't growing as fast. Well, we haven't gotten there yet.
With our new go-to-market motions, we're working out how to pick and find the customers in that cohort with the highest propensity to spend, where we've got the lowest wallet share, and how to go after those in a scalable way. I'd say what you're seeing right now is not low-hanging fruit; it's very difficult, it's requiring a lot of effort, and these are important customers. But we're going after the biggest rocks first. The biggest rock was the top-spending customers, and we're getting to the other layers as we progress through the year.

speaker
Patty Trini-Vaughan
Chief Executive Officer

And the only thing I'll add is that the scalers and Scalers Plus all look the same; these are very similar-looking customers. That's the best news, because now we can go and start working on the scalers, as Matt just talked about, get as many of them as possible graduated into Scalers Plus, and keep expanding their share of wallet. And as we've been maintaining, Mark, there's a tremendous wallet share opportunity, because they're all running substantial workloads, just not on us. So that's a great opportunity for us to go earn the trust of these customers and get more of their workloads.

speaker
Conference Call Moderator
Moderator

Your next question comes from the line of Raimo Lenschow with Barclays. Please go ahead.

speaker
Raimo Lenschow
Barclays Analyst

Hey, thanks for squeezing me in. One simple one at the end. What do you see out in the market in terms of end demand, et cetera, and how is that helping you for this year? Thank you.

speaker
Matt Steinport
Chief Financial Officer

Raimo, I'm sorry. Could you repeat the question? I didn't catch it.

speaker
Raimo Lenschow
Barclays Analyst

Oh, I was just asking what you are seeing in terms of end demand. If I look at the small business index, et cetera, that all starts looking better. Does that help you? Do you see a tailwind coming from that, or is it all neutral for you? Thank you.

speaker
Patty Trini-Vaughan
Chief Executive Officer

Yeah, so, Raimo, thanks, and nice to hear from you. From an end-user demand perspective, we have not modeled any major variation into our guidance or our plans. We are expecting it to be stable, just as it has been over the last few quarters. And from an NDR improvement perspective, obviously there's a lot we have done to control our own destiny and earn the right to keep our customers and keep them expanding on our platform. Hopefully the macro and end-user demand stay neutral to even positive, but we have not baked any of that into our projections.

speaker
Raimo Lenschow
Barclays Analyst

Okay, thank you.

speaker
Conference Call Moderator
Moderator

And ladies and gentlemen, that does conclude our question and answer session. And with that, that does conclude today's conference call. Thank you for your participation, and you may now disconnect.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
