This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
5/7/2026
Good morning, and welcome to the Q1 2026 Akamai Technologies, Inc. earnings conference call. All participants will be in listen-only mode. Should you need assistance, please signal a conference specialist by pressing star and zero on your touchtone telephone. After today's presentation, there will be an opportunity to ask questions. To ask a question, you may press star and then one on your telephone keypad. To withdraw your question, you may press star and then two. Please note that this event is being recorded. I would now like to turn the conference over to Mark Stoutenberg. Thank you, and over to you.
Good afternoon, everyone, and thank you for joining Akamai's first quarter 2026 earnings call. Speaking today will be Tom Leighton, Akamai's Chief Executive Officer, and Ed McGowan, Akamai's Chief Financial Officer. Please note that today's comments include forward-looking statements, including revenue and earnings guidance. These forward-looking statements are based on current expectations and assumptions that are subject to certain risks and uncertainties and involve a number of factors that could cause actual results to differ materially from those expressed or implied. These factors include, but are not limited to, any impact from macroeconomic trends, the integration of any acquisition, geopolitical developments, and other risk factors identified in our filings with the SEC. The statements included on today's call represent the company's views on May 7, 2026, and we assume no obligation to update any forward-looking statements. As a reminder, we will be referring to certain non-GAAP financial metrics during today's call. A detailed GAAP to non-GAAP reconciliation is available in the investor relations section of Akamai.com under financials. With that, I'll now hand the call off to our CEO, Dr. Tom Leighton.
Thanks, Mark. I'm pleased to report that Akamai is off to a strong start to the year. In just a few months, we've achieved major milestones for our cloud computing strategy, marking a definitive turning point in the growth and evolution of our business. Akamai has long been known for operating the world's largest distributed platform for delivery and security solutions at global scale and with a reputation for reliability, quality, and trust. Now we're leveraging our global footprint and years of experience supporting the world's largest enterprises to become an indispensable infrastructure provider for the AI-driven economy. At GTC in March, we unveiled the industry's first global-scale implementation of NVIDIA's AI grid, and we announced the rollout of thousands of NVIDIA RTX Pro 6000 GPUs. By integrating NVIDIA AI infrastructure into Akamai's massive distributed platform, and by leveraging intelligent workload orchestration across our network, we intend to move the market for AI beyond isolated AI factories toward a unified distributed grid for AI inference. By pushing AI inference to the edge and combining it with our massive deployment of CPUs for delivery, security, and functions as a service, we're enabling customers to run complex models within milliseconds of their end users, with the responsiveness of local compute and the scale of the global web, optimizing performance while reducing latency and cost. Those who attended GTC heard NVIDIA reference Akamai as a vital player in the industry's ecosystem for AI infrastructure. And we've seen very positive market reaction to our rapidly expanding capabilities from a wide spectrum of enterprises. Today, we're very excited to announce another major milestone for our cloud computing strategy and the evolution of Akamai: the signing of a landmark seven-year, $1.8 billion commitment for our cloud infrastructure services by a leading frontier model company.
This is the largest customer deal in Akamai history, and it comes on the heels of the $200 million CIS deal we announced in February with a major US tech company also at the forefront of the AI revolution. These leaders in AI have chosen Akamai because their AI workloads need the scale, performance, and reliability that our cloud platform provides. Many other enterprises have chosen Akamai for similar reasons. For example, since the start of the year: a leading cloud and digital infrastructure provider in Asia chose our VPUs to support their low-latency live streaming media service. An AI company in the US chose our GPU platform to power their voice-first solution to optimize business operations. An AI-powered video intelligence platform in India chose our GPU platform to scale video analytics and computer vision workloads for retailers. A consumer AI platform in the US chose Akamai Cloud to run and scale live personalized agents. An AI commerce company in India chose our distributed inference platform to power their ad personalization engine. And two premier global retail brands chose our distributed data capabilities to improve the performance and resilience of their online retail applications. But all this is just the beginning. We have a large and rapidly expanding pipeline of prospects who are looking to Akamai for cloud solutions, including some with very large needs. To satisfy this strong and growing demand for our cloud infrastructure services, we expect to continue to build out both our physical infrastructure and our cloud sales and support teams. And as Ed will talk about in a few minutes, we now anticipate significant acceleration of our overall revenue growth heading into 2027 and beyond. Turning to security, I'm pleased to report that Q1 was also strong for our security portfolio, where revenue grew 11% year-over-year as reported and 9% in constant currency.
Our security growth was led by strong demand for our market-leading web app firewall, API security, and Guardicore segmentation solutions. Our WAF in particular is seeing growing interest from customers eager to deploy the latest defenses for vulnerabilities that could be exposed by the ever-strengthening frontier models and AI-powered attacks. Frontier models are changing vulnerability management, and we're proud to be one of the industry's must-have security providers partnering with the frontier model companies to help ensure the safe and rapid deployment of AI-enhanced defenses. With our early access to their vulnerability detection programs, we're applying our expertise to help keep major enterprises and critical infrastructure safe. Of course, and this is important to understand, attackers will also be using more advanced AI technology to develop even more potent ways to cause harm. This means that major enterprises will need Akamai security solutions even more than before. For example, there are many legacy systems and billions of deployed devices that can't be patched. They'll become a lot more vulnerable with the advances in AI, and they'll need our security solutions to keep them safe. For the devices and systems that can be patched, the patching process still takes time, often days or weeks, and they'll need our protection until that's done. We've seen this happen before when zero-day attacks emerged. And with the advances in AI, we can expect zero-day attacks to occur much more frequently. There's also an increasing challenge with scale. Because AI is enabling attackers to take over more devices and create enormous bot armies, we're now seeing attacks with unprecedented volumes. Just in the last few weeks, we neutralized a series of app-layer attacks with millions of malicious requests per second from millions of widely distributed IPs. Akamai can defend against such attacks because of our widely distributed platform.
Our WAF runs in 4,300 locations across 700 cities to intercept the attack traffic right where it enters the internet and well before it can coalesce onto the target. Having a great WAF with the needed defenses for the latest attacks is obviously important, but that alone isn't enough in the coming age of AI. The WAFs need to be deployed across a vast distributed platform, and this need provides a unique advantage for Akamai when compared to the competition. In summary, we believe that Akamai's security portfolio will be needed more than ever before as attackers take advantage of the advances in AI. That's because of our massive platform scale to absorb attacks, our unparalleled access to real-time attack data, our tight integration with the early warning ecosystem to provide up-to-the-minute defenses for the latest zero-day attacks, our large and very experienced human security operations team that's equipped with the latest AI tools to enhance visibility and minimize response times, and our innovative, rapidly evolving, and AI-enabled product suite to help prevent penetrations and to limit the damage when penetrations do occur. Customers who selected Akamai in Q1 for that kind of protection for their APIs included one of the largest telecom groups in Africa, a major investment management company in South America, one of the premier investment banks in the Middle East, and one of the world's leading fintech companies in the U.S. Customers who added or expanded their use of our Guardicore segmentation solution in Q1 included the leading telecom carrier and media company in South Korea, one of the largest banking groups in Europe, and a leading healthcare company in the U.S. Many of the large renewals we signed in Q1 also included expansions of our security services. For example, after we protected one of America's leading retailers from unwanted bots during the holiday shopping season, they increased the use of our services in a contract worth $24 million.
We signed an expansion contract worth $80 million over two years with one of the world's largest video game companies. We signed an expansion contract worth more than $20 million with a global consumer electronics company in Korea. And one of the largest global professional services companies in the world expanded their use of our ZTNA solution to secure large-scale remote access as they move critical applications to a zero-trust model. Our security solutions continue to receive top recognitions from the major analyst firms for their effectiveness. For example, last quarter Akamai achieved a 99% recommendation rating as Customers' Choice in Gartner's Peer Insights report on microsegmentation. And last month, Akamai was the only provider to be named Customers' Choice in Gartner's Peer Insights report on API protection. In closing, we're thrilled by the way our growth strategy has taken hold and is generating transformative opportunities for our business. We believe that Akamai is uniquely positioned to enable and benefit from the development of the AI-driven economy. By bringing powerful compute directly to the data and the users at the edge, Akamai is enabling and securing the next generation of agentic AI. With each quarter, the massive opportunity we see ahead becomes more evident, and we're making bold investments to capitalize on that opportunity and enable Akamai to do for cloud and AI what we've done for security and CDN, generating significant future growth for our business. Now I'll turn the call over to Ed for more on our results and our outlook for Q2 and the year. Ed?
Thank you, Tom. Before I get started, and to build on Tom's remarks, I want to personally underscore my excitement regarding the $1.8 billion new customer win announced today. This is a powerful validation of the Akamai value proposition in the age of AI and a clear indicator of the scale at which we can operate. To fully capitalize on this momentum and support the accelerated growth we anticipate, we will be investing slightly ahead of revenue. You will see this reflected in the updated capital expenditure and operating margin outlook I will discuss during the guidance portion of my remarks. We view these investments in our CIS portfolio as critical to ensure we have the foundation to meet the significant demand we see on the horizon. Also, driven by today's announced $1.8 billion win, the $200 million four-year CIS deal we announced last quarter, and our rapidly accelerating pipeline, we now expect total company annual top-line revenue growth to reach double digits in 2027. We look forward to sharing more details in the coming quarters. Clearly, this is an incredibly exciting time for Akamai. With that, let's dive into the Q1 results. We delivered strong first quarter results with total revenue of $1.074 billion, which was up 6% year-over-year as reported and 4% in constant currency. Cloud infrastructure services, or CIS, revenue got off to a robust start to the year with revenue of $95 million, up 40% year-over-year as reported and 39% in constant currency. As Tom noted, we are seeing CIS wins across a wide spectrum of industries, geographies, and use cases. Even more encouraging, the pipeline for AI-specific use cases is building rapidly. We also maintained very strong momentum in security, with revenue of $590 million, up 11% year-over-year as reported and 9% in constant currency.
The strength in the first quarter continued to be driven by our fast-growing API security and Guardicore segmentation solutions, along with strong growth from our largest product, Web Application Firewall. Moving to delivery and other cloud applications: revenue was $389 million, down 7% year-over-year as reported and down 8% in constant currency. These results were in line with expectations, driven by the wraparound impact of the Edgio transaction in 2025. We expect this effect and the rate of decline to moderate throughout the remainder of the year. International revenue was $530 million, up 9% year-over-year, or up 5% in constant currency, representing 49% of total revenue in Q1. Foreign exchange fluctuations had a positive impact on revenue of $2 million on a sequential basis and a positive $19 million on a year-over-year basis. Moving to profitability: in Q1, we generated non-GAAP net income of $239 million, or $1.61 of earnings per diluted share, down 5% year-over-year as reported and in constant currency. These results include our expanded co-location investments, higher depreciation, and increased headcount costs, all tied to our strategic investment in cloud infrastructure services during the first quarter. Our non-GAAP operating margin for Q1 was 26%, in line with our expectations. We expect operating margin to remain in this range for the remainder of this year as we ramp up our investment to capture the exciting growth opportunities ahead of us. Our Q1 CapEx was $206 million, or 19% of revenue. First quarter CapEx was slightly below our guidance, primarily driven by timing and favorable pricing. Specifically, some expenditures shifted from Q1 into Q2, and we benefited from some lower-than-expected component costs. Moving to cash and our capital allocation strategy: during the first quarter, we spent approximately $206 million to buy back approximately 2 million shares.
We ended the first quarter with approximately $975 million remaining on our current repurchase authorization. Our intention with capital allocation remains the same, to continue buying back shares to offset dilution from employee equity programs over time and to be opportunistic in both M&A and share repurchases. As of March 31st, we had approximately $1.7 billion of cash, cash equivalents, and marketable securities. Now, before I provide Q2 and full year 2026 guidance, I want to touch on a few housekeeping items. First, for Q2, CapEx is expected to jump significantly as we start to take delivery of the NVIDIA GPUs we discussed on our last quarterly earnings call, and we catch up on some of the CapEx that pushed from Q1 into Q2. Second, we expect to see an increase in operating expenses in the second quarter, due primarily to continued investments in go-to-market and the impact of our annual employee merit cycle that went into effect on April 1st. Third, we anticipate revenue from the $1.8 billion customer win to start to ramp in Q4, and we expect to generate approximately $20 to $25 million of revenue in the fourth quarter. Finally, regarding CapEx for this win, we expect to spend a total of approximately $800 to $825 million over the next 12 months to support this customer. We expect to deploy roughly $700 million of that total in the second half of 2026, with the remaining balance falling into the first half of 2027. Moving now to guidance. For the second quarter, we are projecting revenue in the range of $1.075 to $1.1 billion, up 3% to 5% as reported, and in constant currency over Q2 2025. At current spot rates, foreign exchange fluctuations are expected to have no material impact on Q2 revenue compared to Q1 levels, and a positive $2 million impact year-over-year. At these revenue levels, we expect cash gross margins of approximately 70% to 71%. 
Gross margin is impacted by the significant increase in co-location as we accelerate the growth in our CIS business. Q2 non-GAAP operating expenses are projected to be $346 to $357 million. We anticipate Q2 EBITDA margin of approximately 38% to 39%. We expect non-GAAP depreciation expense of $144 to $146 million. We expect non-GAAP operating margin of approximately 25% to 26%. And with the overall revenue and spend configuration I just outlined, we expect Q2 non-GAAP EPS in the range of $1.45 to $1.65. This EPS guidance assumes taxes of $47 to $54 million, based on an estimated quarterly non-GAAP tax rate of approximately 18.5%. It also reflects a fully diluted share count of approximately 146 million shares. Moving to CapEx: for the reasons I highlighted earlier, we expect to spend approximately $433 to $453 million in the second quarter. This represents approximately 40% to 41% of total revenue. Looking ahead to the full year 2026, we expect revenue of $4.445 to $4.55 billion, which is up 6% to 8% as reported and up 5% to 8% in constant currency. For cloud infrastructure services, we are raising our outlook to at least 50% year-over-year growth in constant currency. We expect momentum in CIS to continue to build throughout the second half of 2026, driven mainly by the scaling of our AI opportunities and the impact of the two very large transactions we announced in Q4 and today. Also, we continue to expect security revenue growth in the high single digits on a constant currency basis in 2026. And for delivery and other cloud apps, we continue to expect a decline in the mid-single digits year over year on a constant currency basis. At current spot rates, our guidance assumes foreign exchange will have a positive $20 million impact on revenue in 2026 on a year-over-year basis. Moving to operating margin: for 2026, we are estimating a non-GAAP operating margin of approximately 26 percent as measured in today's FX rates.
Turning to CapEx: at this time, we anticipate our full-year capital expenditures will be approximately 40 to 42 percent of total revenue, including the $700 million impact from the $1.8 billion contract we mentioned earlier. Before I move on, I want to provide some additional color on our CapEx outlook. As Tom noted, the demand we are seeing for CIS, including our GPU deployments, is exceptional. Our current pipeline for GPUs significantly exceeds our existing and projected inventory, meaning we may place additional GPU orders in the second half of the year to meet this demand. This is not factored into our current annual CapEx guide. We will update CapEx guidance on a subsequent earnings call if we place another GPU order before year-end. Moving to EPS: for full year 2026, we expect non-GAAP earnings per diluted share in the range of $6.40 to $7.15. This EPS guidance includes the impact from the very large win. This non-GAAP earnings guidance is based on a non-GAAP effective tax rate of approximately 18.5% and a fully diluted share count of approximately 147 million shares. With that, I'll wrap things up, and Tom and I are happy to take your questions.
Operator? Thank you.
We will now begin the question-and-answer session. To ask a question, you may press star and 1 on your touchtone telephone. If you're using a speakerphone, please pick up your handset before pressing the keys. If at any time your question has been addressed and you would like to withdraw your question, please press star and two. At this time, we'll pause momentarily to assemble our roster.
We have the first question from the line of Roger Boyd from UBS.
Please go ahead.
question and congrats on the landmark deal there. Maybe if you can, Tom, just broad strokes about kind of the competitive set to win that deal. Are you going toe-to-toe with other hyperscalers or neoclouds and anything you can provide on kind of the use cases, inference, is it agentic workloads? And when you think about your compute-enabled POPs, just how is this customer leveraging the Akamai network as a whole? Thanks.
Yeah, I can't give any more details about this specific deal, but in general, yes, we do compete with the hyperscalers and the neoclouds with our cloud infrastructure services. That's the primary competition. They select Akamai because of our proven ability to manage and scale complex distributed systems, our ability to get the necessary data center space and locations around the globe, and our ability to interconnect that with the world's largest and best-performing delivery network and leading security solutions. We offer the best in terms of latency and scalability. We probably deal with more data center companies than anybody, you know, being in 4,300 locations across 700 cities and 130 countries. So, yeah, we have significant competition. Every deal is competitive, but we also have unique capabilities, which is, I think, why our pipeline is so strong and why we're winning some very large deals.
Excellent. And then just on security, I wonder if you could unpack what you're seeing from a demand perspective there. And a nice result in the first quarter. Just what are you seeing around conversion rates, sales cycles? Are you seeing more urgency from organizations that are thinking about ways to limit the blast radius and defend against kind of an AI-fueled attack landscape? Thanks.
Yeah, I don't think I've ever seen CISOs more agitated and feeling more of a sense of urgency than they are now. Over the last several weeks, couple months, I've had the chance to meet with the CISOs of a lot of the world's biggest companies, in many cases the CEOs and senior executives, and they are very concerned about what happens when the attackers get access to advanced AI or the latest AI frontier models, which it seems that they will. You know, this is going to uncover a lot more vulnerabilities. We're going to see the equivalent of a lot more zero days. And they are literally scrambling now in many cases to make sure all their applications, their agents, their APIs are protected by Akamai. And you can imagine, you know, most of the world's major banks rely on us for security. And they're looking at a pretty big wave of new attacks, you know, coming their way. So I don't know of a comparable time where there's this much concern about what's going to happen with security, and also this much appreciation for what Akamai provides with our security platform.
Very good. Thanks for the color.
Thank you. We have the next question from the line of Patrick Covell from Scotiabank. Please go ahead.
Thank you so much for taking my question. I mean, this one's for Dr. Tom. I mean, when I think about Akamai, the value prop for, you know, the last 30 plus years has been the distributed architecture, you know, 700 cities, 130 countries. When I think about this mega deal, is that a kind of highly distributed use case? Or should we think about it as being served from, you know, a few like sub 10 type data centers?
Well, I'm not at liberty to talk about the recent deal. However, I think when you're thinking about Akamai's value proposition, you hit a very key point with our really unparalleled distributed architecture. And I did reference a bunch of use cases in the prepared remarks. And yeah, they very much rely on our distributed platform where you want to get the agents and the applications, the business logic close to users, close to the data, so you get low latency, you get scalability, particularly anything to do with, you know, video processing or video generation needs a lot of scale. And Akamai is unique there. And so, you know, I think absolutely what we're able to offer is very compelling.
Yeah, and look, congrats. I just... For my follow-up, please, Ed, you made this kind of subtle point that there's a CapEx guide...
Sorry to interrupt you, Patrick. Your voice is breaking. If you could probably go off the speakerphone, we would be able to hear you better.
Thanks for that. I guess this follow-up is for Ed. I mean, Ed, you gave us a CapEx guide, and then you kind of gave us this subtle point that you might have to increase CapEx further. Just help us understand the nuances of why there might be an increase in the CapEx mid-year, and I guess what that might mean.

Yeah, so thanks for the question, Patrick. So what I had mentioned was we have a very, very strong pipeline for our GPU platform.
We're just starting to get the bulk of those chips up and running now, and we've got a very large pipeline. It exceeds what we have in inventory. So obviously we want to prosecute that pipeline, start winning all those deals, converting that into contracts, et cetera. And then the reason I sort of hedged a little bit is, one, we want to obviously fulfill that pipeline, but two, there is some time that it takes to get the chips. So even if we were to place an order, it may slip into next year or so. What I want to do is just give it another quarter, and if, in fact, we're in a position to place an order and receive that by year end, we'll certainly do that and let you guys know. I see that as a very bullish comment. And, again, I just didn't want to come up and surprise you with another whatever it is, a couple hundred million or whatever the order may be, without at least giving you some color behind that.
Crystal clear. Thank you so much, Tom and Ed.
Thank you. We have the next question from the line of John DeFucci from Guggenheim Securities. Please go ahead.
Thanks for taking my question. My first question is for Ed, and I have a quick follow-up for Tom. Ed, thanks for all the detail on CapEx. But when I think about the CapEx for this mega deal, and I think Patrick was kind of going here, I mean, this is over a long time, right, seven years. Are you accounting, for example, like right now we're seeing higher memory costs than we would have thought maybe a year ago. When you locked in this deal, did you also have the supply locked in, or are you, um, you know, exposed to that if that were to happen, I don't know, two years from now, higher prices again?

Yeah, great question. So, you know, I was fortunate enough to work very closely with the team on both sides of this transaction.
So, yeah, we've been able to get the supply chain ready. We anticipate receiving all the goods that we need to deliver this service over the seven years within the next 12 months. Obviously, you saw the way the CapEx was broken out, with the majority of it this year. So we anticipate receiving a significant portion. Now, there's always the potential for some slippage and delays, but we have mechanisms in our contracts to deal with it if, in fact, say, six months from now, prices were to go up. So we've taken that into consideration. We've got that taken care of. And from a revenue perspective, the way to think about this deal is it's a set amount of capacity that we're deploying, and there's no usage to it. It's a straight... you know, usage... sorry, deployment: a committed deal over seven years. So as soon as we ramp all the capacity up, we'll start taking the revenue for a full year. I expect, as I said, a little bit this year, and then next year we'll get a partial year as we receive the remainder of what's to be deployed. And then from there, it'll go on for the remaining six-plus years.
Okay, so even though it's a consumption deal, it'll kind of look like a subscription. Is that accurate?
Exactly. Yep, that's exactly how I think about it.
Awesome. Okay, great. Thank you. And Dr. Tom, a component of your delivery business is video streaming. And in March, we saw OpenAI, they confirmed they shut down their AI video generation system, Sora. I'm just curious, do you expect that to have any effect on your delivery or compute business forecasts?
No. No. We partner with OpenAI on security vulnerabilities, helping define them and protecting our customers from the associated attacks. But OpenAI is not and has not been a customer of Akamai. So, yeah, no impact on us at all.
Great. Okay, thanks. Nice job, guys. Thank you.
Thank you. We have the next question from the line of Jackson Ader from KeyBanc Capital Markets. Please go ahead.
Hey guys, this is Aiden Daniels on for Jackson. Thanks for taking our question. With this big deal, you know, as you allocate capacity going forward, how can we kind of think about the impact on the amount of on-demand GPU capacity you're able to offer going forward? Like, you know, how are you balancing what you have committed from this deal with maintaining flexibility for, you know, new or incremental demand going forward? Thank you.
We support both on-demand, you know, per-token or per-VM-hour access to our platform, and also, you know, large tranche deals. So it's not really a matter at this point of trading off. And as we need more GPUs, as Ed said, it may well be the case that we would purchase more.
Awesome. And then just one quick follow-up on, I know you can't really talk too much about the deal, but I guess like, how can we kind of think about the proportion of whether it's CPU or more of the GPU inference cloud going forward? Is there kind of like a framework we can think about with this deal?
We can't comment on this deal specifically. However, in general, you know, with inference and AI, you need both really. And, you know, part of the value we provide is that we can help provide the computational resource that's most appropriate, you know, for the workload that you have, which might be CPU, might be GPU, because you want to be as efficient as possible. And also you want to have it be as close as possible to the user so you get the best performance. So it's a mix. And every application is different in the mix of CPU versus GPU that it needs.
Thank you.
Thank you. We have the next question from the line of Fatima Bolani from Citi. Please go ahead.
Good afternoon. Thank you for taking my questions. Just a higher-level strategic question. You have opted to take more of a dedicated capacity approach in terms of satisfying demand and supply constraints out there. I wanted to dig deeper into why, simply because the spot rates and the market rates for what otherwise could be almost entirely a rental or GPU-as-a-service business are significantly more attractive. So I just kind of wanted to get the vision and the thought process and the decision-making calculus around steering the network and the platform more towards larger customers, longer commits, and more dedicated capacity. And then I had a follow-up as well, please.
Well, we do both. And, you know, the larger, bigger deals with long-term commits are more attractive in many ways. You know, you have the commit. And in the big, big deals, yeah, the pricing would be lower. But we also support, you know, on-demand, where you can buy it by the token or the hour, and you get a little bit higher pricing. But, you know, there can be more expense associated with that, you know, getting the customer on board if you have a rep engaged in the account. But both are attractive, and we support both. So it's not a matter of us doing one or the other.
The one thing I would add there, Fatima, just to jump in here for a second, is that the customers are really driving that. If I look at our pipeline here, a lot of our customers want to have dedicated capacity, say a dedicated number of GPUs or whatnot, because there is a scarcity in the marketplace. So rather than going on a consumption basis, they can get slightly better pricing and lock in that capacity for themselves. So it's really a market-driven thing more than anything.
I appreciate that. And Ed, since I have you, you telegraphed for us pretty nicely that, you know, should the pipeline continue to grow and positively morph in the way you are seeing, you will be very open to continuing to put down CapEx and bringing and lighting up megawatts online. But wondering, you know, on sources of funds and capital to fund these investments, is that something you feel you can intrinsically do from running the business, or should we expect maybe other sources of capital to be tapped as you build out the bigger CapEx profile for some of these larger customers under demand? Thank you.
Yeah. So, so far, no issues as far as financing these build-outs from our own capital today. We're obviously a company that's very profitable; we produce a lot of cash. Obviously, in the years when we're investing big, cash flow will be a bit lower. But these things have phenomenal free cash flow after you do the initial deployment. So that's one of the attractive things. You know, from a cash and equivalents standpoint, we have $1.7 billion on the books today. We also have a line of credit of $1 billion if we need to tap it. And then obviously, you know, we've got excellent credit and have no problem raising money in the capital markets if we need to. Right now, you know, we haven't announced anything there. And if we continue to get large deals and need capital, we'll certainly look to do that. But so far, we've been able to use our own funds.
Thank you. Thank you.
We have the next question on the line of Mark Murphy from J.P. Morgan. Please go ahead.
Hey, this is Artie Vula from J.P. Morgan. I'm on for Mark Murphy. Thanks for taking the question. Great to see the momentum you're having with the large CIS deals with companies on the AI technology frontier. You had a large deal last quarter, another one this quarter that dwarfed the one before it. So just at a high level, can you help us understand from your perspective, like, you know, it seems like all of a sudden you're getting some of these large deals. Has this been brewing for a while in the pipeline, or has this been a little bit faster? What's changed that's brought a lot of this business to your doorstep seemingly pretty quickly from our point of view? And then as a quick follow-up to that, you know, as you're dedicating the financial and operational resources to CIS and these large deals, does it change how you're thinking about other business segments? Thanks.
Well, this has been the strategy all along, and we're very pleased to be executing against it. The goal has been to deploy a distributed inference platform, a distributed compute platform, that would be desired by enterprises really across the spectrum, including many large customers. And of course, Akamai's customer base does feature many of the world's largest enterprises. And as we've talked about before, they spend 10x or more on compute than they do on, you know, our traditional services, delivery and security. So this is exactly what we said we were going to do. And now we're delivering those results. The platform is at a point where we can do that. And I think you'll see more of this going forward.
Got it. Thank you.
Thank you. We have the next question from Aindo Sanjit Singh from Morgan Stanley. Please go ahead.
Yeah, thank you for taking the questions, and congrats on the biggest deal in company history. On that point, this might be a trivial question, but in terms of this $1.8 billion contract, is that more of a public cloud opportunity? Because I know part of the public cloud business also has a GPU component. Or is this specifically for Akamai Inference Cloud? That was the first question, and I have a follow-up.
Yeah, we really can't talk more about this particular deal. But obviously, there are a lot of companies where we have signed contracts that we did talk about across the spectrum. And those deals for our inference cloud and our cloud capabilities are for our GPUs and our CPUs. And it really is our ability to bring the right hardware for the particular application and have it located where you get the best benefit for the use of that application.
No, that's fair enough, Tom. My follow-up question actually goes to the delivery business. There are a lot of people in the market debating a potential new lever for growth in CDN and delivery in a world where you have millions, potentially billions, of agents running around, calling tools, executing tasks, doing web searches, etc. Has the team internally revisited its thesis around the secular growth prospects in delivery, or is it still a business that you're mostly looking to harvest for profitability and gross profit dollars to fund the compelling opportunities in security and compute?
Yeah, great question. When you look at the proliferation of agents and what's coming, the biggest driver for growth is going to be the compute platforms, the cloud platform that supports that. And, you know, we're really well set up to do that. Next, you have a big security issue, you know, because AI and the agents are a whole new, you know, vulnerability surface, where not only do you need your web app firewall and your API security, you need special security for AI. And so we get a real tailwind there for STG, our security technology group. Also, with agents, you know, you have to interpret what an agent is, who's behind it, and what they want to do when you're delivering or protecting an application or a site or another agent. And the response you give is really tailored to what the customer wants you to do when an agent of this flavor, you know, comes and interacts with you. And so we've developed a lot of capabilities there, and they fall generally within our security capabilities. Now, in terms of delivery, yeah, there'll be some traffic that used to be human-generated that's now agent-generated, okay. That doesn't make a huge swing in the amount of bits you're delivering. That starts to change if you have agents dealing with video, generating video. Like, you go to a commerce site and the user wants to see what they look like in that sweater they're thinking of buying, and you generate a video showing them wearing the sweater. That will, you know, improve the return, you know, for the site. And that generates a lot of traffic. And so we're just at the very early days of seeing things like that. They're being experimented with now. That could generate, you know, more traffic for delivery. But the biggest impact, you know, for us is in the cloud business and then next in the security business. Delivery is really important, very synergistic with our whole platform approach. It does generate a lot of cash for us, and we're plowing a lot of that cash into the growth of the cloud business.
I understand. Thank you, Tom.
Thank you. We have the next question from the line of Mike Sikos from Needham. Please go ahead.
Great. Thanks for taking the questions, guys, and congratulations on the strong quarter and the customer win. I just wanted to make sure I'm understanding the mechanics of this deal. So you signed a seven-year, $1.8 billion commitment. Can we expect the full $1.8 billion to show up in RPO, or does that include anything as far as potential renewals? Is that all take-or-pay? Just anything to make sure we're understanding the mechanics of the deal.
Yeah, sure. So I touched on this a little bit earlier, when there was a follow-up question around this notion of dedicated capacity versus, you know, pay by the hour. This is more of the dedicated-capacity type. So as soon as we get the capacity set up, we will take the revenue ratably over the contract. As I said, we'll get some revenue this year, and not a full year next year, but a partial year next year, as we're still, you know, building up and getting the capacity up and live. In terms of the way you account for RPO, we will see most of that in the next quarter. And then by the time we get everything delivered, it will all be in RPO eventually. There are just some odd mechanics with the first 12 months and how we're dealing with how we're receiving the goods. And we talked about a pricing mechanism to handle if prices go up or down, that sort of thing. So there's a little bit of nuance in there. But once we get this fully up and running, you'll see it in RPO. There'll be some amount next quarter, and then it will build from there.
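[Editor's note: for modeling purposes, the ratable recognition Ed describes can be sketched as below. The go-live timing and live-month counts are illustrative assumptions, not disclosed deal terms; only the $1.8 billion value and seven-year term come from the call.]

```python
# Illustrative sketch of ratable revenue recognition for a dedicated-capacity
# contract: revenue is recognized evenly over the months the capacity is live.
# Timing inputs here are hypothetical assumptions, not actual deal terms.

def ratable_revenue_by_year(total_value, term_months, live_months_by_year):
    """live_months_by_year maps year -> number of months capacity is live."""
    monthly = total_value / term_months
    return {year: monthly * months for year, months in live_months_by_year.items()}

# Assume the $1.8B, 7-year (84-month) commitment ramps with 3 live months in
# 2026 and a 10-month partial year in 2027 before full years begin in 2028.
rev = ratable_revenue_by_year(1_800_000_000, 84, {2026: 3, 2027: 10, 2028: 12})
# Monthly run-rate once fully live is roughly $21.4M; a full year is ~$257M.
```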
Thanks for that. I appreciate you spelling out the mechanics there. And then... for Dr. Leighton, just to make sure I'm clear as well, and it's great to hear that your largest security product here with WAF is seeing some stronger growth, which I wouldn't have expected. Can you just tap into that one more time as far as what's driving that? Is it really this heightened environment that we're in, or is there something else behind there?
Yeah, you know, there's real advances in AI, and it's getting much better at finding vulnerabilities and helping the attacker take over devices and penetrate enterprises. And you need our defenses now more than ever before. You know, there's billions of devices out there that you can't patch. And now the adversary can find ways into those devices and take them over. And so as a result, we're seeing attacks much bigger than we've seen before you know, literally application layer attacks from millions of distributed IPs with millions of, you know, attacks on a target per second. And you can't defend against that with just a WAF in a data center or anything close. You need the vast platform that we have to be able to intercept all that traffic and deal with it, you know, because you've got to separate the bad stuff from the good stuff, and there's a huge amount of the bad stuff now. So our platform, the physical infrastructure, is needed more than ever before for our security services, and our customers know that. And there is a heightened sense of urgency now because they know the attacks are getting more capable due to AI and larger in size because they can take over all these devices and launch the attacks from many more locations. And so that's why we're seeing, you know, things like our web app firewall suddenly, you know, in a lot more demand. Now, AI helps on the defense, but doesn't solve that problem. And so net-net, this is a very challenging time for CISOs. And that's why they're turning to us to make sure everything that they have, you know, is protected by Akamai.
Thank you. It's great to hear about that halo effect. And congratulations again on the strong customer win. Thanks.
Thank you. We have the next question from the line of Frank Lowton from Raymond James. Please go ahead.
Yeah, just to follow up on the question about the $1.8 billion and how that's being booked, is all of that going to come in as revenue? Will any of that be counted as paid-for upfront CapEx or something like that? And then I also wanted to follow up and see how many locations you have Inference Cloud built out to currently, and what's the plan? Thank you.
Sure, I'll take the first part, Tom. You can take the second. Yeah, it's all revenue. There's no offset to CapEx or anything like that, so it's going to be all revenue.
Yeah, to the second part of the question, Inference Cloud covers all of our 4,300 locations. You know, we have functions as a service running in a serverless way in all 4,300 locations. We have our managed container service running in well over 100 cities, and it conceivably can run in all 700 cities, but it's active in well over 100 today. We've got full IaaS capabilities in several dozen cities, and a couple dozen of those are equipped with the new 6000 GPUs. And the goal, of course, is to have all this orchestrated, so that when there's an application or an agent that needs to be run, it's run on the most computationally efficient resource. If you can do it on an edge server with the existing CPU, fabulous: fast, very low cost. If you can do it on a container in the same city on a CPU, great. If you need a group of GPUs in one of those couple dozen locations, okay. And again, you want it to be on the most efficient resource, to be close to the user, and to already be ready to go. You don't want to have to spin it up in response to a request. And that's what our orchestration layer is designed to make possible. And this is how it fits in with the vision from NVIDIA with the AI grid: you think of AI like you would an electrical grid. And that's what Akamai is building.
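[Editor's note: the placement logic Tom describes, running each workload on the cheapest, closest tier that can serve it, can be sketched as a simple policy. The tier names, the `Workload` fields, and the ordering rule are hypothetical illustrations, not Akamai's actual scheduler.]

```python
# Hypothetical sketch of a tiered placement policy: prefer the most
# computationally efficient tier (closest to the user, lowest cost) that
# can satisfy the workload's requirements.
from dataclasses import dataclass

@dataclass
class Workload:
    needs_gpu: bool
    needs_container: bool  # too heavy for serverless edge functions

# Tiers ordered from most to least preferred, per the transcript's hierarchy:
# edge serverless CPU -> in-city container CPU -> regional GPU cluster.
TIERS = [
    ("edge_serverless_cpu",  lambda w: not w.needs_gpu and not w.needs_container),
    ("city_container_cpu",   lambda w: not w.needs_gpu),
    ("regional_gpu_cluster", lambda w: True),  # catch-all: GPU capacity
]

def place(workload):
    """Return the first (most preferred) tier whose predicate accepts the workload."""
    for tier, fits in TIERS:
        if fits(workload):
            return tier

print(place(Workload(needs_gpu=False, needs_container=False)))  # edge_serverless_cpu
print(place(Workload(needs_gpu=True, needs_container=True)))    # regional_gpu_cluster
```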
Okay, great. Thank you.
Thank you. We have the next question from Lionel Willpower from BID. Please go ahead.
Oh, great. Thank you. Yeah, I'll echo my congratulations on this massive deal. Just maybe two questions. First, just a clarification, perhaps, Ed. When you talk about needing additional GPUs, do you need more GPUs to satisfy the new deal, or is that more related to the building pipeline? And, I know the timing is uncertain as to when the GPUs might be available, but is there any framework for what we're talking about in terms of overall cost? And then I have a second question.
Yeah, sure. So if you listened to the prepared remarks, we talked about how all the CapEx that we need to satisfy the $1.8 billion is in the guidance. So that's separate from the comment I made around the additional GPU purchase, and that was really tied to how we're doing with the pipeline and how quickly we can execute on that. And again, there's always the question of, can you get them delivered in time? So we'll give you more information on that. It really depends on what we're seeing in terms of demand. We're seeing a pipeline that's very, very strong, some very large opportunities, some customers that want to start with a couple hundred GPUs, some that want to start with a thousand or more. It's really all over the map in terms of opportunities, and it's growing every day, which is great. So we'll size it up for you. The last one was around $250 million in CapEx. You know, I don't have anything to tell you in terms of how big I think it will be, but hopefully I'm telling you it's a really big number, because we've got significant demand for it.
Yep. Okay. And then any way to kind of frame how you're thinking about gross margin and operating margin impacts? I know 2027 sounds like it's still a partial year. As you look into 2028, how do we think about how this impacts the overall financial model relative to where you are today?
Yeah. So let me talk about it at a high level. Think about especially some of these larger deals, where someone comes to us and says, hey, I want 1,000 GPUs, or in the case of this big customer, I want a certain amount of capacity over a long period of time. The biggest cost driver there is your depreciation over that period of time. The costs that go into your cash gross margin are much less, right? It's your co-location costs, maybe some bandwidth or networking costs and things like that. And sometimes there are some people costs, but generally speaking, these scale pretty well. So what you would expect over time is, you know, your cash gross margin could improve. Now, obviously there's some push and pull here between lighting up colo for expected demand, where you have to light that up first, and all that kind of stuff. So it'll take a bit for this to play out, but you should see your cash gross margin expand a bit, your EBITDA margin expand a bit. And then from an operating margin perspective, it really depends on the mix. We're willing to do, you know, some of these deals that are a lot larger, potentially at margins that might be below the 30% operating margin. If you look at, you know, certainly the GPU-as-a-service rented GPU, that's much higher than the company operating margin, and you're going to get a lot of scale across the OpEx as we take on, you know, certainly larger customers. So, you know, we're going to really focus over the next year or two on capitalizing on this growth. So we won't be at a margin expansion point right now, but at some point that will happen naturally, and your free cash flow margins will improve and things like that. So that's the way we're thinking about it, but we're really excited about going after this growth opportunity, and we're going to continue to invest to go get it.
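[Editor's note: Ed's point, that depreciation dominates cost of revenue on dedicated-capacity deals so cash gross margin runs well ahead of GAAP gross margin, can be illustrated with a toy model. Every dollar amount below is a made-up assumption for illustration only.]

```python
# Toy model of a dedicated-capacity deal: GAAP gross margin carries the
# depreciation of the deployed hardware, while cash gross margin excludes it.
# All inputs are hypothetical assumptions, not Akamai figures.

def margins(revenue, cash_costs, depreciation):
    """Return (cash gross margin, GAAP gross margin) as fractions of revenue."""
    cash_gm = (revenue - cash_costs) / revenue
    gaap_gm = (revenue - cash_costs - depreciation) / revenue
    return cash_gm, gaap_gm

# Assume $100M annual revenue, $15M of colo/bandwidth cash costs, and
# $45M/yr of hardware depreciation over the equipment's useful life.
cash_gm, gaap_gm = margins(100e6, 15e6, 45e6)
print(f"cash GM {cash_gm:.0%}, GAAP GM {gaap_gm:.0%}")  # cash GM 85%, GAAP GM 40%
```

Under these assumptions the spread between the two margins is exactly the depreciation load, which is why the call frames the deals as having "phenomenal free cash flow after you do the initial deployment."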
That's helpful perspective. Thank you.
Thank you. We have the next question from James Fish from Piper Sandler. Please go ahead.
Hey, guys. Look, given what you've discussed around power in the past, with the large sites having, I think as you said, 5 to 10 megawatts, and smaller sites a fraction of that, it puts you above 300 megawatts. But you don't have enough revenue that aligns to this. So how much of that power is for non-compute services? And is that why you need to bring on, from what I can tell, another roughly 40 megawatts just for this deal alone? It doesn't seem to map, and you should be able to support this if that power is all allocated to compute. So can you just walk us through where you are in terms of megawatts, and what the plan is by the end of '27?
Yeah, so I didn't quite follow all that. But let me just start by saying, you know, your math isn't right in terms of what would be required to deliver this particular deal. I'll just leave it at that. It's significantly lower than that. In terms of our capacity, if you think about what uses the megawatts of power that we have, the CDN and the security business is a small fraction. You can think about that as being kilowatts in some cases, maybe a megawatt or two in some of the big CDN deployments. So there's not a ton of, you know, massive power required to run the CDN business. When it comes to the compute business, it's a lot greater, especially when you get customers who want, say, a few thousand GPUs in a particular location, or they're in, say, 20 or 30 locations and they've got a lot of CPU. So you do tend to see a lot more need for power there. And what I've talked about is our typical deployment for some of our larger locations: we talked about having 40 core compute locations, and those are in the 5 to 10 megawatts, expandable to, say, 20 to 30. And it really depends. You know, we can get a little bit bigger than that, but there's plenty of opportunity for us to get additional colo, and we expect to light up a lot more going forward here. So if the concern is that we don't have access to enough power or can't get colo, that is not a concern of ours right now at all. As Tom talked about earlier, we've got great relationships, and we're a very attractive client for some of these data center providers. We've got excellent credit. We're not a do-it-yourselfer like a hyperscaler. We've got much better credit than, say, some of the neoclouds, and we do take significant chunks of colo in some cases and actually help some of our colo partners build out. So I'm not concerned about that at all. And the power dynamics for each one of the different products are different. So GPUs take up more power than CPUs.
And so there's math that goes along with that, and I've shared that math. The CPU math is much lower because you don't need as much power. And then also the type of equipment that you're running can have a pretty significant impact on power. We're seeing a lot of interesting hardware providers coming out with stuff that's incredibly efficient from a power perspective. So it's hard for me to answer that question specifically, but just take it that we're not too concerned at all about getting enough power to run these things, and the power requirements are very different. But we factor that into any deal that we do and ensure that we're not going to take on anything that's not profitable.
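[Editor's note: the power arithmetic Ed alludes to, with GPU racks drawing far more than CPU racks inside site budgets in the 5-to-10-megawatt range, can be framed as a simple capacity check. The per-rack wattages below are rough, hypothetical figures, not Akamai's actual numbers.]

```python
# Rough, hypothetical power-budget check for a colo site: how many racks of a
# given type fit within a site's megawatt budget. Wattages are illustrative.

def racks_that_fit(site_budget_mw, kw_per_rack):
    """Whole racks supportable within the site's power budget."""
    return int(site_budget_mw * 1000 // kw_per_rack)

GPU_RACK_KW = 40   # assumed draw for a dense GPU rack
CPU_RACK_KW = 10   # assumed draw for a CPU rack

site_mw = 5  # low end of the 5-10 MW core-compute-site range mentioned on the call
print(racks_that_fit(site_mw, GPU_RACK_KW))  # 125 GPU racks
print(racks_that_fit(site_mw, CPU_RACK_KW))  # 500 CPU racks
```

The point the sketch illustrates is simply that, at these assumed wattages, the same site supports several times more CPU capacity than GPU capacity, which is why "the power dynamics for each one of the different products are different."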
Got it. Makes sense. And just, Ed, on the security side, normally you give us API and zero trust versus kind of the core. I guess, how did that trend? And then how did compute trend in the quarter with enterprise? Thanks.
Yeah, so with security, we didn't break out API and Guardicore. We did say it was the majority of what's driving growth. And I would just say the growth rates are similar to what we had last quarter. Just remember, last quarter had a fair bit of license revenue. So when you back that out, it's kind of apples to apples, growing at roughly the same rates when you back out the impact of the license revenue. So still very, very healthy growth rates there. And then on compute, you asked about enterprise compute. We don't break it out that way. Really, the way to think about enterprise compute is CIS, which is broken out separately, and we do provide what we used to call our application services, which is included inside of the third bucket, delivery and app services. So that number is broken out for you. So 40% growth was for CIS year over year, and we expect that to accelerate.
Thank you.
We have the next question from the line of Jonathan Ho from William Blair and Company. Please go ahead.
Hi, good afternoon, and let me echo my congratulations as well. Just one for me. Given the types of mega customers that you're bringing onto your platform, is there more opportunity to upsell to them once they are on your platform? Are there potential additional services? And could they come back to the well if they continue to expand their growth as well? Thank you.
Yeah, as you know, the demand for AI is rapidly increasing. We're really early on there. And, yeah, I would expect there's plenty of room to grow the existing base and, of course, add other customers of that scale.
Thank you.
Thank you. We have time for one last question from the line of Jeff Henry from Craig-Hallum. Please go ahead.
Great. Thanks for taking the question and sneaking me in there. Two quick ones. First, maybe, Tom: there's a lot of blowback nationally against AI data centers and the power consumption correlated to them. As you're stepping into deals of this magnitude, how do you think about staying out of the crosshairs of some of that community-wide pushback on the broader AI compute environment, if you will?
Yeah, I don't think we're of a profile in the popular press anything like, you know, the giant hyperscalers are. So I don't think that's really an issue for us. So, yeah, we're not worried about that yet. Maybe that's even a good problem to have once we're much, much larger than we are today.
Okay. And then second, on the security side, given the comments about AI becoming a tailwind there, would you think this year is likely a floor in terms of growth rate? Namely, should we be thinking about maybe a re-acceleration as we get into '27 and beyond?
You know, we'll see. We gave you guidance for the year. We're obviously very pleased with what we saw in the first quarter. We do like what we see, especially around API security. Still early days, low penetration rate. Guardicore is growing very, very consistently. So, yeah, we'll see how it goes, and we'll update you as we go.
Got it. Thanks so much.
Thank you. This concludes our question-and-answer session. The conference has now concluded. Thank you for attending today's presentation. You may now disconnect.
