This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
2/19/2026
Good day and welcome to the Q4 2025 Akamai Technologies, Inc. earnings conference call. Today, all participants will be in a listen-only mode. Should you need assistance during today's call, please signal for a conference specialist by pressing the star key followed by zero. After today's presentation, there will be an opportunity to ask questions. To ask a question, you may press star then one on your telephone keypad. To withdraw your question, please press star, then two. Please note that today's event is being recorded. I would now like to turn the conference over to Mark Steltenberg, Head of Investor Relations. Please go ahead, sir.
Good afternoon, everyone, and thank you for joining Akamai's fourth quarter 2025 earnings call. Speaking today will be Tom Leighton, Akamai's Chief Executive Officer, and Ed McGowan, Akamai's Chief Financial Officer. Please note that today's comments include forward-looking statements, including those regarding revenue and earnings guidance. These forward-looking statements are based on current expectations and assumptions that are subject to certain risks and uncertainties and involve a number of factors that could cause actual results to differ materially from those expressed or implied. The factors include, but are not limited to, any impact from macroeconomic trends, the integration of any acquisition, geopolitical developments, and other risk factors identified in our filings with the SEC. The statements included on today's call represent the company's views on February 19th, 2026, and we assume no obligation to update any forward-looking statements. As a reminder, we will be referring to certain non-GAAP financial metrics during today's call. A detailed reconciliation of GAAP to non-GAAP metrics can be found under the financial portion of the investor relations section of Akamai.com. With that, I'll now hand the call off to our CEO, Dr. Tom Leighton.
Thanks, Mark. I'm pleased to report that Akamai delivered strong fourth quarter results as we continue to make major progress in positioning Akamai for the future. Revenue grew to $1.095 billion, up 7% year-over-year as reported, and up 6% in constant currency. Non-GAAP operating margin was 29%, and non-GAAP earnings per share was $1.84, up 11% year-over-year as reported and in constant currency. Q4 revenue for cloud infrastructure services, or CIS, was $94 million, up 45% year-over-year as reported, and up 44% in constant currency. That's an acceleration from the 39% growth rate we achieved in Q3. The rapid growth was broad-based within CIS, driven by our ISV solutions, by infrastructure-as-a-service and storage customers, and by customers leveraging EdgeWorkers and WebAssembly, which offer improved performance and lower costs for edge-native applications. In each of these areas, we're starting to benefit from AI-related tailwinds as customers make greater use of AI applications and agents across their businesses. Last quarter, Akamai took a major step toward the future with the launch of Akamai Inference Cloud, our platform to support the growing demand to scale AI inference on the internet. Akamai's architecture uniquely positions us to power and protect AI the way we power and protect the web, by bringing AI physically close to users, enabling the faster performance and global scale needed to unlock AI's full potential. We believe the AI market is entering a critical transition point, the first inning of a long game to come, where inference, or the execution of queries against a trained model, is the new frontier. This requires purpose-built infrastructure to enable distributed, low-latency, globally scalable AI at the edge, with response times measured in a few tens of milliseconds.
Akamai Inference Cloud does just that by incorporating NVIDIA Blackwell GPUs into Akamai's distributed cloud infrastructure with its unparalleled global reach and security at the edge. This enables intelligence to run instantly, securely, and exactly where it's needed, right next to the user, agent, or device. As evidence of our strong momentum, we're delighted to announce that we recently signed a four-year, $200 million commitment for our cloud infrastructure services with a major US tech company at the forefront of the AI revolution. I've had the privilege to work at Akamai for many years, and I have to say that it's really exciting to see such a pivotal player in the AI ecosystem choosing Akamai Inference Cloud for such a large AI use case. We also signed many other new and expanded contracts for our cloud infrastructure services in Q4. An AI chatbot platform based in India signed a three-year contract for our IaaS and enhanced compute support solutions and saved 45% on compute costs they would have paid to a hyperscaler. A very well-known antivirus software company chose Akamai's cloud for their VPN service telling us they liked our performance and support better than what they previously got from two of our cloud competitors. A leading social networking platform that was using us on a pay-as-you-go basis committed to consolidate their multi-vendor stack onto Akamai's cloud platform, providing us with another takeaway from a hyperscaler. Two ad tech companies in China chose us for our significantly lower latency and dramatically reduced egress costs. And one of the world's largest retail companies expanded their use of our edge compute platform to improve their digital shopping experience and increase conversion rates. As a result of the strong customer demand that we're seeing and the strong AI tailwinds across the marketplace, we anticipate that the very rapid growth rate for our cloud infrastructure services will accelerate further in 2026. 
Our security solutions also performed well in Q4, led by continued strong demand for our market-leading API security and Guardicore segmentation solutions. Revenue from these high-growth security products grew 36% year-over-year as reported and 34% in constant currency. Last month, Akamai was recognized as a Customers' Choice for network security microsegmentation in the Gartner Peer Insights report for 2026. Akamai earned a 99% recommendation rate, scoring above market norms for both user adoption and overall experience. Last quarter, we saw continued strong demand for our Guardicore segmentation platform with both new and existing customers. One of North America's largest financial institutions purchased our segmentation solution to gain visibility and protection across all of their network assets as part of a four-year, $40 million contract. South Korea's largest mobile operator selected Akamai following the well-publicized BPFDoor security incident, which exposed gaps in east-west security and zero trust maturity. The customer chose our solution for workload-level segmentation, deep visibility, and resilient enforcement across hybrid environments. We also signed deals for segmentation in Q4 with one of the largest carriers in the UK, a major branch of the US Armed Services, and multinational banks in North and South America and Scandinavia. In Q4, we also saw increased demand for our API security solution, signing new customers across multiple verticals, including financial services, technology, healthcare, real estate, retail, and travel. Customers who chose Akamai API security in Q4 included a major European automaker, a telco in the Middle East, as well as airlines serving Asia Pacific and Latin America. We also signed a five-year, $47 million commitment from one of the largest hardware companies in the world in a contract that included API security and cloud infrastructure services, along with other Akamai offerings.
We had many other customers in Q4 who purchased multiple security products across our portfolio, including one of Asia's largest airlines, which signed a $10 million contract for multi-layered protection over five years, and a three-year $45 million renewal with one of the world's largest financial institutions to migrate nearly 100 critical applications away from hyperscaler security and onto the Akamai platform to ensure best-in-class DDoS and web application protection, high availability, and robust security support from Akamai Security Operations Command Center. Earning the trust of customers is imperative for Akamai. The world's biggest brands trust us to keep their apps performing well, even under peak traffic conditions. They trust us to protect them from myriad attacks, and to keep their data safe, and they trust us for our reliability. We saw how much this trust mattered to customers who relied on us during the recent holiday season, a time when one of our competitors took down their customers with multiple multi-hour outages. Major enterprises know who they can trust, and we're grateful for the trust that our customers place in Akamai. Last quarter, we were honored to be named by Forbes in their list of America's most trusted companies and in their list of America's best companies for 2026. Forbes analyzed thousands of the largest public and private companies in the U.S. across 11 dimensions, including financial performance, customer sentiment, employee ratings, reputation for innovation, executive leadership, cybersecurity, and sustainability. We were also honored by the Wall Street Journal naming Akamai to its list of America's best managed companies, the management top 250. This ranking by the Drucker Institute analyzed publicly traded companies based on customer satisfaction, innovation, financial strength, social responsibility, and employee engagement and development. 
Before I hand off to Ed, I want to thank our employees and our management team for their achievements in 2025. Together, we're successfully executing on our ongoing transformation of Akamai into the cybersecurity and cloud company that powers and protects business online. We believe that the investments we're making today are enabling Akamai to do for cloud and AI what we've done for security and CDN, and enabling Akamai to grow even faster as a result. Now I'll turn the call over to Ed to say more about our results and our outlook for Q1 and the year. Ed?
Thank you, Tom. I'm pleased to report that we delivered excellent fourth quarter results with total revenue of $1.095 billion, up 7% year-over-year as reported and up 6% in constant currency. We also delivered strong bottom line results, with non-GAAP EPS of $1.84, up 11% year-over-year as reported and in constant currency. Moving now to revenue. Compute revenue, which is comprised of the high-growth cloud infrastructure services, or CIS, solutions and our other cloud applications, or OCA, was $191 million, up 14% year-over-year as reported and in constant currency. For Q4, CIS revenue was $94 million, accelerating to 45% growth year-over-year as reported and 44% in constant currency, a nice jump from 39% growth last quarter. CIS now represents approximately 50% of total compute revenue. Moving to security, revenue was $592 million, up 11% year-over-year as reported and 9% in constant currency. Revenue from API security and Zero Trust enterprise security combined was $90 million, an increase of 36% year-over-year and 34% in constant currency. Notably, API security grew by more than 100% year-over-year, exiting the year with a revenue run rate exceeding $100 million. Security revenue was driven by strength of our high-growth product suites and a favorable tailwind from term license revenue. For the fourth quarter, license revenue rose to $18 million, up from $12 million in the same period last year. As a reminder, our term license agreements are generally for one to three years, and we continue to maintain exceptionally high renewal rates in our term license business. Moving to delivery, revenue was $311 million, down 2% year-over-year as reported, and down 3% in constant currency. These results highlight the continued steadying trends we have seen in our delivery business throughout 2025. International revenue was $542 million, up 11% year-over-year or up 8% in constant currency, representing 50% of total revenue in Q4.
Foreign exchange fluctuations had a negative impact on revenue of $5 million on a sequential basis and a $12 million positive impact on a year-over-year basis. Moving to profitability, in Q4 we generated non-GAAP net income of $270 million, or $1.84 of earnings per diluted share, up 11% year-over-year as reported and in constant currency. This better-than-expected performance was primarily driven by higher-than-expected top-line revenue in the fourth quarter. Finally, our Q4 CapEx was $154 million, or 14% of revenue. Moving to cash and our capital allocation strategy, as of December 31st, our cash, cash equivalents, and marketable securities totaled approximately $1.9 billion. During the fourth quarter, we did not repurchase any shares. For the full year 2025, we spent $800 million to buy back approximately 10 million shares, marking the largest annual buyback in our history. As it relates to the use of capital, our intentions remain the same: to continue buying back shares over time to offset dilution from employee equity programs, and to be opportunistic in both M&A and share repurchases. Now, before I provide Q1 and full year 2026 guidance, I want to touch on some housekeeping items. First, as Tom pointed out, we recently signed our largest compute customer contract. We're very excited that this technology company has committed to a minimum four-year spend of approximately $200 million on our cloud infrastructure services, with a large majority of that spend for our AI inference cloud. We expect to start recognizing revenue from this contract in the fourth quarter of 2026. Second, to capitalize on this transaction and the growing AI inference cloud pipeline, we intend to invest approximately $250 million of CapEx this year to augment our AI inference cloud. Third, we have recently observed significant inflationary pressure within the computer hardware market due to unprecedented industry investment in AI.
Specifically, we are seeing a dramatic increase in the price of memory chips, which is driving up the cost of servers. This supply constraint has necessitated an upward adjustment to our CapEx forecast of approximately $200 million for 2026. Next, I want to remind you of some typical seasonality we experience in operating expenses throughout the year. First, we recently completed a targeted reduction in our workforce to better align our talent with our long-term growth priorities. While this action streamlined certain areas and reduced our OpEx, we do not anticipate it generating net savings for the full year. Instead, we are reinvesting those savings directly back into the business, specifically to scale our go-to-market efforts and to support our co-location and CIS infrastructure requirements to maximize our growth opportunities. In Q4, we took a $55 million restructuring charge that was primarily comprised of severance costs and impairments of certain intangible assets. Second, looking at the first quarter, we typically see a seasonal increase in expense. This is driven by higher payroll costs resulting from the reset of Social Security taxes for employees who maxed out in 2025, and stock vesting from employee equity programs, which tend to be more heavily concentrated in the first quarter. Third, as we look to the second quarter, we expect operating expenses to remain relatively flat on a sequential basis. The savings realized from our restructuring and the roll-off of the higher Q1 payroll taxes will be offset by our annual merit cycle, which takes effect on April 1st. Moving to FX, foreign currency markets are expected to remain volatile throughout 2026. As a reminder, we have approximately $1.3 billion in revenue that is denominated in foreign currency. Our largest currency exposures on revenue include the euro, the yen, and the British pound.
Finally, as previously noted, cloud infrastructure services now accounts for approximately 50% of our total compute revenue and is growing rapidly. Recognizing CIS as the primary growth engine and a significant focus of our investments for the compute business, we will begin reporting it as a standalone revenue category effective in the first quarter of 2026. For simplicity, we will consolidate delivery and other cloud apps into a single reporting category starting in Q1. To assist with your year-over-year analysis and financial modeling, we have published eight quarters of revenue history for these revenue categories in supplemental schedules as part of today's reporting package on our investor relations website. In addition, for added transparency, we will disclose quarterly revenue for OCA independently for the remainder of 2026. Now moving on to guidance. For the first quarter of 2026, we are projecting revenue in the range of $1.06 billion to $1.085 billion, up 4% to 7% as reported, or 2% to 5% in constant currency, over Q1 2025. We expect Q1 revenue to be lower sequentially from Q4, driven by the following factors. First, reduced one-time license revenue in Q1 from Q4 levels. Second, two fewer calendar days in Q1 compared to Q4, thus two fewer days of usage revenue. And finally, less seasonal traffic in Q1 compared to Q4. At current spot rates, foreign exchange fluctuations are expected to have a positive $4 million impact on Q1 revenue compared to Q4 levels and a positive $22 million impact year-over-year. At these revenue levels, we expect cash gross margins of approximately 71% to 72%. Q1 non-GAAP operating expenses are projected to be $339 to $348 million. We anticipate Q1 EBITDA margin of approximately 39% to 41%. We expect non-GAAP depreciation expense of $145 to $147 million. And we expect non-GAAP operating margin of approximately 26% to 27%.
With the overall revenue and spend configuration I just outlined, we expect Q1 non-GAAP EPS in the range of $1.50 to $1.67. This EPS guidance assumes taxes of $57 to $60 million, based on an estimated quarterly non-GAAP tax rate of approximately 19%. It also reflects a fully diluted share count of approximately 148 million shares. Moving on to CapEx, for the reasons that I highlighted earlier, we expect to spend approximately $254 to $264 million in the first quarter. This represents approximately 23% to 25% of revenue. Looking ahead to the full year 2026, we expect revenue of $4.4 to $4.55 billion, which is up 5% to 8% as reported, and 4% to 7% in constant currency. Moving on to security, we expect security revenue to grow in the high single digits on a constant currency basis in 2026. For cloud infrastructure services, or CIS, we project revenue growth to accelerate to 45% to 50% year-over-year. We expect this momentum to build throughout the second half of 2026, driven mainly by the scaling of our AI inference cloud business. For delivery and other cloud apps, we expect both will decline in the mid-single digits year-over-year. Specific to delivery, we expect revenue to decline in the mid-single digits for the year, with Q1 being slightly higher due to the wraparound impact of the Edgio transaction from last year. By way of comparison, and for consistency with 2025 using our former compute reporting methodology, we expect the combined growth of CIS and OCA to be at least 20% year-over-year. At current spot rates, our guidance assumes foreign exchange will have a positive $36 million impact on revenue in 2026 on a year-over-year basis. Moving on to operating margins for 2026, we are estimating non-GAAP operating margin of approximately 26% to 28% as measured at today's FX rates. The decline in operating margin for the full year 2026 is due mainly to increased co-location and depreciation expense associated with the continued build-out of our CIS business.
We anticipate that full-year capital expenditures will be approximately 23% to 26% of total revenue, driven by the investments and costs that I mentioned earlier. As a percentage of total revenue, our 2026 CapEx is expected to be roughly broken down as follows. For network-related CapEx, we expect approximately 4% for our delivery and security business and approximately 10% to 13% for compute. And for other CapEx, we expect approximately 8% for capitalized software, with the remainder being for IT and facilities-related spending. Excluding the impact of the increased hardware pricing, 2026 CapEx would have trended within the 18% to 22% range. The impact of increased server costs is mainly included in the compute line item above. Moving to EPS for the full year 2026, we expect non-GAAP earnings per diluted share in the range of $6.20 to $7.20. This non-GAAP earnings guidance is based on a non-GAAP effective tax rate of approximately 19% and a fully diluted share count of approximately 147 million shares. With that, I'll wrap things up. Tom and I are happy to take your questions. Operator?
Thank you. We will now begin the question and answer session. As a reminder, to ask a question, you may press star then one on your telephone keypad. If you are using a speakerphone, please pick up your handset before pressing the keys. If your question has been addressed and you would like to withdraw it, please press star then two. We will now pause momentarily to assemble our roster. And today's first question comes from Sanjit Singh with Morgan Stanley. Please proceed.
Thank you for taking the questions, and congrats on very strong Q4 results. Ed, you provided a lot of great detail on the dynamics around CapEx as well as the momentum you're seeing with the CIS business. When I look at the increase in CapEx, it's roughly coming up by, I think, $270 million. Going back to the discussion we've had in prior quarters, that roughly a dollar of CapEx equals a dollar of revenue, does that still hold? And as we think about this increase in CapEx, how should we think about that translating into revenue from a timing perspective, both this year and then maybe going beyond 2026?
Yeah, hey, Sanjit, thanks for the question. So, you know, obviously I talked about having some inflation in memory chips. Hopefully that is something that doesn't last for a long time. So that obviously skews your CapEx a bit. And as I talked about, most of that is affecting our compute because there's a lot more memory in those servers. So the dollar of CapEx for a dollar of revenue would not hold true for this particular round of CapEx buying, but it's not that far off. Generally speaking, we're seeing something roughly like that. Obviously, for larger deals with longer commitments, we will offer volume discounts. But even for some stuff, you might get a slightly better return. Like, for example, we'll be launching a rental service where you can rent GPUs by the hour starting sometime later this quarter, where the list price for that is $250, so that would work out a little bit higher. But generally speaking, it's a decent number to work with. I'd model it a little bit lower for this year, just given that we've seen higher CapEx costs associated with the memory prices.
Understood. And then just one follow-up on the Akamai Inference Cloud opportunity. Really encouraging to see that four-year deal with a major tech company. Can you speak a little bit about the pipeline? I know we have some really big customers looking at the opportunity, but just in terms of the breadth of interest and pipeline, any color you can provide there on potentially more customers signing up for the service?
Yeah, pipeline, very strong. In fact, the Inference Cloud offering we announced in the fall, where we deployed the GPUs into 20 cities, that's already sold out, even though it's not generally available yet, just from the beta customers. And so now we're ramping up the investment there, as Ed mentioned. And very strong pipeline, in fact, with the large customer we talked about already committing to take over a substantial portion of that. The areas of interest are broad. At a high level, obviously inference applications, also post-model training, but specifically things like transcoding, real-time translation, generative media to generate images and video on the fly, and the new Blackwell GPU is very good at doing that with much lower latencies. Vision, processing what is seen. Customer support bots. All sorts of gaming applications: streaming, rendering, modifying characters as you go along in the game. In commerce, virtual fitting room kinds of applications, so it's almost like the buyer is looking at themselves in a mirror wearing the clothes, also making sure the clothes will fit, so you have fewer returns. A lot of robotics and autonomous vehicle kinds of applications, areas where these folks might not traditionally be Akamai customers but are now potentially large compute customers. And generally the field of local LLMs, as companies do more kinds of things themselves but want to operate their own model, that's great because that's the kind of thing you'd want to do on Inference Cloud and have it done close to where your employees are. So we're very enthused about what we're seeing so far and a lot of potential for growth for us.
And the next question comes from Mike Secaz with Needham & Company. Please proceed.
Hey, team. Thanks for the questions here, and congrats on the strong end to '25. The first question I have for you: on that major U.S. tech customer, can you help us think about how this came together? It's great to see the duration. We're talking four years and the $200 million minimum commitment, but was this a new logo for Akamai, or were they a previously existing customer within CIS or another portion of the Akamai portfolio? And then I just have a quick follow-up.
Sure, I'll take this one, Tom. So the good news: it was an existing customer. It wasn't one of our largest customers, though. This was somebody who was using us for CDN and security, and we'd had discussions with them going on for several months now on a pretty exciting workload. We're not at liberty to disclose who it is, but the good news is it's an existing customer who has dramatically increased their spend, and we hope there's a lot more business to do with them.
That's excellent, and I appreciate that, Ed. I guess the follow-up: when thinking about the capital intensity here, and I really appreciate the disclosure, it sounds like you guys have been busy on your side, but how do we think about the level of CapEx you guys are deploying here? Are you changing in any way how you're sourcing servers or going in and buying hardware versus where we've been previously, just given the heightened price components that we're seeing out there in the market and the sense that this is somewhat different as far as the cycle and persistence of these pricing dynamics? Anything there would be incremental as well. And thank you again.
Yeah, sure. No problem. And, you know, the capital intensity isn't necessarily increasing for any other reason than we're seeing significant demand for CIS. So that's the major driver. And obviously, making that purchase of $250 million for the inference cloud is very well informed. And as Tom mentioned, it's great to have one customer who's taking up a good chunk of that and having that committed. It's just a great opportunity for us to put that capital to use. So I hope we do more of that. I'm very happy about that. Now, in terms of the complexity of what we're doing or what we're buying changing, no, not really. We're buying, you know, mostly servers and networking equipment and things like that. We are looking at trying to reduce the impact of the memory chip increase in cost, so we're looking at sourcing things differently, from different sources, et cetera. But generally speaking, there isn't really any significant change. And as far as our co-location posture, we're still using third-party colo providers. At some point, maybe that changes once we get a lot larger, but no real significant change. And hopefully, as I broke out the different components, if you want to think about it this way: you take out the $200 million for the price increases, and then if you look at that purchase for the AI inference cloud as sort of something that we did that was a little different than last year, the normalized CapEx is kind of at the lower end of what our range typically would have been. So this is a good kind of capital intensity increase when you have a chance to fuel a business that's growing as fast as CIS is.
Great to hear. Thank you, Ed.
The next question comes from Rishi Jaluria with RBC. Please proceed.
Oh, wonderful. Thanks so much for taking my questions. Nice to see the acceleration in the CIS business at scale. Maybe two questions, if I may. Number one, if I start to think about some of the success that you're having on CIS, it sounds like you're having that with existing Akamai customers that may have used you for delivery or security or a combination thereof. Maybe can you help us understand, as you think about going back to those customers, is the total ACV, or whatever sort of metric you want to use, with those customers growing meaningfully as a result of this? In other words, just trying to get a sense that it's not a situation of money that maybe they would have spent for delivery in the past, or, as we think about pricing and DIY, money that's going elsewhere, and that this is actually being additive to those customers' total bills, if that makes sense. And then I've got a quick follow-up.
Yeah, great question. It's certainly additive. We're not horse trading any delivery for compute or anything like that. As a matter of fact, this particular large customer deal was done out of cycle, so it wasn't even done as part of a renewal, so it's all 100% additive. And I would say, yeah, we're having good success with existing customers, but also with new customers. Tom talked about the pipeline. What's interesting with that pipeline is we are starting to see verticals we're not typically strong in from a legacy perspective as far as CDN goes. And so that's good to see. We see partners bringing us new business, and there's really a mix in that pipeline of new and existing customers. And I've actually seen total new customer count pick up over the last year and a half or so, and I think a lot of that has to do with having CIS as an offering that's more broad.
Got it. That's helpful. And then maybe I'd be a little remiss if I didn't ask about some of the one-time factors going on in calendar year '26. As you think about your guide for the year, and obviously I appreciate the granularity, can you maybe help us understand, and I know this isn't the Akamai of 10 years ago when maybe live events were a lot more meaningful, but still just want to understand, what are your assumptions in terms of the major events happening, between the Winter Olympics going on right now and the FIFA World Cup in the summer? We've got some big AAA gaming releases that may or may not happen. Obviously, release dates keep getting pushed out. Maybe just help us understand the puts and takes and how that ties into your numbers. Thanks so much.
Yeah, sure. Happy to take that one. So if you think about events, they come in different flavors. You've got the small events like a live concert or a Super Bowl. Those tend to be very small revenue events. You know, sometimes you might get a capacity reservation fee, so maybe that might be half a million to a million bucks or something. So nothing too dramatic there. Something like the Olympics, three weeks long, it's a few million dollars. Depends on how many rights holders you have, how many different rights holders you sign, et cetera. So it's not a huge jump. And, you know, doing a billion plus a quarter, it's fairly insignificant to the quarter. It's good business, so we'll take it. Something like the World Cup's a little bit longer, so you'd probably see maybe three to five, five to six, something like that. But again, nothing overly material, although it's nice to have all these events. And then the things like an NFL season, much better. You're going to generate a lot more revenue there from a number of different customers. So it really depends on the length of time and the number of people that have rights. Something like a gaming release, if it's a really popular release that has a lot of updates to it, that can be popular and can drive some extra revenue. It really depends. Something like Fortnite certainly was a big tailwind for us several years ago. If you see a new console refresh cycle, that's a much bigger impact for us because you're talking now about hundreds of millions of consoles getting firmware updates and lots of updates. So that's the way to think about the events. So it's nice to have them, but it's not overly material for the year.

Very helpful. Thank you.
And our next question comes from Roger Boyd with UBS. Please proceed.
Great. Thanks for taking the questions, and congrats on a good end of the year. I wanted to ask about the handful of larger CIS deals that you had noted last year as being delayed out of the back half of the year. Can you just update us on how those are progressing and maybe how those are embedded into the 2026 guide? And I think you mentioned the $200 million deal you signed this quarter will start to ramp in the fourth quarter. At a high level, can you just talk about the typical ramps you're seeing in compute? Is any part of this a result of capacity constraints? And do you expect to see these ramps on the compute deals get shorter over time? Thanks.
Yeah, it really depends. Some we can get up and running pretty quickly. It really just depends on the size of the transaction and if there's any specific geo where we may need to get some additional co-location. The co-location market is tight, but we're a big buyer of co-location, so we're doing pretty well there. We did see some of the larger workloads ramp up at the end of last year, and we've modeled in what we think those will do. And as I talked about, this particular really large deal will start ramping in Q4. Part of that is we're ordering all the chips, putting them in place, getting some space, so it just takes a bit to ramp that up. Obviously, GPUs are a pretty tight supply chain, but we're able to get those out and launched here. So we've modeled in a variety of different outcomes on that in terms of our guidance range. But, you know, the bigger the deal, the longer it usually takes to ramp, and in some cases people can get up and running very quickly.
Very helpful. Thanks, Ed.
The next question is from Fatima Boolani with Citi. Please proceed.
Oh, good afternoon. Thank you so much for taking my question. Excuse me. I wanted to focus on the trajectory of the delivery business. I think this has been asked in a couple of different permutations, but I wanted to ask it at more of a high level, with respect to the aggregate environment for Internet traffic and traffic volumes. You know, you had a bunch of your peers sort of talk to, you know, accelerating or improving traffic trends. I was hoping you could compare and contrast for us what you're seeing on the Connected Cloud network. And then the flip side of that coin is just the pricing dynamic. So, you know, to your point, the delivery business has seen a pretty substantive degree of stabilization over calendar '25, and it seems like that is going to persist. So I just kind of wanted to unpack the P and the Q on the delivery equation. I'm going to have a follow-up as well, please.
Yeah, at a high level, the trends that we're seeing and projecting for this year are pretty comparable to what we saw towards the latter half of last year. The traffic environment seems very reasonable. Obviously, there are fewer players in the market than a couple of years ago. The pricing environment remains competitive. We still have folks out there selling, in some cases, at very low prices, which we won't do. In particular, we see some costs rising, as we've talked about, especially in memory. And in some cases we'll actually be raising, you know, prices to help offset those costs. But I would say at a high level, what we're expecting this year is pretty comparable to what we saw last year, especially in the back half of the year.
I appreciate that. And Tom, you had sort of talked about the rental service that you're going to launch in this upcoming quarter. I wanted to take the opportunity to have you unpack that, you know, what the expected structure is, what the economics look like, and maybe, in a broader sense, the type of utilization that you are expecting on your network as you think about and deploy this $250 million of incremental capital to scale out the inferencing cloud ahead of the capturable opportunity. Thank you.
Yeah, so in terms of inference cloud, there's, you know, two models. One is the traditional model, where you buy access to the GPU by the VM hour or by the token. And that's what will be, you know, going GA later this quarter. The GPUs we deployed into 20 cities are already pretty much sold out, so we're adding an order of magnitude more capacity, and that's what the $250 million investment is for. And in addition to selling by the token or VM hour, we'll be selling clusters, so that you might decide to buy hundreds or thousands of GPUs in certain locations. So that'll be a new model that we're introducing this year, and we have some very large customers buying CIS in that way.
Yeah, the one thing I would add, Fatima, is, you know, in terms of the early pipeline, we are seeing it skewed a bit more to the customers who want to, you know, guarantee the capacity. So they're asking for, whether it's several hundred or a thousand or whatever, GPUs for a period of time, you know, multi-year kinds of deals, which is obviously a better model. I'd, you know, like to see that. In terms of the usage model, we haven't done that yet, so we don't know exactly how that's going to play out. So we've, you know, got a range of various outcomes there, but, you know, certainly there's a lot of early excitement and demand in the pipeline that we're seeing for what we're building.
Thank you for that detail.
And the next question comes from Frank Louthan with Raymond James. Please proceed.
Hey, guys. Good evening. This is Rob on for Frank. Hey, congratulations on the strong 4Q. So my question is, what sort of revenue commitments are you guys able to get from customers today relative to before? How prevalent are those now versus previously? You know, what percentage of revenue on the delivery side is under those commitments? And what's your outlook for delivery growth this year, specifically with AI-based traffic, if you can give us a better sense of that? Thank you.
Yeah, so we are seeing longer commits for really all of our services. Partly that's by design, and I think customers are also interested in having that take place. And with delivery growth, you know, we're looking at about the same rate, so mid-single digits this year. Ed, do you want to add to that?
No, I would just say you'll see the RPO is growing quite a bit for the total company. That's just a function of what Tom's talking about in terms of folks making longer-term commitments. We've incentivized our sales force to get longer commitments. As far as the delivery market itself, there's not a huge change there in terms of commitments. There are some customers that might commit a percentage. Some might give you some type of exclusive for either a part of their business or a geographic area, et cetera. So there's really no dynamic change in the delivery business. It's roughly the same in terms of committed versus uncommitted. But since the other parts of the business, security and compute, are growing much faster, we're seeing a lot longer and bigger commitments.
Okay, great. Thank you.
And our next question comes from John DiFucci with Guggenheim. Please proceed.
Thanks for taking my question. A lot of interesting things happening here, Tom, especially around the CIS business. And thanks again for breaking that out historically, too. Last year, you announced a very large contract with a social media customer, and I think this is sort of a follow-up to Roger's question. That company had a lot going on internally, right, and externally, too, and it required the additional build-out of capacity by you. I think we're a year into that, and I believe the build-out by you is complete, but I still think there's a lot going on with that company. I guess, because a lot of this stuff could come in lumpy, I'm just trying to figure out how to think about this going forward. And this is like the first deal like this, and it's great to hear about that $200 million four-year deal too. But with this deal, have you started recognizing revenue from that customer yet? And if not, can you share a little bit about when you expect to recognize revenue? And then I guess one other part related to this: that social networking deal you talked about that's going to consolidate on Akamai and take away from a hyperscaler, which I think Tom mentioned in his prepared remarks, is that the same customer or is that another customer? Sorry for the long-winded question.
No worries, John. I hope by interesting you mean good interesting. So I'll take that. It's not the same customer. It's a different customer. In terms of the lumpiness you talked about, generally speaking, we don't see lumpiness per se. You know, as I talked about with the new deal we just signed, the $200 million four-year deal, I expect that to be, you know, fairly even. You know, maybe there's some upside as usage ramps, but there's not, you know, like, say, a big chunk of revenue that then goes away or whatnot. But we do expect that to start ramping in Q4, just as we, you know, start deploying: we make the purchase, get the GPUs, get them up, the customer has to do their testing, and then they go into a full launch. So that just takes some time. So starting in Q4, we expect that to ramp up and then continue into next year. And then in terms of the large customer we signed last year, the $100 million deal, we did start taking a little bit of revenue in Q4. We expect that to ramp up throughout the year. I will say there is some seasonality. We do have a little bit of work in the compute business that might be tied to, say, a season or something like that, say a sports season. So you may see a little bit of extra revenue in, say, Q4, and it dips a little bit in Q1. But generally speaking, you don't see big lumpiness, as you said, in the compute business.
Okay, and that makes sense. That makes sense. I was thinking sort of like Oracle, where they're bringing on these huge AI training data centers that just came online, but that's not how your business is. So thank you for that. And I guess just one follow-up, a little bit unrelated here, and it's an accounting question. On that fourth quarter restructuring charge of $55 million, was any of that cash this quarter? Because cash flow was a little weaker than I think people expected. CapEx was higher, so I get that.
Yeah, good question. So most of the cash flow is a timing issue, just in terms of timings of cash receipts and payments, and we made some pretty big tax payments before the end of the year. So that skews the cash flow. But if you look at last year, I think it's relatively in line with last year. But in terms of the restructuring, that cash will go out in Q1. So the majority, a little over half, was intangible assets. So there's no cash associated with that. Severance was a little less than half. That'll hit in Q1.
Okay, great. Thank you very much. And A lot going on here, and I actually definitely meant good when I said interesting. So thank you.
The next question is from Will Power with Baird. Please proceed.

Okay, great. Maybe just to switch gears to security, great to see the continued Guardicore segmentation and API security strength, and API security, I guess, topping $100 million. It'd be great just to get a better sense of the outlook, the growth expectations, on those two pieces in 2026 and how that folds in. And then, probably for you, Tom, it'd be great to get your perspective on how you're thinking about any potential AI risk across your security portfolio, just given some of the market concerns out there. It seems like the businesses have been pretty resilient, but maybe any comments on what you're seeing competitively from any other AI entrants or technologies in the marketplace.

Ed, why don't you take the first, then I'll do the second.
Sure, happy to. So, yeah, we're very happy with what we're seeing with Guardicore and API security. We had a really strong fourth quarter finish in terms of bookings. And the nice thing with both of these businesses is we're seeing a nice mix of new customers versus existing, especially with Guardicore. As a matter of fact, the majority of revenue is coming from new customers associated with Guardicore, which is great. And both actually are very low on penetration rate. Within API security, less than 10% of our existing customers have purchased that. So there's an enormous amount of runway there. We're seeing big adoption across many, many different verticals, too. So it's not just one vertical like financial services. It's really across everything. So we expect, as we go into next year, very similar to last year, API security and Guardicore, now at a little bit more scale, driving the majority of the growth. The other product lines, whether it's bot management and WAF, continuing to grow, albeit slower, and then services continuing to grow as well. So we expect growth in most of those categories. Maybe Prolexic tends to be a little bit more mature, so maybe that's more flattish. But we do expect growth across the board, and this year it'll look pretty similar to last year, with the majority coming from API security and Guardicore.
Yeah, and to your second question, that's a great question. You know, we are not seeing risk from, you know, AI-enabled, do-it-yourself kinds of things. And, you know, one of the key reasons for that is, for our services, our security services, you really need to run them on a large distributed platform, by and large. You know, one reason for that is if you tried to sort of do it yourself, in your data center or in a few locations, you'd just get overwhelmed with the volume of the traffic. And you don't have any chance to really apply the security, because you're flooded. And that's where Akamai's distributed platform makes the critical difference, as we intercept all that bad traffic out where it starts. And we can do that at great scale. And so, you know, I don't think we have that kind of exposure. Now, the good news is, if the AI-induced risk to SaaS materializes, that's a big tailwind for us on the compute side. Because these enterprises are going to need to run the models that are doing these SaaS tasks. And generally, they're going to probably want to run them close to where their employees are. And that's a perfect application for our inference cloud. So on balance, if that really materializes, that's a tailwind for Akamai, I think, and not a headwind.
That's helpful. Thank you.
The next question comes from Jackson Ader with KeyBanc Capital Markets. Please proceed.
Hey, this is Aiden Daniels on for Jackson Ader. Thanks for taking our question. I was just curious, on the compute side, what are you guys seeing as some of the main reasons for customers choosing Akamai over other hyperscalers or other competitors for compute workloads at the edge? And I know cost has been a key element you guys have called out in the past, but I was just looking for some added color on how Akamai can continue to win some of these deals. Thanks.
Yeah, great question. It's performance, it's scale, and, yeah, cost is generally lower. But just as an example, we talked about on the last call that the three big hyperscalers in the U.S. are all using our compute. And for them, it's not a cost issue, because they have their own clouds, obviously. For them, it's a performance issue, because we can run their logic in a lot more locations than they can themselves with their clouds. And so that results in better performance for them. They're closer to the users. And better scale, especially if you're doing things around video that are, you know, bit intensive. You need to do that in a much more distributed fashion. And then for other customers, you know, cost does come into play. As we've talked about, some of our customers are getting, you know, really substantial savings as they move out of, you know, the major cloud providers, the hyperscalers, to Akamai. In fact, Akamai itself achieved major savings as we moved a lot of our applications out of the hyperscalers onto our own cloud. So better performance, better scalability, and better cost in many cases.

Thank you.
And the next question comes from Patrick Colville with Scotiabank. Please proceed.
Thanks for having me on. This one's for Dr. Tom, please. I guess I just want to go back to the inference cloud. I mean, you talked earlier about some nice use cases for accelerated compute at the edge, and it seems like the common thread is that latency is important for those use cases. But I guess my question is this: in the CPU world, edge compute was a good market, but it wasn't enormous. You know, most compute happened locally on device or at the hyperscale core. Why would accelerated compute be different, such that you're going to have this large and very exciting market at the edge?
Yeah, good question. And it's not just latency. Latency, of course, matters, but it's scale. You know, when you think about some of the AI applications, you know, generative media, you're generating video, processing video, and you just don't have the capacity, the bandwidth, at a core data center to be, you know, generating and processing millions of personalized videos concurrently. You've got to do that in a distributed fashion, just like anything with live sports or anything like that. It's got to be distributed. So it's not just latency. And increasingly, as we're seeing these applications, they are, you know, bandwidth intensive. Also, we talk about doing speech: you know, when you're conversing with your avatar, it does need to be real time. You can't be going far away to a data center, or it's not the same experience. Now, in the past, you know, the GPUs weren't fast enough to, you know, make that work. But now they are getting to the point where it is a few tens of milliseconds. And so the latency does matter more now.
And can I just ask a quick follow-up there on the inference cloud again, actually? I mean, two parts. The first one is, do you need to do any software updates, in terms of the software that Akamai has, for customers to run inference cloud? And then I guess the second part is, in terms of Akamai's target customers here, it seems like the customer profile is slightly different from the existing customers. Am I interpreting that right, that you'll be able to sell this to existing customers, but also a new cohort, and maybe even AI natives?
It's a broader customer pool. So, you know, our existing customer base, yeah, they are good targets for us. But there are also, as Ed mentioned, a lot of customers we're signing that weren't using Akamai before, because maybe they didn't really have delivery, you know, needs, or even, you know, a web app firewall at any kind of scale. And so they're new to Akamai. And in terms of software updates, we're always, you know, upgrading the software in our cloud platform, but it's nothing special, per se, with the GPUs. It works very much in the way that Akamai Cloud has worked, that Linode has worked. We are selling an additional model, as Ed talked about, with clusters, with a long-term contract, in addition to the traditional model, which is by the VM hour, by the token.
Crystal clear. Thank you so much, Dr. Tom.
The next question is from Jonathan Ho with William Blair. Please proceed.
Hi, good afternoon. Congratulations on the large AI inferencing deal. I was wondering if you'd give us a little bit more color in terms of, you know, what was unique about Akamai that caused the customer to choose your solution over competitors'. And, you know, if you could maybe give us a sense of, you know, philosophically, whether you're building out capacity to meet that demand, or are you, you know, comfortable investing even above that demand as you're adding capacity? Thank you.
Yeah, it's what we've been talking about. It's, you know, really good performance, very reasonable cost. And I'd add that for something this critical an application, trust matters. And we talked a little bit about that, you know, a few minutes ago. You know, Akamai customers do trust us. We've really earned that, you know, with our delivery and security services, our reliability, our customer support. And for something this big and critical, I think that makes, you know, a big difference. And we are needing to build out in this case. That's part of the large investment that Ed talked about. We're greatly increasing the capacity of inference cloud. You know, as I mentioned earlier, we pretty much sold out the 20 locations with the GPUs that we have deployed starting in the fall, and now we're going to increase that by about an order of magnitude, and part of that will be used by this large customer that we talked about.

Got it.
And the next question comes from Rudy Kessinger with DA Davidson. Please proceed.
Hey, great. Thanks for taking my question, guys. Jonathan actually took the main one that I had. But on this $250 million you're spending to augment the AI inference cloud build-out, I guess, you know, by year end this year, how many locations do you intend to have GPU capacity in? I believe the initial announcement last quarter was like 17 or 19 locations or something. But how many do you intend to have, you know, GPUs in by the end of this year?
Yeah, we're at about 20 now, and I don't expect that number to be a lot larger, but the locations we're in themselves will be a lot larger, which enables us to add the model where we can sell clusters of GPUs.
And the next question comes from Mark Murphy with J.P. Morgan. Please proceed.

Thank you. Hey, this is already on for Mark Murphy. Thanks for taking the question. Ed, I believe you mentioned that you're seeing deals in the pipeline coming from verticals that maybe weren't as prevalent before. Help us understand what those newer verticals are. And then, are those coming more from the direct sales motion or from the channel? Thanks.
Yeah, so it's a little of both. We're getting some from partners that we work with. You know, we announced a relationship with NVIDIA; they refer customers over to us, as an example. And in terms of the verticals, you know, think of things like life sciences, manufacturing, healthcare, you know, different types of industrials that typically don't have really big websites but do spend an awful lot on compute. And they're also good security customers as well. So the direct motion is part of it too. Direct is, you know, doing a good job of introducing this to all of our existing customers. I've gone on a couple of calls, and certainly it's drumming up a lot of interest. Customer feedback is that they believe we have a right to win here; it makes a lot of sense for us going here. And there's an enormous amount of curiosity, and we're doing a lot of proofs of concept. So it's good to see that the demand is coming from a variety of different sources.
And then I think there are at least three, you know, named wins versus hyperscalers now across CIS and security. You know, you guys have always found success there, but do you see any changes in the competitive dynamics there? Is that improving for you guys versus the hyperscalers? Thanks.
You cut out on the first part of the question, the competitive dynamic in what area?
Against the hyperscalers.
So, well, we've competed with the hyperscalers in delivery and security for over a decade. You know, I don't see any fundamental change there. We compete very successfully against them. In fact, two of the three big hyperscalers are large Akamai customers for delivery and security. And of course, now we're adding compute into the mix, and already all three are using us for our compute capabilities. And again, there it's not an issue of cost for them; it's an issue of better performance, at least in part because of our distributed nature. We can get their compute logic closer to their users, where they want it.
One thing I'd add: it's not necessarily that the only way we win is by taking business away from them. In a lot of cases, we're seeing new workloads, especially as inference becomes a much bigger part of the equation in AI. We're a good spot to go to when customers have challenges where latency needs to be very, very low and you need to be super close. We've seen some customers tell us that even being in a different state in the U.S. gives them too much latency. They need to be within a couple hundred miles, which is different than what you'd typically have seen even in the CDN world. So it's not a question of a zero-sum game where we win, they lose. We do from time to time take some business. We do go head-to-head in competition, where we go into a bake-off, and sometimes we'll perform better, et cetera. So the market is just growing so fast that there's plenty of room here for us. I think we're starting to demonstrate that we're becoming a real player here.
Very insightful. Thanks.
And the next question comes from Jeff Van Ree with Craig Hallam. Please proceed.
Hey, guys, this is Vijay Homan on for Jeff. Thanks for taking the question. Just one for me. I know you mentioned the impact of AI on the cloud segment. I was hoping you could just expand on the impacts of AI on security and delivery revenue, maybe to the extent that that's driving traffic and how it's changing the demands of your customers for your services. Thanks.
Yeah, so there's a variety of impacts with AI on security. You know, one of them is that AI really helps enable the attackers. And so we're seeing much larger botnets out there, because the attacker can use AI to take over a lot more devices. They can use the AI to train malware to get around known defenses, and so you see more penetrations. You've seen AI deepfakes you couldn't possibly know were fake. So in a lot of ways, it's making the attack environment much harder to defend against. Also, as enterprises adopt a lot of AI apps and agents, that's a whole new attack surface, and you need special defenses, like, for example, our new firewall for AI. Also, today, enterprises are in tough shape. They don't even know all the shadow AI they have. And so we have new capabilities there with our API security, extending it to identify the AI applications they have exposed. So you need to know what AI you've got out there, and you need to defend it with special, you know, firewall capabilities, which we do. So I think AI is having, and will continue to have, a positive impact for our security business in terms of our revenue, even though the attack landscape is nastier; in some ways there's just more need for Akamai services. You know, in terms of delivery, we are, of course, seeing a rise in the scraper bots. And so, you know, if left undefended, that would create a lot more traffic. Now, for our customers, through our bot management solutions, we actually help them deflect a lot of the scraper bots and give them visibility into what the various bots are and what they're doing. And then our customers decide, okay, which ones do they want to block? Which ones do they want to do special things for? So I'd say, on balance, yeah, probably a traffic increase to an extent.
But again, there it's more creating more of a need for our bot management so that our customers can handle the various scraper bots in the way that makes sense for their business.
Got it. Very helpful. Thank you.

And this does conclude today's question-and-answer session, as well as today's conference. Thank you for attending today's presentation. You may now disconnect your lines.
