This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
spk11: All lines have been placed on mute to prevent any background noise. After the speaker's remarks, there will be a question and answer session. If you would like to ask a question during that time, simply press star followed by the number one on your telephone keypad. And if you would like to withdraw your question, please press star one. Thank you. I would now like to turn the conference over to Mr. Phil Winslow, Vice President of Strategic Finance, Treasury, and Investor Relations. You may begin, sir.
spk13: Thank you for joining us today to discuss Cloudflare's financial results for the third quarter of 2023. With me on the call, we have Matthew Prince, co-founder and CEO, Michelle Zatlyn, co-founder, president, and COO, and Thomas Seifert, CFO. By now, everyone should have access to our earnings announcement. This announcement, as well as our supplemental financial information, may be found on our investor relations website. As a reminder, we will be making forward-looking statements during today's discussion, including but not limited to statements regarding our customers, vendors, partners, and operations, our anticipated product launches and the timing and market potential of those products, our anticipated future financial and operating performance, and our expectations regarding future macroeconomic conditions. These statements and other comments are not guarantees of future performance and are subject to risk and uncertainty, much of which is beyond our control. Our actual results may differ significantly from those projected or suggested in any of our forward-looking statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. For a more complete discussion of the risks and uncertainties that could impact our future operating results and financial condition, please see our filings with the SEC, as well as today's earnings press release. Unless otherwise noted, all numbers we talk about today, other than revenue, will be on an adjusted non-GAAP basis. You may find a reconciliation of GAAP to non-GAAP financial measures in our earnings release on our investor relations website. For historical periods, a GAAP to non-GAAP reconciliation can be found in the supplemental financial information referenced a few moments ago. We would also like to inform you that we will be participating in the RBC Capital Markets Global TIMT Conference on November 14th and the Wells Fargo TMT Summit on November 28th.
spk02: Now, I'd like to turn the call over to Matthew. Thank you, Phil. We had another strong quarter in spite of an increasingly uncertain world. In Q3, we achieved revenue of $335.6 million, up 32% year over year. We added 206 new large customers, those that spend more than $100,000 per year, and now have 2,558 large customers, up 34% year over year. Looking at even larger customers, we added a record number of net new customers year over year spending more than both $500,000 and $1 million per year with Cloudflare. Our dollar-based net retention ticked up 1% to 116%. We see this as a lagging indicator and expect that it will take some time for the go-to-market improvements we're seeing across our team to be fully reflected. During the quarter, we continued to refine these go-to-market strategies. Our pipeline closed rates held steady, our sales productivity remained constant, and linearity was similar to Q2. The fact that we have been able to hold things steady while making significant organizational changes and improvements across our sales and marketing organization is very encouraging. Beyond that, we're beginning to see positive early signs from the sales team members we've brought on over the past six months to replace underperformers. During the quarter, the pipeline generated by this new cohort was 1.6 times higher than that of those brought on at the same time a year earlier. These new account executives achieved more than 130% of their activity goals for the quarter. That's great news, and we're thrilled to have them on board. And while they're still ramping, I'm encouraged by their performance and that we've been able to revamp as much of our sales team as we have without significant disruption. Our gross margin was 78.7%, still well above our target range of 75 to 77%, and up from 77.7% last quarter. We delivered an operating profit of $42.5 million, our fifth consecutive record quarter for the company. This represents an operating margin of 12.7%. Operating profit increased nearly three times year over year, underscoring our commitment to operating efficiency and productivity. We continue to generate positive free cash flow. In Q3, we generated $34.9 million, representing a free cash flow margin of 10.4%. This is a business that can generate significant cash, and in 2023, we expect we will generate more than $100 million in free cash flow, well ahead of our original goal when we started the year and the direct result of improved execution across our entire business. In Q3, we celebrated the 13th anniversary of Cloudflare's launch, a time we call Birthday Week. We officially entered our teenage years, and like many kids, it took us a while to fully understand and articulate the category we belong to. The day before our 13th birthday, we announced to the world that we realized what we are: a connectivity cloud. Connectivity means we measure ourselves by connecting people and things together. Cloud means the batteries are included. It scales with you. It's programmable and has consistent security built in. It's intelligent and learns from your usage and others' to optimize for outcomes better than you could on your own. Our connectivity cloud is worth contrasting against some of the other first-generation clouds. The hyperscale public clouds are, in many ways, the opposite. They optimize for hoarding your data, locking it in, making it difficult to move. They are captivity clouds.
While they may be great for some things, their full potential is only truly unlocked for customers when combined with a connectivity cloud that lets you mix and match the best of each of their features. That's what we hear from customers: that they are multi-cloud whether they want to be or not, and that what they really need is a connectivity cloud to hook all their systems together in a fast, secure, reliable way. The messaging of the connectivity cloud is resonating with customers and helping them understand the full extent of what Cloudflare is able to deliver for them. We are not any one of our individual features or even the sum of them. We are a cloud that helps you get the most out of connectivity, and customers love that and are leaning into it. Speaking of customers, we've had some great customer wins in the quarter I'd like to highlight. A U.S. government cabinet-level agency within the executive branch signed a one-year, $2 million contract. Cloudflare is replacing three point solution vendors, including a 20-year-old incumbent solution. We're providing unified application security for 600 U.S. government applications. They were drawn to Cloudflare's modern architecture, rate of innovation, robust network, and ability to reduce complexity by consolidating multiple point solutions into a single pane of glass. Another U.S. government agency signed a one-year, $510,000 contract for Cloudflare's Zero Trust solutions, including Access, Gateway, Browser Isolation, and Data Loss Prevention. We were selected over first-generation Zero Trust competitors due to our ability to consolidate numerous products across both application security and Zero Trust onto a single platform. Our federal business has grown significantly over the last year, and we believe these deals are just the tip of the iceberg with both of these customers, which we expect can expand significantly. A leading healthcare company signed a three-year, $1 million contract for Cloudflare's Zero Trust solutions, including Access, Gateway, Browser Isolation, and Area 1 Email Security. They were using a legacy vendor and are looking to modernize their security posture as they migrate on-prem applications to the cloud. They experienced a sophisticated email phishing attack mid-process, and with Area 1, we were able to immediately protect them. They chose us over first-generation Zero Trust solutions because of the comprehensiveness of our solution, including email security. The VP of Technology said succinctly, we should have partnered with Cloudflare earlier. A major European consulting company signed a three-year, $1.6 million contract for Access and Gateway, along with Magic WAN and our data localization suite. They selected Cloudflare over first-generation Zero Trust competitors because of the breadth of our platform. The theme across these examples is that customers looking for a Zero Trust solution increasingly want to protect their entire network. Cloudflare is the only vendor that can deliver a comprehensive network-wide solution from a single vendor. Switching gears a bit, a Fortune 500 semiconductor company expanded their relationship with Cloudflare, signing a three-year, $1.4 million contract. The customer was looking to modernize their network security posture. They adopted our Magic Transit product and were able to consolidate multiple point solutions onto Cloudflare's unified platform. An African public utility company expanded their relationship with Cloudflare, signing a four-year, $3 million contract.
This company first approached Cloudflare last year facing multiple under-attack situations. We onboarded the customer with application security and Magic Transit, stopping the attacks they were seeing. The customer was so impressed with Cloudflare's products and performance that they quadrupled their utilization and added additional products, including Magic Firewall. The fact that Cloudflare's network spans the globe gives us the ability to service clients everywhere. Another international technology company signed a two-year, $1.8 million contract for Magic Transit and advanced application security. This customer approached us in the midst of a large-scale DDoS attack. Their incumbent solutions were provided by a mix of point solutions and bundled hyperscale cloud mitigation services. Neither was sufficient to stay ahead of the attack. In Q3, we saw a significant increase in massive DDoS attacks. To give you a sense, these new attacks are generating nearly as much traffic as the entire internet generates globally, but pointing it at a single victim. There are very few networks that can stand up to these attacks. I'm proud of the fact that Cloudflare is architected uniquely for this moment. And as the world becomes more complicated and these attacks become more common, I think more and more of the internet will turn to us for protection. A leading ad tech company expanded their relationship with Cloudflare, signing a one-year, $720,000 contract. This customer came to us with a technical Workers use case. They needed a platform that could help them deliver through traffic spikes of up to 3 million requests per second. Their existing solutions on traditional hyperscale public clouds were expensive to maintain and would encounter errors with even relatively low traffic spikes. Cloudflare Workers was able to support their needs without breaking a sweat. With this win, we expect they will move more of their application to our much easier-to-scale platform. A Fortune 500 technology company expanded their relationship with Cloudflare, signing a one-year, $2.9 million contract. This customer approached us to use our connectivity cloud to help them collect AI and machine learning data from their customers while maintaining the highest level of privacy. They view Cloudflare as a leader in privacy, and we worked closely with them to develop the solution. This deal makes clear the importance of privacy and the likely regulatory scrutiny of AI tasks, and highlights how Cloudflare's network, which extends into the vast majority of countries on Earth, can help customers take advantage of AI while complying with an increasingly complex regulatory environment. We continue to accelerate our efforts in AI. We believe Cloudflare is the most common cloud provider used by the leading AI companies. During our birthday celebrations in Q3, we made several announcements with companies like Nvidia, Microsoft, Meta, Hugging Face, and Databricks. We also announced Workers AI to put powerful AI inference within milliseconds of every internet user. We believe inference is the biggest opportunity in AI, and inference tasks will largely be run on end devices and connectivity clouds like Cloudflare. Right now, there are members of the Cloudflare team traveling the world with suitcases full of GPUs, installing them throughout our network. We have inference-optimized GPUs running in 75 cities worldwide as of the end of October, and we are well on our way to hitting our goal of 100 by the end of 2023.
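For context on the Workers AI product described here, the following is a minimal sketch, not taken from the call, of what running an inference task looked like for developers at launch. It assumes a Worker with the Workers AI binding configured in wrangler.toml under the name AI and uses the @cloudflare/ai helper package; the model identifier is one of the launch models and is illustrative only.

import { Ai } from '@cloudflare/ai';

export interface Env {
  AI: any; // Workers AI binding configured in wrangler.toml ([ai] binding = "AI")
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ai = new Ai(env.AI);
    // Run a text-generation inference task on a GPU in a nearby Cloudflare location.
    const result = await ai.run('@cf/meta/llama-2-7b-chat-int8', {
      prompt: 'Explain in one sentence why running inference close to the user reduces latency.',
    });
    // The serverless model means you are billed only for inference actually executed.
    return new Response(JSON.stringify(result), {
      headers: { 'content-type': 'application/json' },
    });
  },
};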
By the end of 2024, we expect to have inference-optimized GPUs running in nearly every location where Cloudflare operates worldwide, making us easily the most widely distributed cloud AI inference platform. We've been planning for this for the last six years, expecting that at some point we'd hit the crossover where deploying inference-optimized GPUs made sense. To that end, starting six years ago, we intentionally left one or more PCI slots in every server we built empty. When the demand and the technology made sense, we started deploying. That means we can use our existing server infrastructure and just add GPU cards, allowing us to add this capability while still staying within our forecast CapEx envelope. And customers are excited. In the five weeks since our AI announcements, thousands of developers have leveraged our new AI capabilities to build full-stack AI applications on Cloudflare's network, processing more than 18 million requests through the new features we launched just over a month ago. The demand has exceeded our expectations and continues to accelerate, increasing 5x since mid-October. We have a pipeline of customers interested in putting hundreds of billions of inference tasks on our infrastructure each month. It's early days, but the interest we're seeing from customers, large and small, over what they can build with powerful inference capabilities now embedded in one of the world's largest networks is inspiring. They like how easy it is to use Workers AI. They like how it's powerful but close to their users around the world. They like the more efficient and fair pricing model our serverless implementation delivers. They like the flexibility of bringing their own models or fine-tuning existing models using the tools that are included as part of Workers AI. If we're right that inference is the big AI opportunity, and that inference tasks that are too big and too complex to run on end devices will need to run as close to the user as possible, then we've got a head start on building the preferred location for inference for the most interesting AI applications of the future. Finally, before I turn it over to Thomas, I wanted to acknowledge that we continue to live in very challenging times. The war in Ukraine continues unabated. Now we have a new war in the Middle East after the attack by Hamas on Israel. We have colleagues in the region who have been impacted directly and indirectly. Our thoughts are with them. And while we see the devastating images of the kinetic war, the online war is also raging. Cloudflare is committed to providing our services to humanitarian and civil society organizations at no cost to ensure they can continue doing their important work for all those impacted by the increasingly hostile world we find ourselves in. In our business, we need to stay on top of cybersecurity issues globally, and modern warfare continues to include the cyber battlefield. As I look back on the quarter, I'd like to thank our entire team at Cloudflare for all your hard work, innovation, and dedication to supporting our customers and the greater Internet. Thank you for continuing to help build a better Internet for us all. And with that, I'll hand it off to Thomas. Thomas, take it away.
spk12: Thank you, Matthew, and thank you to everyone for joining us. During the third quarter, as we continued to refine our go-to-market strategies and operations, our pipeline growth rates held steady, our productivity remained consistent, and linearity was similar to last quarter. We are pleased to see significant growth with channel partners, momentum with large customers, and strength in the public sector. Importantly, we continue to maintain our strong commitment to being fiscally responsible and act as good stewards of investors' capital. We delivered our fifth consecutive quarter of record operating profit, increasing nearly three-fold year-over-year, and significantly outperformed on free cash flow. Turning to revenue, total revenue for the third quarter increased 32% year-over-year to $335.6 million. From a geographic perspective, the U.S. represented 52% of revenue and increased 30% year-over-year. EMEA represented 28% of revenue and increased 36% year-over-year. APAC represented 13% of revenue and increased 27% year over year. Turning to our customer metrics, in the third quarter, we had 182,027 paying customers, representing an increase of 17% year over year. We ended the quarter with 2,558 large customers, representing an increase of 34% year-over-year, with the addition of 206 large customers in the quarter. In fact, we added a record number of net new customers year-over-year spending more than $500,000 and $1 million on an annualized basis with Cloudflare. Our dollar-based net retention rate was 116% during the third quarter, representing an increase of 100 basis points sequentially. While there can be some variability in this metric quarter to quarter, we continue to believe the recent decelerating trend in DNR is stabilizing near these levels. Moving to gross margin. Third quarter gross margin was 78.7%, representing an increase of 100 basis points sequentially and an increase of 60 basis points year over year. Network CapEx represented 8% of revenue in the third quarter as we continue to benefit from our focus on driving greater efficiency from our infrastructure and the uniqueness of our platform to onboard new workloads. Despite having begun to invest in the enormous AI opportunity in front of us, with the planned rollout of GPUs to more than 100 cities by the end of this year, we expect network CapEx to be 8% to 10% of revenue in fiscal 2023. However, we anticipate network CapEx to return to more normalized levels over time. Turning to operating expenses. Third quarter operating expenses as a percentage of revenue decreased by 5% sequentially and decreased by 6% year-over-year to 66%. Our total number of employees increased 11% year-over-year, bringing our total headcount to 3,529 at the end of the quarter. We were selective in hiring during the quarter, as we continue to evaluate deploying AI and automation at scale to re-engineer our business processes across the company. Early investments in these areas are already delivering encouraging returns. We will remain prudent in hiring as we continue to invest in broadening and deepening the usage of AI and automation across our operations to drive higher productivity and greater efficiency. Sales and marketing expenses were $129 million for the quarter. Sales and marketing as a percentage of revenue decreased by 3% sequentially and decreased to 38% from 41% in the same quarter last year. Research and development expenses were $54.2 million in the quarter.
R&D as a percentage of revenue decreased by 1% sequentially and decreased to 16% from 18% in the same quarter last year. General and administrative expenses were $38.5 million for the quarter. G&A as a percentage of revenue decreased by 2% sequentially and decreased to 11% from 13% in the same quarter last year. Operating income was $42.5 million, compared to $14.8 million in the same period last year. Third quarter operating margin was 12.7%, an increase of 690 basis points year over year. These results highlight our ongoing focus on becoming more productive and doing more with less, given that operational excellence is a long-term competitive advantage. Turning to net income and the balance sheet, our net income in the quarter was $55.3 million, or diluted net income per share of $0.16. We ended the third quarter with $1.6 billion in cash, cash equivalents, and available-for-sale securities. Free cash flow was $34.9 million in the third quarter, or 10% of revenue, compared to negative $4.6 million, or negative 2% of revenue, in the same period last year. Remaining performance obligations, or RPO, came in at $1.1 billion, representing an increase of 5% sequentially and 30% year-over-year. Current RPO was 75% of total RPO. Moving to guidance for the fourth quarter and the full year. With broadening geopolitical uncertainty and increasingly mixed macroeconomic data points across geographies, the business environment in which we operate remains challenging to predict. As a result, we continue to remain prudent and cautious in our outlook for the fourth quarter. For the fourth quarter, we expect revenue in the range of $352 to $353 million, representing an increase of 28% to 29% year over year. We expect operating income in the range of $28 to $29 million, and we expect an effective tax rate of 7%. We expect diluted net income per share of $0.12, assuming approximately 354 million shares outstanding. Please note that our share count guidance now includes dilution from our convertible senior notes of approximately 6.8 million shares, given that Cloudflare has achieved a level of profitability whereby these securities are no longer deemed anti-dilutive. For the full year 2023, we expect revenue in the range of $1.286 billion to $1.287 billion, representing an increase of 32% year over year. We expect operating income for the full year in the range of $110 to $111 million, and we expect diluted net income per share over that period of $0.45 to $0.46, assuming approximately 350 million shares outstanding. We expect an effective tax rate of 8% for 2023. After having achieved significant free cash flow in the first three quarters of the year, we expect to generate over $100 million in free cash flow for the full year 2023. In closing, our team remains committed to driving operational excellence, ensuring long-term growth, and delivering significant shareholder value. I'd like to thank our employees for their dedication to our mission, as well as our customers for trusting us to help them solve some of the hardest problems that they face when modernizing and transforming their businesses. And with that, I'd like to open it up for questions. Operator, please poll for questions.
spk11: Thank you. As a reminder, if you would like to ask a question, please press star 1 on your telephone keypad, and please limit yourself to one question and one follow-up. Your first question comes from the line of Matt Hedberg from RBC Capital Markets. Please go ahead.
spk07: Great. Thanks for taking my questions, guys. Congrats on the results. Matthew, for you, a lot of exciting news at birthday week, including workers AI and everything else that you guys announced. I guess broadly speaking, given the focus on all the investments this year and next year, how should we expect to see some of the monetization benefits of generative AI customer spend? Is that something that you'd be able to quantify at some point, or what are some of the breadcrumbs that we should watch for success there?
spk02: Yeah, thanks, Matt. I think that there are a couple of different areas where we're monetizing, and that's starting to show up in the results. And then there are a couple of areas where there's a longer time horizon and we're really optimizing for adoption. I think the place where we've been positively surprised is with our R2 product. R2 is our object store. And critically, it allows customers to be multi-cloud and to easily move data to wherever the resources they need are, without charging them an egress tax like some of the other traditional hyperscale public clouds do. That's the place where a lot of the growth that we're seeing is coming from AI companies. They love the fact that they can take their data and their training sets and move them to wherever there are GPUs available around the world. And I think as we see more and more usage of that, that's driving revenue for us that we're realizing today, and I think that will be something that carries forward into the quarters to come. In some of the areas around inference, it's early days, but I think that you'd be likely to hear us start talking about larger customers that are moving significant workloads over to the AI space. With individual developers, we're really going to optimize in that space for adoption and building out an ecosystem. But as you hear us on earnings calls to come talk about how people have really moved workloads, and we've got customers in the pipeline that are talking about moving billions of inference events per month to our network, that's when that starts to turn into real revenue for us. And I think it's early days, so we don't know exactly what the timeframe on that will be, but the conversations we're having are very exciting, and it's a space that I'm definitely bullish on.
spk07: Super, super exciting. And then maybe, Thomas, for you. You know, we still have Q4 to close out here and you didn't really talk about next year, but given some of the uncertainty out there, you guys are still delivering good results. Are there any building blocks that you'd share as we start to think about or fine-tune our calendar '24 estimates, either growth or profitability?
spk12: Well, we won't talk too much about '24 on this earnings call, as you might expect. There will be time for that on our next earnings call. But I think it's important to keep in mind that we've been talking about how cascading the impacts are in terms of the progress we make and when it hits our books, right? It's all about building pipeline first, and we have seen strength there in the third quarter that continued into the fourth quarter. How this turns into ACV is where the uncertainty is. And once it hits ACV, it will take on the order of four quarters before it shows up in revenue. So we're making good progress, especially at the beginning of this cascade. And as we move through that waterfall of progress, you'll see results showing up in our numbers next year.
spk05: Thank you very much.
spk11: Your next question comes from the line of Brent Thill. Please go ahead.
spk16: Matthew, you've highlighted sales efficiency as one of the top goals for this year. I think Thomas mentioned it was staying consistent, or maybe you had mentioned this, and linearity was pretty steady. I guess you mentioned some of the sales improvements, but where do you stand in terms of your overall game plan on the go-to-market? How far are you through this process? What's left? Is there an easy way to frame this move?
spk02: We've talked on previous calls about how we really looked across our go-to-market functions and recognized that there was an opportunity for us to improve. And what I'm proud of is that we haven't waffled on that path. We haven't changed course. We haven't had significant disruption or distraction as we've gone through that. And I think that with the caliber and quality of the people that we're bringing on to those go-to-market teams, the early indications are, and again, they're still ramping, but the early indications are that they're doing a significantly stronger job and they're delivering real results. So that's a process, and it takes time to work through it, but I think that we're seeing positive indications. We don't want to lose our marbles and be too aggressive, but we're being very disciplined. And one of the things we've always talked about is how we've had a business where we invest behind the demand that we see. I think the same thing is true on the go-to-market side, where we're seeing that the increased rigor and increased discipline has early signs of paying off. And if that continues and we get more data points along the way, then that's a place where we'll be able to invest, with, again, new leadership in place and new training and enablement for our sales team. And, again, I think it's early still, but I think that we have good indications, and I'm proud of the fact that we've done this so far without significant disruption or distraction within our business.
spk04: Thanks.
spk11: Your next question comes from the line of Joel Fishbein from Truist Securities. Please go ahead.
spk09: Thanks for taking my question, Thomas. One for you. It's a good segue from the last question. Clearly, outperforming on the gross margin and on the operating margin side, and frankly, cash flow. Just how are you going to continue to balance profitability and investment in some of these high growth areas that Matthew outlined? Just love some color there. Thanks.
spk12: Yeah, just following up on what Matthew said. I think we have the business really well instrumented. You can see this now in how we are managing ourselves through quite some macroeconomic turbulence. So the business is well instrumented. The business model is built in a way that allows us to have visibility when demand picks up, and we will invest behind the demand. We've also realized that there's significant scalability and efficiency in the business model. This is what has been showing up in the financial numbers over the last couple of quarters really well. So I think the combination of visibility into whether the business is picking up or not, a well-instrumented business, and a lot of elasticity in how we move forward keeps us confident that we have our hands firm on the rudder and can control the ramp-up really well.
spk09: Thank you.
spk11: Your next question comes from the line of Andrew Nowinski from Wells Fargo. Please go ahead.
spk14: Great, thank you. Congrats on another amazing quarter, and please accept my thoughts and prayers for your employees that are affected by the terrorist attack in Israel. So I wanted to ask you guys about the workers AI offering that you launched. I think it's really interesting. I was wondering what the early feedback is on it, in particular the vector database component, and whether the staggered rollout of the GPUs is a potential gating factor as people kind of wait maybe to deploy that at scale or wait for the rollout of the NVIDIA GPUs before rolling it out at scale.
spk02: Yeah, Andy, thanks for the question. Again, it's an area that we're extremely excited about. I'm proud of the fact that our team has been able to get it rolled out as quickly as they have. We thought it was an ambitious goal to be in 100 cities by the end of the year. The fact that we're now in over 75 today, and that we've been able to deliver that while staying very disciplined around CapEx, is important. We're not seeing that people are waiting for things to be everywhere before they dive in and start testing. I do think that as we go into various geographies around the world, it creates a real differentiation for us. We're hearing, especially in markets outside of the United States, how they have felt left behind in the AI space. And increasingly, I think, as you're seeing with the executive order on AI and with some of the European regulation on AI, being able to keep AI local is, we think, going to be something that's a real differentiation for us. The vector database, I think that's actually a good question to ask about, and it sort of got lost in some of the other stories. But I think the people who are paying close attention within the AI space recognize how important that is. Being able to fine-tune your models and have a database that's built on top of the existing R2 infrastructure that we have is something that not only allows us to do inference, but actually allows us to do fine-tuning as well, which gives us two of the three major legs of the AI stool. And that's sort of my sneaky feature that I think is going to be pretty disruptive, because you can use our vector database whether or not you're using the rest of Cloudflare's AI systems, and it becomes a really great function for AI users who want to do fine-tuning. And that, combined with the locality that we can deliver with the Workers AI system and inference scattered around the entire world, allows us to do something that is truly a complete AI ecosystem. And, again, the AI developers that are paying attention ask the same question, which is, wow, how did you guys add a vector database? And the good news is, again, all of these things are built on a lot of the primitives that we had before. We didn't have to go out and build something new. We could put GPUs in our existing servers. We could build Vectorize, the vector database, on top of R2 and some of the other primitives that we had out there. And we could learn from the huge number of AI startups that are already using Cloudflare in terms of what tools they needed in their toolkit. And that's what our team is delivering.
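To make the Vectorize point concrete, here is a minimal sketch, not from the call, of pairing Workers AI embeddings with a Vectorize index for the kind of retrieval and fine-tuning workflows described above. The binding names (AI, VECTORIZE_INDEX) and the embedding model identifier are assumptions about a typical wrangler.toml configuration.

import { Ai } from '@cloudflare/ai';

export interface Env {
  AI: any;              // Workers AI binding
  VECTORIZE_INDEX: any; // Vectorize index binding ([[vectorize]] in wrangler.toml)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ai = new Ai(env.AI);
    const question = new URL(request.url).searchParams.get('q') ?? 'What is a connectivity cloud?';

    // 1. Turn the question into an embedding vector with a Workers AI model.
    const embedding = await ai.run('@cf/baai/bge-base-en-v1.5', { text: [question] });

    // 2. Query the Vectorize index (built on Cloudflare primitives such as R2)
    //    for the closest previously indexed vectors.
    const matches = await env.VECTORIZE_INDEX.query(embedding.data[0], { topK: 3 });

    return new Response(JSON.stringify(matches), {
      headers: { 'content-type': 'application/json' },
    });
  },
};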
spk14: That's great. Thanks, Matthew. Keep up the good work.
spk11: Your next question comes from the line of Hamza Faderwala from Morgan Stanley. Please go ahead.
spk03: Hey, good evening. Thank you for taking my question. And congrats on a solid result in what's been a pretty tough environment. Matthew, you talked a lot about the AI inference opportunity and a lot of great color there. Could you just maybe level set and remind us of all the different sort of vectors for potential monetization over time? You talked about R2, potentially the vector database angle as well, but any others that we should consider? And then maybe a follow-up for Thomas. I believe a lot of this is sort of priced on a more consumption basis. So as the demand starts to ramp, should we start to see that more in a real-time fashion as it relates to your revenue? Thank you.
spk02: Yeah, so I'll start, and then Thomas can add to it. I think there are three different areas in which we can see growth and delivery from AI. The one where we've seen it now for at least the last 18 months is just in our traditional products. Using Cloudflare's security services to protect AI systems is absolutely critical. And as you go to some of the leading AI platforms that are out there, you'll often see Cloudflare's logo, where we're using AI systems ourselves to check to make sure you're a human being and to make sure that you're not a threat before letting you on. So that, I think, is just our bread and butter and what we can deliver very efficiently. The second area is with things like R2 and charging for storage. And again, that's going to be storing the models, storing the training sets for those models, and using the fine-tuning data with R2 and Vectorize to be able to process those models. And again, that's going to be much more like, as you said, a consumption-based approach. And then the third way is that we're charging for inference. And what I think is unique about us is that Cloudflare, at its core, is incredibly good as a routing and scheduling engine. That's how we're able to deliver the very high gross margins that we have compared with some others in the space; we just get a much higher degree of utilization. And we pass that on to our customers. And in this case, the way that we're charging for our GPUs is what the industry terms a serverless method of charging. What that means is we only charge you for when you're actually running an inference task, and then we're able to schedule that very effectively across our entire platform. And we think that that's going to be as disruptive in this space as some of the things that we've done with Workers have been in the traditional space. And that's something that is very attractive to AI developers. So I think those are the three ways that we see monetization around this. One is our traditional security products. Second is around storage of either training sets or models themselves, or the refining and fine-tuning of models. And then the third is actually charging for what is effectively the compute capacity, and doing that in a way that is, again, very disruptive compared with some of the other providers that are in the space. And we can often decrease people's inference task costs pretty substantially, while that remains a very high-margin business for us.
spk05: So we're excited. Coming back to your second question.
spk12: So today, the share of variable revenue in overall revenue is very, very low, but the ramp of the AI services and products that Matthew just mentioned would increase this share. We've seen some of the strength in the third quarter from a revenue perspective already coming from variable revenue. So this is one data point; it's not enough to make a good correlation or establish a trend. But with a higher share of products and services that are priced variably, you would see a more immediate impact on revenue, for sure. We just don't have enough data yet to see how this will play out.
spk04: But the first signs are encouraging.
spk05: Thank you.
spk11: Your next question comes from the line of James Fish from Piper Sandler. Please go ahead.
spk13: Hey, guys. Thanks for the question. You guys have talked a lot about AI here, but where are we with getting more shots on goal with more of the Wave 2 products, and network security in particular? Additionally, Thomas, more for you: while net new customers were good, the dollars added were a little bit lower than what we've seen in the past few quarters. Is that just a reduction in contract durations given the macro, or what other aspects are impacting this? And I'm sorry if I missed this, did you give an RPO number this quarter?
spk02: Yeah, Jim. So I'll take the first bit and then hand it off to Thomas for the second bit. I think we're seeing real strength around network security and our Zero Trust products. We've been recognized as a leader in those spaces by a number of the key analysts. That's driven up the amount of interest. The pipeline for those products is extremely strong, and what we're seeing is that increasingly, especially in this environment of making every IT dollar go further, customers want to say, I don't just want to protect the back door of my business. I want to protect the front door, the back door, the side door, and all of the doors in the business. And so we're the one vendor that is able to give people that vendor consolidation, that single pane of glass. And I think that that comes through in a lot of the customer examples and stories that we've seen. And so what we're seeing more and more is people want to buy the entire Cloudflare platform. They want to protect their entire business with that. And that's driving more interest in both our network security as well as our Zero Trust products.
spk05: RPO for the third quarter was $1.083 billion.
spk12: I think it was part of my script. Expansion is getting better. DNR ticked up one percentage point, so it's stabilizing. I think that is what we talked about on the previous earnings call, that we see it bottoming out. But I would still say that it is easier to have new logo acquisition than it is to expand with existing customers. And the trend we have seen, that this might be impacted timing-wise or budget-wise by current macro concerns, I think still holds true. It has not changed materially in the third quarter from the second quarter.

Awesome, thanks, guys.

Your next question comes from the line of Shrenik Kothari from Baird. Please go ahead.
spk00: Yeah, thanks for taking my question. Congrats on the solid execution. I'd just like to switch gears a little bit to DDoS. So Matthew, of course, Cloudflare's unique approach to DDoS pricing definitely differs from the competition. Instead of tying the price to the size of the attack, you've opted for a more customer-centric approach. So just curious, in today's elevated DDoS landscape, are you seeing this flexibility appreciated by customers, not being charged based on the scale of the attack? Is it becoming a key driver for stronger share gains? And then I have a quick follow-up for Thomas.
spk02: Yeah, so first of all, I mean, the world is getting a lot more complicated, and we're seeing even nation-state actors turning to DDoS attacks to disrupt services around the world. And a new attack vector, which our team, alongside Google and AWS, helped discover and announce this last quarter, is generating attacks that are literally almost doubling the total volume of traffic on the entire internet while they're underway. The nature of how we're able to stop those attacks, and the architecture of how we're able to stop those attacks, is very unique to Cloudflare. And we're seeing even some of the large hyperscale public clouds that have their own limited DDoS mitigation services point customers to us, because we're the best in the world at this. And I think that that's a real differentiator for us. The pricing also is important, and what's unique is that because every single server that is part of Cloudflare's network can run every single service, as we stop these massive attacks, not only are we better able to technically stop them, but we are able to do it without changing our underlying pricing, because it doesn't drive up our costs. Early on, we said that we should pass that advantage on to our customers, and so we created pricing that was, as you said, very customer-centric. That's appreciated by the market. I think more and more people are leaning in on DDoS and using us for that. And what we're seeing is that we can then use that as sort of the milk in the grocery store, where we can sell other products across our suite. And just like I said before, customers don't just want to protect the front door. They don't just want to protect the back door. They want to protect all of the parts of their business. And so we're seeing that having collective solutions from a platform that can solve DDoS, Zero Trust, WAF, rate limiting, bot management, and access control, and have that all behind one single pane of glass, is a very, very compelling offering. Or, somewhat snarkily, if you look at some of the other Zero Trust vendors that are out there, they're actually Cloudflare customers using our DDoS mitigation products, because we're the best in the world at them.
spk00: Great. Super helpful. Just a quick follow-up on what you said around Zero Trust. I mean, I agree your margin really allows you to disrupt the market, enabling you to use pricing as a competitive advantage, and of course you discussed the DDoS pricing. On Zero Trust, when you're bundling around SASE and Zero Trust, it still seems like you guys are pricing uniquely versus the market, more attractively. Just curious, are you thinking about also going for more premium pricing, given where the market is and given the strength of the demand, and also trying to push forward on the margins front? Is that a lever that you guys are thinking through?
spk02: You know, I think that we can use price there as a weapon to win business. We have tended not to see that there's a lot of price sensitivity there, and so we're not going to just push that if we don't have to. I think the place that is more attractive is actually in how we create platforms where you can have a complete network security solution. And it's also really powerful that we can run our Zero Trust products at extremely, extremely high margins, for actually the same reasons as the DDoS mitigation products. If you take all of the other Zero Trust vendors that are out there and add up all their traffic, we could add them all to Cloudflare's network without significantly increasing our underlying COGS of delivering that traffic. And so that gives us an advantage over time. And we do believe that whoever has the lowest cost of servicing tends to win over the long term. And that is something that is very difficult for any of our competitors in that space to match.
spk00: Got it. Makes sense. Super helpful. Thanks a lot.
spk11: Your next question comes from the line of Alex Henderson from Needham & Company. Please go ahead.
spk06: Great, thank you so much. Matt, you guys continue to amaze me in the ability to anticipate things five, six, seven years before they happen. I think about the micro-threading of microservices in your serverless platform as an example, and now you're talking about having left slots open for inference AI six years ahead of schedule. It's pretty amazing prescience. But I was hoping you could talk a little bit about the uniqueness of the platform as we move into the world driven by inference AI. It's pretty clear to me that the combination of the Workers platform, combined with the location of your edge, combined with all of the other elements of the service platform at the edge, gives you a unique positioning, particularly with the R2 and the vector stuff that you've announced. So is there anybody else that has any reasonable positioning to compete with you in that context, or are you as unique as you look to me in this competitive landscape?
spk02: Alex, thanks for the kudos. I think we sometimes are a little bit early, and for people who are paying close attention, almost three years ago, we actually did an announcement with NVIDIA that was a trial balloon, kind of in the space to see how much demand there was. And at the time, there wasn't a ton of demand. But we could see how models were improving and inference was improving. We knew that this was something which was coming. And so we learned from that first thing. I think we built a really strong relationship with the NVIDIA team in part because of that and some of the work that we've done with them in the networking space. But I think that we try to learn and stay ahead and buy ourselves the flexibility over time to be able to deliver in this space. I don't know of anybody else that has an architecture like ours, where we made the hard decision early on to say every machine everywhere can run every task, so that we don't have dedicated scrubbing centers, so that we don't have dedicated regions for one service or another. That has required us to invent a lot of technology and build a lot of intellectual property around that technology, and just a lot of know-how in running a network like that. It is harder up front to build it that way, but it results in a much higher level of efficiency and a much faster pace of innovation. And we're able to capitalize on that today. And so I think it would require a complete re-architecture from any of the providers that we know in order to be able to do what we've done in this space. And I think it's, again, part of the secret to our continued pace of innovation. And again, I'm really proud of our team and everything that they've done to be able to deliver it.
spk06: One last question along the same lines, if I could. The inference AI market, how much of it do you expect to be at the edge, and how much of inference do you expect might be in more centralized or regionalized locations? Thanks.
spk02: Yeah, I mean, my thesis around this is that probably most inference tasks will be run directly on your device. So, on your Apple device, on your Samsung device, on your LG device, whatever that is. But ideally, you're going to want to have it so that you can seamlessly hand off, whether you're using a low-powered device that needs to optimize for battery life or needs to optimize for the lowest bill of materials, or you're trying to run a task which is so big and powerful that you're going to have to hand that off to a device nearby. And so you want the rails between those things to be as seamless and efficient as possible. And from a user's experience, you're going to want that to be transparent to them. And so I think the most powerful devices out there are going to get more and more powerful with the models that are running on them. But for less powerful devices, devices that, again, have to have weeks of battery life but still need to be smart, or for the most interesting models that are bigger and can do more interesting things, I think it's going to naturally make sense for that inference to actually be running as close as possible to the end user. I don't see a ton of reasons why you would run inference back in some centralized location. I think that is going to have a performance penalty. I think it's going to have a regulatory penalty. I think it's also going to actually have a cost disadvantage in sending it back to a central location. And so as we build this out and we give people the tools to be able to run those sophisticated models at the edge, I think it's a two-horse race: it's going to be the phone and end-device manufacturers, which are going to get better and better over time, and then it's going to be connectivity clouds like Cloudflare that are going to deliver on those models that can't run on the end device itself.
spk06: Super. Thank you so much.
spk11: Your next question comes from the line of Mark Murphy from JP Morgan. Please go ahead.
spk01: Thank you. Matthew, you have so many products that can help companies reduce the egress fees and all the other charges that are running up in their hyperscaler bills. And I was thinking of Super Slurper and Sippy and Hyperdrive and some of the other products. Can you comment on the demand patterns there and just whether you're benefiting from some of those optimization efforts out there? Then I have a quick follow-up.
spk02: Yeah, I think that everyone today is looking at their cloud bill and saying, how can we make this go down? How can we get more with every IT dollar that is being spent? And as companies do that, they're realizing that, one, the best way to not get completely gouged by whoever your cloud provider is, is to not be completely dependent on them and to have the ability to negotiate and move data and workloads from one provider to another. And so enabling that multi-cloud universe is just fundamental to how we think about what we're doing. And then second, it's finding those places where you might be more locked in today and finding ways to release that. And I think that's fundamentally what we're doing at Cloudflare. As I talked about in the beginning, you know, the hyperscale public clouds, the key KPI that they pay attention to is how much of a customer's data they are hoarding on their systems, how much they hold captive. Whereas at Cloudflare, what we pay the most attention to is how much connectivity can we deliver? How many things can we make it easy to move that data between? And I think that fundamental difference, it's not so much that we're trying to compete directly with the clouds, but over time, what we really want to do is enable customers to be able to get the best out of AWS and Google and Microsoft and Oracle and IBM and Alibaba and Tencent, and be that fabric that connects them all together. And I think more and more customers are seeing the power of that. They're multi-cloud, whether they want to be or not. And we're the consistent control plane that can sit between all of those things, help them reduce their costs, help them reduce lock-in, and really have a much more competitive cloud ecosystem over time.
spk01: Thank you so much for that. I really appreciate it. Just as a follow-up, is there any rough math on the number of GPUs you're loading into suitcases to install in the next 12 months? And should we assume that those are, or can be, some of the lower-end GPUs for inferencing and not the ultra-expensive high-end ones?
spk02: I think that what we hear from customers is that they don't want to have to think about, you know, what GPU is the right GPU for them. And so we will have a mix of GPUs. Today we're standardized around NVIDIA, but we're good friends with the folks at AMD and Intel and Qualcomm, who are all doing interesting things. And different models, from what we've seen, perform differently on different types of GPUs that are out there. And so I think you'll find every flavor under the sun, from expensive to cheap, delivered across the network. But what we're really trying to optimize for is looking at the models that are being run and then giving people the right tools that they need, in a way that can give them the best performance on not just a speed basis, but also on a cost and efficiency basis. And there's going to be diversity across that ecosystem. And we are good at being able to scale up our capacity as we have demand and investing behind the demand. And I think that this is going to be another area where we demonstrate that. Thank you very much.
spk11: Our final question comes from the line of Trevor Walsh from JMP Securities. Please go ahead.
spk15: Great. Thanks all for squeezing me in here at the end. Matthew, I just wanted to piggyback maybe off some of your last comments there on that final question, and the ones you had in your prepared remarks, around the category classification of the connectivity cloud. That makes total sense to me in terms of the more strategic CIO lens, as far as the benefits of being that connective tissue and the savings around R2 and all of those things. But as you go and talk to CISOs, and especially within the context of your Act 2 products, Zero Trust and otherwise, does that messaging need to change a little bit, or do you think they view it in the same way? Because obviously the audiences and the overall value prop might be similar but different in some respects. Maybe help us understand how CISOs might respond, or are in fact responding, to that same connectivity cloud messaging. Thanks.
spk02: Yeah, I think we've been strong with CISOs for some time, and they know us and they know the value that we can deliver. I think what we're trying to make sure is that we can have a strategic conversation with the CIOs and the CFOs that are out there and say, here's how we can deliver value, help you consolidate vendors, and give you one consistent control plane that has an incredible ROI to it. So I think we don't want to rest on our laurels. We've been very, very strong with practitioners. We've been very strong in the security community. But we want to make sure that we can have that strategic conversation. We had a record number of customers that signed up at over $1 million a year with us. We had a record number that crossed into that $5 million a year with us this last quarter. And those are conversations that have to be had with the CFO; even in large companies, the CFO is going to be involved in signing $5 million deals. And so I think that the messaging comes at the right time, and it reflects that we're talking to higher and higher levels within the organization and are being seen much more as a strategic partner within those companies.
spk04: That's great. Appreciate it.
spk10: Thank you.
spk11: I will now turn the call over to Matthew Prince for closing remarks.
spk02: I appreciate everyone at Cloudflare, all of our customers, and our partners for helping us navigate what is an increasingly complicated world. Our thoughts are with all of the people around the world who are being affected by war. We're continuing to deliver our services and stand up for the Internet. And even in these incredibly complicated times, the work that Cloudflare does is important in making sure the Internet can continue to thrive. Thank you all. We'll see you back here next quarter.
spk11: This concludes today's conference call. Thank you for your participation.