Snowflake Inc. Class A

Q1 2024 Earnings Conference Call

5/24/2023

spk05: Good afternoon. Thank you for attending today's Snowflake Q1 fiscal year 24 earnings conference call. My name is Cole, and I'll be your moderator for today's call. All lines will be muted during the presentation portion of the call with an opportunity for questions and answers at the end. If you would like to ask a question, please press star 1 on your telephone keypad. I'd now like to pass the conference over to our host, Jimmy Sexton. Please go ahead.
spk20: Good afternoon, and thank you for joining us on Snowflake's Q1 fiscal 2024 earnings call. With me in Bozeman, Montana are Frank Slootman, our chairman and chief executive officer, Mike Scarpelli, our chief financial officer, and Christian Kleinerman, our senior vice president of product, who will join us for the Q&A session. During today's call, we will review our financial results for the first quarter fiscal 2024 and discuss our guidance for the second quarter and full year fiscal 2024. During today's call, we will make forward-looking statements, including statements related to the expected performance of our business, future financial results, strategy, products and features, long-term growth, our stock repurchase program, and overall future prospects. These statements are subject to risks and uncertainties, which could cause them to differ materially from actual results. Information concerning those risks is available in our earnings press release distributed after market close today and in our SEC filings, including our most recently filed Form 10-K for the fiscal year ended January 31, 2023, and the Form 10-Q for the quarter ended April 30, 2023, that we will file with the SEC. We caution you not to place undue reliance on forward-looking statements, and we undertake no duty or obligation to update any forward-looking statements as a result of new information, future events, or changes in our expectations. We'd also like to point out that on today's call, we will report both GAAP and non-GAAP results. We use these non-GAAP financial measures internally for financial and operational decision-making purposes and as a means to evaluate period-to-period comparisons. Non-GAAP financial measures are presented in addition to, and not as a substitute for, financial measures calculated in accordance with GAAP. 
To see the reconciliations of these non-GAAP financial measures, please refer to our earnings press release distributed earlier today and our investor presentation, which are posted at investors.snowflake.com. A replay of today's call will also be posted on the website. With that, I would now like to turn the call over to Frank.
spk01: Thanks, Jimmy. Welcome, everybody, to today's earnings announcement. Snowflake's product revenue grew 50% in Q1 of fiscal year 2024, totaling $590 million. The net revenue retention rate reached 151%, and remaining performance obligations came in at $3.4 billion, up 31% year on year. Non-GAAP adjusted free cash flow was $287 million, up 58% year over year. We are, however, operating in an unsettled demand environment, and we see this reflected in consumption patterns across the board. While enthusiasm for Snowflake is high, enterprises are preoccupied with costs in response to their own uncertainties. We proactively work with customers to optimize their environments. This may well continue near-term, but cycles like this eventually run their course. Our conviction in the long-term opportunity remains unchanged. Generative AI, with its chat style of interaction, has captured the imagination of society at large. It will bring disruption, productivity, as well as obsolescence to tasks and entire industries alike. Generative AI is powered by data. That's how models train and become progressively more interesting and relevant. Models have primarily been trained with Internet and public data, and we believe enterprises will benefit from customizing this technology with their own data. As Snowflake manages a vast and growing universe of public and proprietary data, the data cloud's role in advancing this trend becomes pronounced. AI's focus on large language models and textual data, both structured and unstructured, will lead to rapid proliferation of model types and specializations. Some models will be broadly capable, but shallow in functions. Others will be deep, specialized, and impactful in their specific realm. For years, we focused on the extensibility of our platform via Snowpark, making Snowflake ideally suited for rapid adoption of new and interesting language models as they become available. AI is also not limited to textual data. 
Equally far-reaching effects will be seen with audio, video, and other modalities. The Snowflake mission is to steadily demolish any and all limits to data, users, workloads, applications, and new forms of intelligence. You will therefore continue to see us add, evolve, and expand our functions and feature sets. Our goal is for all the world's data to find its way to Snowflake and not encounter any limitations in terms of use and purpose. From our perspective, machine learning, data science, and AI are workloads that we enable with increased capability, continuous performance, and efficiency improvements. Data has gravitational pull, and given the vast universe of data Snowflake already manages, it's no surprise that interest in these capabilities is escalating while their uses are still evolving. Data science, machine learning, and AI use cases on Snowflake are growing every day. In Q1, more than 1,500 customers leveraged Snowflake for one of these workloads, up 91% year over year. A large U.S. financial institution uses Snowflake for model training. Facing memory constraints with their prior solution, they chose to move feature engineering workloads to Snowflake. With Snowflake, they can fully ingest all data, replacing a sampling approach, which left models less predictive and long-running. Snowflake enables machine learning for a broad spectrum of user types, not just programmers. For analysts, we have introduced in preview ML-powered SQL extensions, such as anomaly detection, top insights, and time series forecasting. SQL-proficient users can now leverage powerful machine learning extensions without the need to master the underlying data science. For data scientists and engineers, Snowpark is our platform for programmability. New here are a PyTorch data loader and an MLflow plugin, both in private preview. PyTorch is a popular framework for machine learning, and MLflow helps manage the lifecycle and operations of machine learning. 
Snowflake had an early start in support of language models through last year's acquisition of APLICA, now in private preview. APLICA's language model solves a real business challenge: understanding unstructured data. Users can turn documents such as invoices or legal contracts into structured properties. These documents are then referenceable for analytics, data science, and AI, something that is quite challenging in today's environment. Streamlit is the framework of choice for data scientists to create applications and experiences for AI and ML. Over 1,500 LLM-powered Streamlit apps have already been built. GPT-Lab is one example; GPT-Lab offers pre-trained AI assistants that can be shared across users. We announced our intent to acquire Neva, a next-generation search technology powered by language models. Engaging with data through natural language is becoming popular with advancements in AI. This will enable Snowflake users and application developers to build rich search-enabled and conversational experiences. We believe Neva will increase our opportunity to allow non-technical users to extract value from their data. More broadly, Snowflake continues to enable industries and workloads. In Q1, more than 800 customers engaged with Snowpark for the first time. Approximately 30% of all customers are now using Snowpark on at least a weekly basis, up from 20% at the end of last quarter. Snowpark consumption is up nearly 70% quarter over quarter. The Snowflake connector for ServiceNow is in public preview. Customers can access ServiceNow data inside of the data cloud without needing to manually integrate APIs or third-party tools. ServiceNow data is significant because it holds a wealth of IT and security data. The connector is the first so-called native app built by Snowflake. 
Native apps, which are in private preview, run inside the Snowflake governance perimeter and make use of common services. Today, developers waste time convincing customers to expose their data; with native apps, developers can focus on their core interest, application development, and offload security and deployment concerns to Snowflake. During the quarter, we also launched a manufacturing data cloud, which focuses on supply chain management as a data problem. Supply chain management is one of the few remaining realms in enterprise software that has struggled to platform itself. Supply chains are all somewhat unique, and the data siloing problem prevents the supply chain visibility essential to managing it. With the manufacturing data cloud, Snowflake continues to evolve from being a data cloud to also being an operational hub for large enterprises and institutions. We also announced that Blue Yonder, one of the largest software companies in supply chain management, will fully re-platform onto Snowflake. Blue Yonder is a key participant in both the manufacturing and the retail data clouds. They are the first major supply chain provider to make this commitment to creating an end-to-end supply chain platform on Snowflake. Supply chain management is a network discipline, as the chains are typically comprised of numerous different entities. We therefore expect significant network effects from this strategic alliance with Blue Yonder. Our Summit conference in June will feature more significant product announcements, and we look forward to seeing you there. With that, we'll turn the call over to Mike.
spk17: Thank you, Frank. Q1 product revenues were $590 million, representing 50% year-over-year growth, and remaining performance obligations grew 31% year-over-year, totaling $3.4 billion. Of the $3.4 billion in RPO, we expect approximately 57% to be recognized as revenue in the next 12 months. This represents a 40% increase compared to our estimate as of the same quarter last year. Our net revenue retention rate of 151% includes five new customers with $1 million in trailing 12-month product revenue. Q1 revenue reflects strong performance in a challenging environment. We continue to focus on growth and efficiency. We generated $287 million of non-GAAP adjusted free cash flow, outperforming our Q1 target. In Q1, consumption varied from month to month. We benefited from strong consumption in February and March. Starting in April, consumption slowed after the Easter holidays through today. The strength in the quarter was driven by our healthcare and manufacturing customers. Financial services customers outperformed our expectations. From a geographic standpoint, we saw in-line performance globally, with the exception of our SMB and APJ segments. It is challenging to identify a single cause of the consumption slowdown between Easter and today. A few of our largest customers have scrutinized Snowflake costs as they face headwinds in their own businesses. For example, some organizations have re-evaluated their data retention policies to delete stale and less valuable data. This lowers their storage bill and reduces compute costs. We've worked with a few large customers more recently on these efforts and expect these trends to continue. History has shown that price performance benefits long-term consumption. From a bookings standpoint, we saw headwinds globally, with the exception of our North American large enterprise segment. This is not due to competitive pressures, but because customers remain hesitant to sign large multi-year deals. 
Productivity is not where we want it to be, and our updated outlook reflects this. Q1 is always a challenging bookings quarter, and the current macro environment magnifies that, but we are still not satisfied with our results. We will only invest in areas that yield returns. For that reason, we will prioritize existing sales resources to drive growth before we onboard new capacity. Q1 represented another quarter of continued progress on profitability. Our non-GAAP product gross margin was 77%. More favorable pricing with our cloud service providers, product improvements, scale in our public cloud data centers, and continued growth in large customer accounts will contribute to year-over-year gross margin improvements. Non-GAAP operating margin was 5%, benefiting from revenue outperformance and savings on sales and marketing spend. Our non-GAAP adjusted free cash flow margin was 46%, positively impacted by strong linearity of collections and some early collections of May receivables. We continue to have a strong cash position with $5 billion in cash, cash equivalents, and short-term and long-term investments. We used approximately $192 million of our cash to repurchase approximately 1.4 million shares to date at an average price of $136. We will continue to opportunistically repurchase shares using our free cash flow. As Frank mentioned, we are acquiring Neva. We are excited to welcome approximately 40 employees from Neva to Snowflake, and the full impact is reflected in our outlook. Before turning to guidance, I would like to discuss the recent trends we've been observing. As I mentioned, we have seen slower-than-expected revenue growth since Easter. Contrary to last quarter, the majority of this underperformance is driven by older customers. Although we expect this to reverse, we are flowing these patterns through to the full year due to our lack of predictability and visibility. As a result, we're reining in costs until we see a consistent change in consumption. 
We are still focused on investing in efficient growth, with a concentration on continuing to sign new customers, ensuring these customers are migrated quickly and successfully, leveraging our PS team and partner resources, and selling our newer solutions such as Snowpark and Streamlit to win more personas in the enterprise. We are confident that this will ultimately lead to the data cloud network effects we have laid out over the past few years. We still believe we can achieve $10 billion of product revenue in fiscal 2029 with a better margin profile than we laid out last year. Now let's turn to guidance. For the second quarter, we expect product revenues between $620 and $625 million, representing year-over-year growth between 33% and 34%. Turning to margins, on a non-GAAP basis we expect a 2% operating margin, and we expect 361 million diluted weighted average shares outstanding. For the full year fiscal 2024, we expect product revenues of approximately $2.6 billion, representing year-over-year growth of approximately 34%. Turning to profitability for the full year fiscal 2024, on a non-GAAP basis we expect approximately 76% product gross margin, 5% operating margin, and 26% adjusted free cash flow margin. And we expect 362 million diluted weighted average shares outstanding. We will continue to prioritize hiring in product and engineering. We have slowed our hiring plan for the year. We expect to add approximately 1,000 employees in fiscal 2024, inclusive of M&A. And lastly, we will host our Investor Day on June 27th in Las Vegas in conjunction with Snowflake Summit, our annual users conference. If you are interested in attending, please email ir@snowflake.com. With that, operator, you can now open up the line for questions.
spk05: Thank you. We will now begin. I apologize for all the coughing. If you would like to ask a question, please press star followed by one on your telephone keypad. If for any reason you'd like to remove that question, please press star followed by two. Again, to ask a question, press star one. Our first question is from Mark Murphy with JP Morgan. Your line is now open.
spk10: Thank you very much. Frank, do you sense any connection to the cadence of hyperscaler cost optimization activity? In other words, If the AWS and Azure optimizations begin to normalize within a few quarters, do you think that Snowflake's consumption patterns and sequential growth rates would perk up around the same time, or do you look at those as more separate kind of phenomena? And then I have a quick follow-up.
spk01: Yeah, well, we think that because Amazon is such a large percentage of our overall deployments, they are a proxy. We just know from talking to them that what they experience, we experience as well. So there's definitely a ripple effect, because we're in the stack. So the answer, generally speaking, is yes. Microsoft is smaller, so they're not as predictive of our experience as AWS would be.
spk10: Okay. Then as a quick follow-up, Mike, I'm sorry to ask you a question; it sounds like you've got a bit of a cold. But is it safe to assume that you're completely through the revenue headwinds from Graviton adoption and the warehouse builder product? I think that's the case, but I'm also curious: are there any other analogous developments on the horizon that we could be thinking about, that you might have baked into guidance in the next several quarters?
spk17: Yeah, we've fully migrated all of our customers in AWS to Graviton2, and that's the bulk of where the revenue is. And I want to remind you, there are really three types of optimization. There are the optimizations by the cloud vendors, with better hardware and better performance. Then there are the optimizations that we do regularly in our software, which improve performance and hence lower costs for our customers. Generally, for those two combined, we forecast that there's a 5% headwind every year to our revenue. The third optimization is the one that we really saw in a few of our largest customers, with them just wanting to change their storage retention policies. One customer went from five to three years, and that's a massive reduction, petabytes and petabytes of data, so we lose that storage revenue. But on top of that, now your queries run quicker because you're querying less data. We are seeing more customers wanting to do that, and I spoke to some of the hyperscalers, I won't say which one, and they confirm they're seeing retention policies change, with their customers wanting to archive more older data.
spk10: Yeah, thank you, Mike. That's very helpful.
spk05: Our next question is from Kash Rangas with Goldman Sachs.
spk18: Your line is now open. Hi, thank you so much for taking my question. If you could just offer to the degree that you can, what are your customers that are going through consumption optimization telling you with respect to when it's likely to plateau and when they are likely to come back to quote unquote normal consumption if you can? Thank you so much.
spk01: Yeah, Cash, I would say, look, just to put a little bit more color on it, there's optimization, which is just how do we run what we're already running more efficiently and drive a level of savings that way. But there's sort of another layer on top of that, and I would call it rationalization. One of the things that we've seen happen over the last couple of quarters is that the CFO is in the business, as we like to say about the enterprises we're selling to, meaning there is a level of oversight and scrutiny that's normally not there. This is not a frequent occurrence. You only see this happening in fairly severe episodes. In the beginning, it's like, hey, we do smaller contracts, shorter-term contracts, but then it's like, hey, you're going to live within your means. Here's the amount of money you're going to spend, and you're going to make it work, and you can figure out where you're going to cut to fit into our box. So those are really the dynamics, you know, that we've seen playing out there. Now, in terms of your question, you know, when is it all going to be over? These things do run their course because, in the end, you know, we're settling in. You know, I said in my prepared remarks, things are unsettled, but eventually they will settle. We will settle into new patterns, and then we sort of, you know, resume, you know, from there. But I think, you know, as of right now, things are still unsettled and people are adjusting, and we don't have real strong visibility in terms of, okay, when is this all going to be different? Thank you so much.
spk04: Our next question is from Keith Weiss with Morgan Stanley.
spk05: Your line is now open.
spk06: Excellent. Thank you, gentlemen, for taking the question. Mike, this one's for you, and it might be a little unfair, but it's the one that I'm getting most from investors. It's about kind of guidance methodology and whether anything's changed in that. We've seen the forward forecast have to come down a couple of times over the past couple of quarters, and there's a lot of moving pieces in both the macro environment and kind of how your customers are acting. How can we give investors confidence that this is the last cut, and that we're not going to be running into new types of optimization on a go-forward basis and further taking down our forecast for the fiscal year?
spk17: Well, there is no change in our forecast methodology. We forecast by looking at consumption trends on a daily basis, literally from four weeks prior to the earnings through yesterday. And what I would say was a little unique in the past quarter is we literally saw four weeks in April where there was no week-over-week growth, per se, or nothing material. And we do think that was driven a lot by some of these customers; that's when some of these big optimizations on storage retention policies happened. But, you know, in a consumption model, customers have the ability to dial it back, and they can increase it as well when they get more confidence in their business. And I can only guide based upon the data we have available to us.
spk14: Got it. Thank you.
spk04: Thank you, Keith. Our next question is from Alex Zukin with Wolf Research.
spk05: Your line is now open.
spk23: Hey, guys. Thanks for the question. So, maybe one financial question and then a technical one. If we look at the balance of the growth headwinds from optimization versus rationalization, meaning how much people are doing less of versus, you know, still being tight with the purse strings to do more with, how does that balance look? How has it changed over the course of the last six to nine months? And then, maybe just from a technical perspective, what do you get with Neva? Why is it important? What does it unlock for your customer base relative to generative AI?
spk17: So in terms of what customers are doing, the number of jobs, the number of queries, actually grew 57% year over year in the quarter. It's outpacing our revenue. The queries are just running more efficiently, and that is because of some of the optimizations: if you reduce the amount of storage you're running queries on, they run faster. It's also the impact you're getting right now of a full year of Graviton2 this year versus a partial year last year. So the number of jobs, the growth, is actually outpacing revenue, and we're just becoming so much more efficient for our customers. And on Neva, I'll hand it to Christian, who's here.
spk12: Yeah, so hi, Christian here. The broad vision that we have communicated to all of you over the last several years is that Snowflake is on a mission to extend its capabilities so we can bring computation to happen close to the data. It has evolved us into an application platform. And a core use case for applications is not only search and search-enabled experiences, but, with the advent of generative AI, the notion of conversational experiences. The folks from Neva are the ones that will help us accelerate the efforts around Snowflake as a platform for search and conversational experiences, but most important, within the security perimeter of Snowflake, with the customer's data, so that they can leverage all this new innovation and technology with safety around the privacy and security of the data.
spk14: Understood.
spk05: Our next question is from Ramo Linschow with Barclays. Your line is open.
spk19: Yeah, thank you. Mike, I hope you feel better soon. Quick question: last quarter we talked about the newer cohorts expanding at a slightly lower pace compared to the more established, older ones. Have you seen any change in momentum there? Or is it that last quarter we had slow expansion from the newer ones, and now this quarter we have more optimization from the older ones? Are those the two things, or are there other factors at work there?
spk17: No, good question. The newer ones are growing faster. The older ones obviously are the larger dollars, so when they do optimizations, that has a bigger impact. And it's interesting, too: the net revenue retention for our customers within Azure is materially above where our overall company is, and that's because we're relatively new to that cloud. The Azure cloud is really starting to take off for us as well.
spk19: Okay. And then one more, maybe to help you with your voice, for Frank. Frank, if you think about the changes in policy in terms of storage retention and stuff like that, there was a reason why people stored that data, you know, for a certain number of years, et cetera. Do you think what you're seeing now is more of a temporary thing, and as we're coming out of this, people will have a different approach again? Or do you think that's a permanent move that's happening here? Thank you.
spk01: I don't think it's permanent. Look, like I said, the CFO is in the business. They're given very direct guidance in terms of, here's where you need to be. Then the operating teams start to look at, okay, how do we implement this? Sometimes the low-hanging fruit is, hey, we'll just cut the data back. The processes might actually not run as well, so there's actually a cost. But you know what? The cost concern is prevailing at the moment because of the general sentiment that we're in. In 2020 and 2021, it was growth at all costs, and the mentality was let it rip. Now we're in the complete inverse of that situation, you know, where we have strong scrutiny on predictability, on cost, and so on. I don't think that will last. We're just on the other side of the spectrum right now, and we will reconverge, you know, to the mean at some point here.
spk19: Okay. Makes sense. Thank you.
spk05: Our next question is from Brad Zelnick with Deutsche Bank. Your line is now open.
spk03: Great. Thanks so much for the question. Mike, I know in a consumption model it's obviously difficult to predict the number of new workloads and transaction volumes, and a lot of that we know is tied to macro. I just wanted to come back to the optimization topic. You talked about the three different types of optimization. Is there any way you can compare your total customer portfolio to the most optimized customer that you have, just to get a sense of what the downside is if everyone optimized as much as your most optimized customer?
spk17: That would be so hard to do. I don't have that data. Each customer is different.
spk03: Can I ask you a question where I know you do have the data? Oh, please go ahead. Sorry.
spk12: I was going to add that in certain instances, some of these optimizations in the third category that Mike described, and what Frank was alluding to, change how the business thinks about its needs. So when you make the decision to re-evaluate a storage policy, there's a business trade-off that only customers can assess. So it's difficult for us to estimate that type of decision.
spk03: Thank you, Christian. That's helpful. Mike, a question I know you do have the answer to: since you forecast the trends every week, any commentary on how May looks relative to April?
spk17: That's reflected in the guide that I gave you, the $2.6 billion for the year. I would say that there were a couple of periods in May where it was strong; overall it's okay, but it's not where we want it to be. But that's reflected in the guide now.
spk03: Cool. Thank you so much for taking the questions, guys.
spk05: Our next question is from Carl Kersted with UBS. Your line is now open.
spk15: OK.
spk16: Mike, if I could just build on Brad's line of questioning, the spirit of it is: what assumptions are you embedding in your second-half guidance? Are you essentially reflecting the April-May environment you saw and straight-lining it, or are you taking a little bit more of a conservative approach and sort of haircutting that, assuming that it, or maybe the financial services vertical, gets a little bit weaker? That's question number one. Question number two, maybe this is best suited for Frank. Frank, Mike mentioned in his comments that sales productivity is not where Snowflake wanted it to be. Could you elaborate on that? Because that sounds like some of the pressure may not be entirely macro but might be sales execution. So I'd love to hear whether I interpreted that correctly, and the steps you're taking to turn it around. Thank you.
spk17: We are expecting that there will be week-over-week growth on average with our customers, and that will compound, but at a much lower pace than it was before. What we've been seeing in the last four weeks is more what we're expecting inside there. I'm not expecting a straight line from where we are today to the end of the year.
spk01: On the sales productivity side, I do think that's very much a macro thing. There comes a point, you know, where you can't push any harder. And, you know, we have applied the resources, but, you know, we're not converting, you know, on the resources in a way that we think is optimal. So, you know, is there an execution aspect? There always is, right? I mean, that's just day-to-day, you know, sales management. But in all the years of doing this kind of work, I felt like I've always sort of under-applied the resource. In hindsight, I always thought I could have done more. This is definitely a situation where I feel like we have applied tremendous amounts of resources, and we've been very, very successful at it. But there comes a point where, okay, we need to become more selective, more prioritized, in driving the performance. I definitely think it's a macro thing. I mean, the sentiment out there is, you know, of a sort that you just can't push it any harder than up to a certain point.
spk16: Thank you both.
spk05: Our next question is from Patrick Walravens with JMP. Your line is now open. Oh, we can't hear you, Pat.
spk13: If I remember right, Blue Yonder is JDA, and that's i2 and Manugistics. So anything about why that's so interesting would be great.
spk01: You know, look, I've had a long-term fascination with supply chain management, because supply chain management has never really been platformed in terms of software. It's an email-and-spreadsheet operation. It's incredibly inefficient, and it's an incredibly high-volume opportunity. And the reason that it couldn't be platformed is, first of all, each supply chain is different, so it's very hard to have a standard solution for something that is so variable. But secondly, there is the data problem. You know, if you can't establish visibility, you know, across all the entities that make up the supply chain, you stand no chance of solving that problem. So the reason that I find it so interesting, you know, for Snowflake is that, look, all the entities in the supply chain, you know, will become Snowflake accounts, right? Because that's the way, you know, everybody will have visibility to everybody else. And we have a real fighting chance of solving it. Secondly, the processes that run in supply chain management are extremely computationally intensive, and they run in very, very high volume. And of course, you know, Snowflake is ideally suited for taking on those kinds of workloads. So I really think that supply chain management will be the most networked segment of all the industries that we're operating in. Today, the most networked segment that we're running in is financial services, you know, by far. But I think it will be overtaken, you know, by manufacturing and retail in the fullness of time, because there's absolutely no penetration there. These are unsolved problems, you know, going back almost through the history of computing. That's how serious it is. So it's a fantastic historical opportunity for the technology to address. Great. Thank you.
spk05: Our next question is from Kirk Materne with ISI. Your line is now open.
spk22: Yeah, thanks very much. Frank, with sort of the explosion in questions around AI over the last six months, do you think that buyers or executives are tying the opportunities with AI to the data yet? Meaning, I know conceptually they might get that, but are any of your conversations with customers sort of starting to percolate because of AI and the need to get your data sorted out to take advantage of that? Or are most people still sort of in the discovery phase on that front? And then, Mike, can you just talk about whether Neva impacts the op margin guidance for the full year at all? You mentioned savings, but the margins are sort of flat year over year, so I was just kind of curious if that had any impact. Thanks, guys.
spk01: Frank here. You know, obviously, customers make the connection between data and the ability to take advantage of large language models and, you know, the natural language interface and all that kind of stuff, and it's already happening. With the services that are today available on Snowflake, and that are also available in the AI space, you can already rig things together and make some interesting progress. But the thing is, you know, you need to have highly curated, highly optimized data, and that is what we do at Snowflake, to really power these models. You cannot just indiscriminately let these things loose on data that people don't understand in terms of its quality and its definition, its lineage, and all these kinds of things. So I think we are in a really great place. And as I said in the prepared remarks, you know, data has a gravitational pull. So we will attract tremendous demand for these types of workloads. And our strategy is to enable that to the maximum and fullest extent possible.
spk17: And then with regards to Neva, Kirk, that's fully baked into the guidance. They have a number of, well, actually all of their engineers are very senior engineers, and they're all based in the U.S. They're very expensive, these people.
spk14: Our next question is from Brent Thill with Jefferies. Your line is now open.
spk24: Thanks. Frank, this concept of Snow for everyone, and having a simple ChatGPT-like UI in front of the Snowflake data, you know, bringing it to the mass market. How long do you think it takes before you start to see that? Today we have you deployed internally, but I have to go to one person who's the power user. When do you think we can ultimately start seeing that on everyone's desktop?
spk01: Well, I think that, you know, the more, I don't want to say simplistic, it might not be the right characterization. But, you know, for example, running these things on top of Salesforce data in Snowflake, which is a very common thing, something that we're already doing internally, that's going to be available in the second half all over the place. And people will like it. I like it. I mean, I much prefer it over using dashboards and things like that, because it just lets me ask questions. But they're also relatively simplistic questions. Where it gets harder is when you start asking much, much harder questions; that's when you start finding the limits of these kinds of technologies. So I think we're still sort of in the fun-and-games stage of the development of this technology. And the content generation side of this technology is fascinating and captivating for people. But asking really hard analytical questions that take people weeks and weeks or even months to figure out, it will take some work for software to do that in a matter of seconds and be productive that way. So we're sort of at the top of the hype cycle. The real work really starts now.
spk24: And then, Mike, you mentioned it doesn't sound like you're effectively bringing on a lot of new capacity, but there are still 183 job openings on your website. So I guess what you're saying is you're freezing quota-carrying rep onboarding in the interim until you see that capacity, or are you still bringing people on? How are you thinking about this transition?
spk17: In the sales organization, we're only doing backfills right now. We will look at performance management and upgrading people, and we could reallocate heads from one region that's underperforming to another region, but no net new hires. Sorry.
spk24: Great. Hope you feel better.
spk17: All right. Yeah, sorry.
spk05: Our next question is from Greg Moskowitz with Mizuho. Your line is now open.
spk00: All right. Thank you for taking the questions. You mentioned the change in data retention as a more prevalent form of optimization recently. What about the refresh rate? Are you seeing customers pull back on the frequency with which their data is updated?
spk12: No. Christian here. We have not seen changes there. If anything, because of our cost model, the economics are fairly similar whether people are updating more or less frequently, and we don't see changes in the patterns.
spk00: All right. That's helpful. Thanks, Christian. And then just a follow-up on Neva, I guess, either for you or for Frank. So, I think of the technology as, you know, fairly horizontal in terms of potential appeal. I'm just wondering if you think this can be an avenue to help land new enterprise customers going forward? And then secondly, how much of a value add do you think that this can truly provide to the installed base? Thanks.
spk01: This is Frank; I'll go first. You know, we view search and chat as really a complete evolution of our relationship with data and how we interact with it. I think most of us remember when search first became available, and how that just dramatically changed our relationship with data. I'm personally a search junkie. You know, I can't leave it alone. I find it incredibly empowering. But the problem with search has been that it matches on strings. It has zero context. It's not stateful. And now we have the technology to make search incredibly powerful, to the point that when it can't find it, it can actually generate the code, you know, to answer the questions that are posed in search. So this is incredibly important to what we've said from the beginning: Snowflake is about mobilizing the world's data, and this is how we're going to do it. I mean, search and chat are sort of morphing into a single natural language interface. But the other thing I would caution you: this is not only about natural language interfaces. A lot of the intelligence that we're talking about is going to be manifested through the interfaces, not just through natural language.
spk00: Okay, thank you.
spk05: Thank you, Greg. Our next question is from Brad Reback with Stifel. Your line is now open.
spk09: Great, thanks very much. Mike, I hate to pose this to you, but you're probably the best to answer it. Beyond the week-to-week usage patterns in the install base, are there any other operational data metrics that you're looking at to give you confidence on when NRR will bottom?
spk17: You know, obviously that's not the only thing I look at. I look at pipeline generation, weighted pipeline; I'm typically looking out three to four quarters. I sit in on the sales call every Monday, and we're spending a lot of time with reps these days on what is going on within their accounts. But the most important thing is consumption patterns today; they're the biggest indicator of the future. I'm also looking at new products that may come out. It's hard to forecast anything for them, but that gives us somewhat of confidence. We have some big announcements that are going GA; Streamlit is one of them. We talked about Applica in private preview, but Streamlit we think will be meaningful, and we're really pleased with what we're seeing in the Snowpark daily credit consumption right now.
spk09: That's great. Thanks very much.
spk05: Our next question is from Tyler Radke with Citi. Your line is now open.
spk07: Yeah, thanks for taking the question. I'll pose this to Frank, to give Mike a break there. But just on Microsoft: obviously they're hosting their Build conference this week, with a ton of new product announcements, including in data and analytics. But I wanted to ask you more on the partnership front. I think you commented on just seeing some better traction there. I think they've evolved their partner program, including adding you as a Tier 1 partner. So could you just talk about the status of that relationship and how you're fitting in, given some of these announcements, like Fabric, which is kind of unifying Microsoft's own products? Just the status quo in that relationship and, you know, the opportunity with this new partnership.
spk01: Yeah, you know, the Microsoft relationship has been growing faster than the other two cloud platforms that we support. It's been very clear from the top of Microsoft that they're viewing Azure as a platform, not as a sort of single, integrated, proprietary Microsoft stack. And they've said over and over that we're about choice, we're about innovation. And yes, we will compete. We've competed with Microsoft from day one, and we've been very successful in that regard for a whole bunch of different reasons. But people keep on coming, and we expect that. And I think that's sort of a net benefit for the world at large, that they get better and better products and they get more choice. The good news is that I think the relationship is relatively mature, meaning that when there is friction or people are not following the rules, we have good, established processes for addressing and resolving that. And that's incredibly important, right? We sort of get out of that juvenile state where things are dysfunctional at a field level. And I have no reason to believe that that will not continue in that manner. So I think Azure will continue to grow, and grow faster than the other platforms.
spk07: Great. And on Snowpark, it sounded like you're pleased with the consumption this quarter. Could you just give us a sense of expectations on the revenue ramp there, and what are the big use cases you're seeing today? Is it Hadoop migrations, data engineering? Just give us a sense of how you're expecting that to ramp up and what the main use cases driving it are.
spk01: Here's the important thing to understand about Snowpark. Snowpark is the programmability platform for Snowflake. As you know, Snowflake was originally conceived with SQL interfaces, and that was the mode through which you addressed the platform. So this has really opened up a whole host of modalities, if you will, onto the platform. Basically, our posture is, look, if it reads from or writes to Snowflake, we want to own these processes, and Snowpark is the platform to achieve that. The supply chain, if you will, of how the data comes into Snowflake is through data engineering processes. Often these are Spark workloads and processes. We think they ought to run on Snowpark. They're going to be cheaper, they're going to be faster, they're going to be operationally simpler, and they're going to be fully governed. We think if you are a Snowflake customer and you're not running these processes on Snowpark, you're just missing out on all four of those dimensions that I just listed. On the consumption end, it's the same thing. If you're doing analytics, data science, machine learning, AI, you know, if it reads from and writes back to Snowflake, we think that's Snowpark. And we have taken a very emphatic posture on this. We're campaigning Snowpark very, very hard around the world. The interest is tremendously high. As I said in the previous remarks, we went in one quarter from 20% to 30% of our customers using it on at least a weekly basis, and we think that's going to go to 100%. I think Snowpark will become extremely prevalent, you know, around the use of Snowflake. Now, beyond that, there's a whole wide world that we're obviously also very interested in. But we're going to start at home and own everything there that we can own.
spk14: Thank you.
spk05: Our next question is from Brent Bracelin with Piper Sandler. Your line is now open.
spk21: Good afternoon. Frank, maybe for you, I totally get the current cost concerns and optimization efforts underway. I'd be more curious to hear what you think could get us out of the current slowdown. Are there products or workloads that you would flag as the key ones to watch that drives every acceleration in the business? Just thinking through what's in your control, or do you think we have to wait for the macro to improve? Thanks.
spk01: Definitely, the number one issue is sentiment out there, just the lack of visibility, the anxiety. You know, watching CNBC all day doesn't give you any hope. That's absolutely number one, because what we're seeing is that when we're dealing with CTOs and chief data officers, these people are chomping at the bit, but they are now literally getting stopped, as I said earlier, by the CFO stepping into the business and saying, well, I guess that's all good and well, but here's how much you're going to spend. You're not going to get a new contract. You're going to live within the confines of the contract that you have. So they're really artificially constraining the demand because of the general anxiety that exists in the economy. So that really needs to start lifting. And, you know, that will happen. These things run their course. We've been through these episodes before. So I think that's really the requirement. There's plenty of demand out there, absolutely. And with AI right now, I mean, it's going to drive a whole other vector in terms of workload development. It's going to be hard to stop, CFO or no CFO, you know.
spk21: Very helpful there. And then, Christian, I wanted to follow up on Neva. Streamlit, I totally get that acquisition; Neva is a little harder for me to fully understand. So as you look at Neva and the tech stack, what was most interesting, if it wasn't just the team? Is there some sort of differentiated search engine under the hood? Is it their large language model expertise? Why Neva?
spk12: Yeah, it's a great question. I think it's the combination of traditional search technology with LLM technology. I think most of us have seen numerous demos of people who take an LLM and in a couple of days or hours produce something that looks good, but then there are problems with how precise that search is and how reliable those results are. What the Neva team did extremely well was combine LLM and generative AI technology with traditional search technology to be able to do attribution of results, and that's very interesting in an enterprise setting where you want more precise answers. That combination was very appealing. And then, of course, it is a world-class team, and the combination of those two was appealing to us.
spk21: Thank you.
spk05: Our next question is from Derek Wood with TD Cohen. Your line is now open.
spk11: Great. Thanks. I wanted to ask about the competitive and the pricing environment out there. I guess on the competitive side, have you guys seen any change in win rates or workload shifts to different platforms? And when it comes to pricing, you talked about customers focusing a lot on cost savings. How is this translating into your ability to hold kind of unit pricing, especially on renewals?
spk01: Frankly, I'll let Mike weigh in once he stops coughing. But, you know, the thing about pricing is, look, physics are physics. A read is a read, a write is a write, and there are economics; it costs a certain amount of money, right? And there's just not that much room, other than playing games or temporarily sponsoring or subsidizing different parts of a business, to really get a sustained pricing edge on one player or another. We're all converging to very, very similar economics. Where you see huge differences is in the total cost of ownership, and that is not the cost of compute and storage. That is, what is the cost to run that technology? And this is where Snowflake has a huge advantage, and our customers know that. It's reduced skill sets, far fewer people, not having to touch the complexity of the underlying platforms, on and on and on. I mean, we're more descendants of, you know, Apple and Tesla than descendants of Hadoop, like some people in the marketplace are, right? So we have really abstracted the complexity, and that's what generates these TCO advantages. But in the raw cost of computing and storage, there's not that much opportunity to be had.
spk12: I want to add something to highlight what Frank mentioned in his Snowpark answer, which is that, relative to competitive platforms, Spark and PySpark, we're seeing Snowpark deliver not only better performance but better price performance. So, interestingly enough, we see customers giving us technical wins and wanting to migrate because of the better economics in these competitive dynamics.
spk11: Great. If I could squeeze one more in. Just in terms of LLMs, you guys are obviously sitting on a lot of data to be able to be mined and training models. Do you guys envision kind of building up GPU clusters and offering training and inference on your platform, or do you think that's really the place for hyperscalers to be doing that?
spk12: We're doing all of it. We alluded in the prepared remarks to Applica, which is a multimodal collection of models being built at Snowflake that requires GPUs. So we're doing our part, but we're also working on how we surface GPUs, and we'll show more at our conference. So, all of the above; it's an important component of this gen AI wave of innovation.
spk11: Okay, thanks. Mike, feel better.
spk11: Thanks. I'm so sorry.
spk05: Thank you, Derek. Our next question is from Sterling with MoffettNathanson. Your line is now open.
spk02: Hi, thanks. We're having a problem here. Just wondering, yep, sorry about that, Mike.
spk14: It's hard to hear the operator.
spk02: Yeah. So, just wondering, you've called out financial services as your largest vertical. Wondering how much of an impact that vertical had, you know, in the consumption patterns that you pointed out post the Easter holiday?
spk17: Actually, the financial services vertical is doing fine. It was very strong for us. It's still 23% of our revenue and growing quite fast. It was in some of the other areas with some of our bigger customers outside of financial services.
spk04: All right, understood. Thank you. Our next question is from Michael Turrin with Wells Fargo.
spk08: Hey, operator, we're having a hard time hearing you. Oh, now we hear you. Okay. No, the operator's fading. I appreciate you sneaking me in. Just going back, the revised guide suggests growth falls below 30%. You did mention confidence still in the longer-term $10 billion target. So if we could just spend some time on what you're hearing from customers that drives confidence that what you're seeing is temporary, which suggests growth bounces back. And a second part on the bookings commentary: it sounded like North America large enterprise is the area that's standing out favorably. I just want to make sure we have the right context there, and if there's anything else you can add around what's driving that, it's appreciated. Thank you.
spk17: What I would say is we have a lot of customers where we have only moved a fraction of their data, and we know they have multi-year plans to go on Snowflake. And that's what gives us the confidence, as well as the pipeline of deals. And I'm not just talking pipeline now. There are deals for next year; I know they're long sales cycles with these big customers. That's what gives us the pipeline, on top of a lot of the new products we have coming out over the next couple of years.
spk14: Mike, get some rest before the callbacks. Thanks.
spk04: That will be the last question.
spk05: Thank you for your time and your participation. That concludes the conference call. You may now disconnect your line.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
