Snowflake Inc. Class A

Q1 2024 Earnings Conference Call

5/24/2023

spk14: Good afternoon. Thank you for attending today's Snowflake Q1 Fiscal Year 24 earnings conference call. My name is Cole and I'll be your moderator for today's call. All lines will be muted during the presentation portion of the call with an opportunity for questions and answers at the end. If you would like to ask a question, please press star one on your telephone keypad. I'd now like to pass the conference over to our host, Jimmy Sexton. Please go ahead.
spk07: Good afternoon, and thank you for joining us on Snowflake's Q1 fiscal 2024 earnings call. With me in Bozeman, Montana are Frank Slootman, our chairman and chief executive officer; Mike Scarpelli, our chief financial officer; and Christian Kleinerman, our senior vice president of product, who will join us for the Q&A session. During today's call, we will review our financial results for the first quarter of fiscal 2024 and discuss our guidance for the second quarter and full year fiscal 2024. During today's call, we will make forward-looking statements, including statements related to the expected performance of our business, future financial results, strategy, products and features, long-term growth, our stock repurchase program, and overall future prospects. These statements are subject to risks and uncertainties, which could cause them to differ materially from actual results. Information concerning those risks is available in our earnings press release distributed after market close today and in our SEC filings, including our most recently filed Form 10-K for the fiscal year ended January 31, 2023, and the Form 10-Q for the quarter ended April 30, 2023 that we will file with the SEC. We caution you to not place undue reliance on forward-looking statements, and we undertake no duty or obligation to update any forward-looking statements as a result of new information, future events, or changes in our expectations. We'd also like to point out that on today's call, we will report both GAAP and non-GAAP results. We use these non-GAAP financial measures internally for financial and operational decision-making purposes and as a means to evaluate period-to-period comparisons. Non-GAAP financial measures are presented in addition to, and not as a substitute for, financial measures calculated in accordance with GAAP. To see the reconciliations of these non-GAAP financial measures, please refer to our earnings press release distributed earlier today and our investor presentation, which are posted at investors.snowflake.com. A replay of today's call will also be posted on the website. With that, I would now like to turn the call over to Frank.
spk08: Thanks, Jimmy. Welcome, everybody, listening to today's earnings announcement. Product revenue grew 50% in Q1 of fiscal year 2024, totaling $590 million. Net revenue retention rate reached 151%, and remaining performance obligations came in at $3.4 billion, up 31% year on year. Non-GAAP adjusted free cash flow was $287 million, up 58% year over year. We are, however, operating in an unsettled demand environment, and we see this reflected in consumption patterns across the board. While enthusiasm for Snowflake is high, enterprises are preoccupied with costs in response to their own uncertainties. We proactively work with customers to optimize their environments. This may well continue near term, but cycles like this eventually run their course. Our conviction in the long-term opportunity remains unchanged. Generative AI, with its chat style of interaction, has captured the imagination of society at large. It will bring disruption and productivity, as well as obsolescence, to tasks and entire industries alike. Generative AI is powered by data. That's how models train and become progressively more interesting and relevant. Models have primarily been trained with internet and public data, and we believe enterprises will benefit from customizing this technology with their own data. As Snowflake manages a vast and growing universe of public and proprietary data, the data cloud's role in advancing this trend becomes pronounced. AI's focus on large language models and textual data, both structured and unstructured, will lead to a rapid proliferation of model types and specializations. Some models will be broadly capable but shallow in functions. Others will be deeply specialized and impactful in their specific realm. For years we have focused on the extensibility of our platform via Snowpark, making Snowflake ideally suited for rapid adoption of new and interesting language models as they become available. AI is also not limited to textual data. Equally far-reaching impact will be seen with audio, video, and other modalities. The Snowflake mission is to steadily demolish any and all limits to data, users, workloads, applications, and new forms of intelligence. You'll therefore continue to see us evolve and expand our functions and feature sets. Our goal is for all the world's data to find its way to Snowflake and not encounter any limitations in terms of use and purpose. From our perspective, machine learning, data science, and AI are workloads that we enable with increased capability and continuous performance and efficiency improvements. Data has gravitational pull. Given the vast universe of data Snowflake already manages, it's no surprise that interest in these capabilities is escalating while their uses are still evolving. Data science, machine learning, and AI use cases on Snowflake are growing every day. In Q1, more than 1,500 customers leveraged Snowflake for one of these workloads, up 91% year over year. A large US financial institution uses Snowflake for model training. Facing memory constraints with their prior solution, they chose to move feature engineering workloads to Snowflake. With Snowflake, they can fully ingest all data, replacing a sampling approach that left models less predictive and long-running. Snowflake enables machine learning for a broad spectrum of user types, not just programmers. For analysts, we have introduced in preview ML-powered SQL extensions such as anomaly detection, top insights, and time series forecasting.
spk08: SQL-proficient users can now leverage powerful machine learning extensions without the need to master the underlying data science. For data scientists and engineers, Snowpark is our platform for programmability. New here is a PyTorch data loader and an MLflow plugin, both in private preview. PyTorch is a popular framework for machine learning, and MLflow helps manage the lifecycle and operations of machine learning. Snowflake had an early start in support of language models through last year's acquisition of Applica, now in private preview. Applica's language model solves a real business challenge: understanding unstructured data. Users can turn documents such as invoices or legal contracts into structured properties. These documents are now referenceable for analytics, data science, and AI, something that is quite challenging in today's environment. Streamlit is the framework of choice for data scientists to create applications and experiences for AI and ML. Over 1,500 LLM-powered Streamlit apps have already been built. GPT Lab is one example. GPT Lab offers pre-trained AI assistants that can be shared across users. We announced our intent to acquire Neeva, a next-generation search technology powered by language models. Engaging with data through natural language is becoming popular with advancements in AI. This will enable Snowflake users and application developers to build rich, search-enabled, and conversational experiences. We believe Neeva will increase our opportunity to allow non-technical users to extract value from their data. More broadly, Snowflake continues to enable industries and workloads. In Q1, more than 800 customers engaged with Snowpark for the first time. Approximately 30% of all customers are now using Snowpark on at least a weekly basis, up from 20% at the end of last quarter. Snowpark consumption is up nearly 70% quarter over quarter. The Snowflake connector for ServiceNow is in public preview. Customers can access ServiceNow data inside of the data cloud without needing to manually integrate APIs or third-party tools. ServiceNow data is significant because it holds a wealth of IT and security data. The connector is the first so-called native app built by Snowflake. Native apps, which are in private preview, run inside the Snowflake governance perimeter and make use of common services. Today, developers waste time convincing customers to expose their data. With native apps, developers can focus on their core interest, application development. They offload security and deployment concerns to Snowflake. During the quarter, we also launched the manufacturing data cloud, which focuses on supply chain management as a data problem. Supply chain management is one of the few remaining realms in enterprise software that has struggled to platform itself. Supply chains are all somewhat unique, and the data siloing problem prevents the supply chain visibility essential to managing it. With the manufacturing data cloud, Snowflake continues to evolve from being a data cloud to also being an operational hub for large enterprises and institutions. We also announced that Blue Yonder, one of the largest software companies in supply chain management, will fully re-platform onto Snowflake. Blue Yonder is a key participant in both the manufacturing and the retail data clouds. They are the first major supply chain provider to make this commitment to creating an end-to-end supply chain platform on Snowflake.
Supply chain management is a highly networked discipline, as the chains are typically comprised of numerous different entities. We therefore expect significant network effects from the strategic alliance with Blue Yonder. Our Summit conference in June will feature more significant product announcements, and we look forward to seeing you there. With that, I'll turn the call over to Mike.
spk02: Thank you, Frank. Q1 product revenues were $590 million, representing 50% year-over-year growth, and remaining performance obligations grew 31% year-over-year, totaling $3.4 billion. Of the $3.4 billion in RPO, we expect approximately 57% to be recognized as revenue in the next 12 months. This represents a 40% increase compared to our estimate as of the same quarter last year. Our net revenue retention rate of 151% includes five new customers with $1 million in trailing 12-month product revenue. Q1 revenue reflects strong performance in a challenging environment. We continue to focus on growth and efficiency. We generated $287 million of non-GAAP adjusted free cash flow, outperforming our Q1 target. In Q1, consumption varied from month to month. We benefited from strong consumption in February and March. Starting in April, consumption slowed after the Easter holidays through today. The strength in the quarter was driven by our healthcare and manufacturing customers. Financial services customers outperformed our expectations. From a geographic standpoint, we saw in-line performance globally, with the exception of our SMB and APJ segments. It is challenging to identify a single cause of the consumption slowdown between Easter and today. A few of our largest customers have scrutinized Snowflake costs as they face headwinds in their own businesses. For example, some organizations have reevaluated their data retention policies to delete stale and less valuable data. This lowers their storage bill and reduces compute costs. We have worked with a few large customers more recently on these efforts and expect these trends to continue. History has shown that price performance benefits long-term consumption. From a bookings standpoint, we saw headwinds globally, with the exception of our North American large enterprise segment. This is not due to competitive pressures, but because customers remain hesitant to sign large multi-year deals. Productivity is not where we want it to be, and our updated outlook reflects this. Q1 is always a challenging bookings quarter, and the current macro environment magnifies that, but we are still not satisfied with our results. We will only invest in areas that yield returns. For that reason, we will prioritize existing sales resources to drive growth before we onboard new capacity. Q1 represented another quarter of continued progress on profitability. Our non-GAAP product gross margin was 77%. More favorable pricing with our cloud service providers, product improvements, scale in our public cloud data centers, and continued growth in large customer accounts will contribute to year-over-year gross margin improvements. Non-GAAP operating margin was 5%, benefiting from revenue performance and savings on sales and marketing spend. Our non-GAAP adjusted free cash flow margin was 46%, positively impacted by strong linearity of collections and some early collections of May receivables. We continue to have a strong cash position with $5 billion in cash, cash equivalents, and short-term and long-term investments. We used approximately $192 million of our cash to repurchase approximately 1.4 million shares to date at an average price of $136. We will continue to opportunistically repurchase shares using our free cash flow. As Frank mentioned, we are acquiring Neeva. We are excited to welcome approximately 40 employees from Neeva to Snowflake, and the full impact is reflected in our outlook. Before turning to guidance, I would like to discuss the recent trends we've been observing.
As I mentioned, we have seen slower than expected revenue growth since Easter. Contrary to last quarter, the majority of this underperformance is driven by older customers. Although we expect this to reverse, we're flowing these patterns through to the full year due to our lack of predictability and visibility into customer behavior. As a result, we're reining in costs until we see a consistent change in consumption. We are still focused on investing in efficient growth, with a concentration on continuing to sign new customers, ensuring these customers are migrated quickly and successfully, leveraging our PS team and partner resources, and selling our newer solutions such as Snowpark and Streamlit to win more personas in the enterprise. We are confident that this will ultimately lead to the data cloud network effects we have laid out over the past few years. We still believe we can achieve $10 billion of product revenue in fiscal 2029 with a better margin profile than we laid out last year. Now let's turn to guidance. For the second quarter, we expect product revenues between $620 and $625 million, representing year-over-year growth between 33 and 34%. Turning to margins, we expect, on a non-GAAP basis, a 2% operating margin, and we expect 361 million diluted weighted average shares outstanding. For the full year fiscal 2024, we expect product revenues of approximately $2.6 billion, representing year-over-year growth of approximately 34%. We expect 362 million diluted weighted average shares outstanding. We will continue to prioritize hiring in product and engineering. We have slowed our hiring plan for the year. We expect to add approximately 1,000 employees in fiscal 2024, inclusive of M&A. And lastly, we will host our Investor Day on June 27th in Las Vegas in conjunction with Snowflake Summit, our annual users conference. If you are interested in attending, please email ir@snowflake.com. With that, operator, you can now open up the line for questions.
spk14: Thank you. We will now begin. I apologize for all the coughing. If you would like to ask a question, please press star followed by one on your telephone keypad. If for any reason you'd like to remove that question, please press star followed by two. Again, to ask a question, press star one. Our first question is from Mark Murphy with JP Morgan. Your line is now open.
spk13: Thank you very much, Frank. Do you sense any connection to the cadence of hyperscaler cost optimization activity? In other words, if the AWS and Azure optimizations begin to normalize within a few quarters, do you think that Snowflake's consumption patterns and sequential growth rates would perk up around the same time? Or do you look at this as a more separate kind of phenomenon? And then I have a quick follow-up.
spk08: Well, we think that because Amazon is such a large percentage of our overall deployments, they are a proxy. We just know from talking to them that what they experience, we experience as well. So there's definitely a ripple effect because we're in the stack. So the answer, generally speaking, is yes, we will see that. Microsoft is smaller, so they're not as predictive of our experience as AWS would be.
spk13: Okay, then as a quick follow-up, Mike. I'm sorry to ask you a question, it sounds like you've got a bit of a cold, but is it safe to assume that you're completely through the revenue headwinds from Graviton adoption in the warehouse builder product? I think that's the case. But I'm also curious, are there any other analogous developments on the horizon that we could be thinking about that you might have baked into guidance in the next several quarters?
spk02: Yeah, we've fully migrated all of our customers in AWS to Graviton2, and that's the bulk of where revenue is. And I want to remind you, there's really three types of optimization. There's the optimization by the cloud vendors, and that's with better hardware, better performance. Then there's the optimizations that we do regularly in our software, which improve performance and hence are cheaper for our customers. And generally, those two combined, we forecast, create a 5% headwind every year to our revenue. And the third optimization is the one that we really saw in a few of our largest customers, with them just wanting to really change their storage retention policies. Like, one customer went from five to three years, and that's massive, petabytes and petabytes of data. And so we lose that storage revenue. But on top of that, now your queries run quicker because you're querying less data. And we are seeing more customers wanting to do that. And I spoke to some of the hyperscalers, I won't say which one, and they confirm they're seeing retention policies change within their customers, wanting to archive more older data.
spk13: Yeah, thank you, Mike. That's very helpful.
spk14: Our next question is from Kash Rangan with Goldman Sachs. Your line is now open.
spk04: Hi, thank you so much for taking my question. If you could just offer to the degree that you can, what are your customers that are going through consumption optimization telling you with respect to when it's likely to plateau and when they're likely to come back to, quote unquote, normal consumption? If you can. Thank you so much.
spk08: Yeah, I would say, look, just to put a little bit more color on it: there's optimization, which is just how do we run what we're already running more efficiently and drive a level of savings that way. But there's sort of another layer on top of that; I would call it rationalization. You know, one of the things that we've seen happen over the last couple of quarters is that the CFO is in the business, and this is sort of an expression that we use in enterprise. What we're seeing is that there is a level of oversight and scrutiny that's normally not there. This is not a frequent occurrence. You only see this happening, you know, in fairly severe episodes. In the beginning it's like, hey, you know, we do smaller contracts or shorter-term contracts. But then it's like, hey, you're going to live within your means. Here's the amount of money you're going to spend, and you're going to make it work. You can figure out where you're going to cut to fit into our box. So those are really the dynamics that we've seen playing out there. Now, in terms of your question, when is it all going to be over? These things do run their course because, in the end, we're settling in. You know, I said in my prepared remarks, things are unsettled, but eventually they will settle. We will settle into new patterns, and then we sort of, you know, resume from there. But I think, as of right now, things are still unsettled and people are adjusting. And we don't have real strong visibility in terms of, okay, when is this all going to be different?
spk04: Thank you so much.
spk14: Our next question is from Keith Weiss with Morgan Stanley. Your line is now open. Excellent. Thank you, gentlemen, for taking the question. Mike, this one's for you. And it might be a little unfair, but it's the one that I'm getting most from investors. And it's about kind of guidance methodology. And if anything's changed in that, we've seen the forward forecasts have to come down a couple of times over the past couple of quarters. And there's a lot of moving pieces in both the macro environment and kind of how your customers are acting. How can we give investors confidence that this is the last cut, that we're not going to be running into new types of optimization on a go-forward basis and further taking down our forecast for the fiscal year?
spk02: Well, there is no change in our forecast methodology. We forecast looking literally at consumption trends on a daily basis, from four weeks prior to the earnings through yesterday. And what I would say was a little unique this past quarter is we literally saw four weeks in April where there was no week-over-week growth, per se, or nothing material. And we do think that was driven a lot by some of these customers. That's what had happened. We're seeing some of these big optimizations on storage retention policies. But, you know, in a consumption model, customers have the ability to dial it back, and they can increase it as well, too, as they get more confidence in their business. And I can only guide based upon the data we have available to us.
spk04: Got it. Thank you.
spk14: Thank you, Keith. Our next question is from Alex Zukin with Wolfe Research. Your line is now open.
spk12: Hey, guys, thanks for the question. So maybe one financial one and then a technical one. If we look at the balance of the growth headwinds from optimization versus rationalization, meaning how much people are doing less of versus, you know, still being tight with the purse strings to do more with, kind of how does that balance look? How has it changed over the course of the last six to nine months? And then maybe just from a technical perspective, what do you get with Neeva? Why is it important? What does it unlock for your customer base, generally?
spk02: So in terms of what customers are doing, actually, the number of jobs, the number of queries, actually grew 57% year over year in the quarter. It's outpacing our revenue. The queries are just running more efficiently. And that is because of some of the optimizations; if you reduce the amount of storage you're running queries on, they run faster. It's also the impact you're getting right now of the full Graviton2 benefit this year versus only part of last year. So the number of jobs is actually outpacing revenue, and we're just becoming so much more efficient for our customers. And on Neeva, Christian is here.
spk06: Yes, Christian here. The broad vision that we communicated to all of you over the last several years is that Snowflake is on a mission to extend its capabilities so we can bring computation to happen close to the data. It has evolved us into an application platform, and a core use case for applications is not only search and search-enabled experiences, but, with the advent of generative AI, the notion of conversational experiences. And the folks from Neeva are the ones that will help us accelerate the efforts around Snowflake as a platform for search and conversational experiences. But most importantly, within the security perimeter of Snowflake, with the customer's data, so that they can leverage all this new innovation and technology, but with the safety of the privacy and security of the data.
spk14: Understood. Our next question is from Raimo Lenschow with Barclays. Your line is now open.
spk01: Yeah, thank you. Mike, I hope you feel better soon. A quick question: last quarter we talked about the newer cohorts kind of expanding at a slightly lower pace compared to the more established ones, the older ones. Have you seen any change in momentum there? Or, if you think about it, last quarter we had slower expansion from the newer ones, and now this quarter we have more optimization from the older ones. Are those the two things, or are there other kinds of factors at work there?
spk02: No, good question. The newer ones are growing faster. The older ones obviously are the larger dollars, so when they do optimizations, that has a bigger impact. And it's interesting, too: the net revenue retention for growth within Azure, those customers are materially above where the overall company is, and that's because we're relatively new to that. So the Azure cloud is really starting to take off for us as well.
spk01: Okay. And then one, maybe to help you with your voice, Frank. But Frank, if you think about the changes in policy in terms of storage retention and stuff like that, I mean, there was a reason why people stored the data for a certain number of years, et cetera. Do you think that what you're seeing now is more temporary, and as we come out of this, people will have a different approach again? Or do you think that's a permanent move that's happening? Thank you.
spk08: I don't think it's permanent. Look, like I said, the CFO is in the business. They're given very direct guidance in terms of, here's where you need to be. Then the operating teams are starting to look at, okay, how do we implement this? You know, sometimes the low-hanging fruit is, you know, we'll just cut the data back. You know, the processes might actually not run as well, okay, so there's actually a cost. But you know what, the cost concern is prevailing at the moment because of the general sentiment that we're in. You know, in 2020 and 2021, it was growth at all costs, and, you know, the mentality was, you know, let it rip. Now we're in the complete inverse of that situation. Now it's about, you know, strong certainty and predictability on cost, and so on. I don't think that will last. We're just on the other side of the spectrum right now. And we will revert, you know, to the mean at some point here.
spk01: Okay, makes sense. Thank you.
spk14: Our next question is from Brad Zelnick with Deutsche Bank. Your line is now open.
spk09: Great. Thanks so much for the question. Mike, I know in a consumption model, obviously, it's difficult to predict the number of new workloads and transaction volumes; a lot of that we know is tied to macro. I just wanted to come back to the optimization topic. You know, you talked about the three different types of optimization. Is there any way you can compare your total customer portfolio to the most optimized customer that you have, just to get a sense of maybe what the, you know, what the downside is if everyone were as optimized as your most optimized customer?
spk02: That would be so hard to do. I don't have that data. Each customer is different.
spk09: Can I ask you a question?
spk11: Yeah.
spk09: Can I ask you a question? Oh, please go ahead.
spk06: Sorry, I was going to add that in certain instances, some of these optimizations in the third category that Mike described, and what Frank was alluding to, are changing how the business thinks about their needs. So when you make the decision to reevaluate a storage policy, there's a business impact that only customers can assess. So it's difficult for us to estimate that type of decision.
spk09: Thank you, Christian. It's helpful. Mike, a question I know you do have the answer to. Since you forecast the trends every week, any commentary on how May looks relative to April?
spk02: That's reflected in the guide that I gave you, the $2.6 billion for the year. I would say that there were a couple of periods in May where it was strong, but it's kind of, it's okay, but it's not where we want it to be. But that's reflected in the guide now.
spk09: Cool. Thank you so much for taking the questions, guys.
spk14: Our next question is from Karl Keirstead with UBS. Your line is now open.
spk00: Okay, thanks. Mike, if I could just build on Brad's line of questioning, the spirit of it is what assumptions you're embedding in your second-half guidance. Are you essentially reflecting the April-May environment you saw and straight-lining it, or are you taking a little bit more of a conservative approach and sort of haircutting that, assuming that it, or maybe the financial services vertical, gets a little bit weaker? That's question number one. Question number two, maybe this is best suited for Frank. Frank, Mike mentioned in his comments that sales productivity is not where Snowflake wanted it to be. Could you elaborate on that? Because that sounds like some of the pressure may not be entirely macro but might be sales execution. So I'd love to hear a little bit about whether I interpreted that correctly and the steps you're taking maybe to turn it around. Thank you.
spk02: So, sorry, Karl. We are expecting that there will be week-over-week growth on average with our customers that will compound, but it's at a much lower pace than it was prior, and it's more what we've been seeing in the last four weeks that we're expecting in there. I'm not expecting a straight line from where we are today to the end of the year.
spk08: On the sales productivity side, I do think that's very much a macro thing. There comes a point where you can't push any harder, and we have applied the resources, but we're not converting on the resources in a way that we think is optimal. So is there an execution aspect? There always is. I mean, it's just day-to-day sales management. But in all the years of doing this kind of work, I felt like I've always sort of under-applied the resources; in hindsight, I always thought I could have done more. This is definitely a situation where I feel like we have applied tremendous amounts of resources. We've been very, very successful at it, but there comes a point where, okay, we need to become more selective, more prioritized, on driving the performance. So I definitely think it's a macro thing. I mean, the sentiment out there is of a sort that you just can't push it any harder than up to a certain point.
spk04: Okay, thank you both.
spk14: Our next question is from Patrick Walravens with JMP. Your line is now open.
spk02: Oh, we can't hear you, Pat.
spk10: If I remember right, Blue Yonder is the old JDA, and so...
spk09: Yep, anything about why that's so interesting would be great.
spk08: You know, look, I've had a long-term fascination with supply chain management because supply chain management has never really been platformed in terms of software. It's an email-and-spreadsheet operation. It's incredibly inefficient, and it's an incredibly high-volume opportunity. And the reason that it couldn't be platformed is, first of all, each supply chain is different, so it's very hard to have a standard solution for something that is so variable. But secondly, it's the data problem. If you can't establish visibility across all the entities that make up the supply chain, you stand no chance of solving that problem. So the reason that I find it so interesting for Snowflake is that, look, all the entities in the supply chain will become Snowflake accounts, right? Because that's the way everybody will have visibility to everybody else. And we have a real fighting chance of solving it. Secondly, the processes that run in supply chain management are extremely computationally intensive, and they run in very, very high volume. And of course, you know, Snowflake is ideally suited for taking on those kinds of workloads. So I really think that supply chain management will be the most networked segment of all the industries that we're operating in. Today, the most networked segment that we're running in is financial services, you know, by far. But I think it will be overtaken by manufacturing and retail in the fullness of time, because there's absolutely no penetration right there. These are unsolved problems, you know, in almost the entire history of computing. That's how serious it is. So it's a fantastic historical opportunity for the technology to address.
spk09: Great, thank you.
spk14: Our next question is from Kirk Materne with Evercore ISI. Your line is now open. Yeah, thanks very much. Frank, with sort of the explosion in questions around AI over the last six months, do you think that buyers or executives are tying the opportunities with AI to the data yet? Meaning, I know conceptually they might get that, but are any of your conversations with customers sort of, you know, starting to percolate because of AI and the need to get your data sorted out to take advantage of that? Or are most people still sort of in the discovery phase on that front? And then, Mike, can you just talk about whether Neeva impacts the op margin guidance for the full year at all? I was just kind of curious. You mentioned savings, but the margins are sort of flat year over year, so I was just curious if that had any impact. Thanks, guys.
spk08: Yes, it's Frank. You know, obviously, customers make the connection between data and the ability to take advantage of the large language models and, you know, the natural language interface and all that kind of stuff. And it's already happening; with the services that are available today on Snowflake, and that are also available in the AI space, you can already rig things together and make some interesting progress. But the thing is, you know, you need to have highly curated, highly optimized data, and that is what we do at Snowflake, to really power these models. You know, you cannot just indiscriminately let these things loose on data that people don't understand in terms of its quality and its definitions, lineage, and all these kinds of things. So I think we are in a really great place. And as I said in the prepared remarks, data has gravitational pull. So, you know, we will attract tremendous demand for these types of workloads. And, you know, our strategy is to enable that to the maximum extent possible.
spk02: And then with regard to Neeva, Kirk, that's fully baked into the guidance. They have a number of, well, actually all of their engineers are very senior engineers, and they're all based in the US. These are very expensive people.
spk14: Our next question is from Brent Thill with Jefferies. Your line is now open.
spk10: Thanks. Frank, this concept of Snowflake for everyone, and having a simple chat-like, you know, GPT UI in front of the Snowflake data, you know, bringing it to the mass market. I mean, how long do you think this takes to where you start to see that? Or is it, we have you deployed internally, but I have to go to the one person that's the power user? When do you think we can ultimately start seeing that on everyone's desktop?
spk08: Well, I think that, you know, the more simplistic cases, and that might not be the right characterization, but, you know, for example, running these things on top of, for example, Salesforce data in Snowflake, which is a very common thing and something that we're already doing internally, that's going to be available in the second half all over the place. And people will like it. I like it. I mean, I much prefer it over using dashboards and things like that because it just lets me ask questions. But they're also relatively simplistic, you know, questions, and it gets harder when you start asking much, much harder questions. That's when you start finding the limits of these kinds of technologies. So I think we're still sort of in the fun-and-games, you know, stage of the development of this technology, and, you know, with the content generation side of this technology. It's fascinating and captivating for people, but asking really hard analytical questions that take people weeks and weeks or even months to figure out, you know, it will take some work for software to do that in a matter of seconds, to, you know, be productive that way. So, you know, we're sort of at the top of the hype cycle. You know, the real work really starts now.
spk10: You know, and then Mike, you mentioned you're not, effectively, it doesn't sound like, bringing on a lot of new capacity. There's still a hundred and eighty-three job openings on your website. So I guess what you're saying is you're freezing quota-carrying rep onboarding in the interim until you see that capacity needed. Are you still bringing people on? How are you thinking about this?
spk02: In the sales organization, we're only doing backfills right now, and we will look at performance management and upgrading people, and we could reallocate heads from one region to another region based on performance. But no, no net new hires. Sorry.
spk09: Great, hope you feel better. Sorry. Yes, sorry.
spk14: Our next question is from Gregg Moskowitz. Your line is now open.
spk05: All right. Thank you for taking the questions. You mentioned the change in data retention as a more prevalent form of optimization recently. What about the refresh rate? Are you seeing customers pull back on the frequency with which they update the data?
spk06: No, Christian here. We have not seen changes there. If anything, because of our cost model, the economics are fairly similar whether people are updating more versus less frequently, and we don't see changes in the patterns.
spk05: All right. That's helpful. Thanks, Christian. And then just a follow-up on Neeva, I guess, either for you or for Frank. So I think of the technology as fairly horizontal in terms of potential appeal. I'm just wondering if you think this can be an avenue to help land new enterprise customers going forward. And then secondly, how much of a value-add do you think this can truly provide to the installed base? Thanks.
spk08: It's Frank, I'll go first. We view search and chat as really a complete evolution of our relationship with data and how we interact with it. I think most of us remember when search first became available, how that just dramatically changed our relationship with data. I'm personally a search junkie. I can't leave it alone. I find it incredibly empowering. But the problem with search has been that it matches on strings. It has zero context. It's not stateful. And now we have the technology to make search incredibly powerful, also to the point that when it can't find it, it can actually generate the code to answer the questions that are posed in search. So this is incredibly important to basically what we said from the beginning: Snowflake is about mobilizing the world's data, and this is how we're going to do it. I mean, search and chat are sort of morphing into a single natural language interface. But the other thing I would caution you: this is not all about natural language interfaces. A lot of the intelligence that we're talking about is going to be manifested through the interfaces, not just through natural language.
spk05: Okay, thank you.
spk14: Thank you, Gregg. Our next question is from Brad Reback with Stifel. Your line is now open. Great, thanks very much. Mike, I hate to pose this to you, but you're probably the best one to answer it. Beyond the week-to-week usage patterns in the installed base, are there any other operational data metrics that you're looking at to give you confidence on when NRR will bottom?
spk02: You know, obviously that's not the only thing I look at. I look at pipeline generation, weighted pipeline. I'm typically looking out three to four quarters. I sit in on the sales call every Monday. We're spending a lot of time with reps these days on what is going on within their accounts. But the most important thing is consumption patterns; today, they are the biggest indicator of the future. And also looking at new products that may come out; it's hard to forecast anything for them, but that gives us somewhat of a level of confidence. We have some big announcements that are going GA soon, Streamlit as one of them. We talked about Applica in private preview. But Streamlit, we think, will be meaningful, and we're really pleased with what we're seeing in Snowpark daily credit consumption right now.
spk14: That's great. Thanks very much. Our next question is from Tyler Radke with Citi. Your line is now open. Thanks for taking the question. I'll pose this to Frank, to give Mike a break there, but just on Microsoft. So obviously they're hosting their Build conference this week with a ton of new product announcements, including in data and analytics. But I wanted to ask you, you know, more on the partnership front. I think you commented on just seeing some better traction there. I think they've evolved their partner program, including adding you as a tier one partner. So could you just talk about kind of the status of that relationship, how you're fitting in given some of these announcements like Fabric, which is kind of unifying Microsoft's own products, but just the status quo in that relationship and, you know, the opportunity with this new partnership.
spk08: Yeah, you know, the Microsoft relationship has been growing faster than the other two platforms that we support. You know, it's been very clear from the top of Microsoft that they're viewing Azure as a platform, you know, not as a sort of single, integrated, proprietary Microsoft stack. And, you know, they've said over and over that, you know, we're about choice, we're about innovation. And yes, we have been competing with Microsoft from day one, and, you know, we've been very successful in that regard for a whole bunch of different reasons. But, you know, people keep on coming, and we expect that. And I think that's sort of a net benefit, you know, for the world at large, that they get better and better products and they get more choice. The good news is that I think the relationship is relatively mature, meaning that, you know, when there is friction or people are not following the rules, we have good, established processes for addressing and resolving that. And that's incredibly important, right? We sort of get out of that juvenile state where things are dysfunctional at the field level. And I have no reason to believe that it will not continue, you know, in that manner. So I think, you know, Azure will continue to grow, and grow faster than the other platforms.
spk14: Great. And on Snowpark, it sounds like you're pleased with the consumption this quarter. Could you just give us a sense of expectations on the revenue ramp there? And what are the big use cases you're seeing today? Is it Hadoop migrations, data engineering? Just give us a sense of how you're expecting that to ramp up and what are the main use cases driving that?
spk08: So here's the important thing to understand about Snowpark. Snowpark is the programmability platform for Snowflake. Originally, as you know, Snowflake was conceived with SQL interfaces, and that was the mode through which you would address the platform. So this has really sort of opened up a whole host of modalities, if you will, onto the platform. Basically, you know, our posture is, look, if it reads from or writes to Snowflake, you know, we want to own these processes, and Snowpark is the platform to achieve that. Now, the supply chain, if you will, of how the data comes into Snowflake is through data engineering processes. You know, often these are Spark workloads and processes. We think they ought to run on Snowpark. They're going to be cheaper, faster, operationally simpler, and fully governed, right? So we think if you are a Snowflake customer and you're not running these processes on Snowpark, you're just missing out on all those four dimensions that I just listed. On the consumption end, it's the same thing. If you're doing analytics, if you're doing data science, if you're doing machine learning, if you're doing AI, you know, if it reads from and writes back to Snowflake, we think that's Snowpark. And, you know, we have taken a very, you know, emphatic posture on this. We're campaigning Snowpark very, very hard around the world. The interest is tremendously high. You know, as I said in the prepared remarks, we went from 20% in one quarter to 30% of our customers using it on at least a weekly basis. We think that's going to go to 100%. I think Snowpark will become extremely prevalent, you know, around the use of Snowflake. Now, beyond that, you know, there's a whole wide world, you know, that we're obviously also very interested in. We're going to start at home and own everything that is there that we can own.
spk14: Thank you. Our next question is from Brent Bracelin with Piper Sandler. Your line is now open. Good afternoon, Frank. Maybe for you: I totally get the current cost concerns and optimization efforts underway. I'd be more curious to hear what you think could get us out of the current slowdown. Are there products or workloads that you would flag as the key ones to watch that drive a reacceleration of the business? Just thinking through what's in your control, or do you think we have to wait for the macro to improve? Thanks.
spk08: Well, definitely the number one issue is sentiment out there. Just the lack of visibility, the anxiety; you know, watching CNBC all day doesn't give you any hope. That's absolutely number one, because what we're seeing is that, you know, when we're dealing with CTOs and chief data officers, these people are chomping at the bit, but they are now literally getting stopped, as I said earlier, you know, by the CFO. I mean, the CFO is in the business saying, well, I guess that's all good and well, but here's how much you're going to spend. You know, you're not going to get a new contract. You're going to live within the confines of the contract that you have. So they're really artificially constraining the demand because of the general anxiety that exists in the economy. So that really needs to start lifting. And, you know, that will happen. These things run their course. You know, we've been through these episodes, you know, before. So I think that's really the requirement. There's plenty of demand out there. Absolutely. And, you know, with AI right now, I mean, it's going to drive a whole other vector in terms of workload development that is going to be hard to stop, CFOs or no CFOs.
spk14: Very helpful there. And then, Christian, I wanted to follow up on Neeva. Streamlit, I totally get that acquisition. Neeva is a little harder for me to fully understand. So as you look at Neeva and the tech stack, what was most interesting? Was it the team? Is there some sort of differentiated search engine under the hood? Is it their large language model expertise? Why Neeva?
spk06: Yeah, it's a great question. I think it's the combination of traditional search technology with LLM technology. I think most of us have seen numerous demos of people that take an LLM and in a couple of days or hours produce something that looks good. But then there are problems with how precise those searches are and how reliable those results are. What the Neeva team did extremely well was to combine LLM and generative AI type technology with traditional search technology to be able to do attribution of results. And that's very interesting in an enterprise setting, where you want more precise answers. That combination was very appealing. And then, of course, it is a world-class team, and the combination of those two was appealing to us.
spk14: Thank you. Our next question is from Derek Wood with TD Cowen. Your line is now open. Great, thanks. I wanted to ask about the competitive and pricing environment out there. I guess on the competitive side, have you guys seen any change in win rates or workload shifts to different platforms? And then when it comes to pricing, you talked about customers focusing a lot on cost savings. How is this translating into your ability to hold kind of unit pricing, especially on renewals?
spk08: It's Frank. I'll let Mike weigh in once he stops coughing. But, you know, the thing about pricing is physics are physics. A read is a read, a write is a write. And there's economics; it costs a certain amount of money, right? And there's just not that much room, other than playing games or temporarily sponsoring or subsidizing different parts of a business, to really get a sustained pricing edge on one player or another. We're all converging to very, very similar economics. Where you see huge differences is in the total cost of ownership. And that is not the cost of compute and storage. That is, what is the cost to run that technology? And this is where, you know, there's a huge advantage. And, you know, our customers know that. It's just reduced skill sets, far fewer people, not having to touch the complexity of the underlying platforms. I mean, on and on and on. I mean, we're more descendants of Apple and Tesla than descendants of Hadoop, like some people are in the marketplace, right? So we have really abstracted away the complexity, and that's what generates these TCO advantages. But on the raw cost of computing and storage, there's not that much opportunity to be had.
spk06: I want to add something to highlight what Frank mentioned in his Snowpark answer, which is, relative to competitive platforms, Spark and PySpark, we're seeing Snowpark delivering not only better performance but better price performance. So, interestingly enough, we see customers engaging us technically and wanting to migrate because of the better economics and the competitive dynamics.
spk14: Great. If I could squeeze one more in, just in terms of LLMs: you guys are obviously sitting on a lot of data to be mined and used for training models. Do you guys envision kind of building up GPU clusters and offering training and inference on your platform? Or do you think that's really the place for hyperscalers to be doing that?
spk06: We're doing all of it. We alluded in the prepared remarks to Applica, which is a multi-modal collection of models being built at Snowflake that requires GPUs. So we're doing our part, but we're also working on, and we'll show more at our conference, how we surface GPUs. So, all of the above; it's an important component of this GenAI wave of innovation.
spk14: Okay, thanks. Mike, feel better. Thanks. I'm so sorry. Thank you, Derek. Our next question is from Sterling. Your line is now open.
spk11: Hi, thanks. We were having a problem; sorry about that, Mike. So, just wondering: you've called out financial services as your largest vertical. Wondering how much of an impact that vertical had on the consumption patterns that you pointed out post the Easter holiday.
spk02: Actually, the financial services vertical is doing fine. It was very strong for us. It's still 23% of our revenue and growing quite fast. It was some of the other areas, with some of our bigger customers outside of financial services.
spk11: All right, understood. Thank you.
spk14: Our next question is from...
spk02: Operator, we're having a hard time hearing you. Oh, now we hear you. Okay.
spk14: No, the operator is fading. I would agree. Appreciate you sneaking me in. Just going back to the revised guide: it suggests growth falls below 30%, but you did mention continued confidence in the longer-term $10 billion target. So if we could just spend some time on what you're hearing from customers that drives confidence that what you're seeing is temporary, which suggests growth bounces back. And a second part on the bookings commentary: it sounded like North America large enterprise is the area that's standing out favorably. I just want to make sure we have the right context there, and if there's anything else you can add around what's driving that, it's appreciated. Thank you.
spk02: What I would say is we have a lot of customers where we have only moved a fraction of their data, and we know they have multi-year plans to go on Snowflake. That's what gives us the confidence, as well as the pipeline of deals. And I'm not just talking pipeline now; there are deals for next year that I know have long sales cycles with these big customers. That's what gives us the confidence, on top of a lot of the new products we have coming out over the next couple of years.
spk14: Thanks for the callbacks. Thanks. That will be the last question. Thank you for your time and your participation. That concludes the conference call. You may now disconnect your lines.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
