This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Nebius Group N.V.
8/7/2025
Welcome to Nebius Group's second quarter 2025 earnings conference call. Joining me today are Arkady Volozh, founder and CEO, and our broader management team. Our remarks today will include forward-looking statements, which are based on assumptions as of today. Actual results may differ materially as a result of various factors, including those set forth in today's earnings release and in our annual report on Form 20-F filed with the SEC. We undertake no obligation to update any forward-looking statements. During this call, we will present both GAAP and certain non-GAAP financial measures. A reconciliation of GAAP to non-GAAP measures is included in today's earnings press release. The earnings press release, shareholder letter, and accompanying investor presentation are available on our website at nebius.com/investor-hub. And now I'd like to turn the call over to Arkady.
Thanks, Neil. And thank you to everyone for joining the call today. I'm pleased to say that we had an excellent quarter. We more than doubled our revenue for the whole group from Q1. And this quarter, we also became EBITDA positive in our core AI infrastructure business ahead of our previous projections. We could have grown faster, but we were oversold on all of our supply of previous-generation Hoppers, and we decided to wait for the new generation of GPUs to come. And finally, the new Blackwells are coming to the market en masse. And in parallel, we are dramatically increasing our data center capacity. That's why we expect to significantly increase our sales by the end of this year. And that's why we are increasing our ARR guidance for the year end from the previous $750 million to $1 billion to a new guidance, which is now $900 million to $1.1 billion. More color on the capacity front, and I see this as one of the most important updates of this call. We are aggressively ramping up. By the end of this year, we expect to have secured 220 megawatts of connected power that is either active or ready for GPU deployment. This expansion includes our data centers in New Jersey and Finland. In addition, we have nearly closed on two substantial new greenfield sites in the United States. And overall, we're in the process of securing more than one gigawatt of power by the end of 2026 to capture industry growth next year. In addition, we made big enhancements to our software cloud platform, obviously, to support our expanding capacity and to meet the demand of those large scaling clusters. Also, we continue to significantly expand our customer base. We started to gain real traction on the enterprise side, adding large global technology customers such as Cloudflare, Prosus, and Shopify. And we still remain a leading neocloud provider for so-called AI-native tech startups. We have added customers like HeyGen, Lightning AI, Photoroom, and many, many others. On the financing front, as you already know, we are fortunate to have multiple levers to finance our ambitious growth. We have raised over $4 billion in capital so far. We have a strong balance sheet, as you can see. And we have access to potentially billions of dollars more thanks to our non-core businesses and other equity stakes such as Avride, ClickHouse, and Toloka. In short, this is an exciting time for Nebius. We are in the midst of a once-in-a-generation opportunity. That's what we believe in. The demand for AI compute is strong and will just get stronger. We are rapidly increasing our capacity to pave the way for accelerated growth in 2026 and beyond. Well, and with that, let me introduce our new Chief Financial Officer, Dado Alonso. Dado, welcome again, and the floor is yours.
Thank you, Arkady. I'm really excited to be joining Nebius. I've long believed that AI will fundamentally transform our world, and Nebius is well positioned to make that happen. Of course, I'm also looking forward to getting to know our investors and analysts over the coming months. While the details of our Q2 financial performance can be found in our shareholder letter, I'd like to highlight a few key items and then conclude with guidance. We reported $105.1 million in revenue, up 625% year over year and up 106% quarter over quarter, driven by strength in our core business and solid execution from our team. Our AI cloud infrastructure revenue increased more than nine times year over year, driven by strong customer demand for our GPUs and near-peak utilization of our platform. Even as we achieve hypergrowth, we continue to operate with discipline. This focus allowed us to achieve positive adjusted EBITDA in our core business ahead of our expectations. Below the operating loss line, we recorded a gain from the revaluation of equity securities related to our equity investments. We also reported a gain from discontinued operations. These two non-operating items made us net income positive for the quarter. It is important to note that we view these gains as one-time in nature. Turning to guidance, we see very strong momentum in our business, and demand for AI compute remains exceptionally high. Given our plans to further scale our platform this year, we are updating our full year outlook. For annualized run rate revenue, as Arkady already mentioned, we are raising guidance from $750 million to $1 billion to $900 million to $1.1 billion. This is based on closed contracts for existing and future capacity, as well as sales we anticipate for the rest of the year. For our core business revenue, we are maintaining our guidance of $400 to $600 million. Let me share a few points. We continue to experience strong demand and are building capacity to take advantage of the large opportunity in front of us. Of the 220 megawatts of connected power we expect to have at the end of the year, we will have 100 megawatts of active power. And as we are building out our data center capacity, most of our GPU installations will take place in Q4. So we expect our annualized run rate revenue and revenue to be back-end weighted. For group revenue, we are keeping the projections that we already provided, that is, group revenue of $450 to $630 million. This excludes the 2025 revenue guidance of $50 to $70 million we previously gave for Toloka. As we announced, effective from Q2 we have deconsolidated Toloka from the group. Turning to adjusted EBITDA, as we previously announced, we expect to be slightly positive by the end of the year at the group level, but we will still be negative for the full year. Finally, we are maintaining our capex guidance of around $2 billion in 2025. So in closing, we are experiencing hypergrowth, with demand to support continued strong results. We are investing in capacity to capture the large and growing opportunity in front of us, and we are positioning the company to become a leader in AI cloud infrastructure. Look, I truly believe the future of Nebius is incredibly bright. We're not just well positioned. We have the resources, the expertise, and most importantly, the team to lead and win. Now, let me turn the call over to Neil for Q&A.
Great. Thank you, Dado. We've started to collect questions from the online platform, and we'll give it a minute just to consolidate. Great. All right. So our first question comes from our analyst from Goldman Sachs, Alex Duval. And maybe I'll give this to Mark. Mark, can you maybe talk about the overall demand environment, and what does demand look like as we're moving into the second half of this year?
Yes. And thank you, Alex, for the question. The demand environment in the second quarter, as you can tell from our results, was very strong. As we brought on more capacity, we sold through it. And by the end of the quarter, we were at peak utilization. There's a nice trend that we're actually starting to witness. As we bring on larger clusters, we are able to bring on new large customers who want to purchase greater and greater capacity. This allows us to expand and diversify our customer base and has been a clear signal there is growing opportunity in the market. This also suggests strong demand to support ramping up our capacity. If we had more capacity in the second quarter, we probably would have sold more as well. At the same time, we were able to improve the maturity of our platform, which has contributed nicely to increasing our competitive win rate, all of which is continuing on into this quarter. Great.
Thanks, Mark. And that was Mark, our new Chief Revenue Officer. Question on EBITDA. Dado, maybe you can take this one. It's good to see positive adjusted EBITDA for the AI cloud business coming in ahead of expectations. How should we think about adjusted EBITDA for the core business and for the whole group going forward for the remainder of this year?
Well, look, we're very pleased to report that our core business reached adjusted EBITDA profitability this quarter, ahead of our initial guidance. And looking ahead, we expect the core business to remain positive throughout the rest of the year. At the group level, we anticipate turning adjusted EBITDA positive by the end of the year. However, for the full year, it will remain negative. That said, we expect group adjusted EBITDA to be positive starting next year.
Great. Thank you, Dado. Dado, maybe we'll stick with you. Analyst Nehal Chokshi from Northland is asking about ARR. So really, as we think about ARR for the year, what are the dynamics around ARR? And can you give any update for ARR this quarter?
Sure. Thanks, Nehal. The reality is that we showed strong momentum in Q2, with annualized run rate revenue growing from $249 million in March to $430 million in June. While we are not providing monthly ARR updates, I can say that this positive trajectory has continued into July. Looking ahead to our increased annualized run rate revenue guidance, a significant portion of it is already under contract, which gives us strong visibility. We also see continued strong demand in the market. And as we scale up capacity, we are able to sell it quickly. With additional capacity coming online later this year, we are confident we're on track to deliver on the revised ARR guidance.
Great. Thanks, Dado. Dado, maybe staying with you, from online. It looks like our prior guidance for ARR was $750 million to $1 billion, with $400 to $600 million of core business revenue. We're now increasing the ARR to $900 million to $1.1 billion, but there's no change to the revenue guidance. Can you explain why this is?
Yes, of course. The increase in our ARR guidance reflects the strong demand we are seeing and the expected delivery of additional GPU capacity later this year, particularly the Blackwells. Because much of this capacity will come online by the end of the year, the impact will show up more in ARR than in in-year revenue. That timing dynamic is why we are holding our 2025 revenue guidance steady. That said, this late-year ramp will create a strong foundation heading into 2026 and will support meaningful revenue acceleration next year.
Great. Thanks, Dado. We have a question from Alex Platt, analyst from DA Davidson. He's really asking about the 1 gigawatt. So if we're getting to 1 gigawatt of contracted power by the end of '26, how should we think about revenue for next year? How should we also think about the guidance we gave last quarter for the midterm of getting to mid-single-digit billions of revenue over the next few years? Maybe, Mark, do you want to take this?
Certainly. Thank you, Alex, for the question. It's too early for us to provide '26 guidance, and we'll be returning to that question later this year. But for now, we do want to reaffirm our midterm outlook, as we are making very good progress towards our goals. As we said in our Q1 earnings call, our base case calls for several billion dollars of revenue in the midterm, which means in the next few years. Our base case also assumes that we grow our capacity from our '25 levels to support this type of revenue goal. We also said this guidance does not factor in a large deal from, say, a frontier AI lab or a hyperscaler. Those transactions would be considered incremental to this guidance. I hope everybody's gathering that our ambition is to grow much larger and much faster, and we are laying that foundation with the one gigawatt of capacity that we're deploying.
Great. Thanks, Mark. The next question is around tariffs. The US is now imposing tariffs on most nations. How does this impact your business and margins? Tom, do you want to take this?
Yeah, sure. I'm happy to. Listen, I think the question of tariffs is obviously something that we're following closely. I would say that for now, it's a bit early to say anything definitive, including based on the latest comments we saw overnight. We're still looking into these. But I think the key thing is whatever is determined, obviously this is something that affects all players in our market. While it's possible we could potentially see some short-term fluctuations, we're confident that the market will be able to balance things out going forward. As we see more, we'll obviously update.
Thank you, Tom. All right. We get this question quite a bit. What is your return on capex? Dado, maybe you can help. Certainly.
Look, when we price our GPUs, we aim for healthy margins on a per-hour compute basis. For the Hopper generation, we expect to break even in roughly two to three years on a gross profit level. That includes both the cost of hardware and the associated operational expenses. This estimate doesn't factor in our higher-margin software and services revenue. As those scale, we see potential to improve the return on invested capital. As for the Blackwells, we expect them to be priced at a premium. It's still early to comment on specifics at this stage.
Great. Thanks, Dado. All right. Another question from Alex Duval from Goldman. This is around capacity and timeline. Maybe I'll give this to Andrey. Andrey, can you maybe walk us through the timeline for the infrastructure build-out for this year? How do we get to the 220 megawatts this year? Maybe some incremental color for next year.
Yeah, sure, Neil. Thanks, Alex, for the question. Hello, everyone. So we are ramping up our capacity to accelerate our growth for next year and after. First of all, we are growing the number of regions where we are present. In the second half of 2025, we are adding the UK and Israel, plus additional capacity in New Jersey and Finland. Finland and New Jersey are our main drivers of capacity this year. In New Jersey, we currently have 200 megawatts in the ongoing construction phase. A good part of that will be available this year and the rest in the first half of 2026. In Finland, we expect to have an additional 50 megawatts in operation this year, as we discussed earlier.
Yeah. Great. And Andrey, another part of Alex's question is just any more details for '26 and some of the greenfield opportunities we talked about. And maybe just lumping that in with an online question: why greenfield versus build-to-suit?
Sure, Neil. Well, we are in advanced discussions for a couple of new greenfield sites, each one able to deliver hundreds of megawatts of power in 2026, and we hope to announce more about that soon. Regarding why greenfield versus build-to-suit or colocation options: we typically, and we've spoken about this a lot, favor greenfields because we can control every aspect of the data center, from the design to construction, to the hardware installation, deployment, and phasing. We can actually tailor the phasing according to our demand. For us, it's cheaper to build than build-to-suit, and we are not locked into long-term leases. Also, by controlling the design of the building, from how the power is piped into the building to the design and installation of our own racks and servers, we can achieve a lower total cost of ownership, probably around 20% less than the market average. Thanks, Andrey.
Maybe we can give this question to our native UK person. Tom, can you maybe shed some light on our UK and Israeli facilities? What do we see there from an opportunity perspective in those markets, and to what extent will we have local infrastructure presence to unlock that opportunity?
Yeah, no, absolutely. So I suppose given my accent, I'll start with the UK. I think the UK looks great. We think there's a really exciting opportunity there. I mean, obviously, I think everyone knows it's a massive AI market, definitely the third largest, the biggest outside of the US and China. We've been paying quite close attention to what the government's been doing, and they've been taking some quite impressive steps to stimulate growth generally in AI, including confirming £14 billion in private sector investment into AI in the region. So I think many of you probably noticed that about a month ago, we announced our intention to launch our first big facility, a GPU cluster in the UK. It's just outside of London, and we expect that to be coming on stream in roughly early Q4. We actually think we're going to be the first to deliver B300s to the UK market, which we think will be a really interesting opportunity. And just generally, on how we're looking at the commercial opportunity there: there's a vibrant market of AI-native startups and scale-ups in and around London, and there's a significant enterprise customer presence as well. What we've also seen lately is that even some of the global tech companies have been setting up regional hubs and regional R&D facilities, which we think will help drive the growth of the ecosystem. The other thing I would say is that we're looking at some specific industry opportunities and creating verticals around them, and one of the most promising that we see right now, among others, is the healthcare and life science space. We actually have a dedicated healthcare and life science team that's led out of the UK. And in fact, this is an area where we're working in partnership with Nvidia, and we'll soon be announcing some initiatives that will be helping life science startups in the sector. So the UK looks great, and we're looking forward to being part of that. Likewise, Israel: we think there's also a big opportunity there to service what we think is really growing demand in the local AI sector. As in the UK, the government has been doing a reasonable amount to develop the ecosystem and stimulate demand. And just generally, we see that Israel seems to be emerging as quite a dynamic AI hub globally. So again, we're there. We've mentioned this previously, but we'll be launching our GPU cluster there with Nvidia, with that coming on stream also in early Q4. We're just generally looking forward to being part of it, tapping into the growth of the AI ecosystem. We think there's a big opportunity for us there. Great. We'll keep you posted.
Thank you, Tom. Maybe we'll go to Dado on this question. How do you plan to finance the capacity expansion for this year and next year? It seems like you'll have to raise a significant amount of capital to achieve your expansion plans.
Sure, Neil. What we have seen is that our business model is working well. As we bring new capacity online, we are able to sell it efficiently, which reinforces our confidence to continue investing. Given the strength of the market, we see a clear opportunity to scale and expand our infrastructure footprint. We have significant cash on hand and will approach any additional capital raising opportunistically, depending of course on timing and market conditions. At the moment, our focus is on securing land and power and moving quickly to reach our one gigawatt target.
Great. Thank you, Dado. Maybe, Andrey, you can take this question. You've announced some important updates to the software stack. What's most important for your customers?
Sure, Neil. Well, our customers who train or run AI models and related workloads are generally looking for three things: speed, reliability, and flexibility slash convenience. And this quarter, we continued to execute on those things, and the improvements were also geared towards Blackwell deployment readiness. On speed, we doubled the speed of our network, and that had a direct impact on our MLPerf benchmark results. We made a great step on reliability by increasing mean time between failures. This was due to improvements in our core platform and the deployment of our auto-healing and health-checking software, which addresses potential points of failure before nodes actually fail. We also improved flexibility. We made it easy for anyone using S3 storage to migrate their data to AI workloads on our clusters. And this makes it easier for customers to come to Nebius.
Great. Thank you, Andrey. Andrey, maybe sticking with you, Nehal from Northland is asking about some of the benchmarks we've talked about this quarter. Can you maybe elaborate a little more on the MLPerf results?
Yeah, with pleasure. Thanks, Nehal. This quarter, we submitted MLPerf Training 5.0 results showing quite impressive performance for large-scale training of Llama 3.1, the big one, the 405-billion-parameter model. Basically, in our cloud, as we doubled the size of the cluster, the speed scaled linearly. The most impressive part is that our results are comparable to bare-metal benchmarks, but we accomplished this in the cloud. And for customers, this is important because it's easier, faster, and more cost effective in the end. Great.
Thanks, Andrey. We have a question about our inference-as-a-service platform. Maybe I'll ask Roma to elaborate. Roma, can you maybe talk about our inference-as-a-service platform? And also, it looks like you've transitioned to a new role, so maybe you can also elaborate on your new role and what you're working on.
Yeah, thank you, Neil. First of all, I'm always happy to talk about inference. About my transition: we now have Mark, who is focusing on scaling our go-to-market and sales, and I'm happy to spend time on new initiatives. And of course, we see more and more demand shifting to inference, as is the whole market. The strength of Nebius is that we built a full stack, so now we are developing the next layer of our offering very naturally. We do it to enable AI-centric ISVs, like product builders and enterprises that apply AI in their critical workflows. And we do it with our fully vertically integrated inference-as-a-service product. We are building an enterprise-grade platform to deploy and scale open-weight AI models like Llama, Qwen, Flux, the just-released new OpenAI models, and others. And we focus on high performance and reliability on dedicated infrastructure. Our platform runs on top of Nebius's proven, scaled infrastructure. We aim to solve the biggest pain points in production AI: unpredictable latency, GPU bottlenecks, and platforms that aren't flexible enough to build and scale on.
Great. Thanks, Roman. The next question is around some of the new large customer wins, like Shopify. Maybe, Mark, you can take this. Were these deals competitive? What are they using Nebius for? And any more color you can provide would be super helpful.
Yes. Thank you, Neil. Probably one of the important highlights that we're observing is that as we're making our way through the market, we're actually getting interesting adoption from big customers like Shopify. And I want to add another one to the discussion here: Cloudflare. I'm very excited about these customers. They are leaders in their categories. They are pushing the frontier of using AI to build and deliver great solutions. And I've had the privilege of partnering with both of them in the past. Shopify is utilizing Nebius's AI infrastructure along with Toloka's training data in order to optimize every step of the merchant journey. A very exciting opportunity for us. Likewise, Cloudflare is using Nebius to power inference at the edge, a very important part of their overall offering, as part of their popular Workers AI. Both relationships are growing, and both are scaling opportunities for us. We're also seeing similar interest from other major technology companies and leaders in their categories, reinforcing the overall opportunity in the market.
Great. Thanks, Mark. Mark, we also seem to have a question just about you. You've been at Nebius for a couple of months now. What have been some of your observations, and what is your strategy for bringing in more long-term contracts and moving the company towards the enterprise market?
I couldn't be more excited, I have to say, even more so than when I received the opportunity to join the company. This is a very exciting organization. We've got great technology, and that's because we have a world-class leading team. It's turned out, as you're hopefully hearing in today's call, that the market is massive and it's growing quickly. The opportunity for Nebius is to get more structured and methodical with our go-to-market and to continue to build out our coverage to be able to proactively pursue the market opportunity. To that end, we are building out our go-to-market leadership team, including adding a world-class VP of Sales Strategy and Operations who's actually starting this week. We're also adding general managers to lead our businesses in the Americas, Middle East, Asia Pacific, and Japan, as well as adding leadership to take on the opportunity around strategic customers and major enterprises. In tandem, we will continue to expand our overall customer-facing capacity and distribution capabilities. In the short term, we are focused on pursuing the regional markets of AI builders, targeted software vendors, and select enterprise segments in order to develop a strong understanding of the use cases that are winning, and then a deep understanding of the overall customer journey. Midterm and longer term, we intend to cover the entire global IT market with distribution and sales capacity.
Great. Thanks, Mark. We have a question from Alex, our Goldman Analyst, around Blackwell demand. Mark, as we're bringing on the Blackwells, what does the demand look like for them?
Thank you again, Alex. A very thoughtful set of questions today. Well, first of all, let me just clarify: we continued to see really strong demand for the Hoppers in Q2. As a matter of fact, whenever Hopper capacity becomes available, we're selling it very quickly. We did bring on the B200s, and we are actively selling through them as well. Pricing trends remain relatively stable for the Hoppers, even in the context of the Blackwell alternatives, which are actually coming through with a healthy premium, relatively speaking. We're also seeing interest in the Grace Blackwells that we will be deploying later this year. Great.
Thanks, Mark. Looks like we're getting a question on partnerships. Looks like you've added a number of partners in Q2 and continue to strengthen your relationship with NVIDIA. What partnerships do you think are most meaningful and how should we measure the success of these partnerships? Maybe I'll give this to Daniel.
Hey, thanks, Neil. This quarter, we made strong progress expanding our reach across the AI ecosystem through several high-impact partnerships. We launched integrations with Mistral, Baseten, and SkyPilot, all of which extend the ease of use of our AI cloud and our dedication to developers and model builders by supporting them across their workflows. We also partnered with Lightning AI and Anyscale, extending our presence across both open-source tool sets and enterprise users. These collaborations simplify how teams scale and deploy AI workloads using Nebius. And then on the infrastructure side, we expanded our AI cloud portfolio with NVIDIA AI Enterprise and became a launch NCP partner for NVIDIA DGX Cloud Lepton, further strengthening our position as a high-performance AI platform. Ultimately, we measure our success through the adoption of our partner platforms, revenue contribution, and strategic access to new user segments, all of which we've seen trending positively.
Great. Thank you, Daniel. Question on utilization. Can you discuss utilization trends in the quarter, or even by GPU family? Mark, can you take this one?
Absolutely. As we've discussed already, we are investing in and building out our infrastructure. And as we bring on more capacity, we're selling through it. And we are able to bring on bigger customers who want to get greater capacity. We're adding more capacity this and next quarter and shifting to selling against future requirements. So ideally, what we're building is a model where we can close and drive expansion of future capacity and future versions of GPUs.
Great. Thank you, Mark. Here's a question from Andrew Beale, our analyst from Arete, on getting large contracts. So some of your competitors are signing large multi-year deals with hyperscalers. What do you need to do to get one of these deals? And when can we see one of these deals? Arkady? Yeah.
As we previously said, of course, we see a lot of this demand coming from the top frontier AI labs, and we actually believe that this will increase in the future. Millions of new GPUs are coming to the market next year and beyond. In order to capture this demand, actually answering the question of what we need to do, we're doing the main thing: we're significantly increasing our capacity. And as I said at the beginning of the call, we are addressing this issue right now. Going forward, we very much hope to see those big players among our customers, because finally we have capacity at their scale. And just to remind everyone, all the projections we're making for this year and the midterm do not include these big accounts and those big deals. So if or when they come, it will all be incremental and will be a nice surprise, I would say. Great. Thank you, Arkady.
A lot of questions on Avride, including from Alex Platt and from Andrew Beale. Maybe, Arkady, you can take this. Can you provide an update on Avride, any update regarding their strategic partnerships, and then really around the potential robotaxi launch in Dallas? How is that trending?
Well, it goes without saying that we're very excited about Avride as a company and its future, taking into consideration what's going on in this industry this year. On the future of Avride's corporate structure, as we've spoken about many times before, we see a structure something similar to what we've done with Toloka. It's a good example of the type of partnership we are looking for, where a strong partner comes in to co-develop the business and we actually give up control. In the meantime, the business is performing extremely well. They continue to scale. As you know, they have two business lines, delivery and autonomous vehicles. On the first line, the robot side, Avride is expanding their coverage with existing partners. They're adding new cities, new service areas, and new restaurants with Uber Eats. They're launching on new university campuses in the project with Grubhub. They're also entering new verticals. Just recently, they signed a grocery delivery deal with the retailer H-E-B in Texas and started indoor robot operations in Japan through a partnership with Mitsui Fudosan. On the autonomous vehicle side, Avride is growing its fleet. As you know, they're partners with Hyundai, and they're expanding their road tests in Dallas. They're looking forward to launching their robotaxi service with Uber later this year, given the partnership they signed earlier. So, we believe it's a great business, and we believe that this is a source of significant value for our company, for the group. Great. Thank you, Arkady.
A few questions around our sources of funding. Last quarter, we talked about potentially tapping into our non-core businesses and equity investments to fund growth of the core business. Any updates that we can share? Tom, would you like to take this?
I'm happy to catch up on this. I think, as we've talked about, first of all I would touch briefly on two significant equity stakes. Toloka, which you saw this quarter, we were very pleased that they were able to raise growth capital in a transaction that was led by Bezos Expeditions. They're doing great things, doing a lot of work in that complex AI data task world, and their customers actually include a number of the major AI labs and others. You may have seen that this industry generally is a hot one: Scale AI, which is a comp for them, recently sold about half the company at around a $30 billion valuation. So we think that there is very significant upside to Toloka's business prospects and valuation. And what was important for us in that deal was that we retained a significant majority economic interest. So we have a lot of exposure to the upside as and when we feel it's the right time to try to tap into that. With regards to ClickHouse, you've seen ClickHouse in the news in the last quarter. We retained a minority economic interest in this business. The previous valuation was $2 billion in a transaction in 2022, but in the latest capital raise, there was a reported valuation of around $6 billion. And the way that we're thinking about that stake is that, right now, we still think there's a lot of value to be created in the business. But if there were to be a liquidity event in the coming years at a significantly higher valuation, then that's something that we would consider. And we think potentially that could be the source of really several billion dollars, but we'll obviously see how the business goes. And otherwise, yeah, as you know, we have our wholly owned autonomous vehicle business. I think Arkady has probably already touched on that. But again, they're doing really well. In the last quarter they've entered into partnerships with the likes of Uber, Grubhub, and others. Waymo is obviously a comp in that sector and has been valued at around $40-50 billion. So this is the sort of direction we hope we can go in with this business. And actually DA Davidson, our covering analyst from there, recently put out a note on Avride setting out some of the value potential, which I'd refer people to. So these are great businesses. We don't have any immediate need to do anything, though. We think that there's still a lot of value to be created in all of them, and we'll watch that closely. But we do very much keep this in mind as potential sources of capital that can help us accelerate investment into the core AI infrastructure business. Great. Let's see.
Question on Lepton, NVIDIA Lepton. How is NVIDIA Lepton impacting our business? Maybe, Roma, do you want to take this?
Yeah, thank you. Actually, from the launch of the Lepton marketplace, we have been one of the largest NVIDIA partners there, and we see that it generates quite a significant pipeline of customers who start using us via Lepton and then continue directly with us. So in general, we think that this partnership is a very good extension of all the rest of the work we do together with NVIDIA. And this is one of the efforts now to develop the ecosystem partnerships, channel partnerships, and value-added partners that we mentioned already on this call.
Great. Thank you, Roma. And maybe our last question: Europe is ramping up its AI investments. Do you expect to benefit from this? Maybe through public-private partnerships? Arkady? Yeah.
In short, the answer is yes, of course. A bit longer answer is that we're very well connected in Europe. We came from Europe. We have, and will have, even more data centers in Europe. And I'm sure that we will be, and actually already are, one of the major AI infrastructure providers in Europe. It's one of our key markets.
Great. Thank you, Arkady. All right. I think that's a wrap for today. Thank you, everyone, for joining. We appreciate everyone attending our call, and we'll be talking to you all soon. Thanks.