Zapata Computing Holdings Inc.

Q1 2024 Earnings Conference Call

5/15/2024

spk02: Good day, and welcome to the Zapata Computing Holdings, Inc. First Quarter 2024 Financial Results and Business Update Conference Call. As a reminder, this conference is being recorded. It's now my pleasure to introduce Eduardo Royas. Thank you. You may begin.
spk07: Thank you. On today's call are Christopher Savoie, Chief Executive Officer and Co-Founder, Yudong Cao, Chief Technology Officer and Co-Founder, and John Zorio, Chief Revenue Officer of Zapata AI. Earlier today, Zapata AI issued a press release announcing its first quarter 2024 results. Following prepared remarks, we will open up the call for questions. Before we begin, I would like to remind you that this call may contain forward-looking statements. While these forward-looking statements reflect Zapata AI's best current judgment, they are subject to risks and uncertainties that could cause actual results to differ materially from those implied by these forward-looking statements. These risk factors are discussed in Zapata AI's filings with the SEC and in the release issued today, which are available in the investor section of the company's website. Zapata AI undertakes no obligation to revise or update any forward-looking statements to reflect future events or circumstances, except as required by law. With that, I'd like to turn the call over to Christopher Savoie, CEO and co-founder of Zapata.
spk06: Christopher? Thank you, Eduardo, and greetings to all. We appreciate you joining our inaugural earnings call. Since this is the first earnings call of many, we will spend a few minutes setting some context and highlighting the Zapata AI story before turning to an overview of business, technical, and financial highlights from Q1 and more recent developments. Zapata AI is at the forefront of the industrial generative AI revolution, and we're extremely proud to now be trading on the NASDAQ exchange following the closing of our business combination with Andretti Acquisition Corp. in late Q1. We would like to once again thank the Andretti team for their vision, partnership, and unwavering support. We are confident that our public listing will allow us to further advance our position as a technology leader in this nascent industry while providing the resources required to scale up our business. I look forward to providing our stakeholders with updates on our success on a quarterly basis going forward. Since our founding in 2017, Zapata has been an early pioneer in generative AI, innovating new techniques before the term generative AI entered the zeitgeist as the digital transformation priority that it is now. And we have only accelerated the pace of our innovation as we continue our journey as pioneers and trailblazers today. While much of the industry is focused on large language models and related LLM use cases, we are expanding the scope of what's possible with this revolutionary technology to address mission-critical operations, analytics, and business intelligence use cases. Specifically, we are unlocking the powerful insights that can be hidden in real-time data from sensors, nodes, and other previously untapped sources across an enterprise's operating environment. This includes predicting and simulating the future more accurately and in near real time, detecting anomalies more quickly and with higher accuracy, creating virtual sensors to infer data for critical variables that would be difficult or impossible to measure directly, and generating optimization recommendations to drive better and faster decision making. And as we've demonstrated working trackside with Andretti Global at IndyCar races, we can run these decision support capabilities in challenging, extreme industrial environments on edge networks. For those unfamiliar, edge refers to processing that takes place nearer to where the data actually originates or where it's actually being generated. Zapata is also unique as one of the only pure-play public companies offering quantum and quantum-based algorithms for generative AI. These techniques deliver unique advantages, including more accurate and more expressive AI models. They also provide our customers with an on-ramp to the revolutionary potential of quantum computing as we expect the hardware to mature in coming years. But today, through these techniques, Zapata is enabling our customers to benefit from supercharged models and gen AI capabilities. We have long held that generative AI will be the first place that we see a practical quantum advantage, and we continue to believe this today. We are seeing vast opportunities for what we have coined industrial generative AI, which span many industries, including financial services, telecom, transport and logistics, government and defense, biotech and pharma, and manufacturing, including automotive, energy, chemicals, and materials. Our team has deep experience working across many of these sectors.
Throughout our history, our enterprise and government customers have included Sumitomo Mitsui Trust Bank, BBVA, BP, BASF, DARPA, and Andretti Global. And we have worked with numerous research and university partners. At a time of growing anxiety about big tech monopolizing AI, we are addressing the concerns we hear from our prospective customers every day about being forced to lock in with an AI vendor. With Orchestra, our open source platform for developing and deploying industrial generative AI, we are giving enterprises the freedom to use the best hardware and software tools from across the ecosystem for their unique challenges and use cases. We support a range of deployment options, and all of our models and custom applications can be integrated with and live in production in any customer's environment. During the first quarter, we continued to make strong progress in demonstrating the powerful breakthrough technology we have at our fingertips. To dive deeper into this, in a few minutes I will turn the call over to our CTO, Yudong Cao. But first, I want to emphasize how excited we are about our sales and business development pipeline. Based on active discussions we are having with potential customers and partners today, we are particularly excited about the opportunity set ahead of us in five key industries: pharma and biotech, financial services, insurance, telecom, and defense. To elaborate a bit, I'll start with pharma and biotech. We are confident that we have the potential to significantly reduce drug development and manufacturing timelines, thus bringing drugs to market faster, while also materially reducing development and production costs. In financial services and insurance, recent work we have delivered has demonstrated our ability to drastically reduce compute costs and runtimes for highly complex risk and compliance models, as well as dramatically speeding up classical approaches to Monte Carlo simulations. These techniques have the potential to transform core operations, freeing up resources, bringing new products to market faster, unlocking substantial savings for customers, and enabling them to make faster, more informed pricing, risk, and trading decisions. In telecom, early customer discussions indicate that there is a potential market for us to deploy the same capabilities and expertise we have delivered trackside for Andretti to anticipate, predict, and respond to network disruptions before they occur, and to help telcos optimize their network operations to prepare for high-traffic moments while streamlining how they respond to these events. Wait till you hear about the yellow flag prediction. These are billion-dollar challenges. Applying our generative AI and anomaly detection capabilities could allow communication service providers to significantly reduce these types of burdensome and costly events, which in turn could result in major OpEx reduction and improvement in operations agility and customer satisfaction. To this end, we're excited to formally announce our global strategic partnership with Tech Mahindra, one of the most highly respected global technology partners in the telecom space. This partnership is a game changer for Zapata, and it provides us access to Tech Mahindra's incredible portfolio of global telecom providers, bringing us closer to where we can do the most good with our quantum-based generative AI solutions, especially our real-time machine learning on the edge, our sensor intelligence platform, and our optimization capabilities.
During Q1, we have been in active dialogues with two leading telcos in the U.S. exploring the possibilities in this collaboration. John will speak more about our partnership with Tech Mahindra in a few minutes. Finally, defense. In this area, we have shown that our technology can build high-performance applications that provide literally mission-critical decision support capabilities in contested and unpredictable edge environments where timing is of the essence and where connectivity is typically a challenge. We cannot wait to share more as we continue to make progress on these and other opportunities. Now, over to Yudong to summarize our technical highlights from Q1. Yudong? Thank you, Christopher.
spk09: Picking up on the pharmaceutical theme, in February, Zapata AI made history by generating, for the first time anywhere, viable cancer drug candidates using quantum-enhanced generative AI, together with our partners at Insilico Medicine, the University of Toronto, and Harvard University. Not only were those drug candidates completely new to the scientific literature, they also showed superior binding affinity over molecules generated by purely classical generative AI. We believe this milestone demonstrates the potential for quantum-enhanced generative AI to aid in molecular discovery and other complex design challenges. To pursue these opportunities further, we announced in February a partnership with quantum computing provider D-Wave to develop commercial applications for quantum-enhanced generative AI in molecular discovery. In defense, we have continued to reach technical milestones in our work with DARPA and the U.S. DOD, including software delivery. With the Phase II award we were granted last year, we have been steadily improving BenchQ, our open-source tool for benchmarking quantum computing applications. This work will help the U.S. government and the quantum community to understand the resources required to unlock the utility of high-value quantum use cases across multiple hardware modalities. We're in year three of this partnership, and we're not slowing down. Finally, I'll end with a major milestone from our work with Andretti Global. For the first time, we deployed our yellow flag prediction model live in a race on our Sensor Intelligence Platform, or SIP. For those unfamiliar with auto racing, a yellow flag is flown when there's an accident, dangerous condition, or other hazard on the racetrack that may require response vehicles and personnel to intervene. Given the massive danger posed by race cars going 200-plus miles per hour past the tow trucks and around people standing on the racetrack, all of the race cars on track are forced to drastically slow down, bunch together, and hold position while the issue is addressed. The implications of the yellow flag for race strategy are significant. Every driver has to pit several times during the race to refuel and change tires. Strategically timing these pit stops around yellow flags can win or lose a race, because it can mean pitting while the field is spread out and moving at full speed, or pitting when everyone is bunched together and unable to pass under the yellow flag. Either can be advantageous depending on the scenario. Using a combination of live and extensive historical race data from sensors around the track and the cars themselves, we were able to accurately predict the likelihood that a yellow flag will be flown in the next five laps, giving Andretti race strategists a critical window of time to adjust their strategy before the yellow flag was actually flown. We have recorded this demonstration with the actual race data from the IndyCar St. Petersburg race earlier this season, and it's available on our website and our digital channels. Check it out. We look forward to continuing to evolve our capabilities and sharing additional examples of our technology in action. With that said, I'll turn it over to Chief Revenue Officer John Zorio for an update on our commercial progress.
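For readers who want a concrete picture of what such a model does, here is a minimal, hypothetical sketch of estimating a yellow-flag probability over a five-lap horizon from per-lap features using a standard gradient-boosted classifier. This is not Zapata's actual model or SIP code; the feature names and the synthetic data are illustrative assumptions only.

```python
# Hypothetical sketch of a yellow-flag probability model over a 5-lap horizon.
# Not Zapata's actual model; feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for per-lap track features aggregated from sensors:
# [speed variance, close-proximity events, local weather index, incidents so far]
X = rng.normal(size=(5000, 4))
# Synthetic label: 1 if a yellow flag was flown within the following 5 laps.
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X[:4000], y[:4000])

# "Live" lap: estimate the probability of a yellow flag in the next 5 laps.
current_lap_features = X[4000:4001]
p_yellow = model.predict_proba(current_lap_features)[0, 1]
print(f"Estimated probability of a yellow flag in the next 5 laps: {p_yellow:.2f}")
```

In practice, as described later on the call, an ensemble of such models (one lap ahead, two laps ahead, and so on) could feed a single decision-support view for race strategists.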
spk04: Thanks, Yudong. The yellow flag prediction model Yudong mentioned represents just one component of the powerful suite of generative AI and ML applications that we're developing and deploying in support of Andretti's various race teams. As another compelling example, we've deployed generative AI to create virtual sensors that infer real-time data for key race variables that would otherwise be unmeasurable, like tire slip angle during the race. We've been able to tune our models to over 99% accuracy. Other use cases include lap time prediction, fuel savings optimization, and tire degradation analytics. All of these applications run on our Sensor Intelligence Platform, or SIP, which we've deployed at the edge in our mobile race analytics command center, or RAC. The RAC has proven to be an excellent marketing and demonstration tool for our prospective customers. As they physically tour the interior, the math, the models, and the generative AI all of a sudden become real; they can see for themselves the power of real-time analytics at the edge, and they leave IndyCar races truly impressed with the possibilities for harnessing our technology for their business. We're now well into our third season with the Andretti team. As a testament to the value we continue to deliver for Andretti on and off the track, in the first quarter we signed a significant commercial expansion with Andretti worth $1 million ACV for 2024, in which the Zapata team is building an innovative database solution. This Q1 booking represents 42.8% growth in sequential bookings versus Q4 2023. This agreement significantly expands our work with Andretti's engineering, operations, and race strategy teams with the goal of delivering significant innovation and operational efficiencies across Andretti's global presence and multiple racing series and driver teams. The database solution will also help build a foundation for Andretti's expansion to future race series. In addition, we've also expanded our co-branding and marketing relationship to enhance visibility around our cutting-edge innovation in IndyCar racing, with Zapata now formally recognized as Andretti's official artificial intelligence partner. We continue to believe that our work with Andretti is critical to raising the visibility of industrial generative AI and demonstrating our capabilities in the context of real customer problems. The same challenges we're tackling with Andretti around real-time analytics and time-compressed decision-making, relying on massive amounts of streaming data in challenging environments, are directly applicable to any number of use cases across industries, from financial services to advanced manufacturing to network operations and telecom, transportation and logistics, energy, and the battlespace. To put it in perspective, upwards of one terabyte of data comes off of one IndyCar during a race, flowing from hundreds of sensors, not to mention additional streaming data sources from the track environment and IndyCar itself. In financial services, we continue our work with our valued customer, Sumitomo Mitsui Trust Bank, or SMTB. We started working with SMTB in November of last year. With SMTB, we're applying generative AI to produce synthetic financial time series data for practical purposes, including simulating a range of plausible scenarios for future market movements. These scenario simulations will enable traders and investors to be prepared to make decisions more quickly, more accurately, and with more confidence.
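As a rough illustration of what scenario simulation buys a trading or risk desk, here is a generic Monte Carlo stand-in, not SMTB's or Zapata's actual generative models: sample many plausible future return paths and summarize the downside. All parameters and data are invented for the example.

```python
# Hypothetical sketch of scenario generation for portfolio decisions: sample many
# plausible future return paths and summarize downside risk. This is a generic
# Monte Carlo stand-in, not the generative models described on the call.
import numpy as np

rng = np.random.default_rng(3)

n_scenarios, horizon_days = 10_000, 10
daily_mean, daily_vol = 0.0003, 0.012          # illustrative single-asset parameters

# Each row is one simulated 10-day path of daily returns.
returns = rng.normal(daily_mean, daily_vol, size=(n_scenarios, horizon_days))
pnl = returns.sum(axis=1)                      # simple additive P&L per scenario

var_95 = -np.percentile(pnl, 5)                # 95% value-at-risk over the horizon
print(f"Simulated 10-day 95% VaR: {var_95:.2%} of portfolio value")
```

A generative model trained on real market data would replace the simple normal sampler here with learned, more realistic scenario distributions.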
We're also helping risk managers conduct more sophisticated stress tests and supporting derivative traders to better hedge their portfolios, in addition to enabling more efficient and trustworthy derivative pricing calculations and valuation adjustments, or XVAs. We hope to share some results from this work in the coming months. As Christopher mentioned earlier, we're thrilled to formally announce our global strategic partnership with Tech Mahindra, initially focusing on the telecom vertical and their portfolio of leading telecom service providers. Throughout Q1, we've been engaged in working sessions with teams from several of the largest US-based telcos, and we're focusing in on specific use cases where we can apply our solutions. This has the potential to be a game changer for Zapata, and you can read more about it in the press release, which hit the wires yesterday. In Q1, we also made great progress in delivering a breakthrough transformation pilot for a large UK-based insurer, working in collaboration with one of Zapata's global services partners focused on the financial services space. We're excited to share more about this successful outcome in the near future and where this may lead. We continue to build our ecosystem of industry-focused partners, and we've also worked hard to deepen our relationships with current partners like One Stop Systems, or OSS. We expect to be collaborating closely with them on several opportunities in the defense space. During Q1, I was pleased to see strong growth in our pipeline for direct customer engagement, and those logos comprise a who's who of leading brands and organizations across our core verticals, from pharma, healthcare, financial services, insurance, telco, automotive, manufacturing, and defense. From a go-to-market operations standpoint, we've been pragmatically adding sales and BD capacity in North America and in our primary theaters in Japan. We continue to work to enhance the Zapata brand, to raise our profile and evolve our industry-vertical-based orientation and our value-based messaging, to produce original thought leadership and educate our ideal customer personas and buyers, to enhance our consultative go-to-market and sales motion, to tell compelling customer stories and demonstrate the value of our capabilities, and to uplevel our day-to-day prospecting activities with the objective of building awareness and executive mindshare. We're in the process of building other lead gen and referral programs to expand and cultivate our network of senior executive friends of Zapata, and we're tracking these activities and our progress systematically, and we're building the process discipline that will help us scale. Much more to come here. Let me now turn it back over to Christopher to go through Zapata's Q1 financials. Thanks, John.
spk06: Since this is our first earnings call, we are providing a little extra color where applicable to help you better understand our business. But before I get into our financial results, I wanted to say a quick welcome to Sumit Kapur, who has recently joined us and will be our CFO effective May 20th. Sumit is an experienced CFO with expertise in scaling growth tech companies. We look forward to having him handle the discussion around financials going forward. We would also like to thank our outgoing CFO, Mimi Flanagan, for her years of incredible dedication and commitment to Zapata, including supporting us through the rigorous listing process. We appreciate her continued role as a consultant to the company. So with that said, starting with a recap of the first quarter 2024 results. Q1 2024 revenues were $1.22 million, which compares to revenues of $1.5 million in Q1 2023. The period-over-period change primarily reflects a decrease of $0.5 million from the completion of certain customer contracts that occurred subsequent to Q1 2023, partially offset by an increase of $0.2 million from ongoing customer contracts entered into subsequent to March 31, 2023. As a reminder, we primarily earn revenue via annual or multi-year subscriptions to our software platform, Orchestra, which is available on a stand-ready basis, as well as through the provision of consulting services. Gross margin in Q1 2024 was 14%, flat with Q1 2023. Gross margins can be quite volatile and lumpy at these revenue levels. Operating costs during the period were $5.24 million versus $5.3 million in Q1 2023. Of note, general and administrative costs made up more than 40% of our operating costs and were up $0.74 million period over period to $2.21 million, primarily associated with costs related to the merger with Andretti Acquisition Corporation. We currently do not anticipate another significant step up in G&A costs in the near future. Putting all of this together, our GAAP operating loss was $5.08 million in Q1 2024, generally in line with the loss of $5.09 million in the year-ago quarter. Our GAAP net loss during Q1 2024 was $22.32 million and reflects the impact of $17.18 million of other non-cash expenses. Our Q1 2023 net loss was $5.07 million. As of May 10, 2024, we had 31.98 million basic shares outstanding. Before I turn to our balance sheet and cash flows, a quick reminder that we closed our business combination with Andretti Acquisition Corp on March 28, 2024. As such, our reported results reflect net cash brought in through the transaction, although there have been subsequent financing transactions, which I will touch on momentarily. On March 31, 2024, we had $7.39 million in cash and cash equivalents, including $0.14 million in restricted cash. Net cash used by operating activities was $2.15 million during the first quarter of 2024. Included in this figure is $2.55 million in cash generated by working capital. During Q1 2024, we raised a total of $6.10 million through financing activities. This includes proceeds from the issuance of additional senior secured notes prior to the closing of the business combination with Andretti, as well as funds brought in from the business combination. We have raised additional capital subsequent to the end of Q1 2024. Specifically, we brought in $2.5 million from our forward purchase agreement with Sandia Investment Management and have raised $2.9 million as of May 10th through our equity line of credit with Lincoln Park.
We plan to be judicious, flexible, and opportunistic as we fund our growth strategy going forward, while remaining disciplined on cost control. However, given the inherent lumpiness in our business and where we stand in our company's life cycle today, we will not be providing formal guidance at this time. That concludes our discussion on our financial results for the period, and I'll now offer up some closing remarks. Earlier in the call, I expressed my enthusiasm about the constructive conversations we're having with potential new partners across industries, and especially in pharma, financial services, insurance, telecom, and defense. These are all sectors where our work has demonstrated very real, tangible benefits, and we believe we will have more to say across these fronts in the upcoming quarters. We are only at the beginning of our journey as a public company, and we look forward to sharing more milestones as we grow and continue to lead the industrial generative AI revolution. Thank you for your time and attention. Operator, we're ready for questions.
spk02: Thank you. We will now be conducting a question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad. A confirmation tone will indicate that your line is in the question queue. And you may press star two if you'd like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. And our first question comes from the line of Michael Lattimore with Northland Capital Markets. Please proceed with your question.
spk05: All right, great, thanks. Yeah, congrats on your first earnings call here. Thank you. Yeah, so the Tech Mahindra relationship is very interesting. Can you talk a little bit more about the prospects you see there? You know, is the intent to supplement kind of current network and fault management capabilities, or replace them? And then, you know, how extensively could you get deployed? I mean, are you going to be in, you know, specific network elements like backbone versus CPE devices? Just a little more clarity on the opportunity there would be great.
spk06: Sure. Yeah. I can't get into, obviously, proprietary things about these folks' networks and whatnot. But in a generic sense, yeah, follow the story of the yellow flag prediction, really. And if you can imagine there where we have five different models looking a lap ahead, two laps ahead, three laps ahead, different models working in an ensemble, imagine having a model on each node of a network, either at a router or next to a base station or the like, next to the tower. And so you could conceivably there have the ability to report, say, jitter on the line or other signals that indicate a possible malfunction or incoming network traffic events, and pre-report those before you get a "20 devices down" type of a message. So that has applicability, obviously, in telco there, but you can think more broadly about other edge network types of situations, like power management, the grid, and other places where this may be applicable.
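To sketch the kind of early-warning signal Christopher is describing, here is a minimal, hypothetical example of per-node anomaly detection on link jitter, flagging drift before a "many devices down" alarm would fire. This is a generic robust z-score detector, not Zapata's Orchestra or SIP code, and the data and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of per-node early-warning anomaly detection on link jitter.
# Generic robust z-score detector; data and thresholds are illustrative only.
import numpy as np

def jitter_anomaly_score(jitter_ms, window=60, threshold=4.0):
    """Flag samples whose robust z-score against a trailing window exceeds the threshold."""
    jitter_ms = np.asarray(jitter_ms, dtype=float)
    flags = np.zeros(len(jitter_ms), dtype=bool)
    for t in range(window, len(jitter_ms)):
        ref = jitter_ms[t - window:t]
        median = np.median(ref)
        mad = np.median(np.abs(ref - median)) + 1e-9  # robust estimate of spread
        flags[t] = abs(jitter_ms[t] - median) / mad > threshold
    return flags

rng = np.random.default_rng(2)
jitter = rng.normal(2.0, 0.2, size=600)      # normal link jitter, in milliseconds
jitter[550:] += np.linspace(0, 5, 50)        # slow degradation before an outage
alerts = jitter_anomaly_score(jitter)
print("First alert at sample:", int(np.argmax(alerts)) if alerts.any() else "none")
```

In a deployment like the one described, a small model of this kind could run on or near each node and feed its alerts upstream for correlation.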
spk05: Great, great. And then maybe just, can you comment a little bit on the pipeline here? Just a little more, I don't know, quantification, if anything. You know, number of prospects, how much, how many prospects, you know, how fast the prospect count is growing. Yeah, any just more color on pipeline?
spk06: Yeah, I think I can say that it's growing significantly. And the demand, as, you know, Palantir said in their call recently, the demand for generative AI is pretty relentless. So the pipeline is growing. We won't provide guidance on that. I think what's going to be important is watching the bookings come in over the next few quarters. And hopefully, as you saw as we reported today with an increase through customer expansion in one account, you'll see new logos and new pipeline coming in, as well as expansion over the coming quarters to meet that demand.
spk05: Okay.
spk06: Great.
spk05: Thanks. Best of luck.
spk06: Thank you.
spk02: And our next question comes from the line of Rohit Kulkarni with Roth MKM. Please proceed with your question.
spk08: Hey, thanks. Good job, Christopher and team on the first earnings call. A few questions come to mind. At a big picture, maybe talk about the team, your hiring plans as you set yourself up to be a successful public company and then also on the IP and the patent portfolio. Maybe talk through what types of opportunities you think can be unlocked through a pretty robust IP portfolio that you seem to have. Maybe just talk through that. Those are the first questions, and then I have a couple of follow-ups.
spk06: Sure. First of all, yeah, I mean, you will see Sumit joining us as CFO with a fantastic background, and who actually has an applied mathematics background as well, so he understands the technology we're into in a very deep way. So we're excited about that. I think you'll see some additional executive-level hires coming over the next quarters. And then it's continued growth as we add revenues and customers in our generative AI pipeline. Obviously, to meet that demand, we'll be continuing to add high-level engineers and scientists to the already extremely talented pool of people that we have here. And we're going to continue to do that as we have globally. You know, you can't always bring everyone who has these kinds of backgrounds to North America, because they're not a dime a dozen. So, you know, we've been pretty advantaged, I think, in the fact that we can operate globally with our people mechanisms, and we have people in Europe, we have people in Japan, and in other geographies that are key for our global expansion, because the customer base here is global, Fortune 100-type companies that are also global. And you mentioned the IP. Yeah, we have a pretty significant portfolio there, as was in our general presentation. You will have seen that we rank pretty highly, especially on the quantum-enhanced AI techniques that we believe are going to give us a really differentiated advantage in the marketplace and in what we can deliver with our technology. You saw, and Yudong commented and I did a bit, about our drug discovery work that we did with Insilico Medicine, Harvard, and the University of Toronto, where we developed an actual drug candidate leveraging this quantum-enhanced AI technology, and that's really exciting. We're continuing on that work in the context of our work with D-Wave and other hardware providers as we go, and that is an advantage. So it's not just the IP, but the actual ability to bring this into commercial relevance that's really exciting for us.
spk08: Okay, great. And then just on the business side, maybe provide more color, to the extent you can, around how your conversations are going with these large customers that you're trying to sell to. So over the last six, maybe nine months, how are these large companies migrating from experiments and pilots on Gen AI into actual production projects with mission-critical applications in Gen AI? Maybe talk through that adoption curve and where do you think we are in this adoption curve right now, through your conversations and through your demos and the various different leads that you have through your business dev pipeline?
spk06: Yeah, well, thank you for that question, because I think, you know, these are indeed early days, but I think that there is some transition that we're observing in the marketplace. You know, we had the OpenAI, I guess, event horizon, if you will, where people realized, wow, generative AI has some possibility to do some really incredible things with GPT-3 and then 4, and recent releases and other folks' models. But these are language models. And I think that we moved pretty quickly, in quick order, into a kicking-the-tires kind of mode to see where could we use large language models to do things. I think there was a little bit of expectation that a large language model could possibly be a general AI that knows everything about everything, kind of omniscient. Maybe that was the expectation by some people in the market, some people in the C-suites of these large companies. But I think as things got into the brass tacks, the blocking and tackling of, okay, where can I really use this, it became obvious that, okay, there's no one language model that rules them all. Where the industry, I think, is going is, well, I think we're going to use small language models and also other smaller models that are better at things like numbers. Large language models are language models. They're good at language. They're not necessarily good at analytics and numbers. Actually, they fail grade-school-level math sometimes. So what we're finding, and it's to our advantage, is that now people want to do useful things in enterprise, in production, with this. They realize that while there is power in this generative AI revolution, you're going to need different models and several models to do the kind of stuff that you want to do, like improve your network performance if you're a telco, or reduce your costs, or get a race car to have a better strategy. These kinds of things are numerical in nature, oftentimes combined with maybe some language UI with them. But for the most part, you're going to need lots of different models. And we can provide those models. And so this shift has been advantageous in getting us into the conversation. So LLMs, language models, which we can also do, are the conversation starter. But when it gets to the brass tacks of what you need to do in enterprise, like FP&A analysis, trading strategies, different things you might do in finance, actuarial science if you're an insurance company, these kinds of things involve analytical, numerical models, which we excel at, which really opens up our market opportunity. And that's where people are kind of moving now, we're sensing, in the enterprise space.
spk08: Okay, fantastic. And one last question, just on the core value prop and the receptivity that you're getting, the pipeline you mentioned is across a variety of industries and pretty diverse, like pharma, financial services, telco, And the use cases for each of those verticals could actually be quite diverse. So is there a common thread that you feel is emerging as you are building applications and probably selling to these pretty different industries? Perhaps there is a common thread of reducing costs or creating new products or anything to that effect that you think you're seeing early signs of success or greater adoption in, or just would love to understand where is the common factor here across these diverse industries for a young company like yours?
spk06: Sure. The good news for us, it sounds very diverse, but the good news is a lot of the use cases are mathematically the same. It's kind of a geeky answer, but it's important: a lot of these are series data, time series data, and those are the kinds of data that we're attacking in a lot of these use cases. So while drug discovery and discovering a new molecule and race car strategy and insurance may not sound like the same thing at all from a use case perspective, and certainly they're not from the perspective of what the data is, the way we formulate the problem and solve it is mathematically the same: time series or series data. And maybe I can get Yudong on the call here to expound upon that a little bit. Mathematically, they're the same thing. And I'll say another thing: this is why, in these different verticals, partnerships are so critical, because we don't want to pretend to have the expertise to be the experts in the domain in every one of these verticals. That's why it's great to have a partner like Tech Mahindra that knows telco really well, already has fantastic customers like AT&T and Verizon, and knows those customers, to be able to take us into those accounts and work with those people with the domain expertise. So it's very complementary and orthogonal. And that gives us, you know, global reach into verticals with the same math and the same tools and the same platform, to be able to reach different verticals without stretching ourselves and increasing our costs. Yudong, I don't know if you want to comment a little bit about the time series data.
spk09: Yeah. So fundamentally, we are solving essentially a sequential data modeling problem. And molecules can be cast in a sequential representation, and then obviously the timing and scoring and race data take a time series form. Financial data as well takes a time series form. Even text data is also a sequential form of data. So what we have done is that we have looked at the underlying structure of these problems, the mathematical structure of these problems, and developed quantum techniques toward those. And so on the science side, there's definitely a very strong sense of convergence. I'll also add that through projects like Andretti, we have also developed the machine learning practice, the engineering practice, and also the overall team practice of how to get our algorithms deployed into production. That operationalization expertise is also something that is repeatable across the board.
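To make the common structure Yudong describes slightly more explicit, one standard way to write it (our own hedged formulation, not an equation given on the call) is that whether the sequence is a molecule encoded as a token string, lap-by-lap race telemetry, or a series of market moves, a generative sequence model learns the same autoregressive factorization of the joint distribution:

```latex
p(x_1, x_2, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1})
```

The data pipeline and the conditional model differ by vertical, but the mathematical object being learned, and the machinery to train, validate, and deploy it, can be shared.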
spk08: Okay, thanks a lot. I'll go back in the queue. This was very interesting and cool. Thank you for the color.
spk02: Our next question comes from the line of Brian Dobson with Chardan Capital Markets. Please proceed with your question.
spk03: Hi, thanks very much for taking my question. So you mentioned several industries that you see as key moving forward that would be a good fit for your technology. Which of those might you focus on the most in the immediate future? And can you share any feedback you may have received from those industries regarding your technology?
spk06: Yeah, I have to be careful, because obviously these are mission-critical applications that we're doing. So, you know, the feedback is related to: yes, we can deploy; yes, this is relevant; yes, this is exciting. And there's an immediate recognition, when we get the technical people on those sides involved, that the math is applicable to what they're doing. As far as focus goes, you know, it's those four or five. Some of them, like we said, through partners. Some of them more directly, like in the finance area. With SMTB, we announced that relationship, which began in Q4 last year, where we're directly working with the customer on this predictive trading scenario generation work with generative AI in the context of portfolio management and trading strategies. And that is cookie-cutterable across, as you can imagine, many financial areas. We've also had some success in delivering some similar capabilities in the insurance area, which is, you know, adjacent to bank financing and trading. And so that expansion is particularly important. We think that with the applicability of this time series data, just as in racing, it's really the same problem. It's time series and real-time updates of things as the market trades. These things are places where we can add value pretty immediately, and we have some initial traction with a very large bank in Japan already. So we expect to be able to leverage that very much in that sector in the coming quarters. And then, very importantly, I think we all know, with things that fly by themselves in the defense industry in recent years and months, it's become obvious how important doing things on the edge is to our defenses. And the ability for us to deliver a generative AI solution that allows us to predict things and see things at the edge and update data in real time, you can see where that would be important to national defense, particularly in an era where unmanned vehicles and unmanned robotic machines are becoming more and more prevalent in the defense space. So that's another one that we're very focused on and very excited about. And we hope to be able to announce some stuff in the coming quarters about our progress there.
spk03: That's very exciting. Great color. Thank you for that. Speaking about what you've learned through your work with racing, I guess, can you speak to some of the synergies that IndyCar has offered as far as facilitating new business introductions? And I mean, that yellow flag predictive model that you have is very impressive. You know, have you been able to use the RAC to illustrate that your technology can work in a, call it less than ideal or suboptimal, environment?
spk06: Absolutely. And it's, you know, important to note that we have two NVIDIA GPUs in the truck, in that RAC, in a server, in an environment that's horrible. There's actually a damper factory and damper testing facility in that truck shaking the thing. So you can imagine a cloud-based solution won't work there. At one of the recent races, the power to the IndyCar series stand got cut, so they had to throw a yellow flag before the race even started. So this is the kind of environment you're in. You're at these ad hoc tracks that are just set up, with CAT 7 cable running across the city, just for a weekend. It's very interesting in that way. It's not a fixed road course a lot of the time, which creates a really horrible network environment. The truck itself gets really hot. And access to the cloud is not always guaranteed. We actually have a Starlink on the truck in case connectivity becomes a problem, just to get data. So it's really the most horrific environment you could have to try and do AI, in some ways. But the fact that we've been able to do it and we've been able to operationalize there is really important for folks who run, say, power stations and power grid networks, and telcos, who have to work with exactly those kinds of problems in those kinds of situations. And the ability to do that in real time, with very fast updates, you know, in a sport where, at the Indy 500 in a couple of weeks, there was a tie last year, I believe between 20 and 21 in qualifying, down to one ten-thousandth of a second after eight miles, at 240 miles an hour at peak. So that's the kind of environment. And so, actually, the marketing relationship we have is really valuable, so people can actually not just think of it as a concept, but see it really actually be there, and see the trucks, see the analytics going on live, and look at the kind of environment that we have there. And that, I think, breeds a lot of confidence with our customer base that we can deliver in some of the most extreme environments. And if we can do it there, then doing a banking solution on the cloud, where we pretty much have 24/7 connections to market data, is less horrible than that. And it may sound extreme, but there are actually, ironically, fewer data points and fewer variables that we have to monitor in trading portfolio situations than in the car situation. So it's a really good format. We're able to take our customers to actually see it, feel it, touch it, see the data as it comes in while they're watching the race happen live, and really get the experience of seeing it in production. And that has been able to obviously transfer itself into a very real pipeline, and hopefully we'll be able to give more color on that in the coming quarter with new logos and new wins that we'll be able to announce, and hopefully we'll be able to tell you that a lot of that came directly from these interactions at the racetrack.
spk03: Excellent. Thanks for that color. I appreciate it. Thank you.
spk02: Our next question comes from the line of Yi-Fu Lee with Cantor Fitzgerald. Please proceed with your question.
spk01: Good morning, Chris, Yudong, and John. Thank you for taking my question, and congrats on the listing and the productive quarter. I have one question for each gentleman, maybe starting with Chris. Thank you for fleshing out how Zapata AI is different from your standard large language model, whether it be from Anthropic or OpenAI. I was wondering if you could compare your competitive edge, Chris, in terms of coupling edge computing with quantum physics, I guess, versus other startups, right? What does Zapata AI do differently versus competitors?
spk06: Well, I think certainly the quantum edge that we have and the mathematical capability, and the capability of the people that we have doing that math, really gives us a competitive edge. I mean, we have some of the brightest minds on the planet on quantum math, and we've been working with the Defense Department and the DARPA program on benchmarking these algorithms for them for a couple of years now. And so we really do have that kind of people advantage, and the IP advantage that comes out of that. And so we have really the most cutting-edge math that we can apply to these generative AI models, which I think is a key advantage. And then I think Yudong said it best in saying that, you know, we're bringing this to production and you can actually see it. It's actually working. These models are doing something every weekend at the track. I mean, it'll be there at the Indy 500, with these models and the yellow flag models running in real time and updating in real time. So we can actually do this stuff in production. This isn't just a kick-the-tires kind of experiment or POC or these kinds of things. We can actually make this stuff work for making real, important decisions. And so it really is the decision science and the ability to deploy a decision-assisting AI into enterprise environments that really gives us an edge. We're not thinking we can do this or saying we can do this. We've actually done it, past tense.
spk01: Got it. Thanks for that, Chris. And then moving on to Yudong, right? Yudong, you mentioned the D-Wave work on drug discovery, working with Harvard University. I was wondering if you could give us some color on how you could transition this, I guess, from working with the academics to more commercial opportunities with other large big pharmas.
spk09: The machine learning model by itself, and also the infrastructure, can be used for other types of discovery, like material discovery or other types of design problems. Our role in the project is that we will take a given target and propose molecular designs. This is assuming that there's a partner that has identified a target, and also there's another partner that works downstream to produce those molecules. So this is an embodiment of what Christopher was saying, like where do we get our repeatability and how does our math plug into an actual process? We're not a drug company. We don't actually make molecules. We're also not a bioinformatics company that does target identification, but we plug in very nicely, and we have ongoing partnerships with D-Wave, and also, through the universities, we will connect with organizations that can perform the target identification and drug testing and manufacturing. So by partnering with these organizations in the ecosystem, plus our generative AI capability, this is how we truly go to market with the entire pipeline, essentially.
spk01: Got it. That makes sense. Thanks for that. And then finally, ending with John, on the Tech Mahindra partnership, it sounds very exciting. Can you describe a little bit more about this opportunity? It sounds very big, and it sounds like you could probably bring it to all the U.S. carriers like Verizon, AT&T, et cetera. Maybe a little bit more on this opportunity, please.
spk04: Sure, happy to. And I think Christopher touched on it briefly in earlier comments. I think the power is in placing our models at the edge of those large networks, with literally hundreds of millions of nodes of data. So it's not lack of data that's the problem. It's being able to have the fidelity and the sensitivity to pick up very slight signals, anomaly detections that might indicate some sort of intrusion or some sort of change in traffic pattern or, as Christopher said, possible jitter, something happening with a piece of hardware. Well, we can have that sensitivity at the farthest reaches of the network and place our models with real-time machine learning at different places along that hierarchy to pinpoint where something may be changing or where there might be a signal that was previously undetectable. So if there is a disruption, the carrier can react to it immediately, route traffic somewhere else, or potentially proactively dispatch a truck. And you kind of zoom out when you think about this problem with a large carrier, with all the truck rolls, with all the outages. And there was just recently a large outage we all probably were impacted by. These are literally billion-dollar problems. So we are sitting down with some of these carriers, they are taking us through their pain points, and we are doing the math together, figuring out how we can kind of co-solve this together. If they had more intelligence deep in the network, and the ability to pinpoint where a problem happens, and to update correlation engines to send out teams or automate responses in a much more timely, proactive manner, the savings are just tremendous. And frankly, I think you're picking up on the point that, more or less, these networks are generally the same in terms of their characteristics and how they operate. So if we are able to, as we are planning, drive value in one, there's no reason we couldn't drive value in many more. And as for the network construct, it doesn't just have to be telco. Maybe it's a utility. Maybe it's an airline. Maybe it's rail. Anytime there's a large, geographically dispersed, complex network of anything, large devices, large complex machinery that we need to monitor and take action on quickly, that's very much in our sweet spot. So the repeatability and extendability of the solution is pretty tremendous.
spk01: Okay, thanks for that again. Thanks, Chris, Yudong, and John, and congrats on your listing again.
spk02: Thank you. We have reached the end of our question and answer session, and with that, I would like to turn the floor back over to Christopher Savoie for closing comments.
spk06: Thank you very much, and thank you for all the great questions, and we look forward to talking to you again at our next calls. Thank you much. Bye now.
spk02: This concludes today's teleconference. You may disconnect your lines at this time. Thank you for your participation.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
