This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

Blaize Holdings, Inc.
8/14/2025
Thank you for standing by and welcome to Blaize's second quarter 2025 earnings conference call. At this time, all participants are in a listen-only mode. After the speaker presentation, there will be a question and answer session. To ask a question during the session, you will need to press star 1-1 on your telephone. To remove yourself from the queue, you may press star 1-1 again. I would now like to hand the call over to Verovice Pozinski, Investor Relations. Please go ahead.
Before we begin the prepared remarks, we would like to remind you that earlier today, Blaize issued a press release announcing its second quarter 2025 results. A corporate overview presentation was published and is available on the Investor Relations section of Blaize's website. Today's earnings call and press release reflect management's views as of today and will include statements related to our competitive position, anticipated industry trends, our business and strategic priorities, our financial outlook, and our revenue guidance for the third quarter of 2025 and full fiscal year 2025, all of which constitute forward-looking statements under the federal securities laws. Actual results may differ materially from those expressed in or implied by these forward-looking statements due to risks and uncertainties associated with our business. For a discussion of the material risks and other important factors that could impact our actual results, please refer to the company's SEC filings and today's press release, both of which can be found on our Investor Relations website. Any forward-looking statements that we make on this call are based on assumptions as of today and, other than as may be required by law, we undertake no obligation to update these statements as a result of new information or future events. Information discussed on this call concerning Blaize's industry, competitive position, and the markets in which it operates is based on information from independent industry and research organizations, other third-party sources, and management's estimates. These estimates are derived from publicly available information released by independent industry analysts and other third-party sources, as well as data from Blaize's internal research. These estimates are based on reasonable assumptions and computations made upon reviewing such data and Blaize's experience in and knowledge of such industry and markets. By definition, assumptions are subject to uncertainty and risk, which could cause results to differ materially from those expressed in the estimates. During the call, we will discuss non-GAAP financial measures. These non-GAAP financial measures should be considered as a supplement to and not a substitute for measures prepared in accordance with GAAP. For a reconciliation of non-GAAP financial measures discussed during this call to the most directly comparable GAAP measures, please refer to today's press release.
Good afternoon, everyone, and thank you for joining us today. Blaize's second quarter of 2025 marks a clear inflection point from building technology to putting it to work. Our hybrid AI strategy has now moved from pilot validation to early-stage deployment, with contracted programs underway across industries and regions. For the last 12 months, we've been focused on validation. Now we're shifting to execution at scale. Blaize is now deploying its technologies to advance sovereign AI strategies and power public safety networks. We're not only delivering chips, we're also enabling hybrid AI infrastructure that complements GPU systems, powered by a programmable, power-efficient Blaize AI platform. In the last few months, we've locked in two major contracts together worth up to $176 million, to be fulfilled through 2026. These contracts send a strong signal that our product-market fit and our platform approach match the global demand for hybrid AI. The first is a $120 million contract with Starshine, building hybrid AI systems across Asia. The second is a $56 million purchase order rolling out sovereign-ready smart infrastructure in South Asia, serving as many as 250,000 cameras for smart traffic and public safety. Both rollouts complement existing GPU systems, with Starshine deployments starting in the third quarter and South Asia systems continuing to ship through the third and fourth quarters of this year. These two significant wins are just the start. In addition to these contracts, we have a robust pipeline of over $725 million in active opportunities through 2027. Today's AI deployments are fundamentally heterogeneous, powered by a mix of hardware types across edge and cloud. That complexity often creates bottlenecks, especially as organizations run multimodal workloads at the edge. Blaize's hybrid AI approach answers that need. Our purpose-built platform is designed to complement GPU systems and to unlock better performance per watt and more responsive inference. While GPUs handle training jobs and complex AI in the cloud, Blaize handles fast, efficient tasks like processing live video, small language models, or sensor data right where it happens, delivering a compelling total cost of ownership benefit to customers. Customers aren't replacing infrastructure, they're augmenting it, and Blaize helps them do exactly that. To meet the growing demand for hybrid AI, we're introducing the Blaize AI platform, a plug-and-play software and hardware stack that makes deployment faster and easier. At its heart is a programmable graph streaming processor, the Blaize GSP, designed for low-power inference where data is created. We combine that with full-stack verticalized software, a low-code software development kit, and a growing partner network to make AI applications deployable out of the box. According to Gartner's 2024 AI services forecast, the combined market across defense, smart cities, retail, industrial, and energy is over $112 billion today. And in its 2024 AI server forecast, Gartner notes that inference systems will outnumber training systems by as much as six to one. That is why the Blaize AI platform is positioned to deliver better performance, lower total cost of ownership, and more deployable solutions at scale. That is how we take customer demand and turn it into deployments quickly and at scale. It is what gives us confidence in the quarters ahead.
From day one, my co-founders and I set out to create a programmable architecture to power the physical world, not just through the cloud but with localized intelligent systems. Our vision is to be the trusted AI platform that helps people and machines act on real-time intelligence in every critical industry. Our mission is to deliver energy-efficient, scalable AI infrastructure at the edge and in the cloud. That vision and mission are now coming to life. Our platform is already in the field, validated through the Starshine project and the South Asia rollout, where hybrid AI is moving from signed contract to real deployment. Whether it's smart infrastructure, autonomous zones, or next-generation defense, the common thread is clear. Blaize is enabling real-time intelligence wherever it's needed. Momentum is growing as projects move forward and new opportunities open in Asia, the Gulf, and the Americas. The hybrid AI platform today runs on our current generation chip, and we're already developing our next generation chip to keep our strong edge AI position while deepening our reach into cloud-native, enterprise, and data center environments. We believe Blaize is becoming a core part of the AI infrastructure stack, complementing GPU systems, enabling multimodal workloads, and optimizing for power, latency, and cost from edge to cloud. This second quarter was a milestone for us. We secured 176 million dollars in contracts, launched the Blaize AI platform, and proved our go-to-market strategy across smart cities, defense, and sovereign AI. Hybrid AI is no longer just a roadmap item. It's in the field, helping to advance national infrastructure and delivering real outcomes. The product is ready, our partners are aligned, and customer momentum is real. With that, I'll turn it over to Harminder to walk through the financial highlights and updated guidance.
Thank you, Dinakar, and good afternoon, everyone. I'll take you through our second quarter financial performance, what we've been getting done, and where we're headed next. As you heard from Dinakar, in the last six weeks alone, we signed 176 million dollars in customer commitments. That's two deals: a 56 million dollar purchase order for server and software deliveries to a South Asia company, and a 120 million dollar minimum revenue contract for servers to Starshine, covering markets across Asia Pacific. We booked 1.6 million dollars of the South Asia order in the second quarter, net of partner commission, and there is about 4 million dollars in backlog for the remainder of this year. Starshine shipments are planned to begin in the third quarter, with up to 25 percent of the total order anticipated to be fulfilled this year. Cash collections should come in steadily, and most of our deliveries in the third and fourth quarters of 2025 are expected to be paid within the year. We believe these bookings alone largely de-risk our revenue outlook for fiscal years 2025 and 2026. Our pipeline growth is robust, now over 725 million dollars, with 300 million dollars of that in advanced discussions. We expect conversion to accelerate as we move into 2026, and plan to share news as contracts and purchase orders close. Now let's look at the second quarter by the numbers. I'm pleased to report that revenue came in at 2 million dollars, net of around 200,000 dollars in related party sales commissions. That's almost double the revenue reported last quarter, and above the high end of our guidance range. The South Asia purchase order includes around 15 percent perpetual software licenses shipped with each server, and we also recognized 300,000 dollars in AI Studio license revenue from another customer. Excluding inventory cost adjustments made in prior periods, our underlying blended gross margin for the second quarter was 64 percent. Gross margins are expected to dip for the next two or three quarters as we ramp up the Starshine contract, because of its third-party hardware content. Research and development expense of 9.6 million dollars in the second quarter included a non-cash stock-based charge of 3.2 million dollars. The underlying cost of 6.4 million dollars was 700,000 dollars lower than the prior quarter's underlying cost of 7.1 million dollars. This reduction was largely due to the uneven nature of third-party costs related to the development of our next generation chip. We continue to actively manage our labor costs by optimizing resources in lower cost geographies. Selling, general and administrative expenses, excluding depreciation and amortization and stock-based compensation, grew slightly to 8.6 million dollars in the second quarter, up from 8.3 million dollars in the prior quarter. The primary drivers were higher legal and financial advisory fees and external marketing costs, offset by savings in labor costs. We plan to selectively invest in our go-to-market capability in the regions where we're experiencing the highest demand for the hybrid AI platform. Net loss for the second quarter of 29.6 million dollars was over 118 million dollars lower than the prior quarter, which included significant one-time and non-cash accounting adjustments related to the merger. Cash ended the quarter at 29.1 million dollars, including funds in escrow. In July, we entered into a common stock purchase agreement with B. Riley. This agreement gives us the flexibility to raise equity when we want to.
We will continue to explore capital formation strategies to fund capital needs for future growth opportunities. Coupled with anticipated receipts from customers, we believe that our cash runway supports the commercialization of the two announced contracts and the engagement of third-party design partners to begin developing our next generation silicon. Since our last earnings call, here's what I'd highlight. First, we secured 176 million dollars in contracts and purchase orders. South Asia deliveries are underway, and we anticipate the first shipments for Starshine to start in the third quarter of 2025. Second, we launched our hybrid AI platform, which is resonating strongly with customers across multiple use cases. This is no longer a roadmap item; it's being deployed in national and enterprise infrastructure and creating real-world outcomes. Next, our qualified pipeline now exceeds 725 million dollars, with 300 million dollars in higher confidence deals expected to contribute towards more predictable revenue growth in 2026 and beyond. And finally, we continue to maintain cost discipline, investing where we are strongest, and have capital formation strategies in place to fund growth. Thank you, and with that we'll now open the line for questions.
As a reminder, to ask a question you will need to press star 11 on your telephone. To remove yourself from the queue you may press star 11 again. Please limit yourself to one question and one follow-up. Please stand by while we compile the Q&A roster.
Good afternoon everybody.
Sorry, speaker. We just needed to pass it back to Dinakar for some closing remarks.
Okay, please proceed.
Good afternoon everybody. As we get started, I wanted to share something we're excited about. Our client Starshine recently published a case study featuring Blaize, along with a great video from their CEO Matt. It's always energizing to see our work and vision showcased. Matt shared the video link with us, and we posted it on the Blaize website and our blog for anyone who would like to check it out. With that, I'd like to hand it back over to you for Q&A.
Thank you, sir. Our first question comes from the line of Gil Luria of D.A. Davidson. Please go ahead, Gil.
Thank you. Good afternoon. First question is just to check some math here. If we have 176 million that will be delivered by the end of 2026, then based on this year's guidance, that would imply that there's about 140 million dollars that should still go into 2026 before any additional wins from any other contracts that could be secured between now and the end of next year. Is that right?
That would be correct, Gil.
Okay. Then the second part is more about the architecture and the market opportunity. It sounds like the architecture for Starshine is, I think you referred to it as a hybrid architecture, where you're putting your product side by side in a server with NVIDIA GPUs because your inference is so much more efficient. But then that goes into the data center. Unlike other projects that you are going to deliver on the edge, this architecture puts you in the data center. Does that mean that you now have a bigger incremental opportunity within data centers as opposed to the opportunity at the edge?
Absolutely, Gil. It is a combination. For example, the South Asia contract that we announced is also with a sovereign AI provider. They are doing traffic management use cases where they have a box right behind the camera and also a server in an on-prem cloud that can do traffic management. What we're seeing in the bigger picture is that customers primarily care about ROI for a project. This is where they're looking at hybrid as the right strategy: they do GPU-based systems for part of the problem, and then for the part of the AI stack that Blaize runs more efficiently, there's a TCO advantage, a cost advantage, a power advantage. This is where they complement GPUs with Blaize. That's the momentum that we're seeing, and that's what we're referring to as hybrid AI. So you are right. It's also extending our reach into the data center.
That's great. Thank you.
Our next question comes from the line of Richard Shannon of Craig-Hallum Capital Group LLC. Please go ahead, Richard.
Thank you. Thanks, Dinakar and Harminder, for taking a couple of my questions here. Maybe let's just ask more of a tactical one, following up on the guide as well as a couple of the past questions. I guess the first question I have is, you were talking about a dip in gross margins in, I think, the next couple of quarters. Maybe you could nail this down a little bit better and help us out with what kind of levels you're thinking of, and then how much of the sales here come from these two large contracts with Starshine and the South Asia customer?
So in your second question for clarification, Richard, do you mean in 2025? How much of those contracts are in 2025?
I mentioned the third quarter, but if you'd like to go for the whole second half, that'd be great too.
Yeah. So it's going to be a combination. If I take your second question first, between the $56 million and the $120 million, there'll be proportionately more of Starshine this year, and of course it'll be proportionate as we go into 2026. On the gross margin dip, it will depend on the mix of deliveries across these two contracts. Both have third-party hardware components in them, but the margins on the Starshine third-party components are slightly lower. So I can't quantify it just yet. It will all depend on the mix, but what we can say is that it will be a little bit lower than what you're seeing today. Of course, what can also offset that is more software licenses and so on that we deploy in 2026. That should counter some of that reduction from the Starshine contract specifically.
Okay, perfect. That's very helpful detail here. My second question is regarding the MOU with the UAE entity you announced last year. I ask this partly, Dinakar, because a couple of times in your prepared remarks you talked about defense applications being proven out here, and it sounds something like what's going on there. So maybe you can give us an update of where that sits, if it's still in the pipeline, and when you expect that to come across the finish line.
Sure. Our defense pipeline is actually growing, but let me specifically address your question with regards to the MOD status. The delivery and revenue recognition of that particular MOD project will follow the customer's deployment schedule, which currently targets 2026. Given the size and immediacy of the Starshine and South Asia orders, we're actually prioritizing those programs to deliver and recognize revenue in Q2, Q3, and Q4, and that's where we are. On top of that, we are engaged with the defense industry as part of a larger pipeline, including use cases like drones, as well as video security and perimeter security, and as these further materialize, we'll be sure to announce them.
Okay, great. One last question and I will jump out of line here. The pipeline, $725 million. I think last quarter you talked about, I can't remember what it was, $400 million or something like that. So a nice increase, and I would assume if we take out the $176 million, the contracts you've announced since then, maybe you can tell us where the new elements of the pipeline sit. And then also with the, I think, $300 million in late stage here, how many different contracts are we talking about, and would it be reasonable to expect those to close sometime this year?
So yes, you're right. The $176 million is outside of the $725 million. And we've always maintained a very deliberate approach to how we qualify our opportunities. Really, it's playing to the advantages of Blaize: the programmable device, the low latency, high performance, and low power consumption. There are probably around 20 to 40 different applications or customer engagements in the $725 million, so they vary in size. As we would have discussed in earlier conversations, we take a beachhead customer approach with our ISVs. We deal with one particular customer and one ISV in one specific industry, and as that gets deployed, we expect that ISV's pipeline to become available to us. The $300 million, specifically, we've gone through POCs, we've done some pilots, we've identified the ISV, and we're working with those customers. Really, it's about when they wish to start deploying solutions. We expect most of those to begin at scale in 2026.
Okay, great. Thanks. That's all for me, guys.
Thank you. Once again, to ask a question, please press star 1-1 on your telephone. Again, that's star 1-1 to ask a question. Our next question comes from the line of Kevin Cassidy of Rosenblatt Securities. Please go ahead, Kevin.
Yeah, thank you for taking my question, and congratulations on the great results and good outlook. One question I have is on your guidance for the year for 2025; it tightens on the low end and also on the top end. I was just wondering, on the high end, is it more of a supply chain issue, or what brought that number down?
No, it's not a supply chain issue, Kevin. It's really that we're now working very closely with those customers on what their deployment needs are. We know that they had requirements for 2025, which we're fulfilling, and we've got schedules for 2026 that we're working through.
Okay, great. And your outlook for 2026 remains the same? The revenue guidance you had given before?
Yes. A while ago, we had updated the lower end from 105 million to 130 million, but the upper end still remains the same.
Okay. And can you talk about the supply chain? Is there any problems getting wafer starts or any other, I guess, what is the long pole in the tent for manufacturing your products?
No, we're fortunate in the sense that our first generation chip is at 14 nanometer, which doesn't cause us any capacity challenges with Samsung Foundry. We maintain extremely close relationships with them and our contract manufacturer. So as these contracts were being discussed and we knew what the rollouts were going to be, we'd already placed orders for not just chips, but also the cards and so on. Dinakar, do you want to add?
Also, you know, the fact that our foundry is here is also beneficial.
Okay, thank you.
Thank you. Our next question comes from the line of Scott Searle of Roth Capital. Please go ahead, Scott.
Good afternoon. Thanks for taking my questions. Hey, Dinakar, I was hoping to dive in a little bit on the hybridization commentary. You know, there's certainly been the evolution within the data center, which is a huge upside opportunity as we look out over the next several years. But it sounds like there's a little bit of hybridization going on at the edge as well. I'm wondering a couple of things: coexistence at the edge with GPUs as opposed to complete displacement, how do you see the evolution over the next year or so? And then within the data center itself, it sounds like some of those early opportunities are starting to crop up and present themselves. I'm wondering about the timeline of when we start to see that materialize in a little bit more of a meaningful way. And as we look at that $725 million pipeline, how is that spread across traditional data center versus edge applications? Thanks.
Sure, sure. Scott, thank you. So first of all, on the technology part behind it, customers, especially in these real-world projects, clearly care about ROI, how much the spend is, and they have their budgets established. Within that, they have to show results and a return, whether it's a government, a city, or what have you. And that is what is driving this. If they did the entire project on GPUs only, the costs would be prohibitive. At the same time, there are parts of the AI stack that Blaize runs more efficiently, more power efficiently, more cost efficiently. One of the unique aspects of Blaize's architecture is full programmability, so that it can adapt to these workloads. And then there's the software toolchain to make it seamless and easy to adopt, which also helps with the customer's time to market and IT spend. So these are all coming into play, and complementing a GPU-based system with a Blaize server alongside is where they're seeing a significant TCO advantage. That's what is driving the momentum. On the second part, I'll let Harminder jump in as well.
Yeah, so you asked about the split of the pipeline. Actually, it is a blend. I'll give you one extreme, which is certain defense applications, drones, for example, where it's a deployment of one of our cards on the drone itself, and that's it. That is the solution being provided. At the other end, you've got the kind of things that you've heard about from the South Asia deal and the Starshine deal, where we are delivering server systems powered by Blaize and they coexist with GPU-based systems. And in the middle, you've got, again, another defense type of application where you've got a mixture of the two. You have sensors on the perimeter, perimeter security for example, fusing visual spectrum, near infrared, infrared, radar, et cetera, and there are certain alerts being delivered from the algorithms that are being run right there. But they're then backed up by a command center, which has a server-based system. So when we look at that pipeline, it's got combinations of all of those, and we just focus on where customers want to deploy the fastest, and that's how we react.
Hey, if I could just quickly follow up, you indicated in your opening remarks that inference drives about 6x the opportunity versus traditional training. I'm just wondering if you're seeing those types of numbers in any of these early deployments, in terms of the wallet share or market opportunity for you guys, or is that the evolution that you expect over the next several years? Thanks.
This is actually quite clear, right? The initial phase of AI was all about AI training and creating models in the cloud. Now AI needs to come outside the data center into real use cases. It could be smart city, smart agriculture, industrial automation, all of these use cases, and this is where the majority of the workload is inference. So as the world starts adopting AI, increasingly, inference is the main use case. And we're actually finding, even as we discuss the hybridization strategy with our customers, that it's pretty common to see them have more inference demands than anything else. So yeah, the short answer is yes.
Great. Thank you. Thank you.
I would now like to turn the conference back to Dinakar Munagala for closing remarks. Sir?
Sure. Thank you. So as we wrap up, we started the year as a young public company, we had a pipeline, and our focus was to convert that pipeline. I want to emphasize that we are a hybrid AI company with focus, momentum, and the right partnerships in place. We have a clear path to execute on our commitments at scale with our Blaize AI platform and capture the opportunities ahead with our pipeline. Our technology is proven, our go-to-market is working, and our team is fully aligned. I'm excited about what's ahead for Blaize, and I want to thank our employees, partners, customers, and shareholders for their continued trust and support. Thank you.
This concludes today's conference call. Thank you for participating. You may now disconnect.