11/6/2025

speaker
Mike
Moderator

CFO, and Kent Draper, Chief Commercial Officer. Before we begin, please note this call is being webcast live with a presentation. For those who have dialed in via phone, you can elect to ask a question via the moderator after our prepared remarks. Before we begin, I'd like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company. Listeners should not place undue reliance on forward-looking information or statements, and I'd encourage you to refer to the disclaimer on slide two of the accompanying presentation for more information. With that, I'll now turn the call over to Dan Roberts.

speaker
Dan Roberts
President and CEO

Thanks, Mike, and thank you all for joining us for IREN's Q1 FY2026 earnings call. Today we'll provide an overview of our financial results for the first fiscal quarter ended September 30, 2025, highlight key operational milestones and, importantly, discuss how our AI cloud strategy is driving strong growth. We'll then open the call for questions at the end. So, Q1 FY26 results: fiscal year 2026 is off to a really good start. We delivered a fifth consecutive quarterly increase in revenues and a strong bottom line. Revenue reached $240 million and adjusted EBITDA was $92 million, noting, of course, that net income and EBITDA importantly reflected an unrealised gain on financial instruments. This performance reflects the team's continued, disciplined execution, along with the benefits of having a resilient, vertically integrated platform. Turning to Microsoft and the cloud contract. Earlier this week, we announced a $9.7 billion AI cloud contract with Microsoft, which was a defining milestone for our business and underscores the strength and scalability of our vertically integrated AI cloud platform. The agreement not only validates our position as a trusted provider of AI cloud services, but also opens up access to a new customer segment among the global hyperscalers. Under this five-year contract, IREN will deploy NVIDIA GB300 GPUs across 200 megawatts of data centers at our Childress campus. The agreement includes a 20% upfront prepayment, which helps support capital expenditures as they become due through 2026. The contract is expected to generate approximately $1.94 billion in annual recurring revenue. Beyond the obvious positive financial impact, the contract carries significant strategic value for us. It not only positions IREN as a contributor towards Microsoft's AI roadmap, but also demonstrates to the market our ability to serve an expanded customer base, which includes a range of model developers, AI enterprises, and now one of the largest technology companies on the planet. As enterprises and other hyperscalers accelerate their AI build-out, we expect that our combination of power, AI cloud experience, and execution capability will continue to position us as a partner of choice. Looking ahead, we're executing now on a plan that will see our GPU fleet scale from 23,000 GPUs today up to 140,000 GPUs by the end of 2026. When fully deployed, this expansion is expected to support in the order of $3.4 billion in annualized run-rate revenue. Importantly, this expansion leverages just 16% of our three gigawatts of secured power, leaving ample capacity for future expansion. With that overview in mind, let's turn to the next section, a closer look at our AI cloud platform and how we're positioned to scale in the years ahead. So, as I alluded to earlier, a key driver of IREN's competitive advantage in AI cloud services is our vertical integration. We develop our own greenfield sites, engineer our own high-voltage infrastructure, build and operate our own data centers, and deploy our own GPUs. Simply put, we control the entire stack from the substation all the way down to the GPU. We believe strongly that this end-to-end integration and control is a key differentiator that positions us for significant growth. This model of vertical integration eliminates dependence on third-party co-location providers and, most importantly, removes the associated counterparty risk. This allows us to commission GPU deployments faster, with full control over execution and uptime.
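For readers working through the numbers, a quick back-of-envelope check of the headline contract figures quoted above; it assumes straight-line revenue over the five-year term, which is a simplification rather than guidance.

```python
# Back-of-envelope check of the headline Microsoft contract figures (illustrative only).
contract_value = 9.7e9       # total value of the five-year AI cloud contract, USD
term_years = 5
prepayment_rate = 0.20       # 20% upfront prepayment

arr = contract_value / term_years                 # assumes straight-line revenue
prepayment = contract_value * prepayment_rate

print(f"Implied annual recurring revenue: ${arr / 1e9:.2f}B")   # ~$1.94B, as quoted
print(f"Implied prepayment: ${prepayment / 1e9:.2f}B")          # ~$1.9B
```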
For our customers, this translates into scalability, cost efficiency, and superior customer service, with tighter control over performance, reliability, and delivery milestones, driving tangible value and certainty. For those reasons, our customers, including Microsoft, view IREN as a strategic partner in delivering cutting-edge AI compute, recognizing our deep expertise in designing, building, and operating a fully integrated AI cloud platform. On that note, we're excited to announce a further expansion of our AI cloud service, targeting a total of 140,000 GPUs by the end of 2026. This next phase includes the deployment of an additional 40,000 GPUs across our Mackenzie and Canal Flats campuses, which are expected to generate in the order of $1 billion in additional ARR. When combined with the $1.9 billion expected from the Microsoft contract and the $500 million from our existing 23,000 GPU deployment, this expansion provides a clear pathway to approximately $3.4 billion in total annualized run-rate revenue once fully ramped. Importantly, this incremental 40,000 GPU build-out will be executed in a highly capital-efficient manner by leveraging existing data centers. While we have not yet purchased GPUs for the deployment, we continue to see strong demand for air-cooled variants of NVIDIA's Blackwell GPUs, including both the B200 and the B300, and given their efficient deployment profile, we expect these to form the basis of this expansion. That said, we will continue to monitor customer demand closely and pursue growth in a disciplined, measured way. This full expansion to 140,000 GPUs will only require about 460 megawatts of power, representing roughly 16% of our total secured power portfolio. This leaves substantial optionality for future growth and, importantly, continued scalability across our portfolio. The key takeaway here is that we have substantial near-term growth being actively executed upon, but also significant additional organic growth ahead of us. Turning now to slide eight, which highlights the British Columbia data centres supporting our expansion to 140,000 GPUs. At Prince George, our ASIC-to-GPU swap-out program is progressing well. The same process will soon extend to our Mackenzie and Canal Flats campuses, where we expect to migrate ASICs to GPUs with similar efficiency and speed. Together, these sites are allowing us to fast-track our growth in supporting high-performance AI workloads, scaling into what is becoming one of the largest GPU fleets in North America. Turning to Childress, where we are now accelerating the construction of Horizons 1 to 4 to accommodate the phased delivery of NVIDIA GB300 NVL72 systems for Microsoft. We've significantly enhanced our original design specifications to meet hyperscale requirements and to further ensure durable long-term returns from our data center assets. The facilities have been engineered to tier three equivalent standards for concurrent maintainability, ensuring continuous operations even during maintenance windows. A key feature of this next phase is the establishment of a network core architecture capable of supporting single 100 megawatt superclusters, a unique configuration that enables high-performance AI training for both current and next-generation GPUs. We're also incorporating flexible rack densities ranging from 130 to 200 kilowatts per rack, which allows us to accommodate future chip generations and their evolving power and density requirements without major structural upgrades.
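As a rough tie-out of the run-rate revenue and power figures quoted above (approximate arithmetic only, not guidance):

```python
# ARR build-up and power utilization implied by the remarks (approximate).
arr_microsoft = 1.9e9        # Microsoft contract
arr_expansion = 1.0e9        # additional 40,000 GPUs at the BC campuses
arr_existing = 0.5e9         # existing 23,000 GPU deployment
total_arr = arr_microsoft + arr_expansion + arr_existing
print(f"Total annualized run-rate revenue: ${total_arr / 1e9:.1f}B")   # ~$3.4B

power_required_mw = 460      # for the full 140,000 GPU fleet
secured_power_mw = 3_000     # ~3 GW secured across the portfolio
print(f"Share of secured power: {power_required_mw / secured_power_mw:.1%}")   # ~15%, roughly the 16% cited
```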
While these design enhancements have resulted in incremental cost increases, they provide long-term value protection, enabling our data centers to support multiple GPU generations and reducing the recontracting risk typically associated with lower-spec builds. In short, we're building Childress not just for today's GPUs and the Microsoft contract in front of us, but also for the next generations of AI compute. Beyond the accelerated development of Horizons 1 through 4, the remaining 450 megawatts of secured power at Childress, as you can see in the image on screen, provides substantial expansion potential for future Horizons numbered 5 through 10. Design work is underway to enable liquid-cooled GPU deployments across the entire site, positioning us to scale seamlessly alongside customer demand. Finally, turning to Sweetwater, our flagship data center hub in West Texas, which has been somewhat overshadowed in recent months by the activity at Childress and in Canada. At full build-out, Sweetwater will support up to two gigawatts, or 2,000 megawatts, of gross capacity, all of which has been secured from the grid. As shown in the chart, this single hub rivals, and in most cases exceeds, the entire scale of total data center markets today. While recent headlines have naturally been dominated by our AI cloud expansion at other sites, Sweetwater is a pretty exciting platform asset, giving us the capability to continue servicing the wave of AI compute demand. Sweetwater One energization remains on schedule, with more than 100 people mobilized on site to support construction of what is becoming one of the largest high-voltage data center substations in the United States. All exciting stuff. With that, I'll now hand over to Anthony, who will walk through our Q1 FY26 results in more detail.

speaker
Anthony
Chief Financial Officer

Thanks, Dan. And thanks, everyone, for your attendance today. Continued operational execution was reflected in another quarter of strong financial performance. Q1 FY26 marked our fifth consecutive quarter of record revenues, with total revenue reaching $240 million, up 28% quarter over quarter and 355% year over year. Operating expenses increased primarily on account of higher depreciation, reflecting ongoing growth in our platform, and higher SG&A, the latter primarily driven by a materially higher share price, resulting in acceleration of share-based payment expense and a higher payroll tax expense associated with employees. Net income and EBITDA were both significantly up, largely on account of unrealized gains on prepaid forward and capped call transactions entered into in connection with our convertible note financings. Adjusted EBITDA was $92 million, reflecting continued margin strength, partially offset by that higher payroll tax of $33 million accrued in the quarter on account of strong share price performance. Turning now to our recently announced AI cloud partnership with Microsoft. As Dan mentioned, this is a very significant milestone for IREN. It not only delivers strong financial returns, but also creates a significant long-term strategic partnership for the business. Focusing on the financials, the $9.7 billion contract is expected to deliver approximately $1.9 billion in annual revenue once the four phases come online, with an estimated 85% project EBITDA margin. This strong margin, which reflects our vertically integrated model, incorporates all direct operating expenses across both our cloud and data center operations supporting the transaction, including power, salaries and wages, maintenance, insurance, and other direct costs. These cash flows deliver an attractive return on the cloud investment, i.e. the $5.8 billion of capex for the GPUs and ancillaries, after deducting an appropriate internal co-location charge, ensuring that the project delivers robust cloud returns, as well as an attractive return on our long-term investment in the Horizon data centers, which will deliver returns for many years into the future. The transaction also has a number of features that allow us to undertake it in a capital-efficient way. Firstly, the payments for the capex are aligned with the phased delivery of the GPUs across calendar year 2026, as we deliver those four phases. Secondly, the $1.9 billion in customer prepayments, being 20% of total contract revenue, paid in advance of each tranche, provides funding for circa one-third of the funding requirement at the outset. Thirdly, the combination of the latest generation of GPUs and the very strong credit profile of Microsoft should allow us to raise significant additional funding secured against the GPUs and the contracted cash flows on attractive terms. While the final outcome will be subject to a range of considerations and factors, we are targeting circa $2.5 billion through such an initiative, and depending on final terms and pricing, there is meaningful upside to that, noting again the very high quality of our counterparty. We also have a range of options available to fund the remaining $1.4 billion, including existing cash balances, operating cash flows, and a mix of equity, convertible notes, and corporate instruments. On that note, turning more generally to capex and funding. We continue to focus on deepening our access to capital markets and diversifying our sources of funding.
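Putting the funding figures above together, a minimal sketch of the capex bridge as described (all values as quoted on the call; the phasing across 2026 is ignored):

```python
# Funding bridge for the Microsoft-related GPU capex, per the figures quoted above.
capex = 5.8e9            # GPUs and ancillaries
prepayment = 1.9e9       # ~20% of contract revenue, paid in advance of each tranche
target_debt = 2.5e9      # targeted financing secured against the GPUs and contracted cash flows

remaining = capex - prepayment - target_debt
print(f"Prepayment covers ~{prepayment / capex:.0%} of capex")   # circa one-third
print(f"Remaining to fund: ${remaining / 1e9:.1f}B")             # ~$1.4B, per the remarks
```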
We issued $1 billion in zero-coupon convertible notes during October, which was extremely well supported. And we also secured an additional $200 million in GPU financing to support our AI cloud expansion in Prince George, bringing total GPU-related financings to $400 million to date, at attractive rates. Taking into account recent fundraising initiatives, our cash at the end of October stood at $1.8 billion. Our upcoming capex program, which includes the construction of the Horizon data centers for the Microsoft transaction, will be met from a combination of this strong starting cash position, operating cash flows, the Microsoft prepayments as just noted, and other financing streams that are underway. These include the GPU financing facilities that we discussed, as well as a range of other options under consideration, from other forms of secured lending against our fleet of GPUs and data centres through to corporate-level issuance, whilst maintaining an appropriate balance between debt and equity to maintain a strong balance sheet. With that, we'll now turn the call over to Q&A.

speaker
Operator
Conference Operator

Thank you. If you wish to ask a question, please press star one on your telephone and wait for your name to be announced. If you wish to cancel your request, please press star then two. If you're using a speakerphone, please pick up the handset to ask your question. The first question today comes from Nick Giles from B. Riley Securities. Please go ahead.

speaker
Nick Giles
Analyst, B. Riley Securities

Yeah, thank you, operator. Hi, everyone. Thanks so much for the update today, guys. I want to congratulate you on this significant milestone with Microsoft. This was really great to see. I have a two-part question. Dan, you mentioned strategic value, and I was first hoping you could expand on what this deal does from a commercial perspective. And then secondly, I was hoping you could speak to the overall return profile of this deal and how you think about hurdle rates for future deals.

speaker
Dan Roberts
President and CEO

Thank you very much. Sure. Thanks, Nick. Appreciate the opportunity and the ongoing support. So in terms of the strategic value, I think undoubtedly proving that we can serve one of the largest technology companies on the planet has a little bit of strategic value. But below that, the fact that this is our own proprietary data center design, and we've designed everything from the substation down to the nature of the GPU deployment, and that has been deemed acceptable by a trillion-dollar company, I think that's got a bit of strategic value, both in terms of demonstrating to capital markets and investors that we are on the right track, but also, importantly, in terms of the broader customer ecosystem and that validation. And look, we've seen that play out over the days since the announcement. In terms of hurdle rates and returns, Anthony, I think it's worth you jumping in on this. I think it's fair to say that IRRs, hurdle rates and financial models have dominated our lives for the last six weeks. So there's probably a little bit we can outline in this regard.

speaker
Anthony
Chief Financial Officer

Sure. Thanks, Dan. And thanks for the question. Just in terms of the returns on the transaction, obviously, as I noted in the introductory comments, when we look at the cloud returns, we obviously take away what we think to be an arm's length co-location rate, so we effectively charge the deal for the cost of renting the data center capacity. After we take that into account, on an unlevered basis and assuming that there are zero cash flows or residual value (RV) associated with the GPUs after the term of the contract, we expect an unlevered IRR of low double digits. Obviously, we'll be looking to add some leverage to the capital structure for the transaction, as we also discussed. And once we take that target $2.5 billion of additional leverage into account, you're achieving a levered IRR in the order of circa 25% to 30%. Obviously, that is assuming that $2.5 billion package, and it also assumes that the remaining funding is coming from equity as opposed to other sources of capital, which we might also have access to. I'd also note that we said there might well be upside on that $2.5 billion package. Obviously, with a $3 billion leverage package against the GPUs on a secured financing basis, you could see that levered return increase by, you know, circa 10%. In terms of the RV, in those numbers we've obviously just reflected zero economic value in the GPUs at the end of the term. If, for example, you were to assume a 20% RV, obviously that has a material impact: unlevered IRRs would increase to the high teens, and your levered IRRs would be somewhere between, you know, 35% to 50%, depending on your leverage assumptions.
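To keep the scenarios straight, a small structured recap of the return ranges quoted above; the values are simply restated from the call, not independently derived.

```python
# Return scenarios as quoted on the call (RV = residual value of the GPUs at end of term).
irr_scenarios = {
    ("unlevered",            "0% RV"):  "low double digits",
    ("levered, ~$2.5B debt", "0% RV"):  "circa 25%-30%",
    ("levered, ~$3.0B debt", "0% RV"):  "roughly 10 points higher",
    ("unlevered",            "20% RV"): "high teens",
    ("levered",              "20% RV"): "circa 35%-50%, depending on leverage",
}
for (structure, rv), irr_range in irr_scenarios.items():
    print(f"{structure:<22} | {rv:<6} | {irr_range}")
```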

speaker
Dan Roberts
President and CEO

Yeah, I think maybe just to jump in as well. Thanks, Anthony. That's all absolutely correct. And there are a lot of numbers in there, which is demonstrative of the amount of time we've spent thinking about IRRs. So I think just to reiterate a couple of points. One is we've clearly divided out our business segments into standalone operations for the purposes of assessing risk and return against a prospective transaction. So to be really clear, all of those AI cloud IRRs assume a co-location charge, so they assume a revenue line for our data centres. We've assumed our data centres earn, internally, $130 per kilowatt per month, escalating, which is absolutely a market rate of return, particularly considering the first five years is underwritten by a hyperscale credit. So that's probably the first point I'd make. But it's also really important to mention that we've optimised elsewhere. So on the 76,000 GPUs that we've procured for this contract at a $5.8 billion price, Dell have really looked after us, to the point where they've got an inbuilt financing mechanism in that contract where we don't have to pay for any GPUs until 30 days after they're shipped. So there are further enhancements there. And then the final point I'd reiterate is this 20% prepayment, which I don't believe we've seen elsewhere, accounts for a third of the entire capex of the GPU fleet. And I guess we've been asked previously why we would prefer to do AI cloud versus co-location. As one small data point, we are getting paid a third of the capex upfront here, as compared to having to give away big chunks of equity in our company to get access to a co-location deal. So we're really pleased, and it leads us towards that $3.4 billion in ARR by the end of 2026 on returns that are pretty attractive. Yeah, it's a good result.
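A few back-of-envelope figures implied by Dan's numbers above; the internal co-location charge line assumes the $130/kW/month rate applies across the full 200 megawatts and ignores escalation, so treat it as illustrative only.

```python
# Rough arithmetic on the figures quoted above (illustrative, not guidance).
gpu_count = 76_000                # GPUs procured for the Microsoft contract
gpu_capex = 5.8e9                 # USD, GPUs plus ancillaries via Dell
prepayment = 1.9e9                # ~20% customer prepayment

print(f"Blended cost per GPU (incl. ancillaries): ${gpu_capex / gpu_count:,.0f}")   # ~$76,000
print(f"Prepayment as a share of GPU capex: {prepayment / gpu_capex:.0%}")          # ~ a third

# Internal co-location charge assumed in the cloud-segment IRRs (escalation ignored,
# and assuming the rate applies across the full 200 MW):
colo_rate_usd_per_kw_month = 130
capacity_mw = 200
annual_colo_charge = colo_rate_usd_per_kw_month * capacity_mw * 1_000 * 12
print(f"Implied annual internal colo charge: ${annual_colo_charge / 1e9:.2f}B")     # ~$0.31B
```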

speaker
Nick Giles
Analyst, B. Riley Securities

Anthony, Dan, I really appreciate all the detail there. One more, if I could. I was just wondering if you could give us a sense for the number of GPUs that will ultimately be deployed as part of the Microsoft deal. And then as we look out to year six and beyond, I mean, can you just speak to any of the kind of future proofing you've done of the Horizon platform and what can ultimately be accommodated in the long term for future generations of chips?

speaker
Kent Draper
Chief Commercial Officer

I'm happy to jump in and take that one, Dan. So in terms of the number of GPUs to service this contract, I'd draw your attention to some of our previous releases, where we've said that each phase of Horizon would accommodate 19,000 GB300s, and obviously we're talking about four phases here with respect to that. In terms of future-proofing of the data centers, there are a number of elements to it, but the primary one is that we have designed for rack densities here that are capable of handling well in excess of the GB300 rack architecture. And to give you specific numbers there, the GB300s are around 135 kilowatts a rack for the GPU racks, and our design at the Horizon facilities can accommodate up to 200 kilowatts a rack. So that is the primary area where we have future-proofed the design. But as Dan also mentioned in the remarks on the presentation, we have enhanced the design in a number of ways, including effectively what is full tier three equivalent concurrent maintainability. So there are a number of elements that have been accommodated into the data centers to ensure that they can continue to support multiple generations of GPUs.
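The arithmetic behind Kent's figures, for reference (approximate):

```python
# GPU count and rack-density headroom implied by the remarks above.
gpus_per_phase = 19_000
phases = 4
print(f"GPUs across Horizons 1-4: {gpus_per_phase * phases:,}")          # 76,000

gb300_rack_kw = 135      # approximate draw per GB300 GPU rack, per the remarks
design_rack_kw = 200     # Horizon design capability per rack
headroom = design_rack_kw / gb300_rack_kw - 1
print(f"Rack density headroom over GB300: {headroom:.0%}")               # ~48%
```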

speaker
Nick Giles
Analyst, B. Riley Securities

Very helpful, Kent. Guys, congratulations again and keep up the good work.

speaker
Operator
Conference Operator

Thank you. The next question comes from Paul Golding from Macquarie. Please go ahead.

speaker
Paul Golding
Analyst, Macquarie

Thanks so much for taking the question, and congrats on the deal and all the progress with HPC. I wanted to ask, I guess this is a quick follow-on to the IRR question. Just on our back-of-the-envelope math, it looks like pricing per GPU hour may be on the rise, or at the higher end of that $2 to $3 range assuming full utilization, so presumably potentially even higher. How should we think about the pricing dynamics in the marketplace right now on cloud, given the success of this deal and what seems to be fairly robust pricing? And then I have a follow-up. Thank you.

speaker
Dan Roberts
President and CEO

Sure.

speaker
Paul Golding
Analyst, Macquarie

Sorry.

speaker
Kent Draper
Chief Commercial Officer

You go ahead, Dan.

speaker
Dan Roberts
President and CEO

Sorry. Look, I'll let Kent talk a bit more about the market dynamic, but it is absolutely fair to say that we're seeing a lot of demand, and that demand appears to increase month on month. In terms of the specific dollars per GPU hour, we haven't specified that exactly. However, we have tried to give a level of detail in our disclosures which allows people to work through that. I think importantly for us, rather than focusing on dollars per GPU hour, where I think your statement is correct, the focus is on the fundamental risk-return proposition of any investment. And when we've got the ability to invest in an AI cloud delivering what is likely to be in excess of 35% levered IRRs against a Microsoft credit, I mean, you kind of do that every day of the week.
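One way to do the back-of-envelope math the analyst refers to, using only figures quoted elsewhere on the call; it assumes the full 76,000-GPU deployment, 100% utilization, and revenue spread evenly over the year, so it is an approximation rather than disclosed pricing.

```python
# Hypothetical implied rate per GPU-hour for the Microsoft contract (not disclosed pricing).
annual_revenue = 1.94e9          # approximate ARR from the contract
gpus = 76_000                    # GPUs deployed across the four phases
hours_per_year = 24 * 365

implied_rate = annual_revenue / (gpus * hours_per_year)
print(f"Implied rate at full utilization: ${implied_rate:.2f} per GPU-hour")   # ~$2.91
```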

speaker
Kent Draper
Chief Commercial Officer

Yeah, thanks, Dan. And Paul, with regard to your specific question around demand, we continue to see very good levels of demand across all the different offerings we have. The air-cooled servers that we are installing up in our facilities in Canada lend themselves very well to customers who are looking for 500 to 4,000 GPU clusters and want the ability to scale rapidly. As we've discussed before, transitioning those existing data centers over from their current use case to AI workloads is a relatively quick process, and that allows us to service the growth requirements of customers in that class very well. And case in point, we've been able to pre-contract a number of the GPUs that we purchased for the Canadian facilities well in advance of them arriving at the sites. And this is something that customers have historically been pretty reticent to do. But that level of demand exists in the market, as well as ongoing trust in the credibility of our platform with both existing and new customers, and that is allowing us to take advantage and pre-contract a lot of that capacity. And then obviously, with respect to the Horizon 1 build-out for Microsoft, this is the top-tier liquid-cooled capacity from NVIDIA, and we continue to see extremely strong demand for that type of capacity. And the fact that we are able to offer that means that we can genuinely serve all customer classes, from hyperscalers, the largest foundational AI labs and largest enterprises with that liquid-cooled offering, down to top-tier AI startups and smaller-scale inference enterprise users at the BC facilities.

speaker
Paul Golding
Analyst, Macquarie

Thanks for that color, Kent and Dan. As a follow-up, as we look out to Sweetwater One energization coming up fairly soon here in April, are you able to speak to any inbound interest you're getting on cloud at that site? I know it's early days just from a construction perspective, maybe for the facilities themselves, but any color there and maybe whether you would consider hosting at that site given the return profile and potential cash flow profile that you would get from engaging in the cloud business over a period of time. Thank you.

speaker
Kent Draper
Chief Commercial Officer

Yeah, in terms of the level of interest and discussions that we're having, we're seeing a strong degree of interest across all of the sites, including Sweetwater as well. Obviously there's very significant capacity available at Sweetwater, as Dan mentioned, with initial energisation there in April 2026, which is extremely attractive in terms of the scale and time to power. So I think it's very fair to say that we're seeing strong levels of interest across all the potential service offerings. As it relates to GPU as a service and co-location, as previously stated, we will continue to do what we think is best in terms of risk-adjusted returns. Anthony outlined the risk-adjusted returns that we're seeing in GPU as a service specifically at the moment, and as we've outlined over the past number of months, that does look more attractive to us today. But as we continue to see increasing supply-demand imbalance within the industry, that may well feed through into co-location returns, where it makes sense to do that in the future. But as it stands today, certainly the return profile that we're seeing in GPU as a service we think is incredibly attractive.

speaker
Paul Golding
Analyst, Macquarie

Great. Thanks so much and congrats again.

speaker
Operator
Conference Operator

Thank you. The next question comes from Brett Knoblauch from Cantor Fitzgerald. Please go ahead.

speaker
Brett Knoblauch
Analyst, Cantor Fitzgerald

Hi, guys. Thanks for taking my question. On the $5.8 billion order from Dell, can you maybe parse out how much of that is allocated to GPUs and the auxiliary equipment? And on the auxiliary equipment, say you wanted to retrofit the Horizon data centers with new GPUs in the future, do you also need to retrofit the auxiliary equipment?

speaker
Kent Draper
Chief Commercial Officer

So out of that total order amount, it's fair to say the GPUs constitute the vast majority of it. But there are some substantial amounts in there for the back-end networking for the GPU clusters, which is the top-tier InfiniBand offering that's currently available. In terms of future-proofing, we'll have to see how much of that equipment may or may not be reusable for future generations of GPUs. As I was referring to earlier, the vast majority of our data center equipment and the way that we have structured the rack densities within the data center mean that the data center itself is future-proofed. But in terms of the specific equipment for this cluster, it remains to be seen whether that will be able to be reused.

speaker
Brett Knoblauch
Analyst, Cantor Fitzgerald

Perfect, thank you. And then maybe on the new 40,000 GPU order that sounds like it's going to be plugged in in Canada, you talked about a very efficient CapEx build for those data centers. Could you maybe elaborate a bit more on that? You know, when the AI craze first got started 18 months ago, you guys flagged that you were running GPUs up at Prince George that you built for less than a million dollars a megawatt. Are we closer to that number for this, or are we just well below what Horizon 1 would be on a cost-per-megawatt basis?

speaker
Kent Draper
Chief Commercial Officer

Yeah. So in terms of the basic transition of those data centers over to AI workloads, it is relatively minimal in terms of the capex that is required. The vast majority of the work is removing ASICs, removing the racks that the ASICs sit on, and replacing those with standard data center racks and PDUs, the power distribution units that can accommodate the AI servers. So that is relatively minimal. As we've discussed before, it's a matter of weeks to do that conversion, and from a CapEx perspective, it is not material. The one element that may be more material in terms of that conversion is adding redundancy, if required, to the data centres. That would typically cost around $2 million a megawatt if we need to do that. But obviously in the context of, you know, a full build-out of liquid-cooled capacity like we're seeing at Horizon, it's extremely capex efficient.

speaker
Brett Knoblauch
Analyst, Cantor Fitzgerald

Awesome. Thank you guys. I'll hop back in the queue. Congrats again.

speaker
Operator
Conference Operator

Thank you. The next question comes from Dylan Hesselin from Roth Capital Partners. Please go ahead.

speaker
Dylan Hesselin
Analyst, Roth Capital Partners

Hey, thanks for taking my questions, and passing on our congrats on the Microsoft deal as well. To start with Microsoft, was co-location ever on the table with them? Did they come to you asking for AI cloud, or how did those negotiations sort of fall out?

speaker
Dan Roberts
President and CEO

Just thinking about the best way to answer this. So we've been talking to Microsoft for a long period of time, and the nature of those conversations absolutely did evolve over time. Is their preference the cloud deal? Possibly. But at the end of the day, we wanted to focus on cloud, and that was the transaction we were comfortable with, so conversations really focused around that over the last six weeks or so. I think, if I may, I'd talk more generically around these hyperscale customers, because obviously we weren't just talking to Microsoft. I think there probably is a stronger preference from those to be looking at more co-location and infrastructure deals rather than cloud deals. But it's also the case that there's an appetite for a combination, so it may be that we do some co-location in the future. But yeah, I think different hyperscalers have different preferences. We'll entertain them all, but given the nature of the deal we did, with a 20% prepayment funding a third of CapEx and a 35%-plus equity IRR, we're feeling pretty good about pursuing AI cloud.

speaker
Dylan Hesselin
Analyst, Roth Capital Partners

Got it. Thank you. And just as a follow-up with the rest of Childress, is there any significance to the size of the Microsoft deal starting at 200 megawatts? Do they have interest in the rest of the campus? Have you talked to them about that yet?

speaker
Dan Roberts
President and CEO

So again, I'm going to divert the question a little bit because we've got some pretty strong confidentiality provisions. So let me talk generically. There is appetite from a number of parties to discuss cloud and other structures well above the 200 megawatts that's been signed with Microsoft.

speaker
Dylan Hesselin
Analyst, Roth Capital Partners

Okay, great. Thank you.

speaker
Operator
Conference Operator

Thank you. The next question comes from John Todaro from Needham. Please go ahead.

speaker
John Todaro
Analyst, Needham

Great. Thanks for taking my question, and congrats on the contract. I guess just one on that: as we dig a little bit more, are there any kind of penalties or anything related to the timeline of delivering capacity? Just wondering if there are guardrails around that. And then I'd like to follow up on CapEx.

speaker
Dan Roberts
President and CEO

There's always a penalty, whatever you do in life, if you don't do what you promise you're going to do. So we're very comfortable with the contractual tolerances that have been negotiated, the expected dates versus contractual penalties and other consequences. I can't comment more specifically beyond that on this call. But the other thing I would reiterate is we have never, ever missed a construction or commissioning date in our life as a listed company. So I think you can take a lot of comfort that if we've put something forward to Microsoft and agreed it there, and if we've put something forward to the market, our reputations are on the line, our track record is on the line. We're very confident we can deliver it, and potentially even exceed it.

speaker
John Todaro
Analyst, Needham

Got it, understood. And then just following up on the CapEx, that $14 to $16 million, I think it was on the data center side. Just wondering if there's anything kind of additional in there that would get it north of the colo items other folks are talking about, if maybe there's some networking or cabling included in that, or whether any contribution from tariffs is being considered there?

speaker
Kent Draper
Chief Commercial Officer

Happy to give some additional colour there. So, yes, in terms of networking, et cetera, again, as Dan mentioned in his presentation earlier, the Horizon campus is designed to be able to operate 100 megawatt superclusters. Now, that does require a significant level of additional infrastructure over and above what's needed to deliver smaller clusters, and so certainly some of the costs that are in the number that you mentioned are related to the ability to do that, and that will not necessarily be a requirement of every customer moving forward. So, you know, that probably is an element that is somewhat unique.

speaker
John Todaro
Analyst, Needham

Understood. Thank you, guys.

speaker
Operator
Conference Operator

Thank you. The next question comes from Stephen Glagola from Jones Trading. Please go ahead.

speaker
Stephen Glagola
Analyst, Jones Trading

Hey, thanks for the question. On your British Columbia GPUs, can you maybe just provide an update on where you guys stand with contracting out the remaining 12,000, I believe, GPUs of the initial 23,000 batch? And are you seeing any demand for your bare metal offering in BC outside of AI native enterprises? Thank you.

speaker
Kent Draper
Chief Commercial Officer

Yeah, happy to give an update there. We'd previously put out guidance a couple of weeks ago that we'd contracted 11,000 out of the 23,000 that were on order. Subsequent to that, we have contracted a bit over another 1,000 GPUs, and primarily the ones that are not yet contracted are the ones that are arriving latest in terms of delivery timelines. As I mentioned earlier, yeah, we are seeing an increased appetite from customers to pre-contract, but these are GPUs that are a little further out in terms of delivery schedules relative to the ones that have already been contracted. Having said that, we continue to see very strong levels of demand, and we're in late-stage discussions around a significant portion of the capacity that has not yet been contracted. We continue to see very good demand leading into the start of next year as well, and are receiving an increasingly large number of inbounds from a range of different customer classes. So you mentioned AI natives. Yes, that has been a portion of the customer base that we've serviced previously, but we are also servicing a number of enterprise customers on an inference basis. So it is a pretty wide-ranging customer class that we're servicing out of those British Columbia sites.

speaker
Stephen Glagola
Analyst, Jones Trading

Thanks, Kent. Appreciate it.

speaker
Operator
Conference Operator

Thank you. The next question comes from Joe Vafi from Canaccord Genuity. Please go ahead. Joe, your line is open if you'd like to ask your question. I'll move on to the next question. Oh, sorry.

speaker
Joe Vafi
Analyst, Canaccord Genuity

Yeah, sorry, guys. I'm really sorry. Congrats from me too on Microsoft. Just maybe, Dan, if you could kind of walk us through what you were thinking in your head. Clearly, you know, some awesome IRRs here on the Microsoft deal. But how are you thinking about risk on a cloud deal here versus a straight colo deal, which, you know, probably wouldn't have had the return, but, you know, maybe the risk profile may be lower there? And then I'll have a quick follow-up. Thanks.

speaker
Dan Roberts
President and CEO

Thanks, Joe. Look, it's funny, I actually see risk very differently. So, yeah, we've spoken about co-location deals with these hyperscalers, and if you model out a 7% to 8% starting yield on cost and run that through your financial model, what you'll generally see is that you'll struggle to get your equity back during the contracted term. And then you're relying on recontracting beyond the end of that 15-year period to get any sort of equity return. So in terms of risk, I would argue that there's a far better risk proposition implicit in the deal that we've signed and going down the cloud route. And then for the shorter-term contracts on the colo side, where you may not have a hyperscale credit, you're running significant GPU refresh risk against companies that don't necessarily have the balance sheet today to support confidence in that GPU refresh. So again, we think about it in business segments. Our data center business has got a great contract internally, linked to Microsoft as a tenant, and that data center itself is future-proofed, accommodating rack densities up to 200 kilowatts. And it's also the case that in five years, the optionality provides further downside protection. So upon expiry of the Microsoft contract, maybe we can run these GPUs for additional years, which we've seen with prior generations of GPUs like the A100s. But assuming that isn't the case, we've got a lot of optionality within that business. We could sign a co-location deal at that point. We could relaunch a new cloud offering using latest-generation GPUs. So my concern with these co-location deals is that what you're doing is transferring an interest or an exposure to an asset that is inherently linked to this exponential world of technology and demand, and the upside that that may entail, and you're swapping that for a bond position against counterparties of varying degrees of credit quality. So if you're swapping an asset for a bond exposure to a trillion-dollar hyperscaler, and you're kind of hoping you might get your equity back after the contracted period, I mean, that's one way to look at it. If you're swapping your equity exposure for a bond exposure to a smaller neocloud without a balance sheet, then is that a good decision for shareholders? We just haven't been comfortable.

speaker
Joe Vafi
Analyst, Canaccord Genuity

I get it, Dan. I mean, we've run some DCFs here on some colo deals in the last couple of months, and, you know, there's a lot to be learned when you do it, there's no doubt. And then just on this prepayment from Microsoft, I know you've got some strong NDAs here, but, you know, kind of a feather in your cap on getting that much in a prepayment. Anything else to say on, you know, maybe your qualifications, or how you and Microsoft came to the agreement to pre-fund the GPU purchases out of the box? Thank you.

speaker
Dan Roberts
President and CEO

Look, yeah, getting a third of your capex funded through a prepayment from the customer is fantastic from our perspective, and we're super appreciative of Microsoft coming to the table on that. What that allows us to do is to drive a really good IRR and return to equity for our shareholders. And again, linking back to what Anthony said earlier, we expect 35% equity IRRs from this transaction after accounting for an internal data center charge. So, trying to create that apples-and-apples comparison with a neocloud that has an infrastructure charge, even after that, we're looking at 35% plus. And also, what's really important to clarify is that the equity portion of that IRR we have assumed is funded with 100% ordinary equity, which, given our track record in raising convertibles and given the lack of any debt at a corporate level, is probably conservative again. So from a risk-adjusted perspective, linked to a trillion-dollar credit and with the ability to fund it, I mean, we're really happy with the transaction, and yeah, hopefully there's more to come.

speaker
Joe Vafi
Analyst, Canaccord Genuity

Great. Thanks, Dan.

speaker
Operator
Conference Operator

Thank you. The next question comes from Michael Donovan from Compass Point. Please go ahead.

speaker
Michael Donovan
Analyst, Compass Point

Thanks for taking my question and congrats on the progress. I was hoping you could talk more to your cloud software stack and the stickiness of your customers.

speaker
Kent Draper
Chief Commercial Officer

Yeah, I'm happy to take that one. To date, the vast majority of our customers have required a bare metal offering, and that is their preference. These are all highly advanced AI or software companies, like a Microsoft. They have significant experience in the space, and they want the raw compute and the performance benefits that that brings, having access to a bare metal offering and then being able to layer their own orchestration platform over the top of that. So it has been by design that we have been offering a bare metal service. It lends itself exactly to what our customers are looking for. Having said all of that, we obviously are continuing to monitor the space, continuing to look at what customers want, and we are certainly able to go up the stack and layer in additional software if it is required by customers over time. But today, as I said, we haven't really seen any material levels of demand for anything other than the bare metal service that we're currently offering.

speaker
Dan Roberts
President and CEO

And I think maybe just to add to that, Kent, if you step back and think about it, you're contracting with some of the largest, most sophisticated technology companies on the planet that want access to our GPUs to run their software. It's kind of an upside-down world to then turn around and say, oh, we'll do all the software and operating layer. Clearly they're in the position they are because they have a competitive advantage in that sense. They're just looking for the bare metal. I think as the market continues to develop over coming years, it may be the case that if you want to service smaller customers that don't have that internal capability or budget, then yes, maybe you will open up smaller segments of the market. But for a business like ours that is pursuing scale and monetising a platform that we spent the last seven years building, it's very hard to see how you get scale by focusing on software, which, I think everyone generally accepts, is going to be commoditised anyway in coming years, as compared to just selling through the bare metal and letting these guys do their thing on it.

speaker
Michael Donovan
Analyst, Compass Point

That makes sense. I appreciate that. You mentioned design work is complete for a direct fiber loop between Sweetwater 1 and 2. How should we think about how those two sites communicate with each other once they're live?

speaker
Kent Draper
Chief Commercial Officer

Yeah, I think really the best way to think about it is it just adds an additional layer of optionality as to the customers that would be interested in that and how we contract those projects. You know, there are a number of customers out there who are looking particularly for scale in terms of their deployments, and obviously being able to offer two gigawatts that can operate as an individual campus, even though the physical sites are separated, is something that we think has value, and that's why we pursued that direct fiber connection.

speaker
Michael Donovan
Analyst, Compass Point

Appreciate that. Thank you, guys.

speaker
Operator
Conference Operator

Thank you. At this time, we're showing no further questions. I'll hand the conference back to Dan Roberts for any closing remarks.

speaker
Dan Roberts
President and CEO

Great. Thanks, operator. Thanks, everyone, for dialing in. Obviously, it's been an exciting couple of months, and particularly last week. Our focus now turns to execution to deliver 140,000 GPUs through the end of 2026, but also to continuing the ongoing dialogue with a number of different customers around monetising the substantial power and land capacity we've got available and our ability to execute and deliver compute from that. So appreciate everyone's support. Look forward to the next quarter.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
