This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.

IREN Limited
11/6/2025
That said, we will continue to monitor customer demand closely and pursue growth in a disciplined, measured way. This full expansion to 140,000 GPUs will only require about 460 megawatts of power, representing roughly 16% of our total secured power portfolio. This leaves substantial optionality for future growth and, importantly, continued scalability across our portfolio. The key takeaway here is that we have substantial near-term growth being actively executed upon, but also significant additional organic growth ahead of us.

Turning now to slide eight, which highlights the British Columbia data centres supporting our expansion to 140,000 GPUs. At Prince George, our ASIC-to-GPU swap-out program is progressing well. The same process will soon extend to our Mackenzie and Canal Flats campuses, where we expect to migrate ASICs to GPUs with similar efficiency and speed. Together, these sites are allowing us to fast-track our growth in supporting high-performance AI workloads, scaling into what is becoming one of the largest GPU fleets in North America.

Turning to Childress, where we are now accelerating the construction of Horizons 1 to 4 to accommodate the phased delivery of NVIDIA GB300 NVL72 systems for Microsoft. We've significantly enhanced our original design specifications to meet hyperscale requirements and also further ensure durable long-term returns from our data center assets. The facilities have been engineered to Tier 3 equivalent standards for concurrent maintainability, ensuring continuous operations even during maintenance windows. A key feature of this next phase is the establishment of a network core architecture capable of supporting single 100 megawatt superclusters, a unique configuration that enables high-performance AI training for both current and next-generation GPUs. We're also incorporating flexible rack densities ranging from 130 to 200 kilowatts per rack, which allows us to accommodate future chip generations and the evolving power and density requirements without major structural upgrades. While these design enhancements have resulted in incremental cost increases, they provide long-term value protection, enabling our data centers to support multiple generations and reduce the recontracting risk typically associated with lower-spec builds. In short, we're building Childress not just for today's GPUs and the Microsoft contract in front of us, but also for the next generations of AI compute. Beyond the accelerated development of Horizons 1 through to 4, the remaining 450 megawatts of secured power at Childress, as you can see in the image on screen, provides substantial expansion potential for future Horizons, numbered five through to 10. Design work is underway to enable liquid-cooled GPU deployments across the entire site, positioning us to scale seamlessly alongside customer demand.

Finally, turning to Sweetwater, our flagship data center hub in West Texas, which has been somewhat overshadowed in recent months by the activity in Childress and Canada. At full build-out, Sweetwater will support up to two gigawatts, 2,000 megawatts, of gross capacity, all of which has been secured from the grid. As shown in the chart, this single hub rivals, and in most cases exceeds, the entire scale of total data center markets today. While the recent headlines have naturally been dominated by our AI cloud expansion at other sites, Sweetwater is a pretty exciting platform asset, giving us the capability to continue servicing the wave of AI compute demand.
Sweetwater One energization remains on schedule, with more than 100 people mobilized on site to support construction of what is becoming one of the largest high-voltage data center substations in the United States. All exciting stuff. With that, I'll now hand over to Anthony, who will walk through our Q1 FY26 results in more detail.
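For reference, the power figures quoted above can be sanity-checked with simple arithmetic. The sketch below is illustrative only and simply takes the 460 megawatt and roughly 16% figures as stated on the call; it is not from the company's materials.

```python
# Back-of-the-envelope check of the power figures quoted in the prepared remarks.
# Inputs are the numbers stated on the call; nothing here is disclosed beyond that.

gpus = 140_000              # stated expansion target
expansion_mw = 460          # megawatts attributed to the 140,000 GPU expansion
share_of_portfolio = 0.16   # stated as "roughly 16%" of total secured power

implied_portfolio_mw = expansion_mw / share_of_portfolio
kw_per_gpu_all_in = expansion_mw * 1_000 / gpus  # facility-level draw, not chip TDP

print(f"Implied secured power portfolio: ~{implied_portfolio_mw:,.0f} MW")  # ~2,875 MW
print(f"Implied all-in power per GPU: ~{kw_per_gpu_all_in:.1f} kW")         # ~3.3 kW
```

Taken at face value, the quoted figures imply a secured portfolio of roughly 2.9 gigawatts and an all-in draw on the order of 3 kilowatts per GPU, including cooling and facility overhead.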
Thanks, Dan, and thanks, everyone, for your attendance today. Continued operational execution was reflected in another quarter of strong financial performance. Q1 FY26 marked our fifth consecutive quarter of record revenues, with total revenue reaching $240 million, up 28% quarter over quarter and 355% year over year. Operating expenses increased primarily on account of higher depreciation, reflecting ongoing growth in our platform, and higher SG&A, the latter primarily driven by a materially higher share price, resulting in acceleration of share-based payment expense and a higher payroll tax expense associated with employees. $63 million. were both significantly up, largely on account of unrealised gains on prepaid forward and capped call transactions entered into in connection with our convertible note financings. Adjusted EBITDA was $92 million, reflecting continued margin strength, partially offset by that higher payroll tax of $33 million accrued in the quarter on account of strong share price performance.

Turning now to our recently announced AI cloud partnership with Microsoft. As Dan mentioned, this is a very significant milestone for IREN. It not only delivers strong financial returns, but also creates a significant long-term strategic partnership for the business. Focusing on the financials, the $9.7 billion contract is expected to deliver approximately $1.9 billion in annual revenue once the four phases come online, with an estimated 85% project EBITDA margin. This strong margin, which reflects our vertically integrated model, incorporates all direct operating expenses across both our cloud and data center operations supporting the transaction, including power, salaries and wages, maintenance, insurance, and other direct costs. These cash flows deliver an attractive return on the cloud investment, i.e. the $5.8 billion capex for the GPUs and ancillaries, after deducting an appropriate internal co-location charge, ensuring that the project delivers robust cloud returns, as well as an attractive return on our long-term investment in the Horizon data centres, which will deliver returns for many years into the future.

The transaction also has a number of features that allow us to undertake it in a capital-efficient way. Firstly, the payments for the capex are aligned with the phased delivery of the GPUs across calendar year 2026, as we deliver those four phases. Secondly, the $1.9 billion in customer prepayments, being 20% of total contract revenue, paid in advance of each tranche, provides funding for circa one-third of the funding requirement at the outset. Thirdly, the combination of the latest generation of GPUs and the very strong credit profile of Microsoft should allow us to raise significant additional funding secured against the GPUs and the contracted cash flows on attractive terms. While the final outcome will be subject to a range of considerations and factors, we are targeting circa $2.5 billion through such an initiative, and depending on final terms and pricing, there is meaningful upside to that, noting again the very high quality of our counterparty. We also have a range of options available to fund the remaining $1.4 billion, including existing cash balances, operating cash flows and a mix of equity, convertible notes and corporate instruments. On that note, turning more generally to capex and funding: we continue to focus on deepening our access to capital markets and diversifying our sources of funding.
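The funding arithmetic described above can be reproduced from the figures quoted in the remarks. The sketch below uses only those stated numbers; rounding means the pieces reconcile only approximately, and it is not a company disclosure.

```python
# Sketch of the Microsoft contract funding arithmetic, using only figures quoted
# in the prepared remarks.

contract_value = 9.7e9        # total contract value ($9.7bn)
annual_revenue = 1.9e9        # ~$1.9bn per year once all four phases are online
ebitda_margin = 0.85          # ~85% project EBITDA margin
gpu_capex = 5.8e9             # GPUs and ancillaries ($5.8bn)
prepayment_rate = 0.20        # customer prepayment, 20% of total contract revenue
target_debt = 2.5e9           # targeted financing secured against GPUs / cash flows

prepayment = prepayment_rate * contract_value             # ~$1.9bn
share_of_capex = prepayment / gpu_capex                    # ~ one third
residual_funding = gpu_capex - prepayment - target_debt    # ~$1.4bn
annual_ebitda = annual_revenue * ebitda_margin             # ~$1.6bn

print(f"Prepayment: ~${prepayment/1e9:.1f}bn ({share_of_capex:.0%} of GPU capex)")
print(f"Remaining funding need: ~${residual_funding/1e9:.1f}bn")
print(f"Implied annual project EBITDA: ~${annual_ebitda/1e9:.1f}bn")
```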
We issued $1 billion in zero coupon convertible notes during October, which was extremely well supported. And we also secured an additional $200 million in GPU financing to support our AI cloud expansion in Prince George, bringing total GPU-related financings to $400 million to date at attractive rates. Taking into account recent fundraising initiatives, our cash at the end of October stood at $1.8 billion. Our upcoming capex program, which includes the construction of the Horizon data centers for the Microsoft transaction, will be met from a combination of this strong starting cash position, operating cash flows, the Microsoft prepayments as just noted, and other financing streams that are underway. These include the GPU financing facilities that we discussed, as well as a range of other options under consideration, from other forms of secured lending against our fleet of GPUs and data centers through to corporate-level issuance, whilst maintaining an appropriate balance between debt and equity to preserve a strong balance sheet. With that, we'll now turn the call over to Q&A.
Thank you. If you wish to ask a question, please press star 1 on your telephone and wait for your name to be announced. If you wish to cancel your request, please press star then 2. If you're using a speakerphone, please pick up the handset to ask your question. The first question today comes from Nick Giles from B. Riley Securities. Please go ahead.
Yeah, thank you, operator. And hi, everyone. Thanks so much for the update today. Guys, I want to congratulate you on this significant milestone with Microsoft. This was really great to see. I have a two-part question. Dan, you mentioned strategic value, and I was first hoping you could expand on what this deal does from a commercial perspective. And then secondly, I was hoping you could speak to the overall return profile of this deal and how you think about hurdle rates for future deals. Thank you very much.
Sure. Thanks, Nick. Appreciate the ongoing support. So in terms of the strategic value, I think undoubtedly proving that we can serve one of the largest technology companies on the planet has a little bit of strategic value. But below that, the fact that this is our own proprietary data centre design, and we've designed everything from the substation down to the nature of the GPU deployment, and that has been deemed acceptable by a trillion-dollar company, I think that's got a bit of strategic value, both in terms of demonstrating to capital markets and investors that we are on the right track, but also, importantly, in terms of the broader customer ecosystem and that validation. And look, we've seen that play out over the days since the announcement. In terms of hurdle rates and returns, I think it's worth, Anthony, if you can, jumping into this. I think it's fair to say that IRRs, hurdle rates, returns and financial models have dominated our lives for the last six weeks. So there's probably a little bit we can outline in this regard.
Sure. Thanks, Dan. And thanks for the question. Just in terms of the returns on the transaction, obviously, as I noted in the introductory comments, when we look at the cloud returns, we take away what we think to be an arm's-length co-location rate, so we effectively charge the deal for the cost of the data center capacity. After we take that into account, on an unlevered basis and assuming that there are zero cash flows or residual value associated with the GPUs after the term of the contract, we expect an unlevered IRR in the low double digits. Obviously, we'll be looking to add some leverage to the capital structure for the transaction, as we also discussed. And once we take that target $2.5 billion of additional leverage into account, you're achieving a levered IRR in the order of circa 25% to 30%. Obviously, that is assuming that $2.5 billion package. And it also assumes that the remaining funding is coming from equity as opposed to other sources of capital, which we might also have access to. I'd also note that there might well be upside on that $2.5 billion. Obviously, at a $3 billion leverage package against the GPUs on a secured financing basis, you could see that levered return increase by, you know, circa 10%. In terms of the RV, in those numbers we're reflecting zero economic value in the GPUs at the end of the term. If, for example, you were to assume a 20% RV, obviously that has a material impact: unlevered IRRs would increase to high teens, and your levered IRRs would be somewhere between, you know, 35% and 50%, depending on your leverage assumptions.
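For readers unfamiliar with the mechanics being described, the toy model below shows how leverage and a residual-value assumption change an equity IRR. All figures in it are invented round numbers for illustration; it is not the company's model and is not intended to reproduce the IRRs quoted on the call.

```python
# Toy illustration of leverage and residual-value effects on equity IRR.
# Every input is a hypothetical round number chosen only to show the mechanics.

def irr(cashflows, lo=-0.95, hi=10.0):
    """Internal rate of return via bisection (cashflows[0] is the time-0 outlay)."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

capex, years = 100.0, 5          # hypothetical project cost and contract term
annual_cf = 27.0                 # hypothetical net project cash flow per year
debt, rate = 50.0, 0.08          # hypothetical secured loan, fully amortising
residual = 0.20 * capex          # optional 20% residual value at end of term

pmt = debt * rate / (1 - (1 + rate) ** -years)   # annual debt service

def profile(levered: bool, rv: float) -> list[float]:
    equity_out = capex - debt if levered else capex
    cf = annual_cf - pmt if levered else annual_cf
    return [-equity_out] + [cf] * (years - 1) + [cf + rv]

for name, lev, rv in [("unlevered, zero RV", False, 0.0),
                      ("levered, zero RV", True, 0.0),
                      ("unlevered, 20% RV", False, residual),
                      ("levered, 20% RV", True, residual)]:
    print(f"{name:>20}: {irr(profile(lev, rv)):.1%}")   # roughly 11%, 14%, 15%, 21% here
```

The point is only directional: adding debt and assuming some end-of-term GPU value both lift the equity IRR materially, which is the relationship described in the answer above.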
Yeah, I think maybe just to jump in as well. Thanks, Anthony. That's all absolutely correct. And there are a lot of numbers in there, which is demonstrative of the amount of time we've spent thinking about IRRs. So I think just to reiterate a couple of points: one is we've clearly divided out our business segments into standalone operations for the purposes of assessing risk and return against a prospective transaction. So to be really clear, all of those AI cloud IRRs assume a co-location charge. So they assume a revenue line for our data centres. We've assumed our data centers earn internally $130 per kilowatt per month, escalating, which is absolutely a market rate of return, particularly considering the first five years is underwritten by a hyperscale credit. So that's probably the first point I'd make, but it's also really important to mention that we've optimized elsewhere. So for the 76,000 GPUs that we've procured for this contract, at a $5.8 billion price, Dell have really looked after us to the point where there's an inbuilt financing mechanism in that contract, where we don't have to pay for any GPUs until 30 days after they're shipped. So there are further enhancements there. And then the final point I'd reiterate is this 20% prepayment, which I don't believe we've seen elsewhere, accounts for a third of the entire capex of the GPU fleet. And I guess we've been asked previously why we would prefer to do AI cloud versus co-location. As one single small data point, we are getting paid a third of the capex upfront here, as compared to having to give away big chunks of equity in our company to get access to a co-location deal. So we're really pleased that this leads us towards that $3.4 billion in ARR by the end of 2026, on returns that are pretty attractive. Yeah, it's a good result.
Anthony, Dan, I really appreciate all the detail there. One more if I could. I was just wondering if you could give us a sense for the number of GPUs that will ultimately be deployed as part of the Microsoft deal. And then as we look out to year six and beyond, I mean, can you just speak to any of the kind of future proofing you've done of the Horizon platform and what can ultimately be accommodated in the long term for future generations of chips?
I'm happy to jump in and take that one. So in terms of the number of GPUs to service this contract, I'd draw your attention to some of our previous releases, where we've said that each phase of Horizon would accommodate 19,000 GB300s, and obviously we're talking about four phases here with respect to that. In terms of future-proofing of the data centers, there are a number of elements to it, but the primary one is that we have designed for rack densities here that are capable of handling well in excess of the GB300 rack architecture. And to give you specific numbers there, the GB300s are around 135 kilowatts a rack for the GPU racks, and our design at the Horizon facilities can accommodate up to 200 kilowatts a rack. So that is the primary area where we have future-proofed the design. But as Dan also mentioned in the remarks on the presentation, we have enhanced the design in a number of ways, including effectively what is full Tier 3 equivalent concurrent maintainability. So there are a number of elements that have been accommodated into the data centers to ensure that they can continue to support multiple generations of GPUs.
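The fleet-size and rack-density figures in that answer can be checked directly; a minimal sketch, assuming the 19,000 GB300s per phase, four phases, roughly 135 kilowatts per GB300 rack and the 200 kilowatt design ceiling quoted above:

```python
# Simple check of the fleet-size and rack-density figures quoted in the answer above.

phases = 4
gpus_per_phase = 19_000           # "each phase of Horizon would accommodate 19,000 GB300s"
gb300_rack_kw = 135               # approximate GB300 NVL72 GPU-rack draw quoted on the call
design_rack_kw = 200              # Horizon design ceiling per rack

total_gpus = phases * gpus_per_phase
headroom = design_rack_kw / gb300_rack_kw - 1

print(f"GPUs across four phases: {total_gpus:,}")             # 76,000
print(f"Rack-density headroom over GB300: ~{headroom:.0%}")   # ~48%
```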
Very helpful, Kent. Guys, congratulations again and keep up the good work.
Thank you. The next question comes from Paul Golding from Macquarie. Please go ahead.
Thanks so much for taking the question, and congrats on the deal and all the progress with HPC. I wanted to ask, I guess this is a quick follow-on to the IRR question. Just on our back-of-the-envelope math, it looks like pricing per GPU hour may be on the rise, or at the higher end of that $2 to $3 range assuming full utilization, so presumably potentially even higher. How should we think about the pricing dynamics in the marketplace right now on cloud, given the success of this deal and what seems to be fairly robust pricing? And then I have a follow-up. Thank you.
Sure.
Sorry. You go ahead, Dan.
Sorry. Look, I'll let Kent talk a bit more about the market dynamic, but it is absolutely fair to say that we're seeing a lot of demand, and that demand appears to increase month on month. In terms of the specific dollars per GPU hour, we haven't specified that exactly. However, we have tried to give a level of detail in our disclosures which allows people to work through that. I think importantly for us, rather than focusing on dollars per GPU hour, where I think your statement is correct, the focus is on the fundamental risk-return proposition of any investment. And when we've got the ability to invest in an AI cloud delivering what is likely to be in excess of 35% levered IRRs against a Microsoft credit, I mean, you kind of do that every day of the week.
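The analyst's back-of-the-envelope rate per GPU-hour can be reproduced from the contract figures quoted earlier in the call (roughly $1.9 billion of annual revenue across roughly 76,000 GPUs), assuming 100% utilisation. Actual contract pricing has not been disclosed; this is illustrative only.

```python
# Reproducing the back-of-the-envelope implied rate per GPU-hour, assuming the
# contract figures quoted earlier in the call and full utilisation.

annual_revenue = 1.9e9    # ~$1.9bn per year once all four phases are online
gpus = 76_000             # 4 phases x 19,000 GB300s
hours_per_year = 8_760

implied_rate = annual_revenue / (gpus * hours_per_year)
print(f"Implied revenue per GPU-hour at full utilisation: ~${implied_rate:.2f}")  # ~$2.85
```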
Yeah, thanks, Dan. And Paul, with regard to your specific question around demand, we continue to see very good levels of demand across all the different offerings we have. The air-cooled servers that we are installing up in our facilities in Canada lend themselves very well to customers who are looking for 500 to 4,000 GPU clusters and want the ability to scale rapidly. As we've discussed before, transitioning those existing data centers over from their current use case to AI workloads is a relatively quick process, and that allows us to service the growth requirements of customers in that class very well. And case in point, we've been able to pre-contract a number of the GPUs that we purchased for the Canadian facilities well in advance of them arriving at the sites. And this is something that customers have historically been pretty reticent to do, but that level of demand exists in the market, as well as ongoing trust in the credibility of our platform with both existing and new customers, which is allowing us to take advantage and pre-contract a lot of that capacity. And then obviously, with respect to the Horizon One build-out for Microsoft, this is the top-tier liquid-cooled capacity from NVIDIA. We continue to see extremely strong demand for that type of capacity. And the fact that we are able to offer that means that we can genuinely serve all customer classes, from hyperscalers, the largest foundational AI labs and largest enterprises with that liquid-cooled offering, down to top-tier AI startups and smaller-scale inference enterprise users at the BC facilities.
Thanks for that color, Kent and Dan. As a follow-up, as we look out to Sweetwater One energization coming up fairly soon here in April, are you able to speak to any inbound interest you're getting on cloud at that site? I know it's early days just from a construction perspective, maybe for the facilities themselves, but any color there and maybe whether you would consider hosting at that site given the return profile and potential cash flow profile that you would get from engaging in the cloud business over a period of time. Thank you.
Yeah, in terms of the level of interest in discussions that we're having, we're seeing a strong degree of interest across all of the sites, including Sweetwater as well. Obviously, very significant capacity available at Sweetwater, as Dan mentioned, with initial energisation there in April 2026, which is extremely attractive in terms of the scale and time to power. So I think it's very fair to say that we're seeing strong levels of interest across all the potential service offerings. As it relates to GPU as a service and co-location, as previously, we will continue to do what we think is best in terms of risk adjusted returns. Anthony outlined the risk adjusted returns that we're seeing in GPU as a service specifically at the moment. And as we've outlined over the past number of months, that does look more attractive to us today. But as we continue to see increasing supply demand imbalance within the industry, that may well feed through into co-location returns where it makes sense to do that in the future. But as it stands today, certainly the return profile that we're seeing in GPU as a service, we think is incredibly attractive.
Great. Thanks so much and congrats again.

Thank you. The next question comes from Brett Knoblauch from Cantor Fitzgerald. Please go ahead.

Hi, guys. Thanks for taking my question. On the $5.8 billion order from Dell, can you maybe parse out how much of that is allocated to GPUs versus the ancillary equipment? And on the ancillary equipment, say you wanted to retrofit the Horizon data centers in the future, do you also need to retrofit the ancillary equipment?

...total order amount, and it's fair to say the GPUs constitute the vast majority of it, but there are some substantial amounts in there for the back-end networking for the GPU clusters, which is the top-tier InfiniBand offering that's currently available. In terms of future-proofing, we'll have to see how much of that equipment may or may not be reusable for future generations of GPUs. As I was referring to earlier, the vast majority of our data centre equipment and the way that we've structured the rack densities within the data centre mean that the data centre itself is future-proofed. But in terms of the specific equipment for this cluster, it remains to be seen whether that will be able to be reused.

Perfect. Thank you. And then maybe on the 10 to 40,000 GPUs that sound like they're going to be plugged in in Canada, you talked about maybe a very efficient build for those data centers. Can you maybe elaborate a bit more on that? I know when the AI cloud maybe first got started 18 months ago, you guys were running GPUs that I'm pretty sure you built for less than a million dollars a megawatt. Are we close to that number for this, or are we just well below, maybe, what it provides for on a cost-per-megawatt basis?

Yes, so in terms of the basic transition of those data centres over to AI workloads, it is relatively minimal in terms of the capex that is required. The vast majority of the work is removing ASICs, removing the racks that the ASICs sit on, and replacing those with standard data center racks and PDUs, the power distribution units, that can accommodate those AI servers. So that is relatively minimal. As we've discussed before, it's a matter of weeks to do that conversion. And from a capex perspective, it is not material. The one element that may be more material in terms of that conversion is adding redundancy, if required, to the data centres; that would typically cost around $2 million a megawatt if we need to do it. But obviously, in the context of, you know, a full build-out of liquid-cooled capacity like we're seeing at Horizon, it's extremely capex efficient.

Awesome. Thank you, guys. I'll hop back in the queue. Congrats again.

Thank you. The next question comes from Dylan Heslin from Roth Capital Partners. Please go ahead.
Hey, thanks for taking my questions, and congrats on the Microsoft deal as well. To start, with Microsoft, was co-location ever on the table with them? Did they come to you asking for AI cloud, or how did those negotiations sort of fall out?
I'm just thinking about the best way to answer this. So we've been talking to Microsoft for a long period of time, and the nature of those conversations absolutely did evolve over time. Was their preference a co-location deal? Possibly. But at the end of the day, we wanted to focus on cloud, and that was the transaction we were comfortable with. So conversations really focused around that over the last six weeks or so. I think, if I may, I'd talk more generically around these hyperscale customers, because obviously we were just talking about Microsoft. I think there probably is a stronger preference from those customers to be looking at more co-location and infrastructure deals rather than cloud deals. But it's also the case that there's an appetite for a combination. So maybe we do some co-location in the future. But yeah, I think different hyperscalers have different pressures. We'll entertain them all, but given the nature of the deal we did, with a 20% prepayment funding a third of capex and a 35%-plus equity IRR, we're feeling pretty good about pursuing AI cloud with the rest of Childress.

Is there any significance to the size of the Microsoft deal starting at 200 megawatts? Do they have interest in the rest of the campus? Have you talked to them about that yet?

I'll divert the question a little bit, because we've got some pretty strong confidentiality provisions. So let me talk generically. There is appetite from a number of parties in discussing cloud and other structures well above the 200 megawatts that's been signed with Microsoft.

Okay, great. Thank you.

Thank you. The next question comes from John Todaro from Needham. Please go ahead.

Great. Thanks for taking my question, and congrats on the contract. I guess just one on that: as we think a little bit more about any kind of penalties or anything related to the timeline of delivering capacity, just wondering if there are guardrails around that, and then maybe a little follow-up on that.

There's always a penalty, whatever you do in life, if you don't do what you promised you were going to do. So we're very comfortable with the contractual tolerances that have been negotiated, the expected dates versus contractual penalties and other consequences. I can't comment more specifically beyond that on this call. But the other thing I would reiterate is we have never, ever missed a construction or commissioning date in our life as a listed company. So I think you can take a lot of comfort that if we put something forward to the market, our reputations are on the line, our track record is on the line, and we're going to be very confident we can deliver it and potentially even exceed it.

Got it. Understood. And then just following up on the capex, that $14 to $16 million on the data center side, just wondering if there's anything kind of additional in there that would get it north of the items you were talking about, if maybe there's some networking or cabling included in that, or if any contributions from tariffs are being considered there.

I'm happy to give some additional colour there. So yes, in terms of networking, et cetera, again, as Dan mentioned in his presentation earlier, the Horizon campus is designed to be able to operate 100 megawatt superclusters. Now, that does require a significant level of additional infrastructure over being able to deliver smaller clusters, and so certainly some of the costs that are in the number that you mentioned are related to the ability to do that.
And that will not necessarily be a requirement of every customer moving forward. So that probably is an element that is somewhat unique.

Thank you, guys.

Thank you. The next question comes from Steven. Please go ahead.

Hey, thanks for the question. On your British Columbia GPUs, can you maybe provide an update on where you guys stand with contracting out the remaining 12,000, I believe, GPUs of the initial 23,000 batch? Are you seeing any demand for your offering in BC outside of AI-native enterprises? Thank you.

Yeah, happy to give you an update there. We'd previously put out guidance a couple of weeks ago that we'd contracted 11,000 out of the 23,000 that were on order. Subsequent to that, we have contracted a bit over another thousand GPUs, and primarily the ones that are not yet contracted are the ones that are arriving latest in terms of delivery timelines. As I mentioned earlier, we are seeing an increased appetite from customers to pre-contract. These are GPUs that are a little further out in terms of delivery schedules relative to the ones that have already been contracted. Having said that, we continue to see very strong levels of demand, and we're in late-stage discussions around a significant portion of the capacity that has not yet been contracted. We continue to see very good demand leading into the start of next year as well, and are receiving an increasingly large number of inbounds from a range of different customer classes. So you mentioned AI natives. Yes, that has been a portion of the customer base that we've serviced previously. But we are also servicing a number of enterprise customers on an inference basis. So it is a pretty wide-ranging customer class that we're servicing out of those British Columbia sites.

Thanks. Appreciate it.

Thank you. The next question comes from Joe Vafi from Canaccord Genuity. Please go ahead. Joe, your line is open if you'd like to ask your question. I'll move on to the next question.

Yeah. Sorry, guys. Really sorry. Congrats from me too on Microsoft. Just maybe, Dan, if you could kind of walk us through... what you were thinking in your head. Clearly, you know, some awesome IRRs here on the Microsoft deal, but how are you thinking about risk on a cloud deal here versus a straight colo deal, which, you know, probably wouldn't have had the return, but, you know, maybe the risk profile over there? Just a quick follow-up, thanks.

Thanks, Joe. Look, it's funny. I actually see risk very differently. So, yeah, we've spoken about co-location deals with these types of hyperscalers, and if you model that out at a 7% to 8% starting yield on cost and run that through your financial model, what you'll generally see is that you'll struggle to get your equity back during the contracted term. And then you're relying on re-contracting beyond the end of that 15-year period to get any sort of equity return. So in terms of risk, I would argue that there's a far better risk proposition implicit in the deal that we've signed going down the cloud path. And then for the shorter-term contracts on the colo side, where you may not have a hyperscale credit, you're running significant GPU refresh risk against companies that don't necessarily have the balance sheet today to support confidence in that GPU refresh. So again, we think about it in business segments. We think our data centre business has got a great contract internally, linked to Microsoft as a tenant. And that data centre itself is future-proofed, accommodating up to 200 kilowatt rack densities.
And it's also the case that in five years, the optionality provides further downside protection. So upon expiry of the Microsoft contract, maybe we can run these GPUs for additional years, which we've seen with prior generations of GPUs like the A100s. But assuming that isn't the case, we've got a lot of optionality within that business. We could sign a co-location deal at that point. We could relaunch a new cloud offering using latest-generation GPUs. So my concern with these co-location deals is that what you're doing is transferring an interest or an exposure to an asset that is inherently linked to this exponential world of technology and demand, and the upside that that may entail, and you're swapping that for a bond position in varying degrees of credit with the counterparties. So if you swap an asset for a bond exposure to a trillion-dollar hyperscaler and you're kind of hoping you might get your equity back after the contracted period, I mean, that's one way to look at it. If you swap your equity exposure for a bond exposure in a smaller neocloud without a balance sheet, then is that a good decision for shareholders? We just haven't been comfortable.

I get it, Dan. We've run some DCFs here on a couple of deals in the last couple of months, and, you know, there's a lot to be learned when you do it, there's no doubt. And then just on this prepayment from Microsoft, I know you've got some strong NDAs here, but it's kind of a feather in your cap getting that much in a prepayment. Anything else to say on, you know, maybe your qualifications, or how you and Microsoft came to the agreement to pre-fund the GPU purchases out of the box? Thank you.

Look, yeah, getting a third of the capex funded through a prepayment from the customer is fantastic from our perspective, and we're super appreciative of Microsoft coming to the table on that. What that allows us to do is to drive a really good IRR and return to equity for our shareholders. And again, linking back to what Anthony said earlier, we see 35% equity IRRs from this transaction after accounting for an internal data centre charge. So, trying to create that apples-and-apples comparison with a neocloud that has an infrastructure charge, even after that, we're looking at 35-plus. And also, what's really important to clarify is that the equity portion of that IRR we have assumed is funded with 100% ordinary equity, which, given our track record in raising convertibles and given the lack of any debt at a corporate level, is probably conservative. So again, from a risk-adjusted perspective, linked to a trillion-dollar credit and the ability to fund it, I mean, we're really happy with the transaction, and hopefully there's more to come.

Great. Thanks, Dan.

Thank you. The next question comes from Michael Donovan from Compass Point. Please go ahead.

Thanks for taking my question, and congrats on the progress. I was hoping you could talk more to your cloud software stack and the stickiness of your customers.

Yeah, I'm happy to take that one. To date, the vast majority of our customers have required a bare metal offering, and that is their preference. These are all highly advanced AI or software companies, like Microsoft. They have significant experience in the space, and they want the raw compute and the performance benefits that that brings.
That means having access to a bare metal offering and then being able to layer their own orchestration platform over the top of it. So it has been by design that we've been offering a bare metal service that lends itself exactly to what our customers are looking for. Having said all of that, we obviously are continuing to monitor the space, continuing to look at what customers want, and we're certainly able to go up the stack and layer in additional software if it is required by customers over time. But today, as I said, we haven't really seen any material levels of demand for anything other than the bare metal service that we're currently offering.

I think maybe just to add to that, if you step back a little bit and think about it: you can contract in with some of the largest, most sophisticated technology companies on the planet that want access to our GPUs to run their software. It's kind of upside down to then turn around and say, oh, we'll do all the software and orchestration layer. Clearly, they're in the position they are because they have a competitive advantage in that. So they're just looking for the bare metal. I think, as the market continues to develop over the coming years, it may be the case that if you want to service smaller customers that don't have that internal capability or budget, then yes, maybe we will open up smaller segments of the market. But for a business like ours that is pursuing scale on a platform that we've spent the last seven years building, it's very hard to see how you get scale by focusing on software, which I think everyone generally accepts is going to be commoditised anyway in coming years, as compared to just selling the bare metal and letting these guys do their thing on it.

That makes sense. I appreciate that. You mentioned design work is complete for a direct fibre loop between Sweetwater One and Two. How should we think about how those two sites communicate with each other once they're live?

I think really the best way to think about it is it just adds an additional layer of optionality as to the customers that would be interested in that and how we contract those projects. There are a number of customers out there who are looking particularly for scale in terms of their deployments, and obviously being able to offer two gigawatts that can operate as an individual campus, even though the physical sites are separated, is something that we think has value, and that's why we pursued that direct fibre connection.

Appreciate that. Thank you, guys.

Thank you. At this time, we're showing no further questions. I'll hand the conference back to Dan Roberts for any closing remarks.

Great. Thanks, Alfreda. Thanks, everyone, for dialling in. Obviously, it's been an exciting couple of months, particularly last week. Our focus now turns to execution, to delivering 140,000 GPUs through the end of 2026, but also to continuing the ongoing dialogue with a number of different customers around monetising the substantial power we've got available and our ability to execute and deliver compute. With that, we appreciate everyone's time today.