Earnings Labs

IREN Limited (IREN)

Q4 2025 Earnings Call · Thu, Aug 28, 2025

$44.23

-8.52%


Same-Day

+14.93%

1 Week

+13.50%

1 Month

+103.69%

vs S&P

+101.03%

Transcript

Operator

Operator

Good day, and thank you for standing by. Welcome to the IREN FY 2025 Results Conference Call. [Operator Instructions] Please be advised that today's conference is being recorded. [Operator Instructions] I would now like to hand the conference over to your speaker today, Mike Power, VP, Investor Relations.

Mike Power

VP, Investor Relations

Thank you, operator. Good afternoon, and welcome to IREN's FY 2025 Results Presentation. My name is Mike Power, VP of Investor Relations. And with me on the call today are Daniel Roberts, Co-Founder and Co-CEO; Belinda Nucifora, CFO; Anthony Lewis, Chief Capital Officer; and Kent Draper, Chief Commercial Officer. Before we begin, please note that this call is being webcast live with a presentation. For those who have dialed in via phone, you can elect to ask a question via the moderator after our presentation. I would also like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company. Listeners should not place undue reliance on forward-looking information or statements. Please refer to the disclaimer on Slide 2 of the accompanying presentation for more information. Thank you, and I will now turn the call over to Dan Roberts.

Daniel Roberts

Co-Founder and Co-CEO

Thanks, Mike. Good afternoon, everyone, and thank you for joining our FY 2025 earnings call. Today, we will provide an update on our financial results for the fiscal year ended June 30, along with some operational highlights and strategic updates across our business verticals. We'll then end the call with Q&A. FY '25 was a breakout year for us, both operationally and financially. We delivered record results across the board, including 10x EBITDA growth year-on-year and strong net income, which Belinda will discuss shortly. Operationally, we scaled at an unprecedented pace. We increased our contracted grid-connected power by over one-third to nearly 3 gigawatts and more than tripled our operating data center capacity to 810 megawatts, all at a time when power, land and data center shortages continue to persist across the industry. We expanded our Bitcoin mining capacity 400% to 50 exahash and, in the process, cemented our position as the most profitable large-scale public Bitcoin miner. At the same time, we made huge strides in AI, scaling GPU deployments to support a growing roster of customers across both training and inference workloads. We also commenced construction of Horizon 1, our first direct-to-chip liquid-cooled AI data center, and Sweetwater, our 2-gigawatt data center hub in West Texas, one of the largest data center developments in the world and a cornerstone of our future growth plans. These achievements underscore the strength of our execution and the earnings potential of our expanding data center and compute platform. We expect this momentum to carry into FY '26 and beyond as we realize the revenue potential of our 50 exahash platform and advance our core AI growth initiatives. Reflecting on current operations, our AI cloud business is scaling rapidly with more than 10,000 GPUs online or being commissioned in…

Belinda Nucifora

CFO

Thank you, Dan. Good morning to those in Sydney, and good afternoon to those in North America. As noted in our recent disclosures, we've completed our transition to U.S. domestic issuer status from the first of July of this year. As such, we've reported our full year results for the period ended 30 June 2025 under U.S. GAAP and the required SEC regulations. For the fourth quarter of FY '25, we delivered record revenue of $187 million, an increase of $42 million from the previous quarter, primarily due to record Bitcoin mining revenue of $180 million as we operate at 50 exahash. During the quarter, we also delivered AI cloud revenue of $7 million. Our Bitcoin mining business continues to perform strongly, supported by best-in-class fleet efficiency of 15 joules per terahash and low net power costs of $0.035 per kilowatt hour in Q4. Whilst our operating expenses increased to $114 million, primarily due to overheads and depreciation costs associated with our expanded data center platform and increased Bitcoin mining and GPU hardware, we delivered a strong bottom line of $177 million. High-margin revenues from our Bitcoin mining operations were a key driver of this profitability, with an all-in cash cost of $36,000 per Bitcoin mined versus an average realized price of $99,000. Note that these all-in costs incorporate expenses across our entire business, including the AI verticals, underscoring the strength of our platform. We closed the financial year with approximately $565 million of cash and $2.9 billion in total assets, giving us a strong balance sheet to support the next stage of growth. I'll now hand back to Dan to discuss the exciting growth opportunities that continue for IREN.
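As a rough sanity check on the per-coin economics quoted above, the figures imply the following cash margin (a minimal sketch; the dollar inputs are from the remarks, and the derived margin is our arithmetic, not a disclosed metric):

```python
# Q4 FY'25 mining economics as quoted on the call (USD).
all_in_cost_per_btc = 36_000      # all-in cash cost per Bitcoin mined
realized_price_per_btc = 99_000   # average realized Bitcoin price

# Derived: cash margin per coin and margin percentage.
margin_per_btc = realized_price_per_btc - all_in_cost_per_btc  # 63,000
margin_pct = margin_per_btc / realized_price_per_btc           # ~0.636

print(f"Cash margin per BTC mined: ${margin_per_btc:,}")
print(f"Cash margin: {margin_pct:.0%}")
```

Note that the $36,000 all-in figure includes group-wide overheads (including the AI verticals), so the per-coin margin understates the mining segment's standalone profitability.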

Daniel Roberts

Co-Founder and Co-CEO

Thanks, Belinda. I think it's fair to say that the market backdrop for our AI cloud business is pretty compelling. Industry reports demonstrate accelerating enterprise adoption of AI solutions and services, with the percentage of organizations leveraging AI in more than one business function growing from 55% to 78% in the last 12 months alone. As almost all of us would know, demand is accelerating faster than supply. New model development, sovereign AI programs and enterprise adoption are driving a step-up in GPU needs, and the constraint is infrastructure and compute, not customer interest. Power availability and GPU-ready, high-density data center capacity remain scarce, with customers prioritizing speed to deploy and the ability to scale. IREN is uniquely positioned to meet this demand. Our vertical integration gives us control over the key bottlenecks: significant near-term grid-connected power with data centers engineered for next-generation power-dense compute. This enables accelerated delivery timelines and rapid, low-risk scaling. Because we own and operate the full end-to-end stack, we are able to deliver superior customer service, with tighter control over efficiency, uptime and service quality translating directly into a better experience for our customers. We are leading with a bare metal service because it gives sophisticated developers, cloud providers and hyperscalers what they want most: direct access to compute and the flexibility to bring their own orchestration. As and when customer needs evolve, we have the flexibility to layer in software solutions to provide additional options to the customer. Our new status as an NVIDIA preferred partner is helpful in that regard. It enhances supply access and helps broaden our customer pipeline, supporting expansion across both existing relationships and new end users, platforms and demand partners. So the market is large. It's accelerating. Supply is constrained, and we have the…

Anthony John Lewis

Chief Capital Officer

Thanks, Dan, and good morning or good evening, everyone, as the case may be. This slide highlights how we are funding growth across our AI verticals through a combination of strategic financing and strong cash flows from existing operations. The table to the right, which many of you will be familiar with, shows the illustrative cash flows from our existing Bitcoin mining operations. At the current network hash rate and a $115,000 Bitcoin price, we show over $1 billion in mining revenue. And after subtracting all costs and overheads of our entire business, we arrive at close to $650 million of adjusted EBITDA. There is then a further $200 million to $250 million of annualized revenue on top of this expected to come from the AI cloud business expansion, with an increasing contribution from that business over time. There is clearly some sensitivity to the relevant assumptions here, but the key message is we expect significant operating cash flow to invest in our growth initiatives over a range of operating conditions, with our position enhanced by low-cost power and best-in-class hardware. These cash flows, together with existing cash and recent financing initiatives, which I'll touch on shortly, fully fund our near-term CapEx, including the cloud expansion discussed, with liquid cooling and power redundancy at Prince George taking GPUs to 10,900, completing Horizon 1 and energizing Sweetwater 1 substations. Let me now turn to our funding strategy more generally. As a capital-intensive business growing quickly, we are clearly focused on diversifying our sources of capital so that we maintain a resilient and efficient balance sheet. The $200 million of GPU financings we announced this week are a recent example of that. These transactions had 100% of the upfront GPU CapEx financed, allowing us to…
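The illustrative mining cash flows described above reduce to simple arithmetic (a sketch; the revenue, EBITDA and price figures are from the remarks, while the implied coin count is derived for illustration, not a company disclosure):

```python
# Illustrative annualized mining cash flows as framed on the call.
btc_price = 115_000            # assumed Bitcoin price (USD)
annual_mining_revenue = 1.0e9  # "over $1 billion" in mining revenue
adjusted_ebitda = 650e6        # "close to $650 million" after all group costs

# Derived figures (ours): coins implied by the revenue line, and margin.
implied_btc_per_year = annual_mining_revenue / btc_price  # ~8,696 BTC
ebitda_margin = adjusted_ebitda / annual_mining_revenue   # 0.65

print(f"Implied BTC mined per year: ~{implied_btc_per_year:,.0f}")
print(f"Group-level adjusted EBITDA margin: {ebitda_margin:.0%}")
```

The actual coin count depends on network hash rate, which the call references but does not quantify, so the implied figure is only a back-of-envelope cross-check.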

Operator

Operator

[Operator Instructions] Our first question comes from Paul Golding with Macquarie.

Paul Alexander Golding

Analyst · Macquarie

I wanted to ask on efficiency at these sites. I noticed that PUE at the British Columbia sites is down at 1.1, which is a very impressive efficiency ratio versus Sweetwater being about 1.4. Those may be peak numbers as opposed to average, but I was wondering if you could give some color around how that might influence the thought process around rollout or concentration of sites receiving GPUs initially versus others as you think about efficiency. And then also, with this infrastructure being developed with PUE that low being cited, how are you thinking about backup generation for the existing pods that you have outstanding? I only ask that question in relation to the on-demand versus contracted customer dynamic and how you're seeing that evolve.

Kent Draper

Chief Commercial Officer

Paul, happy to jump in and take that one. So as you mentioned, across the BC sites, we're operating at a PUE of 1.1. That's on an air-cooled basis. Once we install the liquid-cooled facilities there, we expect that to be operating on an average slightly higher than that, but still well under 1.2 PUE across the year. At Childress, for the Horizon 1 liquid-cooled installation, the number that you mentioned is much closer to a peak PUE number, although we actually expect it to be less than 1.4, with the average PUE over the year around 1.2. In all cases, I think those are extremely competitive numbers across the industry. We are more led in terms of our deployments across the different sites by what our customers are ultimately demanding. Within British Columbia, the ability to scale extremely quickly on an air-cooled basis has been a significant driver of demand for us. And again, that PUE level is extremely competitive regardless. And so that is where we are seeing some of the primary interest from our customer base. At Horizon 1, that liquid-cooled capacity, in particular, is extremely scarce in the industry at the moment, and the ability to locate a single cluster of just over 19,000 GB300s is significantly attractive and driving high levels of customer interest. So I think deployments are less driven by PUE overall and more driven by the customer side of the equation. To your question on redundancy, as Dan mentioned in his remarks, we're introducing redundancy across the entire fleet of GPUs that we have in our existing operating business as well as for the new GPUs that we purchased. While we believe that, for many of the applications these clusters are used for, redundancy is not necessarily required, we have seen some of our customers wanting that redundancy.
And for us, we ultimately want to be driven by providing the best customer service, and that's really what's driving us to install that redundancy across the fleet.
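For readers less familiar with the metric: PUE (power usage effectiveness) is total facility power divided by IT power, so a lower value means less cooling and distribution overhead. The quoted figures translate into overhead as follows (an illustrative sketch; the 50 MW IT load is our round-number assumption, and only the PUE values come from the answer above):

```python
def total_facility_mw(it_load_mw: float, pue: float) -> float:
    """Total power a site draws at a given PUE (PUE = total power / IT power)."""
    return it_load_mw * pue

it_load = 50.0  # MW of critical IT load (illustrative assumption)

for site, pue in [("BC air-cooled", 1.1),
                  ("Horizon 1 annual average", 1.2),
                  ("Horizon 1 near-peak", 1.4)]:
    total = total_facility_mw(it_load, pue)
    print(f"{site}: PUE {pue} -> {total:.0f} MW total, "
          f"{total - it_load:.0f} MW cooling/distribution overhead")
```

At a 1.1 PUE, a 50 MW IT load draws about 55 MW in total; at 1.4, the same load draws about 70 MW, which is why the gap between those two ratios matters for power-constrained sites.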

Paul Alexander Golding

Analyst

And if I could ask one quick follow-up on the GB300 NVL72 capability that has been incorporated or retrofitted into the original plan for Horizon 1, I believe. If you could just give us any incremental color around what that may have entailed and any impact that may have had on how financing availability or future financing plans may be impacted as you think about incremental cost for that density and in particular, maybe as you plan for Rubin given this preferred partner status now?

Kent Draper

Analyst

Yes. So I think what you're referring to is Dan's comments around introducing flexibility for a wider range of densities. And for us, that actually comes more towards lower densities, so being able to operate at densities that are under what the Vera Rubins would require. So the base design, as we had it, could handle up to 200 kilowatts per rack, easily able to accommodate the next iteration of GPUs. But what we're seeing in the market today is that many customers actually want flexibility to be able to operate not only at the rack densities for GB300s, which are around 135 kilowatts a rack, but actually at even lower densities to accommodate additional types of compute within the data center infrastructure. And so what we've done is gone back and reworked some of the electrical and mechanical equipment to be able to actually accommodate lower rack densities. So as it relates to accommodating Rubins in the future, no change from our perspective.

Operator

Operator

Our next question comes from John Todaro with Needham.

John Todaro

Analyst · Needham.

Congrats on a very strong quarter. First question on the cloud business. And apologies if I missed this, but just the average duration of the contracts; I'm kind of trying to determine, given the 3-year payback with the GPUs plus infrastructure, the overlap there with the customer contract duration. And then I also have a follow-up on the HPC side of things.

Kent Draper

Chief Commercial Officer

Yes. We've got a range of contract lengths across our existing asset base today, all the way from 1-month rolling contracts out to 3-year contracts. For the newer gen equipment, including the Blackwell purchases that we've made, we've typically seen demand at slightly longer contract lengths whilst those Blackwells are new equipment on the market. A good indication of that is the initial portion of our B200s which, as Dan mentioned, as soon as they were installed, we were able to contract on a multiyear basis. So we do have contracts across the spectrum, but for newer gen equipment, we are often seeing longer-term contracts being available.

John Todaro

Analyst · Needham.

Got it. That's great. And then just with the success you're having so far in the cloud business, you could take a step back and think, do we need to sign HPC colo capacity? Would you be more comfortable kind of continuing at this, at even a bigger scale? And then as it relates to just kind of thoughts on the CapEx to get you there, any targeted leverage ratio or a threshold on debt, too?

Kent Draper

Chief Commercial Officer

Yes. We're constantly evaluating the opportunities as it relates to both colocation and cloud. I think we're uniquely positioned in the sense that we are able to take advantage of both opportunities, which we think is quite differentiated to a number of others in the industry. They obviously have very different profiles in terms of the risk-adjusted returns. So colocation has longer-dated contracts, typically in the range of 5 to 20 years, but longer payback periods, often higher than 7 years before you can get your capital back. And in many cases, because of the nature of the debt financing associated with those, there's very little actual cash flow coming out of the business during that finance period. Cloud, by contrast, has shorter-dated contracts but much stronger margins and a shorter overall payback period. So we typically see around 2-year payback periods on the GPUs alone and 3 to 4 years on the GPUs plus data center infrastructure. So it is something that we're constantly evaluating. And overall, we're looking to maximize risk-adjusted returns across both models. I think you can tell from the comments today, as it stands, we do find the cloud opportunity extremely compelling. Anthony, did you want to touch on the comments around financing?
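The payback contrast in that answer can be made concrete with a stylized calculation (only the payback periods come from the call; the level-cash-flow assumption and the normalized capex are ours, and real deals are lumpier than this):

```python
def payback_years(capex: float, annual_cash_flow: float) -> float:
    """Years of level annual cash flow needed to recover the upfront capex."""
    return capex / annual_cash_flow

capex = 100.0  # normalized upfront spend

# Cloud: a ~2-year payback on GPUs implies roughly half the capex back per year.
cloud_annual_cf = capex / 2
# Colocation: a >7-year payback implies well under a seventh of capex per year.
colo_annual_cf = capex / 7

print(f"Cloud implied annual cash yield: {cloud_annual_cf / capex:.0%}")
print(f"Colo implied annual cash yield:  {colo_annual_cf / capex:.1%}")
```

The trade-off the speaker describes is exactly this: colocation earns a lower annual cash yield for much longer under contract, while cloud recovers capital quickly but must be re-contracted as hardware generations turn over.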

Anthony John Lewis

Chief Capital Officer

Thanks, Kent. Yes, obviously, we have very modest debt servicing requirements today. And I guess as we scale the business, obviously, where those opportunities have developed and the nature of the cash flows and the security of those cash flows will ultimately drive what an appropriate level of leverage is for the business. So the capital structure will continue to evolve as we continue to grow, but we'll obviously be focused on maintaining a strong and resilient balance sheet as well as an efficient cost of capital.

Operator

Operator

Our next question comes from Darren Aftahi with ROTH.

Darren Paul Aftahi

Analyst · ROTH.

Congrats on all the progress. A couple if I may. So on Horizon 1 and 2. I guess there's commentary in the press release about what theoretically Horizon 1 could support in terms of GPUs, but you kind of left the door open that there may be other uses. So I'm kind of curious on strategic thinking there. And then on Horizon 2, I think, if my math is right, you guys only have 25 megawatts left at Childress and you're talking about, I guess, 50 megawatts of critical load, will you be borrowing from your Bitcoin business to kind of get there? And are there expansion opportunities beyond that? Second question, I guess, on Slide 9, you have one of your demand partners is Fluidstack. I'm more curious on the neo cloud side and maybe that entity in particular, given one of your peers signed a deal with them and another partner there, just kind of what the demand drivers are with Fluidstack in particular?

Daniel Roberts

Co-Founder and Co-CEO

Thanks, Darren. Appreciate that. So 3 questions I hear in there. Horizon, we mentioned 19,000. It's just a tick over that, based on the NVL72 configuration of GB300s. The project has been engineered specifically for liquid-cooled GPUs, so there is no other use case as an end market other than that. In saying that, there are a couple of different ways we might monetize that capacity. One is through different types of GPUs. So as we mentioned during the presentation and Kent reiterated, we've now introduced the flexibility to accommodate a wider range of rack densities. We actually discovered, building this, that the issue is we're building rack densities that are too dense for where the industry is today. So we've had to dial it back a little bit. So accommodating lower rack density gives us the ability to accommodate a wider range of different GPUs whilst preserving the ability to service the Vera Rubins as and when they're released, and potentially beyond that. So that's exciting. In terms of monetizing the capacity, there's then colocation versus cloud. So we may buy, own and operate the 19,000 GPUs, and we're having conversations with a variety of potential partners for that, including hyperscale customers. We're progressing financing work streams in parallel. That's a real option. If the risk-return balance is right, as Kent mentioned, then absolutely; we're in a unique position where not many people can build, own and operate a cloud service. So we're pursuing that, and we're excited about that. But equally, we're seeing a lot of demand for colocation, and that would deliver more of an infrastructure return on capital, and we'll remain open to that structure, but we want to see a risk-return framework that is compelling. And to date, I guess, we haven't yet seen that. In terms of…

Operator

Operator

Our next question comes from Joseph Vafi with Canaccord Genuity.

Joseph Anthony Vafi

Analyst · Canaccord Genuity.

Congrats on all the progress here in fiscal Q4 and quarter-to-date, really great progress. Just really one question for me, maybe just a 2-part but a single question. I just want to drill down a little bit more on the financing on the Blackwells. I know that you mentioned there's some optionality at the end of the lease financing period. I thought maybe we could kind of go into what you're thinking at the end of those. At the end of the lease financing period, what may -- just what may be a factor in having you decide what to do next with those? And then just as a follow-up, it does seem like at least initially, the -- building your own clusters with this financing does look attractive on a payback and time value of money basis. Just wondering how much financing do you think is available in this market versus the kind of project financing that maybe yourself and others have discussed for a broader colocation type project.

Anthony John Lewis

Chief Capital Officer

Yes. Thanks for the question. You're probably familiar with the various types of leasing structures you can see in the market. Some of them are structured as more classic full payout finance leases. Others are more of a tech-rotation style, where you have fixed committed lease payments and then you have an FMV option to acquire at the end, often capped at a percentage of the day 1 price. So that obviously allows you the flexibility to potentially return the equipment if we wanted to reinvest in, for example, the next generation of GPUs at that time, or obviously continue to own and operate the equipment depending on the conditions that we see. Sorry, could you just remind me of the second part of your question?

Joseph Anthony Vafi

Analyst · Canaccord Genuity.

Just the amount of financing capacity you see out there on the GPU side versus colocation.

Anthony John Lewis

Chief Capital Officer

Yes, I think they are obviously quite different asset profiles, and the amount of leverage and the cost of that leverage depends greatly on the specific situations. On the cloud side, it's obviously focused on the underlying portfolio of customers; the diversity in the customer mix, the credit quality and the duration of the contracts will all drive both the pricing and leverage that you can secure. And I guess, similarly on the colocation side, obviously, you can obtain very attractive cost of funds and very meaningful leverage against high-quality offtake such as hyperscale offtakes. And as you come down the credit spectrum or the duration of the contract, that will obviously flow through into the cost of the finance and the leverage that you can obtain.

Daniel Roberts

Co-Founder and Co-CEO

And maybe just to add to that, Anthony. Joe, the two are not mutually exclusive, cloud and colocation, in the sense that we are arranging these 100% financing lease structures, as Anthony mentioned, over the GPUs, but that doesn't preclude us then financing the asset base and the infrastructure base at a data center level, similar to how you would finance the colocation. It just happens to be the case that the colocation partner is an internalized IREN entity. So that market is open. We're talking to a vast number of potential providers of capital for that. But as Anthony has mentioned, we're looking up and down the entire capital stack to optimize cost of capital at a group level. So you've got these asset-level options, but then you've got corporate options as well. We mentioned the buoyant convertible note market that continues to look quite prospective. We've been prosecuting bond-type structures at a corporate level as well. So there's a whole different array, and every week, depending on level of demand, our revenue profile and how we're building out different elements of the business, the jigsaw puzzle from a financing perspective kind of falls into place and helps support that. So it's that reflexive wheel of sources and uses of capital, and that's the benefit of now having Anthony on and dedicated full time to optimizing cost of capital while Kent runs around North America looking to deploy it.

Operator

Operator

Our next question comes from Reggie Smith with JPMorgan.

Charles Bonner Pearce

Analyst · JPMorgan.

This is Charlie on for Reggie. Can you talk a bit more about some of the key hires you've made in building out the cloud and colocation businesses and where, if anywhere, there is still some room to go. And then as a follow-up, digging in a bit more on the sales side, can you provide a bit more on how you're getting in front of and winning some of the AI clients that you called out in the slides.

Kent Draper

Chief Commercial Officer

Yes. Happy to jump in there on the resourcing question. So we've been hiring across the stack given the level of vertical integration that we have, as Dan made clear. We continue to need resources across all areas, including data center operations, networking, InfiniBand experts and developers on the software side. We also continue to build out our go-to-market function. So that consists of hiring additional sales executives as well as solutions architects, and we're also expanding the marketing team in parallel with that. So there is an ongoing level of hiring across the business to support the additional customer-facing work that we're doing. And sorry, there was a last part to your question that I missed; it was breaking up a little.

Charles Bonner Pearce

Analyst · JPMorgan.

Yes. Just more on the sales side, like how you're getting in front of the clients, what are you competing on? Why are they choosing IREN, things like that?

Kent Draper

Chief Commercial Officer

Yes. So we get a mix of inbound and outbound customer demand drivers. We have been active recently in the conference space. So we have been getting out telling our story, showing why we are differentiated. As I mentioned, we've been expanding the marketing team and our efforts there to help drive inbound, particularly our activities across all social platforms have been ramping over the past 12 months, in particular. And we're seeing a high degree of interest there. And as that gets out into the public sphere as well as our ongoing provision of cloud services and customer word of mouth, we are starting to see more inbound inquiries as well around both our cloud services platform and the potential colocation platform. So it is a bit of a mix there in terms of what we're seeing.

Daniel Roberts

Co-Founder and Co-CEO

And I think maybe just to add to that as well. This is exactly the point: the whole demand-supply equation in this industry is imbalanced, and there is little supply. So when people need something, they tend to find it, particularly when it's scarce. So through these demand brokers, conferences, existing customers, word-of-mouth does get out. And we do have 3 pretty unique competitive advantages compared to other competition around neo clouds like [ AI scale ]. First, scale: we control the infrastructure end-to-end. We can scale capacity up and down across our existing data center footprint, let alone the new footprint, and build into that growth. Second, performance: vertical integration is really important because it gives us direct oversight of every single layer in the stack. So we've got tighter control over performance, reliability and service, and customers get higher uptime as a result because there are no colocation partners, no SLAs with data centers that restrain and constrain your ability to update GPUs and get your hands on them. And then finally, cost: we've got no colocation fees and greater operational efficiency as a result. So we're in a really good spot. And this also translates to sales force, marketing support and general cloud support: because we are in the industry, we are doing stuff and we've got available capacity, there's significant interest in joining IREN, as distinct from other providers who have no capacity and salespeople sitting there with not a lot to do.

Operator

Operator

Our next question comes from Brett Knoblauch with Cantor Fitzgerald.

Brett Anthony Knoblauch

Analyst · Cantor Fitzgerald.

Maybe on the cloud services front. Is the strategy to go out and order or purchase GPUs with a customer already in mind? Or are you buying those GPUs and then trying to find a customer? And then could you maybe just elaborate on the power dynamics per GPU? I think the 19,000 GB300s for Horizon 1 implies that you can fit around 380 of them per megawatt of critical IT load. Do you have a similar metric for GB200s or other GPUs? If you can provide any color there, that would be helpful as well.

Daniel Roberts

Co-Founder and Co-CEO

So I might take the first half. Kent, if you can do the second half. On the prospect of ordering GPUs before or after a contract, this is the nature of the industry: companies want compute, they want it now, and they don't want to wait 2 to 3 months. You think about an enterprise that's made the decision. You think about an AI scale-up or start-up that's raised a bunch of capital. Very few companies are in a position where they can plan out and map out a 2- to 3-year timeline of GPU needs. Often it's: we need GPUs, we need them for a project, we need them today. So the world wants on-demand compute, and we almost use this as a universal motherhood statement to guide what we do. The world doesn't really want data center infrastructure. The world at its core wants compute, and it wants it now, when it needs it. That's the first element. The second element is I feel like it's Groundhog Day. We're back in this world, and it takes me back to Bitcoin mining, where every man and their dog promises certain amounts of capacity online by a certain date, and no one does it. No one hits the schedules; everyone revises them downwards, stretches them out, cost blowouts, et cetera, because the real world is hard. Dealing with large-scale infrastructure projects, large-scale workforces, complex project delivery, safety -- it takes a lot of work and systems and structures to deliver that. This is why we're in such a good position. We never missed a milestone on Bitcoin mining. We are the most profitable, if not the only profitable, Bitcoin miner because we did things properly from the start. And we're now sitting here, and as I said, it's Groundhog Day with the cloud business, where again, all these companies, neo clouds and otherwise, promise capacity online by a certain date, and they rarely hit it. And as a result, customers get a bit gun-shy. So the best thing you can do is to continue ordering the hardware.
If hardware is snapped up as soon as it's commissioned, that's a pretty good sign that you're doing the right thing. And as and when we install hardware and the sales cycle starts slowing down, then you know, okay, well, maybe we've just got to slow down on the orders. But each incremental order from here is a relatively small portion of our overall risk, so we can afford to take it.

Kent Draper

Chief Commercial Officer

And with respect to the power question, yes, we do continue to see the overall power usage per GPU ticking up with each incremental release from NVIDIA and the other manufacturers. I think using some of the examples of the numbers that were presented earlier in the presentation on an air cooled basis for B200s, we can fit over 20,000 GPUs into the Prince George site, which is 50 megawatts. At Horizon 1, 50 megawatts of IT load, you're looking at around 19,000 GB300s. So yes, it's not exact math there, but it does give you an idea of what we're seeing in terms of the amount of power per GPU going up over time.
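The deployment figures in that answer imply the following rough power-per-GPU math (our arithmetic; the GPU counts and 50 MW IT loads are from the call, and both counts are approximate, so treat the results as order-of-magnitude):

```python
site_mw = 50.0  # MW of critical IT load at both Prince George and Horizon 1

b200_count = 20_000   # air-cooled B200s at Prince George ("over 20,000")
gb300_count = 19_000  # liquid-cooled GB300s at Horizon 1 ("around 19,000")

# Implied all-in IT power per GPU, including networking, CPUs, storage, etc.
kw_per_b200 = site_mw * 1_000 / b200_count    # 2.5 kW
kw_per_gb300 = site_mw * 1_000 / gb300_count  # ~2.63 kW

print(f"B200:  ~{kw_per_b200:.1f} kW per GPU all-in")
print(f"GB300: ~{kw_per_gb300:.2f} kW per GPU all-in")
```

These per-GPU figures are higher than the chips' standalone ratings because they spread the whole cluster's IT load (CPUs, networking, storage) across the GPU count, which is consistent with the speaker's point that power per GPU keeps ticking up with each generation.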

Operator

Operator

Our next question comes from Nick Giles with B. Riley.

Nicholas Giles

Analyst · B. Riley.

I wanted to go back to how the Horizon 1 capacity will be utilized, as you're closing in on that 4Q completion. So at what point would you make the decision to fill Horizon 1 with your own GPUs versus pursue a colocation deal? Maybe said differently, and I think Dan alluded to this from a financing perspective, but if you were to fill it with GPUs, should we expect that to be the case for the entire capacity? Or could we see you split it between your own GPUs and a third party?

Kent Draper

Chief Commercial Officer

Yes. I think that's one of the advantages of where we're at: they're not mutually exclusive options for us. So as we mentioned earlier, we are in a unique position where we can monetize that data center capacity in a number of ways, and it doesn't have to be binary. We don't need to do all of it as cloud or all of it as colocation. It could be a combination within Horizon 1. As Dan mentioned, we've started building out Horizon 2. Again, that gives us significant optionality, where we could potentially do Horizon 1 under one methodology and one type of monetization and Horizon 2 under another. But what we will continue to do over time is try and maximize the risk-adjusted returns on how we monetize the assets. And that may fluctuate over time. We're in an incredibly dynamic industry here, and at different points in time, we may see very different risk/reward propositions in colocation versus cloud, but we do have significant flexibility as to how we utilize the capacity.

Nicholas Giles

Analyst · B. Riley.

Thanks for that, Kent. Just on the cloud services, you're focused on bare metal today, but I think you did make some comments that you could expand your software offerings or integrate if needed. What should we be looking for there? Or what would the incremental revenue opportunities be if you were to integrate?

Kent Draper

Chief Commercial Officer

Yes. Today, as Dan mentioned, the vast majority of the customers that we are dealing with, who make up the majority of the compute market, are highly experienced AI players, hyperscalers and developers. They are, for the most part, demanding bare metal because it actually suits them better to be able to bring their own orchestration layer. Where we see benefits over time from adding incrementally to the software layer is in being able to serve a slightly different customer class, which might be smaller AI start-ups or enterprise customers who are looking for a simpler, single-click spin-up/spin-down type of service. But today, given the demand-supply imbalance we see, the bare metal offering that we have has a significant level of demand for it. And so we feel like we're well positioned where we're at today.

Daniel Roberts

Co-Founder and Co-CEO

I think, again, look, just to reiterate: there's this notion that software is required, that these large, sophisticated end users of GPUs want a third-party provider to staple on its own software and make them use it. These guys are sophisticated. They just want compute. They want to run their own stuff. And at the end of the day, software is eating the world. We know that. Software is not difficult to overlay. The large customers don't want your software; they want their own software. And we are also hearing it firsthand from executives and employees at some of these companies that offer their own software that it's a nightmare, because every time the GPUs change, they need to update and rewrite the software. It's this constant evolution of code, bugs, rewriting, updating, et cetera, all for an area of the market that might seem good as a narrative but, fundamentally and substantially in terms of revenue opportunity, is quite small today.

Operator

Operator

Our next question comes from Stephen Glagola with JonesTrading.

Stephen William Glagola

Analyst · JonesTrading.

As IREN is now recognized as the preferred cloud partner on NVIDIA's website, I was hoping, Dan and Kent, that maybe you could provide more detail on your participation in the DGX Cloud Lepton marketplace. Specifically, how do the economics of working through the Lepton marketplace compare to operating your own independent cloud offering? What advantages does IREN get from being on that platform? And any insights into NVIDIA's fee structure or take rate for participants there.

Kent Draper

Chief Commercial Officer

Yes. Happy to give some more color there. So we're not currently participating in the Lepton marketplace. But as an NVIDIA preferred partner, we continue to evaluate platforms like that, which could expand how we're able to give customers access to our infrastructure. It may offer us broader reach into developer communities and simpler onboarding. So, to come back to the previous comments I made on software, it may open up some of the smaller areas of the market, with smaller AI start-ups and enterprise customers who are looking for a simpler solution. So we continue to monitor this. We are seeing an increasing number of these types of offerings coming to market, and for us, we think it will be an additional demand driver for the underlying compute layer that we are providing.

Stephen William Glagola

Analyst · JonesTrading.

And if I could just ask one more on Horizon 1. Does the growth of your cloud services business influence which partners you're willing to consider for colocation at Horizon 1, given that arguably they can be competitors?

Kent Draper

Chief Commercial Officer

Yes. I mean, it's something that we continue to evaluate in terms of the mix. And I think what you're probably referring to are neocloud customers on the colocation side. Now, the majority of neoclouds have a very different profile to hyperscalers in terms of colocation. So even within the broader colocation market, there is a significant degree of differentiation. Hyperscalers are typically looking for longer-term contracts, often 10 to 20 years; they're extremely creditworthy but drive a hard bargain in terms of the financials and the economic returns you're able to achieve. With neoclouds, we often see shorter-term requirements, typically 5 to 15 years, and they're less creditworthy than the hyperscalers. So it's all something that we factor in, in terms of that risk-reward element that we discussed earlier. But because we have heard from a number of people asking whether the fact that we're offering a cloud service limits our ability to do colocation, I would actually say quite the opposite. Most of the colocation customers that we're talking to significantly value the fact that we understand how to operate these clusters at scale, that we have the data center knowledge, that we know how to design data centers to operate these clusters, and that we've proved out through our own cloud service that we can operate them at a very efficient level. So I don't see any kind of conflict there, and it hasn't been a particular issue for us over time.

Daniel Roberts

Co-Founder and Co-CEO

Sorry, just to jump in on the Lepton cloud as well. It hasn't really been functionally live. NVIDIA has been working through a number of items in relation to making it available. I think some of it is now live in early access, and we're in direct conversation with them about integration at the moment. So it is a demand partner that we can absolutely envisage using.

Operator

Operator

Our next question comes from Ben Sommers with BTIG.

Benjamin Eric Sommers

Analyst · BTIG.

So, more on the colocation side, just curious what went into the decision to start developing Horizon 2, and whether that was driven by potential customers thinking about scaling beyond the initial 50 megawatts of Horizon 1. And then, bigger picture, as we progress towards getting Sweetwater online, what's the different customer profile, if any, for larger-scale sites versus customers just wanting 50 megawatts or 100 megawatts? And any color on the counterparties that you're having conversations with.

Daniel Roberts

Co-Founder and Co-CEO

So we haven't committed the full CapEx to building out Horizon 2. Importantly, over the last 7 years, our whole business model has been built around cheap optionality. And sitting here right now, looking at the bigger picture, and I can drill into that, it just makes sense to order long-lead items and start moving the ball ahead on a potential commissioning of the Horizon 2 facility. A lot of the way the S-curve works for CapEx on these facilities is that you've got long lead times and small cash outlays that build up over time before the larger CapEx commitments come in. So it makes sense to put down deposits on long-lead items and get the ball rolling, so that we can maintain a really competitive, fast time to power for Horizon 2. Now, sitting here today relative to 3 or 6 months ago, we're seeing further validation of the decision to commit a relatively small amount of capital. We are seeing demand take-up for AI cloud. We're seeing an increasing number of inbounds for colocation. We're seeing better visibility on the overall demand-supply imbalance for liquid-cooled chips. So it's a bit of a no-brainer, to be honest. And in terms of committing full CapEx to that, we've got time, and we'll just continue to monitor the market live, because things are changing week to week in this industry. And that flexibility, having a governance structure that is founder-led, the ability to make quick decisions, work with the Board and adapt to where the market is going, is really important because it is super dynamic.

Operator

Operator

Thank you. I would now like to turn the call back over to Daniel Roberts for any closing remarks.

Daniel Roberts

Co-Founder and Co-CEO

Thank you very much. Thanks, everyone, for dialing in. It's obviously been an exciting quarter and an exciting year. We're thrilled about expanding to 10,900 GPUs in the coming months and really putting our AI cloud service further on the map. But for us, most of our time is now focused on what lies beyond that. So we're working hard on expanding our 3-gigawatt power portfolio. That's exciting. Some of that is many years away, but the 3 gigawatts was also many years away when we started 7 years ago. So continuing to position ourselves ahead of the curve in every respect is just critical. And it's really important when you're fighting this real-world/digital-world imbalance, where digital demand increases overnight and goes exponential, while your ability to service that demand with real-world infrastructure and compute works in a linear fashion. It's harder; it takes longer. So the ability to preempt those digital demands and build for tomorrow, position for tomorrow rather than where we are today, is a key competitive advantage and something we will maintain. It manifests itself in us building 200-kilowatt racks when the industry can't yet support 200-kilowatt racks, so at Horizon 1, we're having to reconfigure to make it smaller. So we'll continue to keep that in mind. We're excited about the future. We appreciate all of your support and can't wait for the next quarterly earnings. Thanks, everyone.

Operator

Operator

Thank you. This concludes the conference. Thank you for your participation. You may now disconnect.