OpenAI Is a Systemic Risk to the Tech Industry

Before we go any further: I hate to ask you to do this, but I need your help — I'm up for this year's Webbys for the best business podcast award. I know it's a pain in the ass, but can you sign up and vote for Better Offline? I have never won an award in my life, so help me win this one.


Soundtrack: Mastodon - High Road


I wanted to start this newsletter with a pithy anecdote about chaos, both that caused by Donald Trump's tariffs and the brittle state of the generative AI bubble.

Instead, I am going to write down some questions, and make an attempt to answer them.

How Much Cash Does OpenAI Have?

Last week, OpenAI closed "the largest private tech funding round in history," where it "raised" an astonishing "$40 billion," and the reason I've put those words in quotation marks is that OpenAI has only raised $10 billion of the $40 billion, with the rest arriving by "the end of the year." 

The remaining $30 billion — $20 billion of which will (allegedly) be provided by SoftBank — is partially contingent on OpenAI's conversion from a non-profit to a for-profit by the end of 2025, and if it fails, SoftBank will only give OpenAI a further $20 billion. The round also valued OpenAI at $300 billion.

To put that in context, OpenAI had revenues of $4 billion in 2024. This deal values OpenAI at 75 times its revenue. That's a richer multiple than Tesla commanded at its peak market cap — a peak at which it was, in fact, worth more than all other legacy car manufacturers combined, despite earning a fraction of their revenue and shipping a fraction of their vehicles. 

I also want to add that, as of writing this sentence, this money is yet to arrive. SoftBank's filings say that the money will arrive mid-April — and that SoftBank would be borrowing as much as $10 billion to finance the round, with the option to syndicate part of it to other investors. For the sake of argument, I'm going to assume this money actually arrives.

Filings also suggest that "in certain circumstances" the second ($30 billion) tranche could arrive "in early 2026." This isn't great. It also seems that SoftBank's $10 billion commitment is contingent on getting a loan, "...financed through borrowings from Mizuho Bank, Ltd., among other financial institutions."

OpenAI also revealed it now has 20 million paying subscribers and over 500 million weekly active users. If you're wondering why it doesn’t talk about monthly active users, it's because they'd likely be much higher than 500 million, which would reveal exactly how poorly OpenAI converts free ChatGPT users to paying ones, and how few people use ChatGPT in their day-to-day lives.

The Information reported back in January that OpenAI was generating $25 million in revenue a month from its $200-a-month "Pro" subscribers (it still loses money on every one of them), suggesting around 125,000 ChatGPT Pro subscribers. Assuming the other 19,875,000 users are paying $20 a month, that puts its revenue at about $423 million a month, or about $5 billion a year, from ChatGPT subscriptions. 

This is what reporters mean when they say "annualized revenue" by the way — it's literally the monthly revenue multiplied by 12.
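
Here's that arithmetic laid out as a quick sketch (the assumption that every non-Pro subscriber pays $20 a month is mine, pieced together from the figures above, not something OpenAI has disclosed):

```python
# Rough reconstruction of OpenAI's subscription revenue from the reported figures.
# Assumes every non-Pro paying subscriber is on the $20/month plan (my assumption).
pro_monthly_revenue = 25_000_000                      # The Information: ~$25M/month from Pro
pro_price = 200                                       # $200/month Pro plan
pro_subscribers = pro_monthly_revenue // pro_price    # ~125,000

total_paying = 20_000_000                             # OpenAI's reported paying subscribers
plus_subscribers = total_paying - pro_subscribers     # ~19,875,000
plus_monthly_revenue = plus_subscribers * 20          # ~$397.5M

monthly_subscription_revenue = pro_monthly_revenue + plus_monthly_revenue
annualized = monthly_subscription_revenue * 12        # "annualized revenue" = monthly x 12

print(f"~${monthly_subscription_revenue / 1e6:.1f}M a month, ~${annualized / 1e9:.2f}B annualized")
# ~$422.5M a month, ~$5.07B annualized (the article rounds to ~$423M and ~$5B)
```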

Bloomberg reported recently that OpenAI expects its revenue to "triple" to $12.7 billion in 2025. Assuming a similar split of revenue to 2024, this would require OpenAI to nearly double its annualized subscription revenue from Q1 2025 (from $5 billion to around $9.27 billion) and more than triple its API revenue (from 2024's revenue of $1 billion, which includes Microsoft's 20% payment for access to OpenAI's models, to $3.43 billion).
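
To make the implied growth concrete, here's that math spelled out (the 2024 revenue split is itself an assumption pieced together from the reporting above, not an official breakdown):

```python
# Growth implied by OpenAI's reported $12.7B 2025 revenue target,
# using the (assumed) split described above.
target_2025 = 12.7          # $B, Bloomberg-reported projection
subs_q1_annualized = 5.0    # $B, from the subscription math above
api_2024 = 1.0              # $B, 2024 API revenue (incl. Microsoft's 20% payment)

subs_target = 9.27                        # $B, implied subscription target
api_target = target_2025 - subs_target    # ~$3.43B

print(f"Subscriptions must grow {subs_target / subs_q1_annualized:.2f}x (to ${subs_target}B)")
print(f"API revenue must grow {api_target / api_2024:.2f}x (to ${api_target:.2f}B)")
# Subscriptions must grow ~1.85x; API revenue must grow ~3.43x
```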

While these are messy numbers, it's unclear how OpenAI intends to pull this off.

The Information reported in February that it planned to do so by making $3 billion a year selling "agents," with ChatGPT subscriptions ($7.9 billion) and API calls ($1.8 billion) making up the rest. This, of course, is utter bollocks. OpenAI's "agents" can't do even the simplest tasks, and three billion dollars of the $12.7 billion figure appears to be a commitment made by SoftBank to purchase OpenAI's tech for its various subsidiaries and business units. 

Let's lay out the numbers precisely:

  • Incoming monthly revenue: roughly $425 million, give or take.
  • Theoretical revenue from SoftBank: $250 million a month. However, I can find no proof that SoftBank has begun to make these payments or, indeed, that it intends to make them.
  • Liquidity:
    • $10 billion that is yet to arrive from SoftBank and, potentially, a syndicate of investors including Microsoft.
    • An indeterminate amount of remaining capital on the $4 billion credit facility provided by multiple banks back in October 2024, raised alongside a funding round that valued the company at $157 billion.
      • As a note, this announcement stated that OpenAI had "access to over $10 billion in liquidity."
    • Based on reports, OpenAI will not have access to the rest of its $40 billion funding until "the end of the year," and it's unclear exactly when at the end of the year that will be.

We can assume, in this case, that OpenAI likely has, in the best case scenario, access to roughly $16 billion in liquidity at any given time. It's reasonable to believe that OpenAI will raise more debt this year, and I'd estimate it does so to the tune of around $5 billion or $6 billion. Without it, I am not sure what it’s going to do.

As a reminder: OpenAI loses money on every single user.

What Are OpenAI's Obligations?

When I wrote "How Does OpenAI Survive?" and "OpenAI Is A Bad Business," I used reported information to explain how this company was, at its core, unsustainable.

Let's refresh our memories.

Compute Costs: at least $13 billion in 2025 with Microsoft alone, and as much as $594 million to CoreWeave.

It seems, from even a cursory glance, that OpenAI's costs are increasing dramatically. The Information reported earlier in the year that OpenAI projects to spend $13 billion on compute with Microsoft alone in 2025, nearly tripling what it spent in total on compute in 2024 ($5 billion).

This suggests that OpenAI's costs are skyrocketing, and that was before the launch of its new image generator, which led to multiple complaints from Altman about a lack of available GPUs, and to him warning users to expect "stuff to break" and delays in new products. Nevertheless, even if we assume OpenAI factored these compute increases into its projections, it still expects to pay Microsoft $13 billion for compute this year.

This number, however, doesn't include the $11.9 billion five-year-long compute deal signed with CoreWeave, a deal that was a result of Microsoft declining to pick up the option to buy said compute itself. Payments for this deal, according to The Information, start in October 2025, and assuming that it's evenly paid (the terms of these contracts are generally secret, even in the case of public companies), this would still amount to roughly $2.38 billion a year.

However, for the sake of argument, let's assume the payments are around $198 million a month, though there are scenarios — such as, say, CoreWeave's buildout partner not being able to build the data centers, or CoreWeave not having the money to pay to build them — where OpenAI might pay less.
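
Here's where those monthly figures come from, assuming the contract is paid down evenly across its five-year term (an assumption; as noted, the actual payment schedule isn't public):

```python
# Spreading the ~$11.9B, five-year CoreWeave contract evenly across its term
# (assumption: the real payment schedule is not public).
contract_value = 11.9e9        # $ total, five-year deal
months = 5 * 12

monthly_payment = contract_value / months      # ~$198M/month
annual_payment = monthly_payment * 12          # ~$2.38B/year

# Payments reportedly start in October 2025, so 2025 exposure is ~3 months' worth,
# in line with the ~$594M figure cited earlier.
payments_2025 = monthly_payment * 3            # ~$595M

print(f"~${monthly_payment / 1e6:.0f}M/month, ~${annual_payment / 1e9:.2f}B/year, "
      f"~${payments_2025 / 1e6:.0f}M in 2025")
```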

To be clear, and I’ll explain in greater detail later, this wouldn’t be a good thing, either. While it would be off the hook for some of its payments, it would also be without the compute that’s essential for it to continue growing, serving existing customers, and building new AI models. Cash and compute are both essential to OpenAI’s survival.  

Stargate: $1 Billion+

OpenAI has dedicated somewhere in the region of $19 billion to the Stargate data center project, along with another $19 billion provided by SoftBank and an indeterminate amount by other providers.

Based on reporting from Bloomberg, OpenAI plans to have 64,000 Blackwell GPUs running "by the end of 2026," or roughly $3.84 billion worth of them. I should also note that Bloomberg said that 16,000 of these chips would be operational by Summer 2025, though it's unclear if that will actually happen.
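
That dollar figure implies a per-GPU price of roughly $60,000, which is my inference from the reported numbers rather than a disclosed unit price:

```python
# The ~$3.84B figure for 64,000 Blackwell GPUs implies roughly $60k per GPU
# (an inference from the reported numbers, not a disclosed price).
gpu_count = 64_000
total_cost = 3.84e9

implied_unit_price = total_cost / gpu_count    # $60,000 per GPU
print(f"Implied price per Blackwell GPU: ${implied_unit_price:,.0f}")

# The same per-unit price makes the 300,000 GB200s Microsoft promised OpenAI
# worth ~$18B, matching the figure quoted later in this piece.
print(f"300,000 GPUs at that price: ~${300_000 * implied_unit_price / 1e9:.0f}B")
```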

Though it's unclear who actually pays for what parts of Stargate, it's safe to assume that OpenAI will have to, at the very least, put a billion dollars into a project that is meant to be up and running by the end of 2026, if not more.

As of now, Stargate has exactly one data center under development in Abilene, Texas, and as above, it's unclear how that's going, though a recent piece from The Information reported that it was currently "empty and incomplete," and that if it stays that way, "OpenAI could walk away from the deal, which would cost Oracle billions of dollars." Though the article takes pains to assure the reader that won't be likely, even an inkling of such a possibility is a bad sign.

Business Insider's reporting on the site in Abilene calls it a "$3.4 billion data center development" (as did the press release from site developer Crusoe), though these numbers don't include GPUs, hardware, or the labor necessary to run them. Right now, Crusoe is (according to Business Insider) building "six new data centers, each with a minimum square footage...[which will] join the two it is already constructing for Oracle." Oracle has signed, according to The Information, a 15-year-long lease with Crusoe for its data centers, all of which will be rented to OpenAI.

In any case, OpenAI’s exposure could be much, much higher than the $1bn posited at the start of this section (and I’ll explain in greater depth how I reached that figure at the bottom of this section). If OpenAI has to contribute significantly to the costs associated with building Stargate, it could be on the hook for billions. 

Data Center Dynamics reports that the Abilene site is meant to have 200MW of compute capacity in the first half of 2025, and then as much as 1.2GW by "mid-2026." To give you a sense of total costs for this project, former Microsoft VP of Energy Brian Janous said in January that it costs about $25 million a megawatt (or $25 billion a gigawatt), meaning that the initial capital expenditures for Stargate to spin up its first 200MW data center will be around $5 billion, spiraling to $30 billion for the entire project. 
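
Here's that cost-per-megawatt math laid out (Janous's $25 million per megawatt is a rule of thumb for this kind of buildout, so treat these as order-of-magnitude figures):

```python
# Order-of-magnitude capex for the Abilene site using Brian Janous's
# ~$25M-per-megawatt rule of thumb (a rough industry estimate, not a quote for this site).
cost_per_mw = 25e6

initial_capacity_mw = 200       # first phase, first half of 2025
full_capacity_mw = 1_200        # ~1.2GW by "mid-2026"

print(f"Initial 200MW phase: ~${initial_capacity_mw * cost_per_mw / 1e9:.0f}B")   # ~$5B
print(f"Full 1.2GW buildout: ~${full_capacity_mw * cost_per_mw / 1e9:.0f}B")      # ~$30B
```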

Or perhaps even more. The Information has reported that the site, which could be "...potentially one of the world's biggest AI data centers," could cost "$50 billion to $100 billion in the coming years." 

Assuming we stick with the lower end of the cost estimates, it's likely that OpenAI is on the hook for over $5 billion for the Abilene site, based on the $19 billion it has agreed to contribute to the entire Stargate project, the (often disagreeing) cost projections for the facility, and the contributions of other partners. 

This expenditure won’t come all at once, and will be spread across several years. Still, assuming even the rosiest numbers, it's hard to see how OpenAI doesn't have to pony up $1 billion in 2025, with similar annual payments going forward until the site's completion, which is likely to be heavily delayed by tariffs, labor shortages, and Oracle's (as reported by The Information) trust in "scrappy but unproven startups to develop the project."

Other costs: at least $3.5 billion

Based on reporting from The Information last year, OpenAI will spend at least $2.5 billion across salaries, "data" (referring to buying data from other companies), hosting and other cost of sales, and sales and marketing, and then another billion on the infrastructure OpenAI itself owns.

I expect the latter cost to balloon with OpenAI's investment in physical infrastructure for Stargate.

How Does OpenAI Meet Its Obligations?

OpenAI Could Spend $28 Billion Or More In 2025, and Lose over $14 Billion while having an absolute maximum of $20 billion in liquidity

Based on previous estimates, OpenAI spends about $2.25 to make $1. At that rate, it's likely that OpenAI's costs in its rosiest revenue projections of $12.7 billion are at least $28 billion — meaning that it’s on course to burn at least $14 billion in 2025.
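
Sketching that burn-rate arithmetic (the $2.25-of-cost-per-$1-of-revenue ratio is itself an estimate from earlier reporting, so these are illustrative figures rather than a forecast):

```python
# Burn estimate using the ~$2.25-of-cost-per-$1-of-revenue ratio from earlier
# estimates (itself an approximation).
revenue_2025 = 12.7e9              # $, OpenAI's rosiest projection
cost_per_dollar_of_revenue = 2.25

estimated_costs = revenue_2025 * cost_per_dollar_of_revenue   # ~$28.6B
estimated_burn = estimated_costs - revenue_2025               # ~$15.9B

print(f"Estimated costs: ~${estimated_costs / 1e9:.1f}B, burn: ~${estimated_burn / 1e9:.1f}B")
# Consistent with "at least $28 billion" in costs and "at least $14 billion" burned
```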

Assuming that OpenAI has all of its liquidity from last year (it doesn't, but for sake of argument, let’s pretend it still has the full $10 billion), as well as the $10 billion from SoftBank, it is still unclear how it meets its obligations.

While OpenAI likely has preferential payment structures with all vendors, such as its discounted rates with Microsoft for Azure cloud services, it will still have to pay them, especially in the case of costs related to Stargate, many of which will be up-front costs. In the event that its costs are as severe as reporting suggests, it’s likely the company will find itself needing to raise more capital — whether through equity (or the weird sort-of equity that it issues) or through debt. 

And yes, while OpenAI has some revenue, it comes at a terrible cost, and anything that isn’t committed to paying for salaries and construction fees will likely be funnelled directly into the obscene costs of inference and of training models like GPT-4.5 — a "giant, expensive model" to run that the company has nevertheless pushed to every user.

Worse still, OpenAI has, while delaying its next model (GPT-5), promised to launch its o3 reasoning model after saying it wouldn't do so, which is strange, because it turns out that o3 is actually way more expensive to run than people thought. 

Reasoning models are almost always more expensive to operate, as they involve the model “checking” its work, which, in turn, requires more calculations and more computation. Still, o3 is ludicrously expensive even for this category, with the Arc Prize Foundation (a non-profit that makes the ARC-AGI test for benchmarking models) estimating that it will cost $30,000 a task.

SoftBank Has To Borrow Money To Meet Its OpenAI and Stargate Obligations, leading to SoftBank's "...financial condition likely deteriorating."

As of right now, SoftBank has committed to the following:

  • The lion's share of OpenAI's $40 billion funding round, including at least $7.5 billion of the initial $10 billion tranche (more if it can't syndicate the rest).
  • Roughly $19 billion toward the Stargate data center project.
  • An alleged $3 billion a year in purchases of OpenAI's tech for its various subsidiaries and business units.

SoftBank's exposure to OpenAI is materially harming the company. To quote the Wall Street Journal:

Ratings agency S&P Global said last week that SoftBank’s “financial condition will likely deteriorate” as a result of the OpenAI investment and that its plans to add debt could lead the agency to consider downgrading SoftBank’s ratings. 

While one might argue that SoftBank has a good amount of cash, the Journal also notes that it's somewhat hamstrung in how it can use it, as a result of CEO Masayoshi Son's reckless gambles:

SoftBank had a decent buffer of $31 billion of cash as of Dec. 31, but the company has also pledged to hold much of that in reserve to quell worried investors. SoftBank has committed not to borrow more than 25% of the value of all of its holdings, which means it will likely need to sell some of the other parts of its empire to pay for the rest of the OpenAI deal.

Worse still, as mentioned before, it seems that SoftBank will be borrowing to finance the entirety of the first $10 billion — or $7.5 billion of it, assuming it finds investors to syndicate part of the first tranche and they follow through right up until the moment Masayoshi Son hits ‘send’ on the wire transfer.

As a result, SoftBank will likely have to start selling off parts of its valuable holdings in companies like Alibaba and ARM, or, worse still, parts of its ailing investments from its Vision Fund, resulting in a material loss on its underwater deals.

This is an untenable strategy, and I'll explain why.

OpenAI Needs At Least $40 billion A Year To Survive, And Its Costs Are Increasing

While we do not have much transparency into OpenAI's actual day-to-day finances, we can make the educated guess that its costs are increasing based on the amount of capital it’s raising. If OpenAI’s costs were flat, or only mildly increasing, we’d expect to see raises roughly the same size as previous ones. Its $40 billion raise is roughly six times the size of its previous funding round. 

Admittedly, multiples like that aren’t particularly unusual. If a company raises $300,000 in a pre-seed round, and $3m in a Series A round, that’s a tenfold increase. But we’re not talking about hundreds of thousands of dollars, or even millions of dollars. We’re talking about billions of dollars. If OpenAI’s funding round with SoftBank goes as planned, it’ll raise the equivalent of the entire GDP of Estonia — a fairly wealthy country itself, and one that’s also a member of NATO and the European Union. That alone should give you a sense of the truly insane scale of this.

Insane, sure, but undoubtedly necessary. Per The Information, OpenAI expects to spend as much as $28 billion in compute on Microsoft's Azure cloud in 2028. Over a third of OpenAI's revenue, per the same article, will come from SoftBank's (alleged) spend. It's reasonable to believe that OpenAI will, as a result, need to raise in excess of $40 billion in funding a year, and more likely somewhere in the region of $50 billion or more, until it reaches profitability. This is due both to its growing cost of business and to its various infrastructure commitments, both in terms of Stargate and with third-party suppliers like CoreWeave and Microsoft. 

Counterpoint: OpenAI could reduce costs: While this is theoretically possible, there is no proof that this is taking place. The Information claims that "...OpenAI would turn profitable by the end of the decade after the buildout of Stargate," but there is no suggestion as to how it might do so, or how building more data centers would somehow reduce its costs. This is especially questionable when you realize that Microsoft is already providing discounted pricing on Azure compute. We don’t know whether those discounts put prices below Microsoft’s break-even point — something neither Microsoft nor any other company would offer unless it had something else to incentivize it, such as equity or a profit-sharing program. Microsoft, for what it’s worth, has both of those things. 

OpenAI CEO Sam Altman's statements around costs also suggest that they're increasing. In late February, Altman claimed that OpenAI was "out of GPUs." While this suggests that there’s demand for some products — like its image-generating tech, which enjoyed a viral day in the sun in March — it also means that to meet the demand it needs to spend more. And, at the risk of repeating myself, that demand doesn’t necessarily translate into profitability. 

SoftBank Cannot Fund OpenAI Long-Term, as OpenAI's costs are projected to be $320 billion in the next five years

As discussed above, SoftBank has to overcome significant challenges to fund both OpenAI and Stargate, and when I say "fund," I mean fund the current state of both projects, assuming no further obligations.

The Information reports that OpenAI forecasts that it will spend $28 billion on compute with Microsoft alone in 2028. The same article also reports that OpenAI "would turn profitable by the end of the decade after the buildout of Stargate," suggesting that OpenAI's operating expenses will grow exponentially year-over-year.

These costs, per The Information, are astronomical:

The reason for the expanding cash burn is simple: OpenAI is spending whatever revenue comes in on computing needs for operating its existing models and developing new models. The company expects those costs to surpass $320 billion overall between 2025 and 2030.

The company expects more than half of that spending through the end of the decade to fund research-intensive compute for model training and development. That spending will rise nearly sixfold from current rates to around $40 billion per year starting in 2028. OpenAI projects its spending on running AI models will surpass its training costs in 2030.

SoftBank has had to (and will continue having to) go to remarkable lengths to fund OpenAI's current ($40 billion) round, lengths so significant that it may lead to its credit rating being further downgraded.

Even if we assume the best case scenario — OpenAI successfully converts to a for-profit entity by the end of the year, and receives the full $30 billion — it seems unlikely (if not impossible) for it to continue raising the amount of capital it needs to continue operations. As I’ve argued in previous newsletters, there are only a few entities that can provide the kind of funding that OpenAI needs. These include big tech-focused investment firms like SoftBank, sovereign wealth funds (like those of Saudi Arabia and the United Arab Emirates), and perhaps the largest tech companies.

These entities can meet OpenAI’s needs, but not all the time. It’s not realistic to expect SoftBank, or Microsoft, or the Saudis, or Oracle, or whoever, to provide $40bn every year for the foreseeable future.

This is especially true for SoftBank. Based on its current promise not to borrow more than 25% of the value of its holdings, it is near-impossible that SoftBank will be able to continue funding OpenAI at this rate ($40 billion a year), and $40 billion a year may not actually be enough.

Based on its last reported equity value of holdings, SoftBank's investments and other assets are worth around $229 billion, meaning that it can borrow just over $57bn while remaining compliant with these guidelines.
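
That borrowing headroom works out as follows (both inputs come from the reporting above; this is just the arithmetic):

```python
# SoftBank's implied borrowing headroom under its self-imposed 25% loan-to-value cap.
equity_value_of_holdings = 229e9     # ~$229B, last reported equity value of holdings
borrowing_cap = 0.25                 # promise not to borrow more than 25% of that value

max_borrowing = equity_value_of_holdings * borrowing_cap    # ~$57.25B
print(f"Maximum borrowing under the 25% cap: ~${max_borrowing / 1e9:.2f}B")
```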

In any case, it is unclear how SoftBank can fund OpenAI, but it's far clearer that nobody else is willing to.

OpenAI Is Running Into Capacity Issues, Suggesting Material Instability In Its Business or Infrastructure — And It's Unclear How It Expands Further

Before we go any further, it's important to note that OpenAI does not really have its own compute infrastructure. The majority of its compute is provided by Microsoft, though, as mentioned above, OpenAI now has a deal with CoreWeave to take over Microsoft's future options for more capacity.

Anyway, in the last 90 days, Sam Altman has complained about a lack of GPUs and pressure on OpenAI's servers multiple times. Forgive me for repeating stuff from above, but this is necessary.

  • On February 27, he lamented how GPT 4.5 was a "giant, expensive model," adding that it was "hard to perfectly predict growth surges that lead to GPU shortages." He also added that they would be adding tens of thousands of GPUs in the following week, then hundreds of thousands of GPUs "soon."
  • On March 26, he said that "images in chatgpt are wayyyy more popular than [OpenAI] expected," delaying the free tier launch as a result.
  • On March 27, he said that OpenAI's "GPUs [were] melting," adding that it was "going to introduce some temporary rate limits" while it worked out how to "make it more efficient."
  • On March 28, he retweeted Rohan Sahai, the product team lead on OpenAI's Sora video generation model, who said "The 4o image gen demand has been absolutely incredible. Been super fun to watch the Sora feed fill up with great content...GPUs are also melting in Sora land unfortunately so you may see longer wait times / capacity issues over coming days."
  • On March 30, he said "can yall please chill on generating images this is insane our team needs sleep."
  • On April 1, he said that "we are getting things under control, but you should expect new releases from openai [sic] to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges." He also added that OpenAI is "working as fast we can to really get stuff humming; if anyone has GPU capacity in 100k chunks we can get asap please call!"

These statements, taken in isolation, seem either harmless or like evidence that OpenAI's growth is skyrocketing — the latter of which might indeed be true, but which bodes ill for a company that burns money on every single user.

Any mention of rate limits or performance issues suggests that OpenAI is having significant capacity issues, and at this point it's unclear what further capacity it can actually expand to outside of that currently available. Remember, Microsoft has now pulled out of as much as 2GW of data center projects, walked away from a $1 billion data center development in Ohio, and declined the option on $12bn of compute from CoreWeave that OpenAI had to pick up — meaning that it may be pushing up against the limits of what is physically available.

While the total available capacity of GPUs at many providers like Lambda and Crusoe is unknown, we know that CoreWeave has approximately 360MW available, compared to Microsoft's 6.5 to 7.5 gigawatts, a large chunk of which already powers OpenAI.

If OpenAI is running into capacity issues, it could be one of the following:

  • OpenAI is running up against the limit of what Microsoft has available, or is willing to offer the company. The Information reported in October 2024 that OpenAI was frustrated with Microsoft, which it said wasn’t moving fast enough to supply it with servers.
  • OpenAI's overall capacity is sufficient, but it does not have the headroom to easily handle bursts in user growth in a stable manner.

Per The Information's reporting, Microsoft "promised OpenAI 300,000 NVIDIA GB200 (Blackwell) chips by the end of 2025," or roughly $18 billion of chips. It's unclear if this has changed since Microsoft allowed OpenAI to seek other compute in late January 2025.

I also don't believe that OpenAI has any other viable options for existing compute infrastructure outside of Microsoft. CoreWeave's current data centers mostly feature NVIDIA's aging "Hopper" GPUs, and while it could — and likely is! — retrofitting its current infrastructure with Blackwell chips, doing so is not easy. Blackwell chips require far more powerful cooling and server infrastructure to make them run smoothly (a problem which led to a delay in their delivery to most customers), and even if CoreWeave was able to replace every last Hopper GPU with Blackwell (it won't), it still wouldn't match what OpenAI needs to expand.

One might argue that it simply needs to wait for the construction of the Stargate data center, or for CoreWeave to finish the gigawatt or so of construction it’s working on.

As I've previously written, I have serious concerns over the viability of CoreWeave ever completing its (alleged) contracted 1.3 Gigawatts of capacity.

Per my article:

Per its S-1, CoreWeave has contracted for around 1.3 Gigawatts of capacity, which it expects to roll out over the coming years, and based on NextPlatform's math, CoreWeave will have to spend in excess of $39 billion to build its contracted compute. It is unclear how it will fund doing so, and it's fair to assume that CoreWeave does not currently have the capacity to cover its current commitments.

However, even if I were to humour the idea, it is impossible that any of this project is done by the end of the year, or even in 2026. I can find no commitments to any timescale, other than the fact that OpenAI will allegedly start paying CoreWeave in October (per The Information), which could very well be using current capacity.

I can also find no evidence that Crusoe, the company building the Stargate data center, has any compute available. Lambda, a GPU compute company that raised $320 million earlier this year, according to Data Center Dynamics "operates out of colocation data centers in San Francisco, California, and Allen, Texas, and is backed by more than $820 million in funds raised just this year," suggesting that it may not have any data centers of its own at all. Its ability to scale is entirely contingent on the availability of whatever data center providers it has relationships with. 

In any case, this means that OpenAI's only real choice for GPUs is CoreWeave or Microsoft. While it's hard to calculate precisely, OpenAI's best case scenario is that 16,000 GPUs come online in the summer of 2025 as part of the Stargate data center project.

That's a drop in the bucket compared to the 300,000 Blackwell GPUs that Microsoft had previously promised.

Any capacity or expansion issues threaten to kneecap OpenAI

OpenAI is, regardless of how you or I may feel about generative AI, one of the fastest-growing companies of all time. It currently has, according to its own statements, 500 million weekly active users. Putting aside that each user is unprofitable, such remarkable growth — especially as it's partially a result of its extremely resource-intensive image generator — is also a strain on its infrastructure.

The vast majority of OpenAI's users are free customers using ChatGPT, with only around 20 million paying subscribers, most of them on the cheapest $20 plan. OpenAI's services — even in the case of image generation — are relatively commoditized, meaning that users can, if they really care, go and use any number of other Large Language Model services. They can switch to Bing Image Creator, or Grok, or Stable Diffusion, or whatever.

Free users are also a burden on the company — especially with such a piss-poor conversion rate — losing it money with each prompt (which is also the case with paying customers), and the remarkable popularity of its image generation service only threatens to bring more burdensome one-off customers that will generate a few abominable Studio Ghibli pictures and then never return.

If OpenAI's growth continues at this rate, it will run into capacity issues, and it does not have much room to expand. While we do not know how much capacity it’s taking up with Microsoft, or indeed whether Microsoft is approaching capacity or otherwise limiting how much of it OpenAI can take, we do know that OpenAI has seen reason to beg for access to more GPUs.

In simpler terms, even if OpenAI wasn’t running out of money, even if OpenAI wasn’t horrifyingly unprofitable, it also may not have enough GPUs to continue providing its services in a reliable manner.

If that's the case, there really isn't much that can be done to fix it other than:

  • Significantly limiting free users' activity on the platform, which is OpenAI's primary mechanism for revenue growth and customer acquisition.
  • Limiting activity or changing the economics behind its paid product, to quote Sam Altman, "find[ing] some way to let people to pay for compute they want to use more dynamically."
    • On March 4th, Altman solicited feedback on "...an idea for paid plans: your $20 plus subscription converts to credits you can use across features like deep research, o1, gpt-4.5, sora, etc...no fixed limits per feature and you choose what you want; if you run out of credits you can buy more."
    • On January 5th, Sam Altman revealed that OpenAI is currently losing money on every paid subscription, including its $200-a-month "pro" subscription.
    • Buried in an article from The Information from March 5 is a comment that suggests it’s considering measures like changing its pricing model, with "...Sam Altman reportedly [telling] developers in London [in February] that OpenAI is primed to charge 20% or 30% of Pro customers a higher price because of how many research queries they’re doing, but he suggested an “a la carte” or pay-as-you-go approach. When it comes to agents, though, “we have to charge much more than $200 a month.”"

The problem is that these measures, even if they succeed in generating more money for the company, also need to reduce the burden on OpenAI's available infrastructure.

Remember: data centers can take three to six years to build, and even with Stargate's accelerated (and I'd argue unrealistic) timelines, OpenAI isn't even unlocking a tenth of Microsoft's promised compute (16,000 GPUs online this year versus the 300,000 GPUs promised by Microsoft).

What Might Capacity Issues Look Like? And What Are The Consequences?

Though downtime might be an obvious choice, capacity issues at OpenAI will likely manifest in hard limits on what free users can do, some of which I've documented above. Nevertheless, I believe the real pale horses of capacity issues are arbitrary limits on any given user group, meaning both free and paid users. Sudden limits on what a user can do — a reduction in the number of image or video generations for paid users, the introduction of "peak hours," or any increase in prices — are a sign that OpenAI is running out of GPUs, which it has already publicly said is happening.

However, the really obvious one would be service degradation — delays in generations of any kind, 500 status code errors, or ChatGPT failing to fully produce an answer. OpenAI has, up until this point, had fairly impressive uptime. Still, if it is running up against a wall, this streak will end.

The consequences depend on how often these issues occur, and to whom. If free users face service degradation, they will bounce off the product, as their use is likely far more fleeting than a paid user's, which will begin to erode OpenAI's growth. Ironically, rapid (and especially unprecedented) growth in one of OpenAI’s competitors, like xAI or Anthropic, could also represent a pale horse for OpenAI, as it would suggest that users bouncing off a degraded ChatGPT are landing somewhere else. 

If paid users face service degradation, it's likely this will cause the most harm to the company, as while paid users still lose OpenAI money in the end, it at least receives some money in exchange.

OpenAI has effectively one choice here: getting more GPUs from Microsoft. Its future depends heavily both on Microsoft's generosity and on there being enough GPUs to go around, at a time when Microsoft has pulled back from two gigawatts of data center projects specifically because it is moving away from providing compute for OpenAI.

Admittedly, OpenAI has previously spent more on training models than inference (actually running them) and the company might be able to smooth downtime issues by shifting capacity. This would, of course, have a knock-on effect on its ability to continue developing new models, and the company is already losing ground, particularly when it comes to Chinese rivals like DeepSeek.

OpenAI Must Convert To A For-Profit Entity By The End of 2025 Or It Loses $10 Billion In Funding, And Doing So May Be Impossible

As part of its deal with SoftBank, OpenAI must convert its bizarre non-profit structure into a for-profit entity by December 2025, or it’ll lose $10 billion from its promised funding. 

Furthermore, in the event that OpenAI fails to convert to a for-profit by October 2026, investors in its previous $6.6 billion round can claw back their investment, with it converting into a loan with an attached interest rate. Naturally, this represents a nightmare scenario for the company, as it would both add debt to its balance sheet and increase its outgoings.

This is a complex situation that almost warrants its own newsletter, but the long and short of it is that OpenAI would have to effectively dissolve itself, start the process of forming an entirely new entity, and distribute its assets to other nonprofits (or sell/license them to the for-profit company at fair market rates). It would require valuing OpenAI's assets, which would in and of itself be a difficult task, as well as getting past the necessary state regulators, the IRS, and state revenue agencies, and the upcoming trial with Elon Musk only adds further problems.

I’ve simplified things here, and that’s because (as I said) this stuff is complex. Suffice to say, this isn’t as simple as liquidating a company and starting afresh, or submitting a couple of legal filings. It’s a long, fraught process and one that will be — and has been — subject to legal challenges, both from OpenAI’s business rivals, as well as from civil society organizations in California.

Based on discussions with experts in the field and my own research, I simply do not know how OpenAI pulls this off by October 2026, let alone by the end of the year.

OpenAI Has Become A Systemic Risk To The Tech Industry

OpenAI has become a load-bearing company for the tech industry, both as a narrative — as previously discussed, OpenAI is the only Large Language Model company with any meaningful userbase — and as a financial entity. 

Its ability to meet its obligations and its future expansion plans are critical to the future health — or, in some cases, survival — of multiple large companies, and that's before considering the knock-on effects that any financial collapse would have on its customers. 

The parallels to the 2007-2008 financial crisis are startling. Lehman Brothers wasn’t the largest investment bank in the world (although it was certainly big), just like OpenAI isn’t the largest tech company (though, again, it’s certainly large in terms of valuation and expenditure). Lehman Brothers’ collapse sparked a contagion that would later spread throughout the global financial services industry, and consequently, the global economy. 

I can see OpenAI’s failure having a similar systemic effect. While there is a vast difference between OpenAI’s involvement in people’s lives compared to the millions of subprime loans issued to real people, the stock market’s dependence on the value of the Magnificent 7 stocks (Apple, Microsoft, Amazon, Alphabet, Meta, NVIDIA and Tesla), and in turn the Magnificent 7’s reliance on the stability of the AI boom narrative, still threatens material harm to millions of people, and that’s before the ensuing layoffs. 

And as I’ve said before, this entire narrative is based off of OpenAI’s success, because OpenAI is the generative AI industry. 

I want to lay out the direct result of any kind of financial crisis at OpenAI, because I don't think anybody is taking this seriously.

Oracle Will Lose At Least $1 Billion If OpenAI Doesn't Fulfil Its Obligations

Per The Information, Oracle, which has taken responsibility for organizing the construction of the Stargate data centers with unproven data center builder Crusoe, "...may need to raise more capital to fund its data center ambitions."

Oracle has signed a 15-year lease with Crusoe, and, to quote The Information, "...is on the hook for $1 billion in payments to that firm."

To further quote The Information:

...while that’s a standard deal length, the unprecedented size of the facility Oracle is building for just one customer makes it riskier than a standard cloud data center used by lots of interchangeable customers with more predictable needs, according to half a dozen people familiar with these types of deals.

In simpler terms, Oracle is building a giant data center for one customer — OpenAI — and has taken on the financial burden associated with it. If OpenAI fails to expand, or lacks the capital to actually pay for its share of the Stargate project, Oracle is on the hook for at least a billion dollars, and, based on The Information's reporting, is also on the hook to buy the GPUs for the site.

Even before the Stargate announcement, Oracle and OpenAI had agreed to expand their Abilene deal from two to eight data center buildings, which can hold 400,000 Nvidia Blackwell GPUs, adding tens of billions of dollars to the total cost of the facility.

In reality, this development will likely cost tens of billions of dollars, $19 billion of which is due from OpenAI, which does not have the money until it receives its second tranche of funding in December 2025. That tranche is partially contingent on OpenAI's ability to convert into a for-profit entity, which, as mentioned, is a difficult and unlikely proposition.

It's unclear how many Blackwell GPUs Oracle has had to purchase in advance, but in the event of any kind of financial collapse at OpenAI, Oracle would likely take a loss of at least a billion dollars, if not several billion dollars.

CoreWeave's Expansion Is Likely Driven Entirely By OpenAI, And It Cannot Survive Without OpenAI Fulfilling Its Obligations (And May Not Anyway)

I have written a lot about publicly-traded AI compute firm CoreWeave, and it would be my greatest pleasure to never mention it again.

Nevertheless, I have to.

The Financial Times revealed a few weeks ago that CoreWeave's debt payments could balloon to over $2.4 billion a year by the end of 2025, far outstripping its cash reserves, and The Information reported that its cash burn would increase to $15 billion in 2025.

As per its IPO filings, 62% of CoreWeave's 2024 revenue (a little under $2 billion, with losses of $863 million) was Microsoft compute, and based on conversations with sources, a good amount of this was Microsoft running compute for OpenAI.

In October 2025, OpenAI will start paying CoreWeave as part of its five-year-long, roughly $12 billion contract, picking up the option that Microsoft declined. This is also when CoreWeave will have to start making payments on its massive, multi-billion-dollar DDTL 2.0 loan, which likely makes OpenAI's payments critical to CoreWeave's future.

This deal also suggests that OpenAI will become CoreWeave's largest customer. Microsoft had previously committed to spending $10 billion on CoreWeave's services "by the end of the decade," but CEO Satya Nadella added a few months later on a podcast that its relationship with CoreWeave was a "one-time thing." Even assuming Microsoft keeps spending at its previous rate — something that isn't guaranteed — its spend would still amount to only around half of the revenue OpenAI could potentially bring CoreWeave.

CoreWeave's expansion, at this point, is entirely driven by OpenAI. 77% of its 2024 revenue came from two customers — Microsoft being the largest, and using CoreWeave as an auxiliary supplier of compute for OpenAI. As a result, its future expansion efforts — the theoretical 1.3 gigawatts of contracted (translation: does not yet exist) compute — are largely (if not entirely) for the benefit of OpenAI.

In the event that OpenAI cannot fulfil its obligations, CoreWeave will collapse. It is that simple. 

NVIDIA Relies On CoreWeave For More Than 6% Of Its Revenue, And On CoreWeave's Future Creditworthiness To Keep That Revenue Coming — Much Of Which Is Dependent On OpenAI

I’m basing this on a comment I received from Gil Luria, Managing Director and Head of Technology Research at analyst D.A. Davidson & Co:

Since CRWV bought 200,000 GPUs last year and those systems are around $40,000 we believe CRWV spent $8 billion on NVDA last year. That represents more than 6% of NVDA’s revenue last year. 
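
Luria's math, spelled out below (the NVIDIA annual revenue figure I use for the percentage is my assumption of roughly $130 billion for its last fiscal year; the quote itself only says "more than 6%"):

```python
# D.A. Davidson's estimate of CoreWeave's spend with NVIDIA, spelled out.
# The ~$130B NVIDIA annual revenue figure is an assumption for the comparison;
# the quote above only states "more than 6%".
gpus_bought = 200_000
system_price = 40_000                       # ~$40k per GPU system, per Luria

coreweave_spend = gpus_bought * system_price      # ~$8B
nvidia_annual_revenue = 130e9                     # assumption

share = coreweave_spend / nvidia_annual_revenue
print(f"CoreWeave spend: ~${coreweave_spend / 1e9:.0f}B, ~{share:.1%} of NVIDIA revenue")
# ~$8B, ~6.2%
```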

CoreWeave receives preferential access to NVIDIA's GPUs, and makes up billions of dollars of its revenue. CoreWeave then takes those GPUs and raises debt using them as collateral, then proceeds to buy more of those GPUs from NVIDIA. NVIDIA was the anchor for CoreWeave's IPO, and CEO Michael Intrator said that the IPO "wouldn't have closed" without NVIDIA buying $250 million worth of shares. NVIDIA invested $100 million in the early days of CoreWeave, and, for reasons I cannot understand, also agreed to spend $1.3 billion over four years to, and I quote The Information, "rent its own chips from CoreWeave."

Buried in CoreWeave's S-1 — the document every company publishes before going public — was a warning about counterparty credit risk: the risk that one party provides services or goods to another on specific repayment terms, and the other party fails to meet its side of the deal. While this was written as a theoretical (it could, in theory, apply to any company to which CoreWeave acts as a creditor), it only named one company: OpenAI. 

As discussed previously, CoreWeave is saying that, should a customer — any customer, but really, it means OpenAI — fail to pay its bills for infrastructure built on its behalf, or for services rendered, it could pose a material risk to the business.

Aside: The Information reported that Google is in "advanced talks" to rent GPUs from CoreWeave. Comparing it to Microsoft and OpenAI's deals with CoreWeave, it noted that Google's potential deal is "significantly smaller than those commitments, according to one of the people briefed on it, but could potentially expand in future years."

CoreWeave's continued ability to do business hinges heavily on its ability to raise further debt (which I have previously called into question), and its ability to raise further debt is, to quote the Financial Times, "secured against its more than 250,000 Nvidia chips and its contracts with customers, such as Microsoft." Any future debt that CoreWeave raises wou
