I think Ed maybe buries the lede a little bit with all the ranting. About halfway through the piece he drops this chart[1] which personally left me shocked.
To my knowledge, revenue growth of over 10,000% in six years isn't a reality once you're at that level of volume. Even asking 4o about the projection gets an acknowledgment of this reality:
"OpenAI’s projection to grow from approximately $1 billion in revenue in 2023 to $125 billion by 2029 is extraordinarily ambitious, implying a compound annual growth rate (CAGR) exceeding 90%. Such rapid scaling is unprecedented, even among the fastest-growing tech companies."
Am I missing something? I like OAI and I use ChatGPT every day, but I remain unconvinced by those figures.

[1] https://lh7-rt.googleusercontent.com/docsz/AD_4nXcTvV_KScCMt...
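As a back-of-the-envelope check on the quoted projection (the $1B/2023 and $125B/2029 figures come from the quote; the arithmetic is mine):

```python
# Sanity-check the quoted projection: ~$1B revenue in 2023 to $125B in 2029.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction (0.9 == 90%/year)."""
    return (end / start) ** (1 / years) - 1

rate = cagr(1.0, 125.0, 2029 - 2023)    # revenue in $B
print(f"Implied CAGR: {rate:.1%}")      # about 123.6% per year
print(f"Total growth: {(125.0 / 1.0 - 1):,.0%}")  # 12,400%
```

So the "exceeding 90%" in the quote is, if anything, an understatement: sustained for six straight years, the implied rate is north of 120% per year.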
The fact that people are generally uncritical of this growth claim is just stunning. You have to believe that this is as big as smartphones while ignoring all of the differences with this cycle of hype.
Nobody intelligent is actually uncritical about these growth claims. What you are seeing is the other side of "the market can remain irrational longer than you can remain solvent". There's no financial benefit to calling bullshit, because timing when the market will suddenly correct is impossible. Rational investors are either in the business of boosting the stock to sell it in the short term (a greater-fool scam) or staying as far away from it as possible.
It doesn't have to hit the projections. It just has to keep looking like it could 2 months from now when you sell.
The market has never been determined by the intrinsic value of the underlying tangible thing, but rather by the market value of that thing. Though they may seem similar, they are not the same. As the market becomes increasingly detached from reality, you'll see more of this.
The vision they are selling is that AI will replace high-value professionals in the economy. It's a vision that entails the collapse of the entire Western societal model. If you believe that, investing in them becomes an existential matter, and the actual immediate value doesn't matter. You simply can't afford not to invest.
The numbers are meaningless. They are just meant to evoke "we will control the economy".
Ed's style is very much to rant about this stuff lol. But it's really important to have critical voices right now since a miss with this stuff could have devastating consequences for the industry and the people who work in it.
I think the author might need a reality check. In almost every company, the usage of AI is becoming more and more valuable. Even if progress in models freezes and halts, the current state is too valuable to abandon. I have talked to multiple people in multiple companies, and LLMs are becoming indispensable for so many things. ChatGPT might or might not conquer the world, but I don't see enterprise usage decreasing (and I am not talking about customer-facing LLM usage, which might be very stupid).
The author's tone is over the top, but I think this quote is true:
> Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot.
There are very real use cases with high value, but it's not an economy-defining technology, and the total value is going to fall far short of projections. On bottom lines, the overall productivity gain from AI for most companies is almost a rounding error compared to other factors.
Part of this is because VCs are hoping for a repeat of the Web 2.0 boom, where marginal costs were zero and billions of people were buying smartphones. If you check YC’s RFS (just an example), they’re all software: https://www.ycombinator.com/rfs
Everyone asks “what if this is like the internet” but what if it’s actually like the smartphone, which took decades of small innovations to make work? If in 1980 you predicted that in 30 years handheld computers would be a trillion dollar industry you’d be right but it still required billions in R&D.
There are a ton of non-software innovations out there, they just require more than a million dollar seed to get working. For example making better batteries, better solar panels, fusion power, innovations in modular housing, etc.
Hopefully you are right, but I fear this is a very naive and premature judgement. At one point the internet was a $50B industry insisting it would be a $1T industry. It even had a bubble, bursting and burying entire companies.
Yet, $1T was nevertheless a profound underestimation.
The same could be said about all the hyped trends that died: blockchain (ignore the cryptocurrency part), IoT (not sure what happened there), big data (the foundation of current AI, regardless of what anyone says), an app for everything (we do indeed have more apps now, and everything is junk). They were all once considered the new water/air/electricity/revolution/disruption.
In aggregate, we seem to have developed a collective amnesia about how fast these trends move and how much is burned keeping the hype machine going, keeping us on the edge. We also need to stop calling LLMs different, the same way every kid wants to claim Mark Zuckerberg was different or Bill Gates was different, so dropping out like them will make these kids owners of the next infinite riches.
After a long decade of periodic "this will truly revolutionize everything" speeches, we need to keep some skepticism. Additionally, an AI bubble would be more devastating than the previous ones: earlier, money was spread across multiple hypes, from which some emerged as silent victors of current trends, but now everything is consolidated into one thing, all eggs in one basket. If the eggs break, a large population and industry will metaphorically starve and suffer.
Is it? There's certainly $1T of other business built with the internet, but the internet business itself was rapidly commoditized. The valuable things were the applications built on it, not the network. The argument here is that nobody has found the $1T applications built on AI foundation models yet, but OpenAI is valued as if they have, because their demo chatbot took off out of people's curiosity, and people are extrapolating that accident exponentially into the future.
The internet bubble is probably a good analogy. It took almost 20 years and several rounds of failed businesses for the internet to have the impact that was originally promised. The big internet companies of the 90s are not where the money was ultimately made.
Similarly, the current LLM vendors and cloud providers are likely not where the money will ultimately be made. Some startup 10-15 years from now will likely stack a cheaply hosted or distributed LLM with several other technologies, and create a whole new category of use cases we haven't even thought of yet, and that will actually create the new value.
Almost all of the internet build out happened between 1998 and 2008, and cost about $1T and was adding $1T to the economy annually by the end of that buildout.
This latest AI hype cycle is also about 10 years old and about $1T invested, and yet it's still a super-massive financial black hole with no economy-wide trillion dollar boost anywhere in sight.
The internet broadband, fiber, and cellular buildout changed the world significantly. This LLM buildout is doing no such thing and is unlikely to ever do so.
By the time we finished pouring a trillion dollars into the global broadband, fiber, and cellular network buildout between 1998 and 2008, the Internet was already adding a trillion dollars a year to the economy.
We've now got 10 years and about a trillion dollars invested in this latest AI bubble, and it's still a super-massive financial black hole.
Ten years and a trillion dollars can make great things happen. AI ain't that.
> the internet was a $50B industry

How much is "the internet" an industry? It's an enabler and a commodity, as much as electricity or road networks are. Are you counting everything that uses the internet as contributing a sizable share to the internet industry's value?
There is a difference between valuable and profitable. I think anyone who wants to say there isn’t a bubble needs to solve two problems:
1) Inference is too damn expensive.
2) The models/products aren’t reliable enough.
I also personally think talking to industry folks isn’t a silver bullet. No one knows how to solve #2. No one. We can improve by either waiting for better chips or using bigger models, which has diminishing returns and makes #1 worse.
Maybe OpenAI’s next product should be selling dollars for 99 cents. They just need a billion dollars of SoftBank money, and they can do 100 billion in sales before they need to reraise. And if SoftBank agrees to buy at $1.01 the business can keep going even longer!
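The joke is about negative unit economics: when each sale loses money, growth only scales the losses. A toy model (every number here is illustrative, not OpenAI's actual cost structure):

```python
def net(price: float, unit_cost: float, volume: int) -> float:
    """Total profit (negative means loss) at a given sales volume."""
    return (price - unit_cost) * volume

# Selling dollars for 99 cents: revenue "grows" 10x, and so does the loss.
for volume in (1_000_000, 10_000_000):
    print(f"sold {volume:,}: net ${net(0.99, 1.00, volume):,.0f}")
```

No amount of volume fixes a negative margin; only raising prices or cutting unit cost does.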
I think AI will be useful to industries/companies where #2 is unimportant: Where the quantity of product is far more important than its quality. Disturbingly, this describes the market for a lot of industries/companies.
It seems like every 6 months they come out with a new model that's as good as the previous generation but with a fraction of the inference cost. Inference cost for a given level of quality has been dropping fast. E.g. GPT-4.1 nano outperforms the old GPT-4 model but costs 300x less.
Right now, the API cost for asking a single question costs about a fiftieth of a cent on their cheap 4.1-nano model, up to about 2 cents on their o3 model. This is pretty affordable.
On the other end of the spectrum, if you're maxing out the context window on o3 or 4.1, it'll cost you around $0.50 to $2. Pricy, but on 4.1, this is like inputting several novels worth of text (maybe around 2000 pages).
If you're looking at it from the perspective of business sustainability (i.e. what the article is about) "we keep lowering our prices" doesn't sound so great. The question is whether GPT-4.1 nano costs OpenAI 300x less than GPT-4 to run or not. If it costs exactly that much less, that still means demand needs to grow by more than 300x just to keep revenue constant. And if it does, then total inference cost correspondingly goes up again.
That would be great news for OpenAI if they could somehow prevent people from buying their own computers. Because as inference costs come down for OpenAI, their customers also get access to better and better commodity hardware (and also better models to run on it). And commodity models become more and more capable all the time, even if they’re not the best.
There are different kinds of reliable. There is the reliability of certain failure, for instance. These things are unreliable in a way that people and nature are not.
Nature is reliable in that we have rules (physics) and practices (engineering) to base our processes upon.
People aren't reliable, for a specific value of "reliable".
We expect technology (machines, software, AI, whatever) to be deterministically reliable, with known failure modes, and significantly more reliable than humans at what it does. That's what we rely upon, and why we use machines to replace humans: to do what humans can't, harder, faster, stronger.
> In almost every company the usage of AI is becoming more and more valuable.
It's certainly becoming more common, and there are lots of people who want it to be valuable, and indeed believe it's valuable.
Personally, I find it about as valuable as a really, really good intellisense. Which is certainly valuable, but I feel like that's way off from the type/quality/quantity of value you're suggesting.
I also find the intellisense aspect of it good, though the price is still too high when my local IDE has been able to do a tenth of that for a long time.
Additionally, LLMs recover some of the old-days Google mastery of finding the right result quickly, saving us a huge waste of wading through junk and SEO spam. That translates to productivity, but it only balances out what was lost when SEO spam took off a decade back. I'm indifferent about this gain too: anything with mass adoption tends to devolve into garbage behavior eventually, and the gains will slowly be eaten up again.
This is the comment I wanted to write before scrolling down. Information retrieval in general is really improving computer-related tasks, but I think "present" or "visible" is a much better term to describe it than "valuable".
That's beside the point, though. How are they going to meet shareholders' expectations of future revenue? When AI becomes as expensive as a human, what happens then?
When AI becomes as expensive as a human, lay off humans to keep financing the AI, because it is easy to fire and rehire people but expensive to rip out complex integrations and become independent of them (more precisely: doing so means acknowledging failure and bad decision-making, which hurts leadership egos).
Edit: I am just sharing how our CTO responds to the massive push of AI into everything. Integrating a non-deterministic system has a massive cost, and once the thing is made deterministic, the additional steps add expenses that finally make the entire solution too expensive compared to the benefit. This is not my opinion; I'm just sharing how typical leadership hopes to tackle the expense issue.
It seems like the question is whether you believe in cost-based or value-based pricing. The cost of AI for the same amount of power is going down a lot year-to-year. [1]
If market prices go down with costs, then we see something like solar power where it’s everywhere but suppliers don’t make money, not even in China.
Or maybe customers spend a lot more on more-expensive models? Hard to say.
"The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry!"
Seems to me like Ed is making a very elementary mistake here. I don't think anyone has ever claimed the total amount of money spent on inference would decrease. Claims about inference cost are always about cost per quality of output (admittedly there is no consensus on how to measure the latter). If usage goes up faster than cost per unit goes down, then you spend more, but the point is that you're also getting more.
> people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside,
It took me 20+ ranty paragraphs to realise that this guy is not, actually, an AI doomer. Dear tech writers: there are camps and sides here, and they're all very deeply convinced of being right. Please make clear which one you're on before going off the deep end in anger.
Might be a bit naive, but by the time 2029 comes around, and AI companies have started 'monetising free users', won't a lot of people/companies have open-source models tuned and tailored to their needs running locally, no provider required?
If there is anything I can expect to remain consistent over the next 30-40 years, it's that the majority of people have no interest in, or ability for, maintaining technical systems themselves, as evidenced by the often banal and extremely simplistic tasks that family, friends, and users bring to me as an IT support technician. There absolutely will be a movement in models toward more specific tasks, but they will be packaged and distributed by channels and entities that need to monetize their services in some way. That is just inevitable. The problem was never access to information; it was the nature of most people. (Just to be clear, I don't think this is technically a bad thing, just something I have noticed.)
Remember when kids used to learn editing or photoshop? You can do that in some remarkably capable tools online, and the kids don't bother with offline tools anymore. This is the same thing.
I think it'll mostly still be centralized around providers
I don't see good local models happening on mobile devices anytime soon, and the majority of desktop users will use whatever is the best + most convenient. Competitive open source models running on your average laptop? Seems unlikely
As other commenters have rightly pointed out - agents being a good product is orthogonal to agents being profitable. If you can start securing big contracts with enterprises by convincing them that the AI revolution is coming, that's enough for profit.
It will only be years down the road when people start realizing that they're spending millions on AI agents that are significantly less capable than human employees. But by that point OpenAI will already be filthy rich.
To put it a different way - AI is currently a gold rush. While Nvidia sells shovels, OpenAI sells automated prospecting machines. Both are good businesses to be in right now.
Nvidia made $44B profit last year. They are functionally a monopoly and can charge whatever they want. That’s a good business.
OpenAI lost $5B last year. There are many competitors offering similar products, preventing OpenAI from extracting large profits. It isn’t a good business now. Sam Altman is promising it will be in the future.
Absolutely. Although another sentiment on HN is that it doesn't matter, which I find cynical when people use it as a pro-LLM argument, as in "who cares if it doesn't work."
Granted, but you and I were both here in days when Terry Davis, may he rest in peace, was also. We have uncommon cause to know how broad a church "sentiment also on HN" may be.
> I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing.
I can't tell whether this man actually believes he is the only one critiquing AI. I mean, I can barely walk two feet without tripping over anti-AI blogs, posts, news articles, YouTube videos, or comments.
Ed's main critique is about business sustainability -- it's true that there are many articles about AI on IP issues or ethics but he is unique in actually crunching the numbers on profit.
There's the Financial Times, Forbes, tons of Reddit posts, YouTube videos... I suppose it's possible that he's the only blogger doing this, but as far as I can see he is not the only one crunching profit numbers on the most "visible" company in the world.
Lots of people are saying "this doesn't work very well", but I think he probably _is_ the main person banging the "this makes no financial sense whatsoever" drum; that bit does seem to get weirdly skimmed over most of the time, even by many AI sceptics.
OpenAI could very well be to this AI boom what Netscape was to the dotcom bubble. Even post dotcom crash, a lot of lasting value remained—and I believe the same will happen this time too.
The lasting value was a trillion dollars worth of broadband, fiber, and cellular build-out, the literal physical internet that was built between 1998 and 2008 and persists today, allowing for trillions of dollars of new economic activity.
The lasting value of OpenAI and all of the other frontier labs and hardware makers supporting them will be what? What have they collectively built that will outlive their doomed corporations for decades providing trillions of dollars in new economic activity?
Are all those GPU warehouses going to bolster the economy by a trillion or two every year when these AI companies are all gone? Will those ancient LLMs be adding trillions to the economy?
OpenAI is a bubble. AI industry is a bubble. AI value is real (and probably underestimated). It would take at least a decade for all the relevant industries to incorporate the progress so far.
If Ed feels this strongly, he should short NVDA and laugh all the way to the bank when it pops.
I think such sayings are more and more being proven wrong. "Buy the rumor, sell the news" or "buy the dip and hodl long term" have severely backfired in recent months, and many individuals I personally know lost huge.
Until we somehow return to ZIRP, there is just too much money in AI not to keep the hype going as long as possible, whether to avoid investor backlash or to justify mass layoffs (cutting costs to recover because we spent too much on AI and can't get refunds, so we lay off people instead). Money seems to chase only hype, and since there isn't enough freely available money for other topics, it all flows to the one thing everyone chases.
Well, people have been saying Tesla is overvalued for years and it’s only just now started to come down. It’s still massively overvalued on PE compared to other car companies.
GameStop is still trading at 82 PE. Insane valuation. Apple and Google are money printers and only trade at 20-30.
> Generative AI has never had the kind of meaningful business returns or utility that actually underpins something meaningful
Even if we assume this is true, it’s worth asking.. did the promised efficiency of the advertising economy ever need to be “real” to completely transform society?
OpenAI has a massive brand advantage because the general public equates their products with AI.
Even if they don't 100% figure out agents they are now big enough that they can acquire those that do.
If the future is mostly about the app layer then they'll be very aggressive in consolidating the same way Facebook did with social media, see for example Windsurf.
I don't agree with Ed's general style of rhetoric, but every single thing he says is important, and these are active topics of avoidance and hand-waving from language-model advocates, who are also obviously upset by what he says.
The pivot of the article is this, at the end: "There are no other hypergrowth markets left in tech."
That's the key.
Tech is, at the moment, the only growth engine for capitalism, the one that sustains the whole world economy (before, it used to be "just" IT [2015], credit [2008], oil [1973], and coal, all of which demonstrated their limits in sustaining continuous growth).
AI is the only growth engine left within tech at this moment, supported by GPU/TPU/parallelization hardware.
Given what's at stake, if and when the AI bubble bursts, and if there's no alternative growth engine to jump to, the domino effect will not be pretty.
I don't know. Neither do you. Stay long NVDA, somebody'll take the other side of the bet, and eventually we'll all be able to look back and see where we started finding out who'd called it.
> sky rains blood
This reminds me of Neon Genesis Evangelion.
Of course, no one realized (at least publicly) that it is a metaphor for "everyone claps at the end" (also the ending of the original series).
Sound of rain sounds like an audience clapping. "Blood rain" means real claps, not some fake condescending simulacra of it.
"So that is how democracy dies? With thunderous applause." is also a reference to the same metaphor.
Both movies relate to the theme of sacrifice, worthiness, humanity survival.
Are you guys too much into giant robots to even notice these things?
what
Some people can fly mechanical lines.
It's basically the Gartner hype cycle in action.
It’s not enough to be right eventually. Being too early is as good as wrong.
There is a difference between valuable and profitable. I think anyone who wants to say there isn’t a bubble needs to solve two problems:
1) Inference is too damn expensive.
2) The models/products aren’t reliable enough.
I also personally think talking to industry folks isn’t a silver bullet. No one knows how to solve #2. No one. We can improve by either waiting for better chips or using bigger models, which has diminishing returns and makes #1 worse.
Maybe OpenAI’s next product should be selling dollars for 99 cents. They just need a billion dollars of SoftBank money, and they can do 100 billion in sales before they need to reraise. And if SoftBank agrees to buy at $1.01 the business can keep going even longer!
I think AI will be useful to industries/companies where #2 is unimportant: Where the quantity of product is far more important than its quality. Disturbingly, this describes the market for a lot of industries/companies.
It seems like every 6 months they come out with a new model that's as good as the previous generation but with a fraction of the inference cost. Inference cost for a given level of quality has been dropping fast. E.g. GPT-4.1 nano outperforms the old GPT-4 model but costs 300x less.
Right now, asking a single question via the API costs about a fiftieth of a cent on their cheap 4.1-nano model, up to about 2 cents on their o3 model. This is pretty affordable.
On the other end of the spectrum, if you're maxing out the context window on o3 or 4.1, it'll cost you around $0.50 to $2. Pricy, but on 4.1, this is like inputting several novels worth of text (maybe around 2000 pages).
If you're looking at it from the perspective of business sustainability (i.e. what the article is about) "we keep lowering our prices" doesn't sound so great. The question is whether GPT-4.1 nano costs OpenAI 300x less than GPT-4 to run or not. If it costs exactly that much less, that still means demand needs to grow by more than 300x just to keep revenue constant. And if it does, then total inference cost correspondingly goes up again.
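The unit-economics point above can be sketched directly. The 300x figure comes from the earlier comment; the volume is made up for illustration:

```python
# If the price per unit of output falls 300x, query volume must grow by the
# same factor just to hold revenue flat. Illustrative numbers only.
PRICE_DROP = 300
old_price = 1.0            # relative price per query
old_volume = 1_000_000     # hypothetical daily query volume

new_price = old_price / PRICE_DROP
breakeven_volume = old_volume * old_price / new_price  # volume for flat revenue

print(round(breakeven_volume / old_volume))  # 300
# ...and at 300x the volume, total inference *spend* climbs right back up
# unless serving costs fell at least as fast as prices did.
```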
That would be great news for OpenAI if they could somehow prevent people from buying their own computers. Because as inference costs come down for OpenAI, their customers also get access to better and better commodity hardware (and also better models to run on it). And commodity models become more and more capable all the time, even if they’re not the best.
> The models/products aren’t reliable enough.
People aren't reliable enough.
Nature isn't reliable enough.
For most uses, all that is needed is a system to handle cases where it is not reliable enough.
Then suddenly it becomes reliable enough.
There are different kinds of reliable. There is, for instance, the reliability of certain failure. These things are unreliable in a way that the other things you listed are not.
A computer can never be held accountable. Therefore, a computer must never make a management decision.
Nature is reliable in that we have rules (physics) and practices (engineering) to base our process upon.
People aren't reliable, for a specific value of reliable.
We expect technology (machines, software, AI, whatever) to be deterministically reliable (we know its failure modes), and significantly more reliable than humans at what it does. That's the whole point of relying on it: we use it to replace humans at what they do, in ways humans can't: harder, faster, stronger.
> In almost every company the usage of AI is becoming more and more valuable.
It's certainly becoming more common, and there are lots of people who want it to be valuable, and indeed believe it's valuable.
Personally, I find it about as valuable as a really, really good intellisense. Which is certainly valuable, but I feel like that's way off from the type/quality/quantity of value you're suggesting.
I also find the intellisense aspect of it good, though the price is still too high when my local IDE has been able to do a tenth of that for a long time.
Additionally, LLMs are sort of like old-school Google mastery: they find the right result quickly and save you the huge waste of wading through junk and SEO spam. That translates to productivity, but we're only balancing out the productivity that was lost when SEO spam took off a decade back. I'm indifferent about this gain too; anything with mass adoption tends to devolve into garbage behavior eventually, and the gains will slowly be eaten up again.
This is the comment I wanted to write before scrolling down. Information retrieval in general is really improving computer-related tasks, but I think present or visible is a much better term to describe it than valuable.
That's beside the point, though. How are they gonna meet the shareholders' expectations of future revenue? When AI becomes equally expensive as a human, what happens then?
When AI becomes equally expensive as a human, lay off humans to keep financing the AI. It's easy to fire and rehire people, but expensive to rip out complex integrations and become independent of them; more precisely, doing so means acknowledging failure and bad decision-making, which hurts leadership egos.
Edit: I am just sharing how our CTO responds to the massive push of AI into everything. Integrating a non-deterministic system has massive cost, and once the thing is made deterministic, the additional steps add expenses that finally make the entire solution too expensive relative to the benefit. This is not my opinion, just how typical leadership hopes to tackle the expense issue.
At these companies it becomes more and more forced, but not more and more valuable.
Please make a distinction between what people say and what can be measured.
I noticed there is not a single number in your comment.
It seems like the question is whether you believe in cost-based or value-based pricing. The cost of AI for the same amount of power is going down a lot year-to-year. [1]
If market prices go down with costs, then we see something like solar power where it’s everywhere but suppliers don’t make money, not even in China.
Or maybe customers spend a lot more on more-expensive models? Hard to say.
[1] https://simonwillison.net/2025/Feb/9/sam-altman/
Why is there a ChatGPT button on my gaming mouse? https://www.yankodesign.com/2025/04/27/razers-first-vertical...
How is this not indicative of a massive bubble?
Where is the more of value coming from if everyone uses the same tool to do the same things?
Your emails, presentations etc. will all look the same and what’s worse so will the emails, presentations etc. of scammers and phishers.
> I apologize, this is going to be a little less reserved than usual.
It's a bit of an open question which comes first; the AI bubble bursting, or Ed Zitron exploding from pure indignation.
"The market can stay irrational longer than you can stay solvent."
I actually think Ed's indignity can outlast the market.
His indignity, I grant you. His ascending aorta and circle of Willis? I hear him speak on occasion.
"The costs of inference are coming down: Source? Because it sure seems like they're increasing for OpenAI, and they're effectively the entire userbase of the generative AI industry!"
Seems to me like Ed is making a very elementary mistake here. I don't think anyone has ever claimed the total amount of money spent on inference would decrease. Claims about inference cost are always about cost per quality of output (admittedly there is no consensus on how to measure the latter). If usage goes up faster than cost per unit goes down, then you spend more, but the point is that you're also getting more.
Seriously, this opening!
> people are babbling about the "AI revolution" as the sky rains blood and crevices open in the Earth, dragging houses and cars and domesticated animals into their maws. Things are astronomically fucked outside,
It took me 20+ ranty paragraphs to realise that this guy is not, actually, an AI doomer. Dear tech writers, there's camps and sides here, and they're all very deeply convinced of being right. Please make clear which one you're on before going off the deep end in anger.
Ed is a very verbose guy. I ended up unsubscribing from his podcast & blog, just because he seems to have graphomania.
He makes good points! I just wish he'd make them quicker.
I don't mind the verbosity so much as steering me the entirely wrong way for that long.
Might be a bit naive, but by the time 2029 comes around, and AI companies have started 'monetising free users', won't a lot of people/companies have open-source models tuned and tailored to their needs running locally, no provider required?
If there is anything I can expect to remain consistent over the next 30-40 years, it's that the majority of people have no interest in or ability to maintain technical systems themselves, as evidenced by the often banal and extremely simplistic tasks I'm asked to handle as an IT support technician, by family, friends, etc. There absolutely will be a move toward models for more specific tasks, but they will be packaged and distributed by channels and entities that need to monetize their services in some way. That is just inevitable. The problem was never access to information; it was the nature of most people. (Just to be clear, I don't think this is technically a bad thing, just something I have noticed.)
> it's that the majority of people have no interest or ability to maintain technical systems themselves
In the same way you likely don't grow your own food or make your own shoes.
Remember when kids used to learn editing or photoshop? You can do that in some remarkably capable tools online, and the kids don't bother with offline tools anymore. This is the same thing.
I think it'll mostly still be centralized around providers
I don't see good local models happening on mobile devices anytime soon, and the majority of desktop users will use whatever is the best + most convenient. Competitive open source models running on your average laptop? Seems unlikely
Doesn’t seem that unlikely to me. Ollama on Mac can already run decent DeepSeek/Llama distillations. For a lot of tasks it’s already good enough.
And 2029 is in 4 years. Four years ago leetcode benchmarks still meant something and OpenAI was telling us GPT3 was too dangerous to release.
As other commenters have rightly pointed out - agents being a good product is orthogonal to agents being profitable. If you can start securing big contracts with enterprises by convincing them that the AI revolution is coming, that's enough for profit.
It will only be years down the road when people start realizing that they're spending millions on AI agents that are significantly less capable than human employees. But by that point OpenAI will already be filthy rich.
To put it a different way - AI is currently a gold rush. While Nvidia sells shovels, OpenAI sells automated prospecting machines. Both are good businesses to be in right now.
Nvidia made $44B profit last year. They are functionally a monopoly and can charge whatever they want. That’s a good business.
OpenAI lost $5B last year. There are many competitors offering similar products, preventing OpenAI from extracting large profits. It isn’t a good business now. Sam Altman is promising it will be in the future.
"Gold rush" implies there is gold in fact to be had. Zitron's thesis is and has long been that there is not.
Absolutely. Though another sentiment on HN is that it doesn't matter, which I find cynical when I see people use it as a pro-LLM argument: "who cares if it doesn't work."
Granted, but you and I were both here in days when Terry Davis, may he rest in peace, was also. We have uncommon cause to know how broad a church "sentiment also on HN" may be.
> I don't know why I'm the one writing what I'm writing, and I frequently feel weird that I, a part-time blogger and podcaster, am writing the things that I'm writing.
I can't tell whether this man actually believes that he is the only one critiquing AI? I mean.. I can barely walk 2 feet without tripping over anti-AI blogs, posts, news articles, youtube videos or comments.
Ed's main critique is about business sustainability -- it's true that there are many articles about AI on IP issues or ethics but he is unique in actually crunching the numbers on profit.
There's Financial Times, Forbes, tons of reddit posts, youtube videos.. I suppose it's possible that he's the only blogger doing this, but as far as I can see he is not the only one crunching profit numbers on the most "visible" company in the world.
The FT have not covered the economics of this in any real detail at all. Not even Alphaville :(
Lots of people are saying "this doesn't work very well", but I think he probably _is_ the main person banging the "this makes no financial sense whatsoever" drum; that bit does seem to get weirdly skimmed over most of the time, even by many AI sceptics.
OpenAI could very well be to this AI boom what Netscape was to the dotcom bubble. Even post dotcom crash, a lot of lasting value remained—and I believe the same will happen this time too.
The lasting value was a trillion dollars worth of broadband, fiber, and cellular build-out, the literal physical internet that was built between 1998 and 2008 and persists today, allowing for trillions of dollars of new economic activity.
The lasting value of OpenAI and all of the other frontier labs and hardware makers supporting them will be what? What have they collectively built that will outlive their doomed corporations for decades providing trillions of dollars in new economic activity?
Are all those GPU warehouses going to bolster the economy by a trillion or two every year when these AI companies are all gone? Will those ancient LLMs be adding trillions to the economy?
I'm just not seeing it.
OpenAI is a bubble. AI industry is a bubble. AI value is real (and probably underestimated). It would take at least a decade for all the relevant industries to incorporate the progress so far.
If Ed feels this strongly, he should short NVDA and laugh all the way to the bank when it pops.
The market can stay irrational longer than you can stay solvent. It's not enough to be right. You have to be right at the right time i.e. lucky.
I think such sayings are more and more being proven wrong. Buy the rumor, sell the news; buy the dip and hodl long term; etc. have severely backfired in recent months, and many individuals I personally know lost huge.
Until we somehow return to ZIRP, there is just too much money in AI not to keep the hype going as long as possible, whether to avoid investor backlash or to justify mass layoffs (cutting costs to recover because we spent too much on AI but can't get refunds, so we lay people off instead). Money seems to chase only hype, and since there isn't enough freely available money for other topics, it all flows to the one thing everyone is chasing.
Well, people have been saying Tesla is overvalued for years and it’s only just now started to come down. It’s still massively overvalued on PE compared to other car companies.
GameStop is still trading at 82 PE. Insane valuation. Apple and Google are money printers and only trade at 20-30.
As long as we're throwing bromides around, "how did he go bankrupt? Well, two ways: slowly at first, and then all at once."
> Generative AI has never had the kind of meaningful business returns or utility that actually underpins something meaningful
Even if we assume this is true, it’s worth asking.. did the promised efficiency of the advertising economy ever need to be “real” to completely transform society?
OpenAI has a massive brand advantage because the general public equates their products with AI.
Even if they don't 100% figure out agents they are now big enough that they can acquire those that do.
If the future is mostly about the app layer then they'll be very aggressive in consolidating the same way Facebook did with social media, see for example Windsurf.
Are they not competing with every other massive tech company to acquire the best app layers?
I don't think OpenAI has a moat here at all. Their brand advantage is very temporary
Imagine what would happen if the ever fickle tide of public opinion were to change, though. That's what people are really worrying about.
I don't agree with Ed's general style of rhetoric, but every single thing he says is important, and these are active topics of avoidance and hand-waving from language-model advocates, who are also obviously upset by what he says.
He’s not wrong when it comes to revenue and CapEx. So what’s the TAM here?
What are you guys using for RAG? I noticed OpenAI's embeddings struggle a bit with retrieval of specialized material. Is there anything better?
I haven't used RAG yet, but OpenAI's embeddings are pretty far behind AFAIK. Gemini has the SOTA in embeddings right now.
Check out the MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard
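For what it's worth, the retrieval step being discussed is just nearest-neighbor search over embedding vectors by cosine similarity. A minimal sketch of the mechanics with toy 2-d vectors (a real system would get vectors from an embedding model, e.g. one ranked on the MTEB leaderboard above):

```python
# RAG retrieval sketch: rank documents by cosine similarity of their
# embedding vectors to the query's. Vectors here are toy values.
from math import sqrt

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def top_k(query_vec, doc_vecs, k=1):
    # Document ids sorted by similarity to the query, highest first.
    ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
    return ranked[:k]

docs = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}
print(top_k([0.9, 0.1], docs, k=2))  # ['a', 'c']
```

Whether OpenAI, Gemini, or a local model fills in the vectors, this ranking step is the same; the quality differences people report are entirely in the embeddings themselves.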
The pivot of the article is this, at the end: "There are no other hypergrowth markets left in tech."
That's the key.
Tech is, at the moment, the only growth engine for capitalism, sustaining the whole world economy (before, it used to be "just" IT [2015], credit [2008], oil [1973], coal, all of which demonstrated their limits for sustaining continuous growth).
AI is the only growth engine left within tech at this moment, supported by GPU/TPU/parallelization hardware.
Given what's at stake, if/when the AI bubble bursts and there's no alternative growth engine to jump to, the domino effect will not be pretty.
EDIT: clarified.
Good. The sooner this idiocy ends, the better.
Current AI could have produced this article in under 5 minutes.
Does knowing a human wrote this article over days of mulling increase or decrease its value?
I don't know. Neither do you. Stay long NVDA, somebody'll take the other side of the bet, and eventually we'll all be able to look back and see where we started finding out who'd called it.
Dyson sphere is coming. Gen 1 test panels going up as early as next year. 2026. Thorium reactors coming online.
Interesting times ahead.