What Will You Do When AI Runs Out of Money and Disappears?
Posted by louwrentius 1 day ago
Comments
Comment by benlivengood 1 day ago
I really can't imagine OpenAI or Anthropic turning off inference for a model that my workplace is happy to spend >$200 per person per month on. Google still has piles of cash and no reason to turn off Gemini.
The thing is, if inference really is heavily subsidized (I don't think it is, because places like OpenRouter charge less than the big players for proportionally smaller models), then we'd probably still happily pay >$500 a month for the current frontier models, even if everyone gave up on training new models because of some oddball scaling limit.
Comment by iLoveOncall 1 day ago
Try $5,000. OpenAI loses hundreds of billions a year; they need a 100x, not a 2x.
Comment by ndriscoll 1 day ago
Another data point: I gave codex a 2 sentence description (being intentionally vague and actually slightly misleading) of a problem that another engineer spent ~1 week root causing a couple months ago, and it found the bug in 3.5 minutes.
These things were hot garbage right up until the second they weren't. Suddenly, they're immensely useful. That said, I doubt my usage costs OpenAI anywhere near that much.
Comment by Marsymars 1 day ago
Maybe, but that's a hard sell to all the workplaces who won't even spring for >1080p monitors for their experienced engineers.
Comment by ndriscoll 1 day ago
I'm surprised it can't keep track of float vs uint8. Mine knew to look at things like struct alignment or places where we had slices (Go) on structures that could be arrays (so unnecessary boxing), in addition to things like timer reuse, object pooling/reuse, places where local variables were escaping to heap (and I never even gave it the compiler escape analysis!), etc. After letting it have a go with the profiler for a couple rounds, it eventually concluded that we were dominated by syscalls and crypto related operations, so not much more could be microoptimized.
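To make the slice-vs-array point concrete, here's a minimal sketch (hypothetical types, nothing from our actual codebase):

```go
package main

import "fmt"

const maxSamples = 8

// Sliced: the slice header points at a separately heap-allocated backing array.
type SampleSliced struct {
	Values []float64
}

// Inline: the fixed-size array is stored directly inside the struct,
// so there's no extra allocation or pointer chase.
type SampleInline struct {
	Values [maxSamples]float64
	N      int
}

func main() {
	s := SampleSliced{Values: make([]float64, maxSamples)} // extra allocation
	var a SampleInline                                     // one flat value, cache friendly
	a.N = maxSamples
	fmt.Println(len(s.Values), a.N)
	// `go build -gcflags=-m` prints the compiler's escape analysis,
	// the same information mentioned above.
}
```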
I've only been using this thing since right before Christmas, and I feel like I'm still at a fraction of what it can do once you start teaching it about the specifics of your workplace's setup. Even that I've started to kind of automate, by just cloning all of our infra teams' repos too. Stuff I have no idea about, it can understand just fine. Any time there's something that requires more than a super pedestrian application programmer's knowledge of k8s, I just say "I don't really understand k8s. Go look at our deployment and go look at these guys' terraform repo to see all of what we're doing", and it tells me what I'm trying to figure out.
Comment by apf6 1 day ago
There are wildly different reports about whether inference alone (not training) is expensive or not...
Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”
But a lot of folks are convinced that current inference prices are being held down by burning through investor capital.
I think if we look at open-source model hosting, it's pretty convincing. Look at, say, https://openrouter.ai/z-ai/glm-4.7 . There are about 10 different random API providers competing on price, and they'll serve GLM 4.7 tokens at around $1.50 - $2.50 per million output tokens (which, by the way, is a tenth of the cost of Opus 4.5).
I seriously doubt that all these random services no one has ever heard of are also being propped up by investor capital. It seems more likely that $1.50 - $2.50 is the near-cost price.
If that's the actual cost, and considering that open-source models like GLM are still pretty useful when used correctly, then it's pretty clear that AI is here to stay.
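As a rough sanity check on what near-cost pricing implies (the monthly volume here is purely my guess):

```go
package main

import "fmt"

func main() {
	// Price from the OpenRouter listing above; volume is a pure guess
	// at what a heavy coding-agent user might burn in a month.
	pricePerMillionTokens := 2.00 // USD, midpoint of $1.50 - $2.50
	monthlyMillionTokens := 30.0  // assumed heavy usage
	fmt.Printf("≈ $%.0f/month at near-cost prices\n",
		pricePerMillionTokens*monthlyMillionTokens) // ≈ $60/month
}
```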
Comment by UncleEntity 1 day ago
Any individual Sunday service is nearly cost-free if we don't factor in the 100+ years it took to build the church...
Comment by apf6 1 day ago
There's no scenario where AI goes away completely.
I don't think the "major AI services go away completely" scenario is realistic at all when you look at those companies' revenue and customer demand, but that's a different debate I guess.
Comment by blibble 22 hours ago
The scenario is: if training becomes impossible (for any reason), then the currently available models quickly become out of date.
Say this had happened 30 years ago.
Today, would you be using an "AI" that only supported up to COBOL?
Comment by UncleEntity 1 day ago
I mean, we're not even up to the "Model T" era of AI development; we're more in the 'coach-built' phase, where every individual instance needs a bunch of custom work and tuning. Just wait until they get them down to where every Teddy Ruxpin has a full LLM running on a few AA batteries, and then see where the market lands.
I always imagine these AI discussions in the context of a bunch of horses discussing those 'horseless carriages' circa 1900...
Comment by program_whiz 1 day ago
How much would it cost you to deploy a model that you and maybe a few coworkers could effectively use? Probably ~$400k to buy all the hardware required to host a top-tier model that could do a few hundred tokens per second for 10 concurrent users. That's $40k per person. Amortize the hardware over 5 years and that's $8k per person per year, or roughly $670 per user monthly, just to cover the hardware to run the model (with no training costs and no staffing, internet, taxes, electricity, hosting, or housing; this is just you buying hardware and running it yourself).
So, just food for thought, but $200 Claude Code is probably still losing money even just on inference.
Since they are in the software realm, they are probably shooting for a 90% profit margin. Using the above example, that would be ($670 + R&D + opex) x 10. My guess is that, assuming no more training (which can probably never be profitable at current rates), they need ~$20k per month per user, which is why OpenAI previously floated that number.
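The same back-of-envelope as code, where every figure is an assumption from the estimate above rather than a measured cost:

```go
package main

import "fmt"

func main() {
	hardware := 400_000.0 // assumed up-front hardware cost, USD
	users := 10.0         // assumed concurrent users served
	years := 5.0          // assumed amortization period

	perUserMonthly := hardware / users / years / 12
	fmt.Printf("hardware alone: ~$%.0f per user per month\n", perUserMonthly) // ~$667

	// R&D and opex come on top of this, and a ~90% gross margin target
	// multiplies the whole stack by roughly 10x.
	fmt.Printf("with a 10x margin multiplier: ~$%.0f per user per month\n", perUserMonthly*10)
}
```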
Comment by estimator7292 1 day ago
The only reason hardware is so expensive now is that vendors are scalping the hyperscalers. Once that demand crashes, supply will skyrocket and prices will crash.
Comment by Blemiono 1 day ago
It can easily serve 10 people or more, depending on the overall usage pattern (coding vs. everything else).
So now imagine hardware getting better every year, models getting better too, and everything overall getting more efficient.
From M1 to M4, Apple increased performance by 100% in 4 years.
And there are inference-optimized chips like Groq's.
Don't forget the KV cache and overall optimizations.
I think your math is off.
And AI is already better than interns. An intern costs you at least 1k per month, probably 2k.
For me the math works just fine.
Comment by pixl97 1 day ago
1. AI disappears, goes up in price, etc. All the money you've spent goes up in smoke, or you have to spend a lot more money to keep the engine running.
2. AI does not disappear, becomes cheaper, and eats your business's primary revenue generation for lunch.
Number 1 could happen tomorrow. Number 1 could happen after number 2. Number 1 may never happen.
Also expect that even if the AI market crashes, AI has already massively changed the economy; at least some investment will keep going into making AI more efficient, and at any point number 2 could spring out of nowhere yet again.
Comment by yellowapple 1 day ago
I know this is probably an annoying question, but… has the author actually tried self-hosting an AI on their own hardware? I have; ollama (and various frontends thereof) makes it straightforward, and it's absolutely not cost-prohibitive. I've run my share of LLMs even on laptops without dedicated GPUs at all, and while the experience wasn't great compared to the commercial options, it wasn't outright unusable, either. Locally-hosted LLMs are already finding their way into various applications; that's only going to get more viable over time, not less (unless the computing hardware industry takes a catastrophic nosedive, in which case AI affordability is arguably the least of our worries).
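For a sense of how low the barrier is, here's a minimal sketch of querying a locally hosted model through ollama's default HTTP API. This assumes `ollama serve` is running on localhost:11434 and the model has already been pulled; the model name is just an example:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Build a non-streaming generate request for the local ollama server.
	body, err := json.Marshal(map[string]any{
		"model":  "llama3.2", // example; any model you've pulled works
		"prompt": "Explain KV caching in one sentence.",
		"stream": false,
	})
	if err != nil {
		panic(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The non-streaming response is a single JSON object whose
	// "response" field holds the generated text.
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	var parsed struct {
		Response string `json:"response"`
	}
	if err := json.Unmarshal(raw, &parsed); err != nil {
		panic(err)
	}
	fmt.Println(parsed.Response)
}
```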
I'm sure the author understands this and is just being hyperbolic in the article's title, but the AI bubble bursting ≠ AI disappearing, for the same reason the dotcom bubble bursting ≠ the World Wide Web disappearing. The bubble will burst when AI shifts from being novel to being mundane, just as with any other technology-related bubble — and that entails a degree of affordability and ubiquity that's mutually exclusive with any notion of AI “disappearing”. Hopefully it'll mean companies being less motivated to shove AI “features” down everyone's throats, but the virtually-intelligent cat is already out of Pandora's box: the technology's here to stay, and I think it's presumptuous to think the race to the bottom w.r.t. cost is anywhere near the finish line.
Comment by estimator7292 23 hours ago
I have an old dual-Xeon server from about 2015: 32 cores at 2.4GHz and 128GB of RAM. It runs models painfully slowly (and loudly), but they run just fine. My modern Ryzen system from last year works out of the box with full AMD GPU support.
I have yet to find a situation where ollama doesn't work out of the box at all. It literally just turns on and goes. Maybe slowly, maybe without a GPU, but by god you'll have an LLM running.
Comment by nurumaik 23 hours ago
Ads, obviously
Comment by sph 21 hours ago
After AI: open Emacs, write code.
Comment by cyanydeez 23 hours ago
For the valuable kick-start use case it pays off. It can't do all the magic bootstrapping, but for baseline technical questions it's perfect. I'll put in a RAG search eventually.
I'm not optimistic any use case will come along to substantiate today's valuations. But the intertwined fascist businesses are going to stunt a lot of people trying to chain their product to third parties.
Comment by partomniscient 1 day ago
Never used an LLM or anything else explicitly.
Got annoyed when I had to deal with AI chatbots as front-line customer service - although that only happened once or twice in the last couple of months.
So basically, keep doing what I'm doing.
I like AI for specifically targeted applications, e.g. 100,000+ AI "eyeballs" vs. a few hundred for diagnostic imaging, working out whether there's something to worry about or not. I hate the idea of generalised AI, LLMs, etc.
Lowering the bar to enable 'creative output' from non-creative individuals just fucks up the world, because natural talent is replaced by unnatural talent, especially in (late) capitalism, where money is worth more than human experience to those few control-freak managers.
I'm old. I even earnt enough to buy a house with a lawn over 4 years ago, during my (pre-AI) career as a Software Developer. Get off my damn lawn.