Hyperscalers have already outspent most famous US megaprojects
Posted by nowflux 18 hours ago
Comments
Comment by timmg 17 hours ago
https://x.com/paulg/status/2045120274551423142
Makes it a little less dramatic. But also shows what a big **'n deal the railroads were!
Comment by manquer 11 hours ago
The megaprojects of the previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams and other utilities are still in active use with only minimal maintenance.
Amortized year-over-year, the current spend would dwarf everything, given the reported depreciation schedule of only 6(!) years for the GPUs - the largest line item.
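To make the amortization point concrete, here is a minimal sketch with straight-line depreciation. The capex figures are round hypothetical placeholders, not reported numbers; only the 6-year GPU schedule comes from the comment above, and the 75-year rail lifespan is an assumption for illustration.

```python
# Straight-line amortization: annual cost = capex / useful life.
# Capex figures are illustrative placeholders, not reported values.
def annual_cost(capex_usd: float, life_years: float) -> float:
    return capex_usd / life_years

gpu_capex = 400e9   # hypothetical GPU spend
rail_capex = 400e9  # same nominal spend, for comparison

gpu_annual = annual_cost(gpu_capex, 6)     # ~$66.7B/yr
rail_annual = annual_cost(rail_capex, 75)  # ~$5.3B/yr

print(f"GPUs (6y life):  ${gpu_annual / 1e9:.1f}B per year")
print(f"Rail (75y life): ${rail_annual / 1e9:.1f}B per year")
print(f"Ratio: {gpu_annual / rail_annual:.1f}x")  # 12.5x
```

Same dollars in, but the short GPU lifespan multiplies the annual burden by the ratio of the two lifespans (75/6 = 12.5x here).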
Comment by gravypod 8 hours ago
Comment by delusional 3 hours ago
We're a little too early to know if that's the case here too. I do foresee a chance at a reality where AI is a dead end, but where afterwards we have a ton of cheap GPU compute lying about, which we all rush to somehow convert into useful compute (by emulating CPUs or translating traditional algorithms into GPU-oriented ones or whatever).
Comment by TeMPOraL 1 hour ago
But AI proliferation is not stopping soon, because we haven't picked even the low-hanging fruit just yet. Again, even if no new SOTA models were to be trained after today, there's years if not decades of R&D work into how to best use the ones we have - how to harness the big ones, where to embed the small ones, and of course, more fundamental exploration of the latent spaces and how they formed, to inform information sciences, cognitive sciences, and perhaps even philosophy.
And if that runs out or there is an Anti AI Revolution, we can still run those weather models and route planners on the chips once occupied by LLMs - just don't tell the proles that those too are AI, or it's guillotine o'clock again.
Comment by m_mueller 1 hour ago
Comment by PunchyHamster 1 hour ago
Comment by pembrook 2 minutes ago
Also, I'm pretty sure in a compute-hungry AI world you aren't going to retire GPUs every 6 years anymore. Even if compute capacity jumps such that current H100s only represent 10% of total compute available in 6 years, you're still running those H100s until they turn to dust.
But your point still stands. I just think it's hard to compare localized railroad infrastructure to globalized AI capacity and say one was more rational than the other until the history actually plays out.
Comment by Lerc 7 hours ago
The GPUs are the shovels, not the project. AI at any capability level will retain that capability forever. It only gets reduced in value by superior developments, which are built upon technologies that the previous generation developed.
Comment by kennywinker 4 hours ago
If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassable. A bridge with a 6 year lifespan for each beam is insane.
Comment by throwup238 3 hours ago
A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive: any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. E.g. the cost of a ship capable of a cross-Atlantic voyage going from 100k pieces of eight to over a million in the span of only a few years (predating the treasure fleet inflation!)
Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it's GPUs or “shovels”.
Comment by TeMPOraL 1 hour ago
Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.
Comment by zozbot234 3 hours ago
Comment by PunchyHamster 1 hour ago
Comment by loandbehold 7 hours ago
Comment by Mathnerd314 5 hours ago
Comment by jiggawatts 7 hours ago
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
Comment by m00x 3 hours ago
Comment by zozbot234 3 hours ago
Comment by nl 5 hours ago
For coding I care mostly about reasoning ability, which is uncorrelated with the cutoff.
Comment by wr2 10 hours ago
What other uses do GPUs have that are critical...? lol
In addition to your points, this is why I always laugh when people make backward comparisons. What characteristics do they share? Very few.
Comment by jamesknelson 9 hours ago
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary, well-defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
Comment by bigfatkitten 8 hours ago
Unclassified public cloud GPUs are completely useless when your warfighting workloads are at the SECRET level or above.
Comment by jhide 7 hours ago
I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.
[1] https://eh.net/encyclopedia/the-american-economy-during-worl...
Comment by bigfatkitten 4 hours ago
Even if private compute were at a level of maturity where you could use it for classified workloads, with the infrastructure being managed by someone in India or China, securely getting data into and out of it is still a mostly unsolvable problem.
Comment by jhide 7 hours ago
Comment by AngryData 2 hours ago
Comment by andrewljohnson 5 hours ago
Comment by rayiner 8 hours ago
Comment by tripletao 16 hours ago
https://news.ycombinator.com/item?id=44805979
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
Comment by chromacity 17 hours ago
We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)
Comment by topspin 16 hours ago
What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
Comment by palmotea 15 hours ago
That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through each weapon and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
Comment by topspin 15 hours ago
It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.
Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.
Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.
There are few peacetime US developed weapons programs with such a record. It seems the interval between them is 20-30 years.
Comment by rickydroll 14 hours ago
Comment by peyton 12 hours ago
Comment by bluedino 9 hours ago
Until we run out of materials
https://mwi.westpoint.edu/minerals-magnets-and-military-capa...
Comment by bombcar 17 hours ago
As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
Comment by helterskelter 16 hours ago
With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
Comment by benl 6 hours ago
Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is worth about $13.50 at spot price. So that’s $2.7 million per year in silver-equivalent wealth. Still plenty!
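The arithmetic checks out; here's a quick sketch. The 20-shillings-per-pound ratio is the historical pre-decimal standard, while the $13.50-per-shilling silver value is the comment's own estimate, not an independently verified figure.

```python
income_pounds = 10_000       # Darcy's annual income in the novel's terms
shillings_per_pound = 20     # pre-decimal sterling: 20 shillings = £1
usd_per_shilling = 13.50     # commenter's silver spot estimate (assumption)

shillings = income_pounds * shillings_per_pound  # 200,000
usd = shillings * usd_per_shilling               # 2,700,000
print(f"{shillings:,} shillings ≈ ${usd:,.0f} per year")
```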
Comment by zozbot234 12 hours ago
Comment by Anon1096 3 hours ago
Comment by psychoslave 15 hours ago
Comment by bombcar 14 hours ago
Comment by cm2012 12 hours ago
Comment by somenameforme 16 hours ago
Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would have of course grown much larger than it was in the past but still have been vaguely comparable to the past centuries.
And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things people could afford on a basic salary, that does not seem to be the case. Somebody in the 50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home -- all on the back of a part-time job -- was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing - just pointing out the paradox.
https://www.minneapolisfed.org/about-us/monetary-policy/infl...
Comment by wahern 10 hours ago
It is notable that the median monthly rent was $35/month on a median income of $3000, so ~15% of income spent on rental housing. But it's interesting reading that report because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950--multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.
There's no simple story here.
Comment by chaos_emergent 17 hours ago
Comment by dghlsakjg 16 hours ago
I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
Comment by delecti 16 hours ago
Comment by Danox 9 hours ago
Comment by whattheheckheck 13 hours ago
Comment by delecti 6 hours ago
Comment by ezst 9 hours ago
Comment by dTal 9 hours ago
Reality check: they are already astoundingly meaningful and transformative. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand-wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
Comment by ezst 17 seconds ago
The onus of the proof regarding their meaningful and transformative nature is on you.
The largest niche LLMs have so far managed to carve out for themselves is software code, with the jury still on the fence as to whether the productivity needle actually moved in one direction or the other. And the other, literal jury enshrining the fact that vibe-coded software is not copyrightable and becomes a public good should give pause to any company living off selling software or software-related services, as to whether they want to poison their well.
Web search hasn't been disrupted very much either, with users being quick to realise how hallucination-prone LLM summaries are (the fact that this is baked into the tech and practically unsolvable being one of the reasons I don't consider LLMs a significant stepping stone towards actual AI).
The age of computers that respond to voice orders was 10 years ago, with Siri, Alexa, and Google Assistant; nobody could have cared less then, and the fact that the same systems became less capable after re-inventing themselves on top of LLMs probably won't make people care more now.
Comment by throwaway27448 16 hours ago
Comment by fyrn_ 14 hours ago
Comment by crote 15 hours ago
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use only 20 years from now, and newer models aren't a trivial expansion of an older model either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.
Comment by TeMPOraL 26 minutes ago
It's always like that with software. You can still run an OS or a program made 20 years ago, in some cases that program may in fact have no modern replacements available (think niche domains) - meanwhile, in those 20 years, you've probably churned through 5-10 generations of computing hardware.
Comment by operatingthetan 16 hours ago
Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case the industry is probably in trouble, or the world economy.
Comment by bigfatkitten 8 hours ago
Like Madoff, they’re desperate to pump their Ponzi scheme for as long as they can.
Comment by EFreethought 9 hours ago
I think they have been gaslighting us from the beginning.
Comment by SlinkyOnStairs 11 hours ago
It also makes it more dramatic, consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects comes in the very long term of decades: economic development, national security, scientific progress that benefits the entire country, if not the entire world.
Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long-term benefits. It was paying dividends until January last year: 77 years.
And that dividend wasn't always obvious: goodwill from Europe towards the US is what has prevented Europe from taking actions against the US's Big Tech companies similar to China's, many of whom relied extensively on dumping to push European competitors out of business. A more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there's the two programs left out: the railroads and AI datacenters. Private enterprise simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI's need for (partial) return on investment is as short as this year or their IPO risks failure. And if they don't, similar massive economic trouble is assured.
Comment by herbst 1 hour ago
Can you explain that? I really have no idea what you are referring to?
Comment by SlinkyOnStairs 2 minutes ago
The bubble failed in the sense that massive commitments for new railways were made, and then the 1847 economic crisis caused investment to dry up, which collapsed the bubble and put a halt to the railroad construction boom. Those railway commitments never materialized, and stock market crashes followed.
I'm also being a little cheeky with what "massive economic trouble" entails; While the stock market was heavy on railroads and crashed right into a recession, the world in the mid-1800s was much less financialized so the consequences in absolute terms were less pronounced than a similar bubble-collapse would be today. As such, the main historical comparison is structural.
(Similarly, the AI bubble is likely to burst "by itself" unless OpenAI's IPO is truly catastrophically bad. What's more likely is that a recession happens and then the recession triggers a stock market collapse, which then intensify each other. And so these historical examples of similar situations may prove illustrative.)
Comment by yabutlivnWoods 11 hours ago
Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
Comment by SlinkyOnStairs 10 hours ago
No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. Doesn't mean it's not a good thing for humanity. Just means that it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly-merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
Comment by maxglute 9 hours ago
Tulips: weeks
GPUs: 6 years
Fiber: 20-50 years
Rail, roads, bridges: 50-100+ years
Hyperscalers closer to tulips than other hard infra.
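One way to read that list: the fraction of invested capital that must be re-earned every year just to replace depreciating assets, under straight-line depreciation. Where the list gives a range, the midpoint is assumed; the ~4-week tulip lifespan is likewise an illustrative assumption.

```python
# Annual replacement burden = 1 / useful-life-in-years (straight line).
# Lifespans taken from the list above; midpoints assumed for ranges.
lifespans_years = {
    "Tulips": 4 / 52,            # ~4 weeks, as a fraction of a year
    "GPUs": 6,
    "Fiber": 35,                 # midpoint of 20-50
    "Rail/roads/bridges": 75,    # midpoint of 50-100
}

for asset, years in lifespans_years.items():
    print(f"{asset:>20}: {100 / years:7.1f}% of capex per year")
```

GPUs land around 17% of capex per year versus ~1-3% for hard infrastructure, which is the sense in which they sit closer to tulips than to rail.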
Comment by casey2 8 hours ago
Comment by bdangubic 8 hours ago
The only reason any “maintenance” on them is expensive is corruption, which at the municipal level rivals the current administration in some places.
Comment by chatmasta 10 hours ago
Comment by mongol 10 hours ago
Comment by hyperbovine 10 hours ago
Comment by wr2 10 hours ago
LLMs+Data centres on the other hand...
Comment by j-bos 17 hours ago
Comment by globular-toast 3 hours ago
Comment by LeCompteSftware 8 hours ago
Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
Comment by lukeschlather 17 hours ago
Comment by wisemanwillhear 15 hours ago
Comment by nullhole 8 hours ago
Comment by contingencies 2 hours ago
Comment by 0xbadcafebee 16 hours ago
We're seeing exactly the same thing with AI: there is massive investment creating a bubble without a payoff. We know that the value will fall over time because software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
Comment by tracerbulletx 6 hours ago
Comment by wr2 10 hours ago
But what I see is the two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms who participated in large AI-related capex spend won't be trusted with their cash balances - aka having to return more cash and therefore less money for reinvestment
All the hype and fanfare that draws in investment comes with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.
Comment by keeda 15 hours ago
...
And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive RoI.)
The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
Preliminary evidence, but given this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something that report calls out) this is significant.
Comment by 0xbadcafebee 11 hours ago
I would love to see another report that isn't a year old with actual ROI figures...
Comment by chatmasta 10 hours ago
Comment by lazide 7 hours ago
All the middle managers are afraid to say anything though, so go go go.
Comment by SlinkyOnStairs 11 hours ago
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations for that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
Comment by jeffbee 17 hours ago
Comment by lenerdenator 17 hours ago
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This for a company that could pay for that in cash right now.
[0] https://fox4kc.com/news/google-confirms-its-behind-new-data-...
Comment by jeffbee 16 hours ago
Comment by lenerdenator 16 hours ago
Again, they have the cash to buy that land and develop it without any further consideration beyond permits and planning.
Comment by jeffbee 6 hours ago
Comment by stefan_ 17 hours ago
We aren't even getting infrastructure out of it; they are just powering it with gas turbines.
Comment by jeffbee 17 hours ago
Comment by therobots927 17 hours ago
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
Comment by JumpCrisscross 16 hours ago
If they were laid on a sensible route, completed on budget and time, and savvily operated. Many railroads went bust.
Comment by lukeschlather 16 hours ago
Comment by Danox 8 hours ago
Comment by throwaway27448 15 hours ago
Comment by rangestransform 11 hours ago
Comment by therobots927 16 hours ago
And what is the ROI on either of those right now?
Comment by lukeschlather 11 hours ago
Comment by therobots927 11 hours ago
Got it.
Comment by lukeschlather 9 hours ago
Comment by negura 14 hours ago
Comment by Danox 8 hours ago
Comment by dlenski 5 hours ago
The US spent ~$12 trillion in ~2024 dollars on nuclear weapons between 1940 and 1996, and the vast majority of that spending was in the 1950s and early 1960s.
https://en.wikipedia.org/wiki/Nuclear_weapons_of_the_United_...
Comment by hargup 10 hours ago
Comment by operatingthetan 16 hours ago
Comment by theonemind 12 hours ago
Comment by therobots927 11 hours ago
I’m getting my popcorn ready for the bubble pop.
Comment by mattas 16 hours ago
Or is this "we said we are going to invest $X"? What about the circular agreements?
Comment by pier25 16 hours ago
Comment by philip1209 6 hours ago
Comment by djoldman 11 hours ago
~$6.5 trillion
Comment by losvedir 16 hours ago
Comment by measurablefunc 16 hours ago
Comment by RealityVoid 14 hours ago
I was reading geohot's musings about building a data center and doing so cost-effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might come out ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
Comment by rangestransform 11 hours ago
Comment by kerblang 16 hours ago
edit - sorry, it is in fact adjusted, text is kinda hard to see
Comment by amelius 12 hours ago
Comment by pstuart 11 hours ago
Comment by big-and-small 2 hours ago
Comment by therobots927 11 hours ago
The only problem is, if AI doesn’t solve cold fusion, we’re back to square one. And a few trillion dollars in the hole.
Comment by boxedemp 5 hours ago
And that point is right before rock bottom.
Comment by tim333 1 hour ago
Comment by uejfiweun 10 hours ago
Comment by moogly 5 hours ago
Comment by abofh 5 hours ago
Comment by therein 17 hours ago
Comment by rcxdude 15 hours ago
Comment by tim333 1 hour ago
>The term “hyperscale” first emerged in the late 1990s, heralding a paradigm shift in the world of computing. It was primarily used to describe the awe-inspiring scale and capabilities of data centers...
Comment by coffeefirst 17 hours ago
There’s a loop of everyone is saying stuff because everyone else is saying stuff that turns into a sort of reality inspired fan fiction.
It’s not just that it’s wrong or imprecise, that I expect, it’s that the folklore takes on a life of its own.
Comment by bombcar 17 hours ago
Comment by lenerdenator 17 hours ago
Comment by cidd 17 hours ago
Comment by mikrl 16 hours ago
Comment by throwaway27448 15 hours ago
Comment by tim333 54 minutes ago
Comment by guywithahat 15 hours ago
Comment by ElevenLathe 14 hours ago
I certainly think it was a mistake.
Comment by emp17344 8 hours ago
Comment by negura 14 hours ago
Comment by bawana 12 hours ago
Comment by thelastgallon 7 hours ago
Comment by tim333 1 hour ago
Comment by SpicyLemonZest 16 hours ago
Comment by jgalt212 12 hours ago
Comment by metalman 18 hours ago
Comment by cactacea 16 hours ago