Horses: AI progress is steady. Human equivalence is sudden

Posted by pbui 1 day ago

Comments

Comment by maciejzj 20 hours ago

I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And the article is not written from any kind of cautionary, humanitarian perspective, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine, and that everyone would discuss the juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around"? And the guy writing this works at Anthropic? The very guy who makes this thing happen, but who can only conclude with "I very much hope we'll get the two decades that horses did". What the hell.

Comment by UncleMeat 19 hours ago

I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies, but I also see it in ordinary conversations on the Caltrain or in coffee shops.

Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.

Hell. Hell on earth.

Comment by martin82 21 minutes ago

This is the direct result of abandoning religion altogether and becoming a 100% secular society.

I am currently reading the Great Books of the Western World in order to maybe somehow find god somewhere in there, at least in a way that can be woven into my atheist-grown brain, and even after just one year of reading and learning, I can feel the merits.

Accepting Science as our new and only religion was a grave mistake.

Comment by tokioyoyo 17 hours ago

If you want a data point from the other side: most people I know in both Japan and Canada use some sort of AI as a replacement for any kind of query. Almost nobody in my circles is in tech or tech-adjacent fields.

So yeah, it’s just everyone collectively devaluing human interaction.

Comment by vidarh 17 hours ago

I love AI, but I'm exasperated by the extent to which my fiancée uses Claude instead of search, and for everything...

Comment by giardini 5 hours ago

You'll likely be OK until Claude gets a penis. Then you're toast.

Comment by throw-the-towel 16 hours ago

With what Google has become, I can't blame her.

Comment by abustamam 14 hours ago

Why? Google search kinda sucks. And I find it helpful that I can provide context that I couldn't otherwise provide to any standard search engine.

Comment by vidarh 13 hours ago

Because the responses are often distilled down from the same garbage Google serves up, but presented as the opinion of Claude, whom she increasingly trusts.

I use Claude a lot. I have the most expensive Claude Max subscription both for my own consultancy and at client sites, separately. I'm increasingly close to an AI maximalist on many issues, so I'm not at all against extensive use of these models.

But it isn't quick to verify things of its own accord before giving answers, so it's not suitable as a general-purpose replacement for Google unless you specifically prompt it to search.

Comment by rerdavies 2 hours ago

Google search results: a dozen sponsored links; a dozen links to videos (which I never use -- I'd rather read than watch); six or seven SEO-gamed pages; if you're lucky, what you actually want is far down near the end of the first page, or perhaps at the top of the second; the other 700 pages of links are ... whatever. Repeat four or five times with variously tweaked queries, hoping that what you actually want will percolate up into the first or second page.

Claude: "Provide me links to <precise description of what you actually want>." Result: 4 or 5 directly relevant links, most of which are useful, and it happens on the first query.

Claude is dramatically more efficient than Google Search.

Comment by abustamam 11 hours ago

> unless you specifically prompt it to search.

Ah, that's a good call-out. I don't use Claude aside from in Cursor; I use ChatGPT for normal queries and it's pretty good about doing searches when it doesn't think it knows the answer. Of course it'll search when prompted, but it'll often search without prompting too. I just mistakenly assumed that your fiancée's usage of Claude implied Claude was actually searching as well.

Comment by mrgoldenbrown 12 hours ago

Google search sucks now because it's been targeted by the spammers and content farms. Before that happened it was pretty good. LLMs will eventually be poisoned the same way, whether by humans or other LLMs.

Comment by heavyset_go 11 hours ago

Garbage in, garbage out. Plus, chatbots will be monetized, which means they'll show you the things their ad partners want you to see rather than what you actually want.

Comment by exe34 13 hours ago

Frankly, I've found even the free ChatGPT to be more useful when looking for something: I describe what I'm looking for, the must-have features, what I definitely don't mean, etc., and it suggests a few things. This has rarely failed to find what I was after. It's absolutely superior to Google search these days for things that have been around a while. I wouldn't check the news with it, though.

Comment by cryptonector 17 hours ago

> I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity [...]

Who do they think will make their ventures profitable? Who do they think will take their dollars and provide goods and services in exchange?

If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.

Comment by palmotea 16 hours ago

> If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.

I don't think that's right. The owners will still own everything. If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."

There's an old Isaac Asimov book with something similar: https://en.wikipedia.org/wiki/Foundation_universe#Solaria (though accomplished more peacefully and with less pain than I think is realistic).

Comment by cryptonector 16 hours ago

What would they get from the plebs? Suppose we went through The Phools scenario and the plebs were exterminated; then what? Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.

Comment by heavyset_go 10 hours ago

Consider being a significant shareholder in the future as analogous to citizenship as it exists today. Non-owners will be personae non gratae, if they're allowed to live at all.

Comment by palmotea 12 hours ago

> What would they get from the plebs?

I think the right question is: what would they want from the plebs? And the answer will be nothing.

Right now they want the plebs' labor, which is why things work the way they do.

> The Phools and so the plebs were exterminated, then what?

Is that a reference to this? https://press.princeton.edu/books/hardcover/9780691168319/ph...

> Perhaps we'd finally have Star Trek economics, but only for them, the "owners". Better be an "owner", then.

I don't think we'll have Star Trek economics, because that would be fundamentally fair, egalitarian, and plentiful. There will still be resource constraints like energy production and raw materials. I think it will be more like B2B economics or international trade, with a small number of relevant owners each controlling vast amounts of resources and productive capacity and occasionally trading basics amongst themselves. It could also end up like empires-at-war (which may actually be more likely, since war would give the owners something seemingly important to do, vs. just building monuments to themselves and other types of jerking off).

Comment by cryptonector 5 hours ago

The Phools was a reference to this: https://www.newyorker.com/magazine/1981/10/12/phools

Comment by computerthings 8 hours ago

[dead]

Comment by sp527 13 hours ago

> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."

I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.

https://www.wsj.com/livecoverage/stock-market-today-dow-sp50...

Comment by palmotea 12 hours ago

>> If or when that happens, I think the economy would morph into a new thing completely focused on serving the whims of those "owners."

> I think you might be a little behind on economic news, because that's already happening. And it's also rapidly reshaping business models and strategic thinking. The forces of capitalism are happily writing the lower and middle classes out of the narrative.

No, that doesn't surprise me at all. I'm basically just applying the logic of capitalism and automation to a new technology, and the same thing has played out a thousand times before. The only difference with AI is that, unlike previous, more limited automation, it's likely there will be no roles for displaced workers to move into (just like when engines got good enough, there were no roles for horses to move into).

It's important to remember that capitalism isn't about providing for people. It's about providing for people with wealth to exchange. That works OK when you have full employment and wealth gets spread around by paying workers, but if most jobs disappear due to automation there's no mechanism to spread wealth to the vast majority of people, so under capitalism they'll eventually die of want.

Comment by heavyset_go 10 hours ago

See also: Citigroup's plutonomy thesis[1] from 2006

tl;dr: the formal economy will shift to serving plutocrats instead of consumers; it's much more profitable to do so, and there are diminishing returns in serving the latter.

[1] https://www.sourcewatch.org/images/b/bc/CITIGROUP-MARCH-5-20...

Comment by rerdavies 2 hours ago

... then the "owners" will own EVERYTHING. Fixed that for u.

Comment by Faark 15 hours ago

Making predictions about how it will turn out vs. designing how it should be. Up till now, powerful people have needed lots and lots of other humans to sustain their power and their lives, and that dependency gave the masses leverage. Now, I'd like a society where everyone is valued for being human. With democracies we got quite far in that direction. Attempts to go even further... let's just say they "didn't work out". And right now, especially in the US, the societal system seems to be sliding back to "power" instead of rules.

Yeah, I see a bleak future ahead. Guess that's life, after all.

Comment by smallmancontrov 14 hours ago

> didn't work out

In the "learn to love democracy and freedom" sense, sure, but in the economic sense? "Didn't work out" feels like a talking point stuck in 1991. Time has passed, China is the #2 economy in the world, #1 if you pick a metric that emphasizes material or looks to the future. How did they get there? By paying the private owners of our economy to sell our manufacturing base to them piece by piece -- which the private owners were both entitled and incentivized to do by the fundamental principles of capitalism. The ending hasn't been written, but it smells like the lead-up to a reversal in fortune.

As for our internal balance of power, we've been here before, and the timeline conveniently lines up to almost exactly 100 years ago. I'm hoping for another Roosevelt. It wasn't easy then, it won't be easy now, but I do think it's fundamentally possible.

Comment by luckman212 7 hours ago

"Hell on Earth" - I don't think there is a more succinct or accurate way to describe the current environment.

Comment by M95D 18 hours ago

> no value on humanity

It's practically the definition of psychopathy.

Comment by jeroenhd 17 hours ago

I can't say I'm shocked. Disappointed, maybe, but it's hardly surprising to see the sociopathic nature in the people fighting tooth and nail for the validation of venture capitalists who will not be happy until they own every single cent on earth.

There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.

AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people have gone mask-off: when they unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.

Comment by Slava_Propanei 10 hours ago

[dead]

Comment by netsharc 20 hours ago

I think, like the Bill Gates haters who interpret him talking about reducing the rate of birth in Africa as wanting to kill Africans, you're interpreting it wrong.

The graph says horse ownership per person. People probably stopped buying horses and let theirs retire (well, to be honest, probably also sent them to the glue factory), and when they stopped buying new horses, horse-breeding programs slowed down.

Comment by danw1979 19 hours ago

I wish the author had had the courage of their convictions to extend the analogy all the way to the glue factory. It’s what we are all thinking.

Comment by jjmarr 18 hours ago

I have a modest proposal for dealing with future unemployment.

Comment by xg15 15 hours ago

There are too many people in power right now who I wouldn't put it past to take that proposal seriously.

Comment by sherr 14 hours ago

Ha ha, yes. You should write that up as a pamphlet somewhere.

Comment by giardini 5 hours ago

Soylent glue?

Comment by _DeadFred_ 14 hours ago

Sending all the useless horses to glue factories was so prevalent at that time that it became a cartoon trope. The other tropes were men living in flophouses, and towns having entire sections for unemployable people, called skid row.

The AI people point to post-1950s employment and say 'people recovered after industrial advances' while ignoring the 1880s through the 1940s. We actually have zero idea whether the buggy-whip manufacturer ever recovered, or just lasted a year on skid row before giving up completely, or lived through the two world wars spurred on by mechanisation.

Comment by tyre 12 hours ago

Horses were killed more often for meat that was used in dog food than for glue.

I did a deep research into the decline of horses, and it was consistent with fewer births, not mass slaughter. The US Department of Agriculture has great records from this time, though they're not fully digitized.

Comment by giardini 5 hours ago

Horse meat is tasty too! Pretty popular in France. Hmmm, remember my first steak tartare in Belgium! Ymmmm!

Comment by JohnMakin 20 hours ago

I don’t think you’re realizing that the OP understands this, and that in this analogy, the horses are human beings

Comment by bambax 19 hours ago

In this analogy, horses are jobs, not humans; you could argue there's not much of a difference between the two, because people without jobs will starve, etc., but still, they're not the same.

Comment by FuckButtons 17 hours ago

Why make the analogy at all, if not for the implied slaughter? It is a visceral reminder of our own brutal history, of what humans do given the right set of circumstances.

Comment by arowthway 17 hours ago

How is decreasing the number of horses killed every year brutal?

Comment by ToucanLoucan 17 hours ago

One would argue that in a capitalist society like ours, fucking with someone's job at industrial scale isn't awfully dissimilar from threatening their life; it's just less direct. Plenty more people are currently feeling the effects of worsening job markets than have been involved in a hostage situation, but the negative end results are still the same.

One would argue also if you don't see this, it's because you'd prefer not to.

If we had at least a somewhat functioning safety net, or UBI, or both, you'd at least have an argument to be made, but we don't. The business model of AI and its associated companies is, if not killing people, certainly attempting to make lots of lives worse at scale. I wouldn't work for one for all the money in the world.

Comment by FuckButtons 17 hours ago

UBI will not save you from economic irrelevance. The only difference between you and someone starving in a third-world slum is economic opportunity and the means to exchange what you have for what someone else needs. UBI is inflation in a wig and dark glasses.

Comment by mattmaroon 18 hours ago

There is, at least, a way to avoid people without jobs starving. Whether or not we'll do it is anyone's guess. I think I'll live to see UBI, but I am perhaps an optimist.

Comment by bigfishrunning 18 hours ago

You'd have to time something like UBI with us actually being able to replace the workforce -- the current LLM parlor tricks are simply not what they're sold to be, and if we rely on them too early, we (humanity) are very much screwed.

Comment by snarfy 17 hours ago

It's here today - it's owning stock that produces dividends. That's capitalism.

Comment by mattmaroon 5 hours ago

Yeah I don't know why everyone doesn't just do that!

Comment by pas 18 hours ago

population projections already predict that prosperity reduces population

and even if AI becomes good enough to replace most humans, the economic surplus does not disappear

it's a coordination problem

in many places on Earth social safety nets are pretty robust, and if AI helps reduce the cost of providing basic services then it won't be a problem to expand those safety nets

...

there's already a pretty serious anti-inequality (or at least anti-billionaire) storm brewing; the question is whether it can motivate the necessary structural changes or will just fuel yet another dumb populist movement

Comment by glenstein 17 hours ago

I think the concerns with UBI are (1) it takes away the leverage of a labor force to organize and strike for better benefits or economic conditions, and (2) following the block grant model, can be a trojan horse "benefit" that sets the stage for effectively deleting systems of welfare support that have been historically resilient due to institutional support and being strongly identified with specific constituencies. When the benefit is abstracted away from a constituency it's easier to chop over time.

I don't exactly know how I feel about those, but I respect those criticisms. I think the grand synthesis is that UBI exists on top of existing safety nets.

Comment by gota 16 hours ago

Point (2) seems wrong intuitively. "Chopping" away UBI would be much more difficult _because_ it is not associated with a specific constituency.

Not only would there be more people on the streets protesting against real or perceived cuts;

there also would be fewer movements based on exclusivist ideologies protesting _in favour of cuts_*

* e.g. racist groups in favour of cutting some kinds of welfare because of racial associations

Comment by bee_rider 18 hours ago

Somebody should try a smart populist movement instead. My least favorite thing about my favored (or rather least disfavored) party is that we seem to believe “we must win without appealing to the populace too directly, that would simply be uncouth.”

Comment by abixb 17 hours ago

One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised on farms and are trained to participate in events like dressage and other equestrian sports.

Comment by netsharc 17 hours ago

Someone said, during the "self-driving cars are the future!" hype, that ICE/driver-driven cars would go the way of the horse: well cared for, kept in stables, and taken out on weekends for recreation, on circuits but not on public roads.

Comment by heavyset_go 10 hours ago

Imagine it now, your future descendants existing solely to be part of some rich kid's harem.

Comment by _DeadFred_ 14 hours ago

'now instead of being work animals a few of you will be kept like pets by the tech bros'

Comment by netsharc 11 hours ago

"But only if you're a superior breed..."

Epstein was ahead of his times...

Comment by everdrive 17 hours ago

> Bill Gates haters who interpret him talking about reducing the rate of birth in Africa

I'm not up to speed here -- is Bill Gates doing work to reduce the birth rates in Africa?

Comment by netsharc 17 hours ago

For example, interview from 2018: https://www.youtube.com/watch?v=0MMifQvuN08

When the Covid-truther geniuses "figured out" that "Bill Gates was behind Covid", they pulled out things like this as "proof" that his master plan is to reduce the world's population. Not to reduce the rate of increase, but to kill them (because of course these geniuses don't understand derivatives)...

Comment by everdrive 17 hours ago

Ah, got it. This sounds like more of a "repugnant conclusion" sort of problem where if you care about the well being of people who exist, then it is possible to have too large of a population.

Comment by heavyset_go 10 hours ago

Horses are pretty and won't try to kill you for "your" food.

Comment by maciejzj 20 hours ago

We don't know what the author had in mind, but one has to be really tone-deaf to let the weirdness of the discussion go unnoticed. Take a look at the last paragraphs of the text again:

> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

While most of the text is written from a cold, economic(ish) standpoint, it is really hard not to come away with a bleak impression. The last three sentences express that in a vague way too; some ambiguity is left on purpose so you can interpret the daunting impression your own way.

The article presents you with a crushing juxtaposition, implies insane dangers, and leaves you with a feeling of inevitability. Then back to work, I guess.

Comment by pzo 19 hours ago

> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

Horses typically live between 25 and 30 years. I agree with OP that those horses most likely were not decimated (killed); they just died out as people stopped mass-breeding them. Also, as others noticed, the chart shows 'horses PER person in the US'. Population between 1900 and 1950 increased from 1.5B to 2.5B (globally, with probably a similar, almost 70%, increase in the US).

I think it depends on what you worry about:

1) `That the human population decreases by 50-80%`?

I don't worry about that even if it happens. 200 years ago the human population was ~1B; today it is ~8B. At 0 AD it was ~0.25B. Did we worry, 200 years ago, that "omg, the human population is only 1B"?

I doubt the human population will decrease by 80% just because there is no demand for humans as a workforce, and I don't see a problem if it decreases by 50%. There will be a short transition period with a surplus of retired people and work needed to keep up the infrastructure, but if robots can help with this, then I don't see the problem.

2) `That we will not be needed and will lose our jobs?`

I don't see work as something in demand for its own sake. Most people hate their jobs or do crappy jobs. What people actually worry about is that they won't get any income. And not even that, really: they worry that they won't be able to survive, or will end up homeless. If improvements in production make food, shelter, transportation, and healthcare dirt cheap (all the stuff at the bottom of Maslow's pyramid), and distribution is fair at the social level, then I can see this being no problem at all.

3) `That we will all die because of AI`

This I find more plausible, and maybe not even at the hands of AGI but earlier, because of big social unrest during the transition period.

Comment by rindalir 19 hours ago

As someone who raises horses and other animals, I can say with pretty high certainty that most of the horses were not allowed to "retire". Horses are expensive and time-consuming to care for, and with no practical use, most horses would have been sent not to the glue factory but (at that time) to the butcher and their non-meat parts used for fertilizer.

Comment by maciejzj 19 hours ago

Yeah, I agree with what you said. It's not about the absolute number of people, but about the social unrest. If you look at how poor a job we've done at redistributing wealth so far, I find it hard to believe that we will do well in the future. I am afraid of mass pauperisation and immiseration of societies, followed by violence.

Comment by listenallyall 15 hours ago

What's more important: "redistribution of wealth", or simply reducing the percentage of people living in abject poverty? And wouldn't you agree that by that measure, most of the world, including its largest countries, has done quite a good job?

https://www.un.org/en/global-issues/ending-poverty

From 1990 to 2014, the world made remarkable progress in reducing extreme poverty, with over one billion people moving out of that condition. The global poverty rate decreased by an average of 1.1 percentage points each year, from 37.8 percent to 11.2 percent in 2014.

Comment by techdmn 17 hours ago

I think the phrase "fair distribution on a social level" is doing a lot of work in that comment. Do you consider this to be a common occurrence, or something our existing social structures do competently?

I see quite the opposite, and have very little hope that reduced reliance on labor will increase the equitability of wealth distribution.

Comment by Herring 17 hours ago

It probably depends on the society you start out with; e.g., a high-trust culture like Finland will probably fare better here.

Comment by hnfong 16 hours ago

Doesn't matter. The countries with the most chaos and internal strife get a lot of practice fighting wars (civil wars). Then the winner of the civil war, used to grabbing resources by force and having perfected war skills through survival of the fittest, goes around looking for other countries to invade.

Historically, advanced civilizations with better production capabilities don't necessarily do better in war if they lack "practice". Sad but true. Maybe not in the 21st century, but who knows.

Comment by Herring 16 hours ago

Yeah, none of that fever dream is real. There's no "after" a civil war; conflicts persist for decades (Iraq, Afghanistan, Syria, Myanmar, Colombia, Sudan).

Check this out - https://data.worldhappiness.report/chart. The US is increasingly a miserable place to live in, and the worse it gets the more their people double down on being shitty.

Fun fact: fit two lines to that data and you can extrapolate that by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone: Chinese dream.

Comment by bgwalter 20 hours ago

Well, in this case corporations stop buying people and just fire them instead of letting them retire. Or an army of Tesla Optimi will send people to the glue factory.

That, at least, is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.

Comment by palmotea 16 hours ago

> And the article is not written in any kind of cautionary humanitarian approach, but rather from perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?

One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy-brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them to become, in large part, disconnected from the rest of humanity. I think they see that aloofness as a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."

Comment by cryptonector 17 hours ago

> Have you ever thought that you would see a chart showing [...]

Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's The Phools, by Stanislaw Lem. There are the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation/technology apocalypse for humans in the past 100 years that it's hard to catalog; much of what I have read or seen in this vein I've totally forgotten.

It's not remotely a surprise to see this amp up with AI.

Comment by maciejzj 16 hours ago

Yeah, I am familiar with these works of art, and probably most people are. However, they were mostly speculative. Now we are facing some of their premises in the real world. And the guys who push the technology in a reckless way seem to notice this, but just nod their heads and carry on.

At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.

Comment by cryptonector 16 hours ago

Works of art, works of predictive programming, life imitating art -- what's the difference, if in the end the artistic predictions come true?

People have been thinking apocalyptic thoughts like these since... at least Malthus's An Essay on the Principle of Population (1798). That's 227 years, if you're keeping score. Probably longer; Malthus might only have been the first to write them down and publish them.

Comment by j-bos 20 hours ago

> Have you ever thought that you would see a chart showing how population of horses was decimated by the mass introduction of efficient engines accompanied by an implication that there is a parallel to human population?

Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU

Comment by ThrowawayR2 16 hours ago

Oh, _now_ computer industry people are worried? Kind of late to the party.

Computerization, automation and robotics, document digitization, the telecoms and wireless revolution, etc. have been upending people's employment on a massive scale since before the 1970s. The reaction of the technologists has been a rather insensitive "adapt or die" or "go and retrain", plus analogies to buggy-whip manufacturers when the automobile became popular. The only reason people here suddenly give a hoot is that they think the crosshairs are drifting towards them.

Comment by stared 18 hours ago

> the AI atmosphere is absolutely nuts to me

It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71

Quite ironically, the scene features a horse.

Comment by tim333 16 hours ago

You can kind of separate the technical side of what will likely happen (AI gets smarter and can do the jobs) from how we deal with that. It could be heaven-like, with abundance and no one needing to work, or a post-apocalyptic dystopia, or, most likely, somewhere in the middle.

We collectively have a lot of choice in the "how we deal with it" part. I'm personally optimistic that people will vote in people-friendly policies when it comes to it.

Comment by hnfong 16 hours ago

I'm not seeing any horse heavens. Do you have reason to believe humans (i.e. those not among the ruling class) are going to have a different fate from the horses?

I agree we can kinda make the argument that abundance is soon upon us, and that humanity as a whole will embrace the ideas of equality and harmony, etc. But there's still a kind of uncanny dissociation in happily talking about horses disappearing and humans being next while you work on the very product that makes your prediction come true, and come true earlier...

Comment by tim333 15 hours ago

We are in control (for now). The horses were not. The whole alignment debate is basically about keeping us in control.

Comment by hnfong 6 hours ago

Member of ruling class spotted!!

Comment by entropyneur 18 hours ago

My experience so far has been that the knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.

In this instance, in particular, I wouldn't expect our preferences to bear any relevance.

Comment by aiisjustanif 17 hours ago

> knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.

I don’t know if you are intentionally being vague and existential here. However, context matters, and the predictive power is zero sounds unreasonable in the face of history.

Think of humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I'm skeptical that the predictive power was zero.

Comment by oxag3n 13 hours ago

> What the hell

It was always like this. Look at history, some of it quite recent: people have always been treated as a tool, for getting rich, for gaining power, for conquering other countries, for serving them.

Comment by xg15 9 hours ago

It's interesting, though, how the narrative is all bright-eyed idealism, make the world a better place, progress, etc., until at some point the masks come off and suddenly it's "always has been, move along, nothing to see here"...

Comment by UniverseHacker 14 hours ago

I took the article to mean that white-collar tech jobs will go away, so those people will need to pivot their careers; it's the jobs that disappear, not the humans.

However, it does seem like time for humanity to collectively think hard about our values and goals, and about what type of world and lives we want to have in an age where human thought, and perhaps even human physical labor, are economically worthless. Unfortunately, this could not have come at a worse time, with humanity seemingly experiencing a widespread rejection of ideals like ethics, human rights, and integrity, and embracing fascism and ruthless, blind financial self-interest as if they were high-minded ideals.

Ironically, I think tech people could learn a lot here from groups like the Amish: they have clearly decided what their values and goals are, and they ruthlessly make tech serve them, instead of the other way around. Despite stereotypes, the Amish are often heavy users of, and competent with, modern tech in service of making a living, but in a way that enforces firm boundaries against letting the tech usurp their values and chosen way of life.

Comment by _aavaa_ 16 hours ago

The implication is very clearly about “killing” jobs, not killing people.

Comment by j-krieger 10 hours ago

It's even scarier when you consider that this entire technology reached the public only three years ago.

Comment by barrkel 20 hours ago

Incentives rule everything.

For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.

Today, the stock market and material wealth dominate. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.

Comment by thaumasiotes 20 hours ago

> For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.

That's almost 100% backwards. The Republic expanded. The Empire, not so much.

Comment by majewsky 18 hours ago

GP appears to be using "empire" as in "imperialistic" rather than as in "emperor".

Comment by classified 19 hours ago

Isn't that burying the lede on a technicality?

Comment by LogicFailsMe 15 hours ago

I think we have a bunch of people in the United States who see what we elected for leadership, and the people he chose to advise him, and they have given up all hope. That despondent attitude is infusing their opinions on everything. But chin up: he's really old, and he doesn't seem very healthy, or he'd be out there leading the charge, throwing those rallies every weekend of which he used to be so fond.

And low-information business leaders will attempt to do all the awful things described here, and the free market will eliminate them from the game grid one horrible boss at a time. But if you surround yourself with the AI doomers and bubblers, how will you ever encounter, or even consider, positive uses of the technology? What an awful place to work Anthropic must be if they truly believe they are working on the metaphorical equivalent of the Alpha-Omega bomb. Spoilers: they're not.

Meanwhile, in the rest of the world, many look forward to harnessing AI to ameliorate hunger, take care of the elderly, and perform the more dangerous and tedious jobs out there. Anthropic guy needs to go get a room with Eliezer Yudkowsky. I guess the US is about to get horsed by the other 96% of the planet.

Go ahead, compare me to a horse, a gasoline engine, or even call me a meatbag. Have we become little more than Eloi snowflakes to be so offended by that?

But I guess as long as an electoral majority here continues to cheer on one man draining the juice of this country down to a bitter husk, the fun and games will continue.

Comment by heavyset_go 10 hours ago

> But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.

At this point in time, his whimsy is the only thing holding back younger, more extreme acolytes from doing what they want. Once he's gone, lol.

Comment by LogicFailsMe 9 hours ago

A bunch of charisma 3 acolytes. Only a select few get to be Zaphod Beeblebrox, swirlies for everyone else who tries...

Comment by _DeadFred_ 14 hours ago

Yes. Follow in the path of the tech leaders. They are optimists. They totally aren't building doomsday bunkers or trying to build their data centers with their own nuclear power plants, to remove themselves from society and create self-contained systems. Oh wait. Crap...

Comment by LogicFailsMe 14 hours ago

American tech leaders are just as bad, leading the charge straight into the abyss. But if you close your mind to the rest of the world, I can see why you'd see a 0-or-1 choice here. That's all the corporate media and influencers write these days, all the way from Paul Krugman to Cory Doctorow. And let's not even get started on the Three Men and an ASIC house of AI circle-jerkers.

I mean if you're the sort that thinks Greta Thunberg and Eliezer Yudkowsky are agents of the Antichrist, it's long overdue to touch grass. And I don't think he believes that, but I think he thought people were stupid enough to buy it so he ran with it. Can't blame him for trying!

But given the right's hatred of renewables and the left thinks nuclear power plants can explode like atomic bombs, I'd be pushing for gas and nuclear to power my data centers too.

TLDR: you're being fed a false narrative that this is a 0-or-1 choice, but I guess it will take the rest of the world, not the US, to demonstrate that.

Comment by trymas 19 hours ago

> Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?

Not sure if it's by accident or not, but that's what we are according to today's "tech elite".

   Therefore, the most profitable disposition for this dubious form of capital is to convert them into biodiesel, which can help power the Muni buses
https://www.goodreads.com/work/quotes/55660903-patchwork-a-p...

Comment by mattmaroon 18 hours ago

The threat isn't to population, it's to jobs (at least so far) but yeah.

Comment by Herring 18 hours ago

American culture actively punishes compassion, then gaslights you about it.

https://www.census.gov/library/visualizations/interactive/te... Look at all the professions on the bottom right: Teachers, therapists, clergy, social workers, etc. It’s not a coincidence that cruel people take top positions.

Comment by myth_drannon 18 hours ago

It's been a decade or so now of mostly being called a "resource" at work, as in Human Resources. Rarely a colleague, comrade, or co-worker... just a resource, a plug in the machine that needs to be replaced by an external resource to improve profit margins.

Comment by andai 20 hours ago

Is it good when number of human go up? Is bad when go down?

Comment by maciejzj 20 hours ago

I would say that it is bad when it has a large derivative (positive or negative). However, the problem is not the >number of human beings< but making the agency that existing people have obsolete.

Comment by gcanyon 19 hours ago

It's bad if it goes down by more than about 1.2% per year: that's roughly zero births plus present-day natural deaths (with an ~80-year lifespan, about 1/80 of the population dies each year). Of course zero births isn't presently realistic, and we should expect the next 10-30 years to significantly increase human lifespan. If we assume continued births at the lowest rates seen anywhere on the planet, and humans just maxing out the present lifespan limit, then anything more than about a 0.5% decrease means someone is getting murked.
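A rough back-of-envelope of where those thresholds could come from (my own sketch, not the commenter's math; the lifespan and birth-rate figures are assumptions):

  # Sketch: ceilings on "natural" population decline, assuming a stable
  # age distribution. With zero births and an ~80-year lifespan, roughly
  # 1/80 of the population reaches end-of-life each year.
  ASSUMED_LIFESPAN_YEARS = 80            # assumed typical lifespan
  zero_birth_decline = 1 / ASSUMED_LIFESPAN_YEARS
  print(f"decline with zero births: {zero_birth_decline:.2%}")   # ~1.25%/yr

  # With births continuing at ~0.7%/year (an assumed floor, near the lowest
  # national crude birth rates seen today), the net decline shrinks:
  ASSUMED_MIN_BIRTH_RATE = 0.007
  net_decline = zero_birth_decline - ASSUMED_MIN_BIRTH_RATE
  print(f"net decline with minimal births: {net_decline:.2%}")   # ~0.55%/yr

Anything faster than those rates implies excess deaths rather than just low births, which is the point.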

Comment by classified 19 hours ago

> And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around?"

It shines through that the most fervent AI Believers are also Haters of Humans.

Comment by tuyiown 16 hours ago

> I may have developed some kind of paranoia reading HN recently

My comments that get downvoted (pretty rare lately) were about legitimate but never-discussed points about AI that I validated IRL. The way AI is discussed on HN doesn't resonate at all with how it's discussed IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.

Comment by maciejzj 16 hours ago

I don't think that bots have taken over HN. I meant that the frontier of tech research brags about its recklessness here, and the rest of us have become bystanders to the process. Gives me goosebumps.

Comment by KitN 16 hours ago

Isn't this literally Swift's Laputa?

Comment by LarsDu88 8 hours ago

I'd like to note here that the lifespan of a horse is 25-30 years. They were phased out not with mass horse genocide, but likely in the same way we phase out Toyota Corollas that have gotten too old. Owners simply didn't buy a new horse when the old one wore out, but bought an automobile instead.

Economically it is no different from demand for Mitsubishis decreasing, except that the vehicle in this case eats grass, poops, and feels pain.

If you want to analogize with humans, a gradual reduction in breeding (which is happening anyways with or without AI) is probably a stronger analogy than a Skynet extinction scenario.

The truth is, this is no different from the societal trends that were introduced with industrialization, simply accelerated on a massive scale.

The threshold for gaining wealth through education is bumping up against our natural human breeding timeline, delaying childbirth past the naturally optimal human fertility ages in the developed world. The amount of education needed to achieve certain types of wealth will stretch into the decades, putting even more strain on fertility metrics. Some people will decide to have more kids and live purely off whatever limited welfare the oligarchs in charge decide is acceptable. Others will delay having children far past natural human fertility timespans, or forgo having children at all.

Comment by LarsDu88 8 hours ago

If we look at it this way, a reduction in human population would be contingent on whether you think human beings exist and are bred for the purposes of labor.

I believe most people would agree with me that the answer is NO.

The analogy to horses here, then, is not to individuals but to specific types of jobs.

Comment by glenstein 17 hours ago

Honestly, I can't tell if your incredulity is at the method of analysis (for being tragically mistaken or superficial in some way), at the seemingly dehumanizing comparison of beloved human demonstrations of skill (chess, writing) to lowest-common-denominator labor, or at the tone of passive indifference to computers taking over everything.

I think the comparisons are useful enough as metaphors, though I wonder about the analysis, because it sounds as if someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.

Comment by constantcrying 16 hours ago

It isn't just AI. So much of the US "Tech"/VC scene is doing outright evil stuff, with seemingly zero regard for any consequence, or even a shred of self-awareness.

So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.

Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.

Sure, I guess developing software that makes people addicted to gambling pays the bills (and more than that), but I haven't seen even that acknowledged. These industries just exist, and people seem to work for them as if it were just a normal job with zero moral implications.

Comment by fullstackchris 20 hours ago

It's a con job and a strawman take. If we collectively think token generators can replace humans completely, well, then we've already lost the plot as a global society.

Comment by camillomiller 18 hours ago

Honestly, the answer for me is yes. I had expected it. The signs were in all the comments that take market forces for granted, in all the comments that take capitalism as a given and immutable law of nature. They were in all the tech bros who never wanted to change anything but the number of zeros in their bank account after a successful exit. So yes, I had the thought you are finally having too.

Comment by twodave 1 day ago

Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides, more compute and bigger context windows aren't the right kind of progress on their own. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:

  1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
  
  2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.

Comment by reeredfdfdf 1 day ago

"The only reason to reduce headcount is to remove people who already weren’t providing much value."

I wish corporations really acted this rationally.

At least where I live, hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending a significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. The cost savings may look good on a spreadsheet, but the overall efficiency of the system suffered.

Comment by ehnto 23 hours ago

That's what I see when companies cut juniors as well. AI cannot replace a junior because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance.

If you take away the juniors, you are now asking your seniors to do that work instead, which is more expensive and wasteful. The PM cannot tell the AI junior what to do, for they don't know how. Then you say: hey, we also want you to babysit the LLM to increase productivity. Well, I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time.

Comment by jack_pp 20 hours ago

> well I can't leave a task with the LLM and come back to it tomorrow

You could actually just do that: leave an agent on a problem you would give a junior, go back to your main task, and check the agent's work whenever you feel like it.

Comment by ehnto 1 hour ago

It lacks the ability to self-correct and to do all the adjacent tasks like client comms, etc. So if I come back to it in the afternoon, I may have wasted a day in business terms, because I will need to try again tomorrow. What do I tell the client? Sorry, the LLM failed the simple task, so we will have to try again tomorrow? Worse, lie and say sorry, this two-hour task could not be achieved by our developers today? Either way we look incompetent (because realistically, we were not competent, relying on a tool that fails frequently).

Comment by jack_pp 15 minutes ago

I'm sorry, but I'm not familiar with the context you mention. I have not worked in a job where I had to communicate with clients, and I find it hard to imagine one where a junior would have to communicate with a client about a two-hour task. Why would you want a junior to be the public face of your company?

Comment by htrp 18 hours ago

That sounds like a PM problem.

Comment by kylinhacker 1 day ago

I'm a full-stack developer. Recently I've found that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI tends (kind of unconsciously) to get less work scheduled. Work is sudden, but salary remains steady. What a bummer.

Comment by listenallyall 22 hours ago

But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, surely the hospital has accountants and analysts who will notice this, yes?

I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office: they have tons of assistants and hygienists, and the dentist just goes from room to room performing high-dollar procedures and very little "patient care." If small dentist offices have this all figured out, it seems a little strange that a massive hospital does not.

Comment by gwd 22 hours ago

First of all, it's not unlikely that the dentist is the owner. And in any case, in a small system of fewer than 150 people, it's easy enough for a handful of people to see what's actually going on.

Once you get to something in the thousands or tens of thousands, you just have spreadsheets; and anything that doesn't show up in that spreadsheet might as well not exist. Furthermore, you have competing business units, each of which want to externalize their costs to other business units.

Very similar to what GP described: when I was in a small start-up, we had an admin assistant who did most of the receipt entry and what-not for our expense reports, and we were allowed to tell the company travel agent our travel constraints and have them give us options for flights. When we were acquired by a larger company, we had to do our own expense reports and our own flight searches. That was almost certainly a false economy.

And then when we became a major conglomerate, at some point they merged a bunch of IT functions; so the folks in California would make a change and go home, and those of us in Europe or the UK would come in to find all the networks broken, with no way to fix it until the people in California started coming in at 4pm.

In all cases, the dollars saved are clearly visible in the spreadsheet, while the "development velocity" lost is noisy, diffuse, and hard to quantify or pin down to any particular cause.

I suppose one way to quantify that would be to have the Engineering function track time spent doing admin work and charge that to the Finance function; and time spent idle due to IT outages and charge that to the IT department. But that has its own pitfalls, no doubt.

Comment by listenallyall 9 hours ago

The problem with this analogy is that software development != revenue. The developers and IT are a cost center. So yeah, in a huge org, one of the goals is to reduce the costs (admin) spent on supporting a cost center.

Doctors generate revenue directly and it can all be traced, so even an extra 20 minutes out of their day doing admin stuff instead of one more patient or procedure is easily noticeable, and affects revenue directly.

Comment by Eisenstein 22 hours ago

> If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?

I am going to assume that the Doctors are just working longer hours and/or aren't as attentive as they could be and so care quality declines but revenue doesn't. Overworking existing staff in order to make up for less staff is a tried and true play.

> I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office, they have tons of assistants and hygienists and the dentist just goes from room-to-room performing high-dollar procedures, and very little "patient care." If small dentist offices have this all figured out it seems a little strange that a massive hospital does not.

By conflating 'Doctors' and 'Dentists' you are basically saying the equivalent of 'all Doctors' and 'Doctors of a certain specialty'. Dentists are 'Doctors for teeth' like a pediatrician is a 'Doctor for children' or an Ortho is a 'Doctor for bones'.

Teeth need maintenance, which is the time-consuming part of most visits, and the dentist has staff to do that part of it. That in itself makes the specialty not really comparable to a lot of others.

Comment by htrp 18 hours ago

I feel like that's how you get Microsoft where each division has a gun pointed at the other division

Comment by listenallyall 9 hours ago

It doesn't really matter what type of doctor: spending all their time on revenue-generating activities would seem better than spending only 75% generating revenue and 25% on "administrative and bureaucratic tasks" that don't generate revenue and could be accomplished by a much lower-paid employee ("secretaries and assistants").

Perhaps you're correct that the doctors are simply working much longer hours, but doctors are one group of a hospital's staff who generally do have a lot of power and aren't too easy to make extraordinary demands of.

Comment by shaka-bear-tree 1 day ago

Funny the original post doesn’t mention AI replacing the coding part of his job.

There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

I want to be optimistic. But it's hard to ignore what I'm doing and seeing. As far as I can tell, we haven't hit serious unemployment yet only because of momentum and slow adoption.

I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.

Comment by kace91 21 hours ago

>There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).

The most interesting questions are the ones that assume human equivalency.

Suppose an AI can produce like a human.

Are you ok with merging that code without human review?

Are you ok with having a codebase that is effectively a black box?

Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?

Are you ok with being dependent on the company providing this code generation?

Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?

Will we be ok if the well of public technical discussion LLMs are feeding from dries up?

Those are the interesting debates I think.

Comment by Symmetry 17 hours ago

> Are you ok with having a codebase that is effectively a black box?

When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we're dependent on the companies that develop our compilers, where we can at least see the output, but also companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be ok with it.
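
(For anyone who wants to peek under that abstraction layer: a minimal sketch, assuming GCC or Clang; the file and function names are invented. The -S flag writes the generated assembly out to a file you can read.)

  // square.cpp -- compile with: g++ -O2 -S square.cpp -o square.s
  // then open square.s to see the actual instructions emitted.
  int square(int x) {
      return x * x;  // at -O2 on x86-64 this typically compiles to a single imul
  }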

Comment by kace91 16 hours ago

>When was the last time you looked at the machine code your compiler was giving you?

You could rephrase that as “when was the last time your compiler didn’t work as expected?”. Never in my whole career in my case. Can we expect that level of reliability?

I'm not making the argument that "the LLM is not good enough"; that would bring us back to the boring discussion of "maybe it will be".

The thing is that human language is ambiguous and subject to interpretation, so I think we will have occasionally wrong output even with perfect LLMs. That makes black box behavior dangerous.

Comment by Symmetry 14 hours ago

We certainly can't expect that with LLMs now but neither could compiler users back in the 1970s. I do agree that we probably won't ever have them generating code without more back and forth where the LLM complains that its instructions were ambiguous and then testing afterwards.

Comment by etherlord 19 hours ago

I don't think it really matters if you or I or regular people are ok with it if the people with power are. There doesn't seem to be much any of us regular folks can do to stop it, especially as AI eliminates more and more jobs, further reducing the economic power of everyday people.

Comment by kace91 16 hours ago

I disagree. There are personal decisions to make:

Do you bet on keeping your technical skills sharpened, or stop and focus on product work and AI usage?

Do you work for companies that go full AI or try to find one that stays “manual”?

What advice do you offer as a technical lead when asked?

Leadership ignoring technical advice is nothing new, but there is still value in figuring out those questions.

Comment by listenallyall 15 hours ago

Have you ever double-checked (in human fashion, not just using another calculator) the output from a calculator?

When calculators were first introduced I'm sure some people such as scientists and accountants did exactly that. Calculators were new, people likely had to be slowly convinced that these magic devices could be totally accurate.

But you and I were born well after the invention of calculators, our entire lives nobody has doubted that even a $2 calculator can immediately determine the square root of an 8-digit number and be totally accurate. So nobody verifies, and also, a lot of people can't do basic math.
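
(To make the "human fashion" check concrete, here's a minimal sketch; the 8-digit number and the bracketing values are just illustrative. Verifying a square root needs no square-root button at all, only multiplication:)

  #include <cstdio>

  int main() {
      long long n = 12345678;          // the 8-digit input
      long long lo = 3513, hi = 3514;  // candidate bracket for sqrt(n)
      std::printf("%lld %lld %lld\n", lo * lo, n, hi * hi);
      // prints 12341169 12345678 12348196, so the root lies between
      // 3513 and 3514, consistent with a calculator's 3513.6417...
  }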

Comment by torginus 23 hours ago

I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts.

Comment by ben_w 21 hours ago

You mean it hasn't already?

Comment by twodave 11 hours ago

Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would count as predatory pricing if everyone weren't doing it.

Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.

Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.

Comment by jakewins 23 hours ago

I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.

If anything the quality has gotten worse, because the models are now so good at lying when they don’t know it’s really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it’ll either be right or lying, it never says “I don’t know”.

Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.

The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield "write some code that does x", much faster without LLMs.

Comment by NotOscarWilde 21 hours ago

> Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect

Fully agree; in fact, this literally happened to me a week ago -- ChatGPT was confidently incorrect about its simple lock structure for my multithreaded C++ program, and wrote paragraphs upon paragraphs about how it works, until I pressed it twice about a (real) possibility of some operations deadlocking, and then it folded.
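
(A minimal sketch of the classic lock-ordering deadlock in question; this is not the original code, and all names are invented. Each function looks locally correct, which is exactly why a confident review, human or model, can miss it.)

  #include <mutex>
  #include <thread>

  std::mutex a, b;

  // Thread 1 takes a then b; thread 2 takes b then a. If each acquires
  // its first mutex before the other acquires its second, both wait forever.
  void t1() { std::scoped_lock la(a); std::scoped_lock lb(b); }
  void t2() { std::scoped_lock lb(b); std::scoped_lock la(a); }

  // The standard fix: acquire both in a single scoped_lock, which orders
  // the acquisitions with a deadlock-avoidance algorithm.
  void safe() { std::scoped_lock both(a, b); }

  int main() {
      std::thread x(t1), y(t2);  // may hang intermittently -- that is the bug
      x.join();
      y.join();
  }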

> Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.

As a university assistant professor trying to keep up with AI while doing research/teaching as before, this also happens to me and I am dismayed by it. I am certain there are models out there that can solve IMO problems and generate research-grade papers, but the ones I can get easy access to as a customer routinely mess up stuff, including:

* Adding extra simplifications to a given combinatorial optimization problem, so that its dynamic programming approach works.

* Claiming some inequality is true when, upon reflection, it had derived A >= B from A <= C and C <= B (which only gives A <= B, the opposite direction).

(This is all ChatGPT 5, thinking mode.)

You could fairly counterclaim that I need to get more funding (tough) or invest much more of my time and energy to get access to models closer to what Terence Tao and other top people trying to apply AI in CS theory are currently using. But at least the models cheap enough for me to access as a private person are not on par with what the same companies claim to achieve.

Comment by empiricus 19 hours ago

I agree that the current models are far from perfect. But I am curious how you see the future. Do you really think/feel they will stop here?

Comment by jakewins 18 hours ago

I mean, I'm just some guy, but in my mind:

- They are not making progress, currently. The elephant-in-the-room problem of hallucinations is exactly the same as, or, as I said above, worse than it was 3 years ago

- It's clearly possible to solve this, since we humans exist and our brains don't have this problem

There are then two possible paths: either the hallucinations are fundamental to the current architecture of LLMs, and there's some other aspect of the human brain's configuration that they've yet to replicate; or the hallucinations will go away with better and more training.

The latter seems to be the bet everyone is making; that's why all these data centers are being built, right? That bet requires both that larger training will solve the problem, and that there's enough training data, silica molecules and electricity on earth to perform that "scale" of training.

There's 86B neurons in the human brain. Each one is a stand-alone living organism, like a biological microcontroller. It has constantly-mutating state and memory: short term through RNA and protein presence or lack thereof, long term through chromatin formation, enabling and disabling its own DNA over time, in theory also permanently through DNA rewriting via TEs. Each one has a vast array of input modes: direct electrical stimulation, chemical signalling through a wide array of signaling molecules, and electrical field effects from adjacent cells.

Meanwhile, GPT-4 has 1.1T floats. No billions of interacting microcontrollers, just static floating points describing a network topology.

The complexity of the neural networks that run our minds is spectacularly higher than the simulated neural networks we're training on silicon.

That's my personal bet. I think the 86B interconnected stateful microcontrollers are so much more capable than the 1T static floating points, and the 1T static floating points are already nearly impossibly expensive to run. So I'm bearish, but of course, I don't actually know. We will see. For now all I can conclude is that the frontier model developers lie incessantly in every press release, just like their LLMs.

Comment by empiricus 17 hours ago

Thanks, that's a reasonable argument. Some critique: based on this argument it is very surprising that LLMs work so well, or at all. The fact that even small LLMs do something suggests that the human substrate is quite inefficient for thinking. Compared to LLMs, it seems to me that 1. some humans are more aware of what they know; 2. humans have very tight feedback loops to regulate and correct. So I imagine we do not need much more scaling, just slightly better AI architectures. I guess we will see how it goes.

Comment by botanrice 17 hours ago

idk man, I work at a big consulting company and all I'm hearing is dozens of people coming out of their project teams like, "yeah, I'm dying to work with AI, all we're doing is talking about it with clients"

It's like everyone knows it is super cool but nobody has really cracked the code for what its economic value truly, truly is yet

Comment by zwnow 1 day ago

> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.

Any sources on that? Except for some big tech companies I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...

Comment by tormeh 23 hours ago

This is what I also see. AI is used sparingly. Mostly for information lookup and autocomplete. It's just not good enough for other things. I could use it to write code if I really babysit it and triple check everything it does? Cool cool, maybe sometime later.

Comment by kakacik 22 hours ago

Who does the work at the typical code sweatshops, churning out one smallish app at a time and quickly moving on? Certainly not your typical company-hired permanent dev; they (we) drown in tons of complex legacy code that has kept working for the past 10-20 years and that the company sees no reason to throw away.

For the folks who do churn out such apps, it's great & horrible long term. For folks like me, development is maybe 10% of my work, and by far the best part: creative, problem-solving, stimulating, actually learning something myself. Why would I want to mildly optimize that 10% and lose all the good stuff, while overall speed wouldn't even visibly improve?

To really improve speed in bigger orgs, the change would have to happen in processes, office politics, management priorities and so on. No help from LLMs there; if anything, trend-chasing managers just introduce more chaos with negative consequences.

Comment by frchalli 1 day ago

> The only reason to reduce headcount is to remove people who already weren’t providing much value.

There were many secretaries up until the late 20th century who took dictation, either writing notes of what they were told or working from a recording, then typed it out and distributed memos. At first, there were many people typing; then mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for manual copying, then email reduced the need to print anything out, and now instant messaging reduces email clutter and keeps messages shorter.

All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While these people may not have been held in high esteem, they were critical for getting things done and scaling.

I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.

Comment by belorn 22 hours ago

The thing that replaced the old memos is not email, it's meetings. It's not uncommon to have meetings with hundreds of participants for what in the past would have been a simple memo.

It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.

Comment by kalterdev 1 day ago

The crucial observation is that automation has historically been a net creator of jobs, not a destroyer of them.

Comment by zarzavat 1 day ago

Sure, if you're content to stack shelves.

AI isn't automation. It's thinking. It automates the brain out of human jobs.

You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.

Comment by ben_w 21 hours ago

> Sure, if you're content to stack shelves.

Why this example? One of the things automation has done is reduce and replace stevedores, the shipping equivalent of stacking shelves.

Amazon warehouses are heavily automated, almost self-stacking-shelves. At least, according to the various videos I see, I've not actually worked there myself. Yet. There's time.

> AI isn't automation. It's thinking. It automates the brain out of human jobs. You can still get a job that requires a body. My job doesn't require a body, so I'm screwed. If you're say, a surgeon or a plumber, you're in a better place.

Right up until the AI is good enough to control the robot that can do that job. Which may or may not be humanoid. (Plus side: look how long it's taking for self-driving cars, how often people think a personal anecdote of "works for me" is a valid response to "doesn't work for me").

Even before the AI gets that good, a nice boring remote-control android doing whatever manual labour could outsource the "controller" position to a human anywhere on the planet. Mental image: all the unemployed Americans protesting outside Tesla's factories when they realise the Optimus robots within are controlled remotely from people in 3rd world countries getting paid $5/day.

Comment by ForHackernews 22 hours ago

Yes, AI is automation. It automates the implementation. It doesn't (yet?) automate the hard parts around figuring out what work needs to be done and how to do it.

The sad thing is that for many software devs, the implementation is the fun bit.

Comment by bigfishrunning 17 hours ago

Except it isn't thinking. It is applying a model of statistical likelihood. The real issue is that it's been sold as thinking, and laypeople believe that it's thinking, so it is very likely that jobs will be eliminated before it's feasible to replace them.

People that actually care about the quality of their output are a dying breed, and that death is being accelerated by this machine that produces somewhat plausible-looking output, because we're optimizing around "plausible-looking" and not "correct"

Comment by OkayPhysicist 12 hours ago

That observation is only useful if you can point at a capability that humans have that we haven't automated.

Hunter-Gatherers were replaced by the technology of Agriculture. Humans were still needed to provide the power to plow the earth and reap the crops.

Human power was replaced by work animals pulling plows, but only humans could make decisions about when to harvest.

Jump forward a good long time,

Computers can run algorithms to indicate when best to harvest. Humans are still uniquely flexible and creative in their ability to deal with unanticipated issues.

AI is intended to make "flexible and creative" no longer a bastion of human uniqueness. What's left? The only obvious one I can think of is accountability: as long as computers aren't seen as people, you need someone to be responsible for the fully automated farm.

Comment by _DeadFred_ 14 hours ago

'Because thing X happened in the past it is guaranteed to happen in the future, and we should bet society on it instead of trying to, you know, plan for the future. Magic jobs will just appear, trust me'

Comment by jstanley 21 hours ago

> At first, there were many people typing, then later [...]

There were more people typing than ever before? Look around you, we're all typing all day long.

Comment by kllamnjro 20 hours ago

I think they meant that there was a time when people’s jobs were:

1. either reading notes in shorthand, or reading something from a sheet that was already fully typed using a typewriter, or listening to recorded or live dictation

2. then typing that content out into a typewriter.

People were essentially human copying machines.

Comment by enduser 1 day ago

This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.

Comment by avereveard 1 day ago

Overseeing robots is a time-limited activity. Even building robots has a finite horizon.

Current tech can't yet replace everything, but many jobs already see the horizon or are at sunset.

The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.

This time around some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale, but others aren't, as the tech is run by small teams at large scale and has virtually no related industries it depends on, the way, say, cars do. It's energy and GPUs.

Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale business. Maybe a few tens of millions can be employed there?

Meanwhile I just don't see the designer + AI job role materializing. I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.

Comment by misnome 1 day ago

> because end of the day people can use this tech to create wealth at unprecedented scale

_Where?_ So far the only widespread technology to have come out of this is shoving a chatbot interface into every UI that never needed it.

Nothing has been improved, no revelatory tech has come out (tools to let you chatbot faster don’t count).

Comment by listenallyall 15 hours ago

Honestly, this comment sounds like someone dismissing the internet in 1992 when the web was all text-based and CompuServe was leading-edge. No "revelatory tech" just yet, but it was right around the corner.

Comment by avereveard 22 hours ago

In the backend, not directly customer-facing. Coca-Cola is two years into running AI ads. Lovable is cash positive, and many of the builders there are too. A few creators are earning a living with Suno songs. Not millions, mind, but they can live off their AI works.

If you don't see it happening around you, you're just not looking.

Comment by misnome 22 hours ago

So, a company cutting costs, a tool to let you chatbot faster, and musical slop at scale.

This doesn't sound like "creating wealth at unprecedented scale"

Comment by vlovich123 1 day ago

I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.

Comment by 9rx 1 day ago

> Computers replaced humans as the best chess players, not computers with human oversight.

Oh? I sat down for a game of chess against a computer and it never showed up. I was certain it didn't show up because computers are unable to without human oversight, but tell me why I'm wrong.

Comment by p-e-w 1 day ago

Apparently human chess grandmasters also need “oversight” from airplanes, because without those, essentially none of them would show up at elite tournaments.

Comment by 9rx 1 day ago

Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes. Computer chess systems, on the other hand, cannot do anything without human oversight.

Comment by ben_w 21 hours ago

> Things like trains, boats, and cars exist. Human chess grandmasters can show up to elite tournaments, and perform while there, without airplanes.

Those modes of transport are all equivalent to planes for the point being made.

I (not that I'm even as good as "mediocre" at chess) cannot legally get from my current location to the USA without some other human being involved. This is because I'm not an American and would need my entry to be OKed by the humans managing the border.

I also doubt that I would be able to construct a vessel capable of crossing the Atlantic safely, possibly not even a small river. I don't even know enough to enumerate how hard that would be, and would need help making a list. Even if I knew all that I needed to, it would be much harder to do it from raw materials rather than buying pre-cut timber, steel, cloth (for a sail), etc. Even if I did it that way, I can't generate cloth fibres and wood from my body like plants do. Even if I did extrude and secrete raw materials, plants photosynthesise and I eat; living things don't spontaneously generate these products from their souls.

For arguments like this, consider the AI the way you'd treat Stephen Hawking: lack of motor skills isn't relevant to the rest of what they can do.

When AI gets good enough to control the robots needed to automate everything from mining the raw materials all the way up to making more robots to mine the raw materials, then not only are all jobs obsolete, we're also half a human lifetime away from a Dyson swarm.

Comment by 9rx 18 hours ago

> Those modes of transport are all equivalent to planes for the point being made.

The point is that even those things require oversight from humans. Everything humans do requires oversight from humans. How you missed it, nobody knows.

Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.

Sad state of affairs when not even the HN crowd understands such basic concepts about computing anymore. I guess that's what happens when one comes to tech by way of "Learn to code" movements promising a good job instead of by way of having an interest in technology.

Comment by ben_w 17 hours ago

> Everything humans do requires oversight from humans. How you missed it, nobody knows.

'cause you said:

  Computer chess systems, on the other hand, cannot do anything without human oversight.
The words "on the other hand" draw a contrast, suggesting that the subject of the preceding sentence ("chess grandmasters") is different with regard to the task ("show up to elite tournaments"), and thus can manage without the stated limitation ("anything without human oversight").

> Maybe someday we'll have a robot uprising where humans can be exterminated from life and computers can continue to play chess, but that day is not today. Remove the human oversight and those computers will soon turn into lumps of scrap unable to do anything.

OK, and? Nobody's claiming "today" is that day. Even Musk, despite his implausible denials regarding Optimus being remote controlled, isn't claiming that today is that day.

The message you replied to was this: https://news.ycombinator.com/item?id=46201604

The chess-playing example there was an existing example of software beating humans in a specific domain, used to demonstrate that human oversight is not a long-term solution; you can tell by the use of the words "end state", and even then it's hypothetical (due to the "if"), as in:

  If successful, the end state is full automation
There was a period where a chess AI that was in fact playing a game of chess could beat any human opponent, and yet would still lose to the combination of a human-AI team. This era has ended and now the humans just hold back the AI, we don't add anything (beyond switching it on).

Furthermore, there's nothing at all that says that an insufficiently competent AI won't wipe us out:

And as we can already observe, there's clearly nothing stopping real humans from using insufficiently competent AI due to some combination of being lazy and/or the vendors over-promising what can be delivered.

Also, since the peak of the Cold War we've been in a situation where the automation we have can trigger WW3 and kill 90% of the human population, despite the fact that the very same automation would be destroyed along with it, with near-misses on both US and USSR systems. Human oversight stopped it, but like I said, we can already observe lazy humans deferring to AI, so how long will that remain true?

And it doesn't even need to be that dramatic; never mind the global defence stuff, just consider correlated risks: all the companies outsourcing their decisions to the same models, even when the models' creators win a Nobel prize for creating them. That describes the Black–Scholes formula and its involvement in the 2008 financial crisis; sure, it didn't kill us all, but this is just an illustration of a failure mode rather than of the consequences.

Comment by 9rx 17 hours ago

> The words "on the other hand" draws a contrast, suggesting that the subject of the sentence before it

I know it can be hard for programmers stuck in a programming language mindset, especially where one learned about software from "Learn to code" movements, but as this is natural language, technically it only draws what I intended for it to draw. If you wish to interpret it another way, cool. Much as in the Carly Simon song of a similar nature, it makes no difference to me.

Comment by saberience 20 hours ago

What planet are you on? What relevance does this have at all? Computers don't need to go and fly somewhere; they can just be accessed over a network. Also, the location and the traveling are irrelevant to the main point, that is, that computers far exceeded our capacity in Chess and Go many years ago and are now so much better that we cannot even really understand their moves or why they make them, and have no hope of ever competing.

The same will be true of every other intellectual discipline with time. It's already happening with maths and science and coding.

Comment by 9rx 19 hours ago

> What planet are you on?

The one where computers don't magically run all by themselves. It's amazing how out of touch HN has become with technology. Thinking that you can throw something up into the cloud, or whatever was imagined, needing no human oversight to operate it... Unfortunately, that's not how things work in this world. "The cloud" isn't heaven, despite religious imagery suggesting otherwise. It requires legions of people to make it work.

This is the outcome of that whole "Learn to code" movement from a number of years ago, I suppose. Everyone thinks they're an expert in everything when they reach the mastery of being able to write a "Hello, World" program in their bedroom.

But do tell us what planet you are on as it sounds wonderful.

Comment by vlovich123 15 hours ago

The number of people it takes to maintain a server rack is minimal, and it's low-cost labor. Most of the money is spent on hardware and on paying people to write software for that hardware.

Writing that software is becoming automated, and it's not hard to imagine that buying it will be as well. So you're left with the equivalent of a plumber running your data center based on what automated systems flag as issues, with other automated systems explaining the troubleshooting to go do. There might be a specialist they fly in at an insane rate (in the shorter term) if none of that works, but we're talking about a drastic reduction in the workforce needed, and this is for data center maintenance, which not many companies even have anymore since the cloud migration.

Comment by 9rx 11 hours ago

> The amount of people it takes to maintain a server rack is minimal and low cost labor.

So what you're saying is that it requires human oversight. Got it.

Glad you finally caught up to where the rest of us were many comments ago. But why did it take so long? Inquiring minds want to know.

Comment by vlovich123 5 hours ago

Once again missing the forest for the trees about what the article is about. But it’s ok - reading comprehension isn’t for everybody.

Comment by fsflover 23 hours ago

Yes, a computer chess system replacing a thousand chess players requires a couple of developers for the oversight.

Comment by 9rx 23 hours ago

Computer chess systems don't need developer oversight. They do, however, require oversight from, let's call them, IT people.

Comment by baq 1 day ago

Humans still play chess and horses are still around as a species.

(Disclaimer: this is me trying to be optimistic in a very grim and depressing situation)

Comment by plufz 23 hours ago

I try to be optimistic as well. But obviously horses are almost exclusively a hobby today. The work horse is gone. I think the problem is in part political: if we manage to spread the wealth AI can create, we are fine. If we let it concentrate power even more, it looks very grim.

Comment by skissane 1 day ago

B2C businesses need consumers. If AIs take all the jobs, then most of the population (minus the small minority who are independently wealthy and can live off their investments) go broke, and can't afford to buy anything any more. Then all the B2C businesses go broke. Then all the B2B businesses lose all their B2C business customers and go broke. Then the stock market crashes and the independently wealthy lose all their investments and go broke. Then nobody can afford to pay the AI power bills any more, so the AIs get turned off.

And that’s why across-the-board AI-induced job losses aren’t going to happen-nobody wants the economic house of cards to collapse. Corporate leaders aren’t stupid enough to blow everything up because they don’t want to be blown up in the process. And if they actually are stupid enough, politicians will intervene with human protectionism measures like regulations mandating humans in the loop of major business processes.

The horse comparison ultimately doesn’t work because horses don’t vote.

Comment by 9rx 1 day ago

> B2C businesses need consumers

Businesses need consumers when those consumers are necessary to provide something in return (e.g. labor). If I want beef and only have grass, my grass business needs people with cattle wanting my grass so that we can trade grass for beef, certainly. But if technology can provide me beef (and anything else I desire) without involving any other people, I don't need a business anymore. Business is just a tool to facilitate trade. No need for trade, no need for business.

Comment by baq 1 day ago

This is the optimistic take, too. There are plenty of countries which don't care about votes; indeed there are dictators that don't care about their subjects, only about outcomes for themselves. The economic argument only works under capitalism and rule of law, and that's assuming money is worth anything anymore.

Comment by skissane 1 day ago

The Chinese Communist Party is obsessed with social stability. Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process? Or will they enact human protectionism regulations? What Would Xi Jinping Do?

Comment by ben_w 21 hours ago

> Do you think they’ll allow AI to take all the jobs, destroying China’s domestic economy in the process?

If AI can take all the jobs (IMO at least a decade away for the robotics, and that's a minimum not a best-guess), the economy hasn't been destroyed, it's just doing whatever mega-projects the owners (presumably in this case the Chinese government) want it to do.

That can be all the social stability stuff they want. Which may be anything from "none at all" to whatever the Chinese equivalent is of the American traditional family in a big detached house with a white picket fence, everyone going to the local church every Sunday, people supporting whichever sports teams they prefer, etc.

I don't know Chinese culture at all (well, not beyond OSP and their e.g. retelling of Journey to the West), so I don't know what their equivalents to any of those things would be.

Comment by tstrimple 18 hours ago

Look at what China does to protect its citizens against social media. You see China enacting many of the social media protections that many HN enthusiasts demand, yet Sinophobia makes them reframe it as a negative. "Children shouldn't have access to social media, except when China does it then it's bad!"

Comment by jacquesm 1 day ago

The independently wealthy still need the economies of scale provided by a normal society.

Comment by myth_drannon 1 day ago

Can the process be similar to the sudden collapse of the USSR's economic system? The leaders weren't stupid and tried to keep it afloat, but with the underlying systemic issues everything just cratered.

Can the process be modelled using game theory where the actors are greedy corporate leaders and hungry populace?

Comment by twoodfin 20 hours ago

The USSR’s political system collapsed fairly suddenly. Its economic system had been rotten for decades.

Comment by ErroneousBosh 22 hours ago

I am somewhat confident that horses are going to replace cars and tractors pretty soon, possibly within my lifetime and quite likely within my son's.

He's going to learn how to drive (and repair) a tractor but he's also going to learn how to ride a horse.

Comment by ahf8Aithaex7Nai 1 day ago

Perhaps you have missed the essential point. Who drives the cars? It's not the horses, is it? And a chess computer is just as unlikely to start a game of chess on its own as a horse is to put on its harness and pull a plow across a field. I'm not entirely sure what impact all this will have on the job market, but your comparisons are flawed.

Comment by Covenant0028 22 hours ago

In the case of horses and cars, you need the same number of people to drive both (exactly one per vehicle). In the case of AI and automation, the entire economic bet is that agents will be able to replace X humans with Y humans. Ideally for employers Y=0, but they'll settle for Y<<X.

People seem to think this discussion is a binary where either agents replace everybody or they don't. It's not that simple. In aggregate, what's more likely to happen (if the promises of AI companies hold good) is large scale job losses and the remaining employees becoming the accountability sinks to bear the blame when the agent makes a mistake. AI doesn't have to replace everybody to cause widespread misery.

Comment by ahf8Aithaex7Nai 10 hours ago

Yes, I understand that it's about saving on labor costs. Depending on how successful this is, it could lead to major changes in the labor market in economies where skilled workers have been doing quite well up to now.

Comment by ForHackernews 22 hours ago

Unless the state of the art has advanced, it was the case that grandmasters playing with computer assistance ("centaur chess") played better than either computers or humans alone.

Comment by ErroneousBosh 22 hours ago

> Computers replaced humans as the best chess players

Computers can't play chess.

Comment by impossiblefork 1 day ago

I think the big problem here though, is that humans go from being mandatory to being optional, and this changes the competitive landscape between employers and workers.

In the past a strike mattered. With robots, it may have to go on for years to matter.

Comment by baq 22 hours ago

A strike going long enough and becoming big enough becomes a political matter. In the limit, if politicians don't find a solution, blood gets spilled. If military and police robots are in place by that time, you can ask yourself what's the point of those unproductive human leeching freeriders at all.

Comment by simgt 22 hours ago

In this scenario wages will have been driven down so much that there will be barely anyone left to buy the products made by these fully automated corps. A strike won't work, but a revolt may and is more likely to happen.

Comment by gniv 23 hours ago

> most companies will still have more work to do than resources to assign to those tasks

This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project there was always too much work. And always too many projects. I am no longer with the company, but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development.

Comment by somenameforme 1 day ago

Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.

Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.

[1] - https://en.wikipedia.org/wiki/Keynesian_economics

Comment by schmichael 1 day ago

Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) than they are to reduce work hours.

At least since the Industrial Revolution, and probably before, the only advance that has led to shorter work weeks is unions and worker protections. Not technology.

Technology may create more surplus (food, goods, etc.), but there's no guarantee what form that surplus will reach workers in, if it reaches them at all.

Comment by bloppe 1 day ago

Margins require a competitive edge. If productivity gains are spread throughout a competitive industry, margins will not get bigger; prices will go down.

Comment by LPisGood 1 day ago

That feels optimistic. This kind of naive free market ideology seems to rarely manifest in lower prices.

Comment by degamad 11 hours ago

That's because free markets don't always result in competitive industries.

Comment by bloppe 13 hours ago

Then maybe you've never worked in a competitive industry. I have. Margins were very small.

Comment by LPisGood 8 hours ago

I’ve certainly spent time in the marketplace buying or not buying products.

Comment by HDThoreaun 12 hours ago

Every competitive industry has tiny margins. High margin business exists because of lack of competition.

Comment by LPisGood 8 hours ago

I think there are plenty of counter examples.

Comment by HDThoreaun 3 hours ago

Every rule has exceptions. Usually it's because of some quirk of the market. The most obvious example is adtech, which is able to sustain massive margins because the consumers get the product for free, so they see no reason to switch, and the advertisers are forced to follow the consumers. Tech in general has high margins, but I expect them to fall as the offerings mature. Companies will always try to lock in their users like AWS/Oracle do, but that's just a sign of an uncompetitive market imo.

Comment by anon7000 1 day ago

> Productivity gains are more likely to be used to increase margins (profits and therefore value to shareholders) then it is to reduce work hours

I mean, that basically just sums up how capitalism works. Profit growth is literally (even legally!) the only thing a company can care about. Everything else, like product quality, is in service to that goal.

Comment by timClicks 1 day ago

Sorry if this is somewhat pedantic, but I believe that only US companies (and possibly only Delaware corporations?) are bound by the requirement to maximize shareholder value, and then only by case law rather than statute. Other jurisdictions allow the directors more discretion, or place more weight on the company's constitution/charter.

Comment by tirant 1 day ago

That’s not a good summary of capitalism at all because you omit the part where interests of sellers and buyers align. Which is precisely what has made capitalism successful.

Profit growth is based primarily on offering the product that best matches consumer wishes at the lowest price and production cost possible. That benefits both the buyer and the seller. If the buyer does not care about product quality, then you will not have any company producing quality products.

The market is just a reflection of that dynamic. And in the real world we can easily observe that: many market niches are dominated by quality products (outdoor and safety gear, professional and industrial tools…) while others tend to be dominated by low quality (low-end fashion, toys).

And that result is not imposed by profit growth but by the average consumer preference.

You can of course disagree with those consumer preferences and not buy low-quality products; that's why you will most probably also find high-quality products in any market niche.

But you cannot blame companies for that. What they sell is just the result of aggregated buyer preferences and of free market decisions.

Comment by LtWorf 1 day ago

Armies and Pinkertons made capitalism successful.

Comment by goatlover 1 day ago

A failure of politics and the media, then. The majority of voters have been fooled into voting against their economic interests.

Comment by thedailymail 1 day ago

In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.

"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."

Comment by somenameforme 1 day ago

A study [1] I was looking at recently was extremely informative. It's a poll from UCLA given to incoming classes that they've been carrying out since the 60s. In 1967, 86% of students felt it was "essential" or "very important" to "[develop] a meaningful philosophy of life", while only 42% felt the same about "being very well off financially." By 2015 those values had essentially flipped, with only 47% viewing a life philosophy as very important, and 82% viewing being financially well off as very important.

It's rather unfortunate it only began in 1967, because I think we would see an even more extreme flip if we were able to just go back a decade or two more, and back towards Keynes' time. As productivity and wealth accumulation increased, society seems to have trended in the exact opposite direction he predicted. Or at least there's a contemporary paradox. Because I think many, if not most, younger people hold wealth accumulation with some degree of disdain yet also seek to do the exact same themselves.

In any case, in a society where wealth is seen as literally the most important aspect in life, it's not difficult to predict what follows.

[1] - https://www.heri.ucla.edu/monographs/50YearTrendsMonograph20...

Comment by odo1242 1 day ago

Well, keep in mind students at UCLA in 1967 were probably among the wealthiest in the country. There are a lot more average people at UCLA nowadays. Of course being financially well off wouldn't be the most important thing if you were already financially well off.

Comment by somenameforme 23 hours ago

Interesting question that the study can also answer, because it also asked about parental income!

---

1966 Median Household Income = $7400 [1].

51% of students in the $0-9999 bracket

Largest chunk of students (33%) in $6k-$9999 bracket.

Percent of students from families earning at least 2x median = 23%

---

2015 Median Household Income = $57k [2].

65% of students came from families earning more than $60k.

Largest chunk of students (18%) in $100k-$150k bracket.

Percent of students from families earning at least 2x median = 44%

---

So I think it's fairly safe to say that the average student at UCLA today comes from a significantly wealthier family than in 1966.

[1] - https://www.census.gov/library/publications/1967/demo/p60-05...

[2] - https://fred.stlouisfed.org/series/MEHOINUSA646N

Comment by odo1242 9 hours ago

Oh, I guess not then lol

Comment by tirant 1 day ago

I wonder what the proportion of answers would be across different economic levels of society.

What we know so far though is that many of the traditional values were bound to the old society structures, based on the traditional family.

The advent of the sexual revolution, brought by the contraception pill, completely obliterated those structures, changing the family paradigm since then. Only accentuated in the last decade by social media and the change in the sexual marketplace due to dating apps.

Probably today many young people would just prioritize reputation (e.g. followers) over wealth and life philosophy, as that seems to be the trend that dictates the sexual marketplace dynamics.

Comment by imtringued 15 hours ago

The paradox is that the general principles of the market work, but the market is invisibly dysfunctional in its details.

It is generally true that higher income jobs are allocated to higher productivity workers, but it does not follow that high incomes imply high productivity and vice versa for low incomes.

If you combine the above with a disequilibrium market where the supply of labor exceeds the demand for labor, then from a naive perspective it would appear as if the unemployed deserve their unemployment.

After all, the most productive members are all employed and rewarded for their efforts. The unemployed are just lazy (voluntarily unemployed) and incompetent (society is better off without them). Any form of punishment is seen as justified and not some structural failing of the system.

The problem is that if there is a labor market disequilibrium, there will always be unemployed people and even if you think the productivity ranking is a good thing, it just means that if one of the "lazy" people suddenly becomes "hard working", they will just take the place of someone else and nothing has changed other than that the standard for laziness has risen.

Even if people notice that the system is fundamentally broken, they realize that individually, they are either a beneficiary of the system and therefore don't see a reason to change it or they don't have the ability to change the system and rather focus on taking someone else's place.

This will result in an artificial Darwinian rat race where people see each other as competitors to defeat.

This is my explanation for why immigrants make a good scapegoat even though immigration doesn't affect the rules of the game at all.

Here is an analogy via a game of musical chairs. There is the perception that more immigrants means more players competing for chairs. This is a naive interpretation that looks obvious. What is being forgotten is that each player is bringing a new chair and the number of missing chairs is a percentage of the number of players. The truth is that having more immigrants means you can take their chair away for yourself. So immigration is not causative here. The problem is that there were never enough chairs to begin with no matter how many people are playing the game.

Comment by refactor_master 1 day ago

> We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years

Still haven't gotten rid of work for work's sake being a virtue, which explains everything else. Welfare? You don't "deserve" it. Until we solve this problem, we're more or less heading straight for feudalism.

Comment by azan_ 21 hours ago

> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.

Didn't we also get standards of living much higher than he could ever have imagined? I think blaming everything on billionaires is really misguided and shallow.

Comment by somenameforme 2 hours ago

It depends on how you value things. I'd prefer to have a surplus of time and a scarcity of gizmos, rather than a surplus of gizmos and a scarcity of time. Obviously basic needs being met is very important, but we've moved way beyond that as a goal, while somehow also kind of simultaneously missing it.

Comment by machomaster 1 day ago

> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.

We just instead started doing Bullshit Jobs. https://en.wikipedia.org/wiki/Bullshit_Jobs

Comment by hn_throwaway_99 1 day ago

I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old adage "How did you go bankrupt? - Slowly at first, then all at once."

I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.

Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".

Comment by pcrh 1 day ago

For what it's worth, the decline in use of horses was much slower than you might expect. The Model T Ford reached peak production in 1925 [0], and, for an inexact comparison (I couldn't find numbers for the US), the horse population of France started to decline in 1935 but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].

[0] https://en.wikipedia.org/wiki/Ford_Model_T#Mass_production

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/

Comment by hn_throwaway_99 1 day ago

> For what it's worth, the decline in use of horses was much slower than you might expect.

Not really, given that the article goes into detail about this in the first paragraph, with US data and graphs: "Then, between 1930 and 1950, 90% of the horses in the US disappeared."

Comment by pcrh 23 hours ago

Eyeballing the chart in the OP and the French data shows them to have a comparable pattern. While OP's data is horses per person, and the French is total number of horses, both show a decline in horse numbers starting about 10 years after widespread adoption of the motor vehicle and falling to 50% of their peak in the mid- to late-1950's, with the French data being perhaps a bit over 5 years delayed compared to the US data. That is, it took 25 to 30 years after mass production of automobiles was started by Ford for 50% of "horsepower" to be replaced.

The point isn't to claim that motor vehicles did not replace horses, they obviously did, but that the replacement was less "sudden" than claimed.

Comment by hn_throwaway_99 3 hours ago

> That is, it took 25 to 30 years after mass production of automobiles was started by Ford for 50% of "horsepower" to be replaced

I just googled "average horse lifespan", and the answer that came back was, exactly, "25-30 years". There's a clue in that number for you.

Comment by pcrh 3 hours ago

Yes, I considered that. Someone using a horse-drawn wagon to deliver goods about town would likely not consider buying a truck until the cart horse needed replacing.

The working life of a horse may be shorter than the realistic lifespan. Searching for "horse depreciation" gives 7 years for a horse under age 12, the prime years for a horse being between 7 and 12 yrs old, depending on what it is used for.

I'm willing to accept the input of someone more knowledgeable about working horses, though!

Comment by throw9384940 1 day ago

The French eat horse meat. Cattle are still present in the US...

Comment by Den_VR 1 day ago

If there's more work than resources, then is that low-value work, or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive, but I'm not sure it will be societally good.

Comment by twodave 11 hours ago

Not low-value or it just wouldn't be on the board. Lower value? Maybe, but there are many, many reasons things get pushed down the backlog. As many reasons as there are kinds of companies. Most people don't work at one of the big tech companies where work priorities and business value are so stratified. There are businesses that experience seasonality, so many of the R&D activities get put on the backburner until the busy season is over. There are businesses that have high correctness standards, where bigger changes require more scrutiny, are harder to fit into a sprint, and end up getting passed over for smaller tasks. And some businesses just require a lot of contextual knowledge. I wouldn't trust an AI to do a payroll calculation or tabulate votes, for instance, any more than I would trust a brand new employee to dive into the deep end on those tasks.

Comment by retinaros 1 day ago

Most corporate people don't provide direct value…

Comment by ben_w 21 hours ago

> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.

They have more work to do until they don't.

The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.

We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.

As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.

And now we have fast-fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people in the pre-industrial age did. Even when demand is boosted by having clothes that don't last, we still make enough to supply demand.

Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.

Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm

> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.

This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.

Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.

Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".

But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up just like all the rest of us can, but it's still a job task that this generation of AI is automating at the same time as all the other bits.

Comment by twodave 11 hours ago

I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending.

We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength where the scientists are trying to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.

Comment by MLgulabio 22 hours ago

[dead]

Comment by billisonline 1 day ago

An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: when that moment comes, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.

Comment by rukuu001 1 day ago

I think a lot about how much we altered our environment to suit cars. They're not a perfect solution to transport, but they've been so useful we've built tons more road to accommodate them.

So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.

Comment by sotix 18 hours ago

This is a really interesting observation! Cars don't have to dominate our city design, and yet they do in many places. In the USA, you basically only have NYC and a few less convenient cities if you want to avoid a city designed for cars. Society has largely been reshaped with the assumption that cars will be used, whether or not you'd like to use one.

What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don't integrate technology in their lives as much as the average person does? That's a much more extreme lifestyle change than moving to NYC to get away from cars.

Comment by billisonline 9 hours ago

"Tons of power generation?" Perhaps we will go in that direction (as OpenAI projects), but it assumes the juice will be worth the squeeze, i.e., that scaling laws requiring much more power for LLM training and/or inference will deliver a qualitatively better product before they run out. The failure of GPT 4.5, while not a definitive end to scaling, was a pretty discouraging sign.

Comment by dredmorbius 13 hours ago

We didn't just build roads, we utterly changed land-use patterns to suit them.

Cities, towns, and villages (and there were far more of the latter then) weren't walkable out of choice, but necessity. At most, by the late 19th century, urban geography was walkable-from-the-streetcar, and suburbs walkable-from-railway-station. And that only in the comparatively few metros and metro regions which had well-developed streetcar and commuter-rail lines.

With automobiles, housing spread out, became single-family, nuclear-family, often single-storey, and frequently on large lots. That's not viable when your only options to get someplace are by foot, or perhaps bicycle. Shopping moved from dense downtowns and city-centres (or perhaps shopping districts in larger cities) to strips and boulevards. Supermarkets and hypermarkets replaced corner grocery stores (which you could walk to and from with your groceries in hand, or perhaps in a cart). Eventually shopping malls were created (virtually always well away from any transit service, whether bus or rail), commercial islands in shopping-lot lakes. Big-box stores ditto.

It's not just roads and car parks, it's the entire urban landscape.

AI, should this current fad continue and succeed, will likely have similarly profound secondary effects.

Comment by ForHackernews 22 hours ago

Customer service will be almost fully automated, and human customers will be forced to adapt to the bots.

Comment by xtracto 14 hours ago

It already has with IVRs. I wonder whether, as a generalization, current technology will keep being used to provide layers and layers of "automation" for communication between people.

SDR Agents will communicate with "Procurement" Agents. Customer Support Agents will communicate with Product Agents. Coffee Barista Agents will talk with Personal Assistant Agents.

People will communicate less and less among each other. What will people talk about? Who will we talk to?

Comment by usrbinbash 19 hours ago

Or customers will find other providers who don't annoy them.

Comment by dredmorbius 13 hours ago

History so far suggests this is a dim possibility.

Comment by ForHackernews 13 hours ago

Exactly why Comcast and Google went out of business with their abysmal customer support.

Comment by creshal 23 hours ago

> executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.

Remember when "AGI" was the weasel word because 1980s AI kept on not delivering?

Comment by rvz 1 day ago

Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.

To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.

Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.

Comment by cubefox 19 hours ago

> I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can

That's highly irrelevant because if it were otherwise, we would already be replaced. The article was talking about the future.

Comment by danw1979 19 hours ago

The article was speculating about the future.

Comment by cubefox 17 hours ago

It was doing both.

Comment by littlestymaar 18 hours ago

> An engine performs a simple mechanical operation

It only appears "simple" because you're used to seeing working engines everywhere without ever having to maintain them, but neither the previous generations nor the engineers working on modern engines would agree with you on that.

An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.

Comment by ible 1 day ago

People are not simple machines or animals. Unless AI becomes strictly better than humans (and humans + AI) at all activities, from the perspective of other humans, there will still be lots of things for humans to do to provide value for each other.

The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.

If the benefits of AI accrue to, or are captured by, a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.

Comment by marcus_holmes 1 day ago

I'm optimistic.

Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.

There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".

You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.

Comment by reeredfdfdf 1 day ago

"I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better."

I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides.

Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.

Comment by marcus_holmes 5 hours ago

> Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.

Yes, that's what happens. All those people find other jobs, do other work, and that new work is usually much less boring than the old work, because boring work is easier to automate.

Historically, economies have changed and grown because of automation, but not collapsed.

Comment by nopinsight 22 hours ago

AI agents might be able to automate 80% of certain jobs in a few years but that would make the remaining 20% far more valuable. The challenge is to help people rapidly retrain for new roles.

Humans will continue to have certain desires far outstripping the supply we have for a long time to come.

We still don’t have cures for all diseases, personal robot chefs & maids, and an ideal house for everyone, for example. Not all have the time to socialize as much as they wish with their family and friends.

There will continue to be work for humans as long as humans provide value & deep connections beyond what automation can. The jobs could themselves become more desirable with machines automating the boring and dangerous parts, leaving humans to form deeper connections and be creatively human.

The transition period can be painful. There should be sufficient preparation and support to minimize the suffering.

Workers will need to have access to affordable and effective methods to retrain for new roles that will emerge.

"Soft" skills such as empathetic communication and tact could surge in value.

Comment by Covenant0028 22 hours ago

> The jobs could themselves become more desirable with machines automating the boring and dangerous parts

Or, as Cory Doctorow argues, the machines could become tools to extract "efficiency" by helping the employer make their workers lives miserable. An example of this is Amazon and the way it treats its drivers and warehouse workers.

Comment by nopinsight 22 hours ago

That depends on the social contract we collectively decide (in a democracy at least). Many possibilities will emerge, and people will need to be aware and adapt much faster than at most times in history.

Comment by visarga 1 day ago

An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and have no liability for it. AI has no skin and depending on application, much higher upper bound for damage. A digit read wrong in a medical transcript, patient dies.

> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.

Managing risks, can't automate it. Every project and task needs a responsibility sink.

Comment by ipython 1 day ago

You can bound risk on AI agents just like an ATM. You just can't rely upon the AI itself to enforce those limits, of course. You need to place limits outside the AI's reach. But this is already documented best practice.

The point about AI not having "skin" (I assume "skin in the game") is well taken. I say often that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". Very important lesson that some company will learn publicly soon enough.
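
To make "limits outside the AI's reach" concrete, here's a minimal sketch of the pattern in Python (all names hypothetical): the agent proposes actions, but the hard caps live in ordinary code the model can't touch, much like an ATM can't dispense more than its cassette holds.

    MAX_TRANSFER_USD = 500       # hypothetical cap on any single action
    MAX_DAILY_TOTAL_USD = 2_000  # hypothetical cap on cumulative daily damage

    class LimitExceeded(Exception):
        pass

    class ActionGateway:
        """Every agent-proposed action passes through here. The checks
        live outside the model, so no prompt or output can bypass them."""

        def __init__(self):
            self.spent_today = 0.0

        def execute_transfer(self, amount_usd: float, destination: str):
            if amount_usd > MAX_TRANSFER_USD:
                raise LimitExceeded(f"{amount_usd} exceeds per-action cap")
            if self.spent_today + amount_usd > MAX_DAILY_TOTAL_USD:
                raise LimitExceeded("daily cap hit; human review required")
            self.spent_today += amount_usd
            print(f"transferred {amount_usd} to {destination}")

The caps get reviewed and deployed like any other code, which is the point: the bound on damage is set by the gateway, not by how well the model behaves.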

Comment by marcus_holmes 1 day ago

> Every project and task needs a responsibility sink.

I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions".

But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.

Comment by sotix 18 hours ago

> Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive.

> And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.

I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spit out from SAP was mind numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me. Interacting with the terrible software tools is horrific.

I guess people that would have done accounting are doing other, hopefully more interesting jobs, in the sense that the absolute number of US accountants is in steep decline due to the low pay and the highly boring work. I myself am certainly one of them as a software engineer career switcher. But the actual work for a modern accountant has not been improved in terms of interesting tasks to do. It's also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into.

Comment by marcus_holmes 5 hours ago

I did qualify it with "most people" because of people like you who enjoy that kind of work :).

I would hate that work, but luckily we have all sorts of different people in the world who enjoy different things. I hope you find something that you really enjoy doing.

Comment by mrwrong 18 hours ago

> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.

It's interesting how it's never your job that will be automated away in this fantasy; it's always someone else's.

Comment by marcus_holmes 5 hours ago

I have absolutely had that job, and it sucked. I also worked as a farm hand, a warehouse picker, a construction site labourer, and a checkout clerk. Most of that work is either already automated or about to be, thankfully.

Comment by gverrilla 21 hours ago

"benefits" = shareholder profits ++

Comment by jondwillis 1 day ago

Workshopping this tortured metaphor:

AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.

The owners of the tech need to reinvest in the hosts.

Comment by hephaes7us 1 day ago

Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.

Comment by kfarr 1 day ago

Yeah I feel like the real ah ha moment is still coming once there is a GPT-like thing that has been trained on reality, not its shadow.

Comment by chongli 1 day ago

Yes and reality is the hard part. Moravec’s Paradox [1] continues to ring true. A billion years of evolution went into our training to be able to cope with the complexity of reality. Our language is a blink of an eye compared to that.

[1] https://en.wikipedia.org/wiki/Moravec's_paradox

Comment by baq 1 day ago

Reality cannot be perceived. A crisp shadow is all you can hope for.

The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around we might end up worse than horses.

Comment by agos 19 hours ago

but that crisp shadow is exactly what we call perception

Comment by qsera 1 day ago

Look I think that is the whole difficulty. In reality, doing the wrong thing results in pain, and the right thing in relief/pleasure. A living thing will learn from that.

But machines can experience neither pain nor pleasure.

Comment by visarga 1 day ago

> What happens when there are no more hosts to donate more training-blood?

LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.

Comment by scotty79 1 day ago

There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good human Go players. It played against itself, eventually discarding human source knowledge, and achieved those levels.

Comment by ghssds 1 day ago

People are animals.

Comment by goatlover 1 day ago

When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor.

Comment by faidit 1 day ago

Sounds like something a goat lover would say..

Comment by traverseda 1 day ago

I'd be more worried about the implicit power imbalance. It's not about what humans can provide for each other; it's about what humans can provide for a handful of ultra-wealthy oligarchs.

Comment by jordwest 1 day ago

Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.

But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.

I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.

Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.

Comment by xeonmc 1 day ago

Charlie Chaplin's speech is more relevant now than ever before:

https://www.youtube.com/watch?v=J7GY1Xg6X20

Comment by jordwest 1 day ago

I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since.

"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."

Spoken 85 years ago and even more relevant today

Comment by vkou 1 day ago

The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of those in those bunkers.

They sure as shit won't be content to leave the rest of us alone.

Comment by jordwest 1 day ago

Yeah I know it's an unrealistic ideal but it's fun to think about.

That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason the pursuit of money/power/status never lets up is that no amount of money/power/status can satiate that fear, but somehow, naively, there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.

Comment by faidit 1 day ago

Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation

Comment by jordwest 1 day ago

I like this future, the Meta-verse has found its target market

Comment by _DeadFred_ 14 hours ago

This would make a good Black Mirror episode. The character lives in a totally dystopian world making f'd up moral choices. Their choices make the world worse. It seems nightmarish to us, the viewer. Then towards the end they pull back: they unplug and are living in a utopia. They grab a snack, are greeted by people that love and care about them, then they plug back in and go back to being their dystopian tech-bro ideal self in their dream/ideal world.

Comment by visarga 1 day ago

> It's not about what humans can provide for each other; it's about what humans can provide for a handful of ultra-wealthy oligarchs.

You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you be independent.

Comment by d--b 1 day ago

I was trying to phrase something like this, but you said it a lot better than I ever could.

I can’t help but smile at the possibility that you could be a bot.

Comment by richardles 1 day ago

I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never tired mentor available. It gives them more confidence in the first drafted code changes / design docs. But I don't think the horse analogy works.

It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.

What LLMs are killing is:

- noisy Slacks with junior folks' questions. Those are now your Gemini / ChatGPT sessions.

- tedious implementation sessions.

The vast majority of the work is still human led from what I can tell.

Comment by lbreakjai 20 hours ago

This sounds horrible. Onboarding should ideally be only marginally about the "what". After all, we already have a very precise and unambiguous system to tell us what the system does: the code.

What I want to know when I join a company is "why" the system does what it does. Sure, give me pointers, some overview of how the code is structured, that always helps, but if you don't tell me why, how am I supposed to work?

$currentCompany has the best documentation I've seen in my career. It's been spun off from a larger company, from people collaborating asynchronously and remotely whenever they had some capacity.

No matter how diligent we've been, as soon as the company started in earnest and we got people fully dedicated to it, there have been a ton of small decisions that happened during a quick call, or on a Slack thread, or as a comment on a Figma design.

This is the sort of "you had to be there" context the onboarding should aim to explain, and I don't see how LLMs help with that.

Comment by candiddevmike 1 day ago

That sounds like a horrible onboarding experience. Human mentors provide a lot more than just answers to questions, like context, camaraderie, social skills, or even coping mechanisms. Starting a new job can be terrifying for juniors, and if their only friend is a faceless chatbot...

Comment by richardles 1 day ago

You're right. We need to keep tabs on the culture for new hires for the reasons you mentioned. LLMs are really good at many onboarding tasks, but not the social ones.

I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering for some of them. The lack of building human context / connections etc. is real, and I don't think LLMs can meaningfully help there. Hence my skepticism for the horse analogy.

Comment by skywhopper 17 hours ago

If they’re badly documented, how does the LLM help?

Comment by 8note 1 day ago

You still lose a bit from not having those juniors' questions around - where is your documentation sucking, or where is your code confusing?

Comment by NitpickLawyer 1 day ago

We are now at a point where the tech can help with both of those, today. You can have a cc session "in a loop" going through your docs / code and trying to do x and y, and if it gets stuck, that's a pretty good signal that something sucks there. At least you can get a heatmap of what works ootb, and what needs more eyes.
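
As a rough sketch of that loop in Python (run_agent() is a hypothetical wrapper around whatever harness you use, returning whether the task finished within a step budget; the placeholder body just makes the sketch runnable):

    def run_agent(task: str, docs_dir: str, max_steps: int = 20) -> bool:
        # Hypothetical stand-in: call out to your agent harness here.
        return False  # placeholder: pretend the agent got stuck

    onboarding_tasks = [
        "set up a local dev environment",
        "run the test suite",
        "add a field to the user model",
    ]

    friction = {}
    for task in onboarding_tasks:
        # A few attempts per task gives a rough heatmap rather than a
        # single noisy pass/fail.
        friction[task] = sum(not run_agent(task, "./docs") for _ in range(3))

    # Tasks with the most failures are where docs/code need more eyes.
    for task, fails in sorted(friction.items(), key=lambda kv: -kv[1]):
        print(f"{fails}/3 failed: {task}")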

Comment by baq 1 day ago

Both questions are getting scary good answers from the latest models. Yes, I tried, on a large proprietary code base which shouldn’t be included in any training set.

Comment by inquirerGeneral 22 hours ago

[dead]

Comment by d4rkn0d3z 23 hours ago

It might be better to think about what a horse is to a human: mostly, a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.

In times past, the only people on earth who had their standard of living raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed we have learned to make technologies that provide enough energy slaves to the common man that everyone lives a life that a king would have envied in times past.

So the question arises as to whether AI, or the pursuit of AGI, provides more or fewer energy slaves to the common man.

Comment by myrmidon 23 hours ago

The big problem I see with AI is that it undermines redistribution mechanisms in a novel and dangerous way; despite industrialization, human labor was always needed to actually do anything with capital, and even people born in poverty could do work to get their share of the growing economic pie.

AI kinda breaks this; there is a real risk that human labor is going to become almost worthless this century, and this might mean that the common man ends up worse off despite nominal economic growth.

Comment by nicbou 23 hours ago

Since AI is using my work without permission and capturing the value on behalf of tech companies, I feel like I am an energy slave to AI.

Comment by zwnow 23 hours ago

The goal is to eradicate the common man. Turns out you dont need a lot of energy, food, water, space if there aren't 8 billion humans to supply. It's the tech billionaires dream, replacing humans with robotic servants. Corporations do not care about the common man.

Comment by dominicrose 21 hours ago

Full robotic servants are very costly, only AI servants are cheap enough. But I do think we're going to see more wars and robotic use in wars.

Comment by MLgulabio 22 hours ago

[dead]

Comment by namesbc 1 day ago

Software engineers used to know that measuring lines of code written was a poor metric for productivity...

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

Comment by underyx 1 day ago

Ctrl-F 'lines', 0 results

Ctrl-F 'code', 0 results

What is this comment about?

Comment by chongli 23 hours ago

The linked short story is barely 5 paragraphs long. You could have just read it instead of writing an insubstantial remark like this. It’s a fun anecdote about a famous programmer (Bill Atkinson).

Comment by underyx 23 hours ago

I’ve read it multiple times before, it’s irrelevant in this discussion.

Comment by pmg101 1 day ago

"The LLM can write lines of code, sure, but can it be productive?" is, I think, the implied question.

Comment by kalkin 1 day ago

Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...

Comment by anon7000 1 day ago

Maybe it was edited. I count at least 6 instances of the word “code”

Comment by Thorrez 19 hours ago

underyx was doing the ctrl+f on the original (horses) article, not the negative 2000 lines of code article.

It's a confusing comment. I misinterpreted it myself too originally.

Comment by actionfromafar 1 day ago

So, a free idea from me: train the next coding LLM to produce not regular text, but patches that shorten code while still keeping it working the same.

Comment by NitpickLawyer 1 day ago

They can already do that. A few months ago I played around with the Kaggle Python golf competition. Got to the top 50 without writing a line of code myself. Modern LLMs can take a piece of code and "golf" it. And modern harnesses (cc / codex / gemini cli) can take a task and run it in a loop if you can give them clear scores (e.g. code length) and test suites outside of their control (i.e. whether the solution is valid or not).

No idea why you'd want this in a normal job, but the capabilities are here.
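
For the curious, the harness loop is simple. A minimal sketch in Python, assuming a solution.py and a pytest suite already exist (ask_model() is a hypothetical stand-in for whatever LLM API you use): the score is code length, and the test suite stays outside the model's control.

    import subprocess

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in: wire this to your LLM API of choice.
        # The placeholder echoes the code back so the sketch runs as a no-op.
        return prompt.split("\n", 1)[1]

    def tests_pass() -> bool:
        # The validity check lives outside the model's reach.
        return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0

    best = open("solution.py").read()
    for _ in range(50):  # golfing iterations
        candidate = ask_model(
            "Rewrite this Python so it behaves identically but is shorter:\n" + best
        )
        open("solution.py", "w").write(candidate)
        if len(candidate) < len(best) and tests_pass():
            best = candidate  # shorter and still valid: keep it
        else:
            open("solution.py", "w").write(best)  # roll back the attempt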

Comment by actionfromafar 22 hours ago

LLMs won't ever shut up. That seems unfixable. But a "hack" would perhaps be to train them to make longer patches that actually remove code.

Comment by wyre 1 day ago

gonna tell claude to write all my code in one line

Comment by lomase 22 hours ago

Imagine you are a Perl programmer writing js...

Comment by socketcluster 1 day ago

I think my software engineering job will be safe so long as big companies keep using average code as their training set. This is because the average developer creates unnecessary complexity which creates more work for me.

The way the average dev structures their code requires like 10x the number of lines as I do and at least 10x the amount of time to maintain... The interest on technical debt compounds like interest on normal debt.

Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.

IMO, it's not so different to how entrepreneurship works. But with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.

Comment by alex_duf 21 hours ago

The majority of drivers believe they’re better than average [1]

1: https://www.lbec-law.com/blog/2025/04/the-majority-of-driver...

Comment by bad_username 1 day ago

> Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies

I am regularly tempted to do this (I have done this a few times), but unless I truly own the project (being the tech lead or something), I stop myself. One of the reasons is reluctance to trespass uninvited on someone else's territory of responsibility, even if they do a worse job than I could. The human cost of such a situation (to the project and ultimately to myself) is usually worse than the cost of living with the status quo. I wonder what your thoughts are on this.

Comment by valine 1 day ago

Humans don’t learn to write messy complex code. Messy, complex code is the default, writing clean code takes skill.

You’re assuming the LLM produces extra complexity because it’s mimicking human code. I think it’s more likely that LLMs output complex code because it requires less thought and planning, and LLMs are still bad at planning.

Comment by socketcluster 1 day ago

Totally agree with the first observation. The default human state seems to be confusion. I've seen this over and over in junior coders.

It's often very creative how junior devs approach problems. It's like they don't fully understand what they're doing, and the code itself is part of the exploration and brainstorming process, trying to find the solution as they write... Very different from how senior engineers approach coding, where you don't even write your first line until you have a clear high-level picture of all the parts and how they will fit together.

About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable... So if it sees your code is poorly architected, it will generate more of that poorly architected code on top. If it sees hacks in your codebase, it will assume hacks are OK and give you more hacks.

When I use an LLM on a poorly written codebase, it does very poorly and it's hard to solve any problem or implement any feature and it keeps trying to come up with nasty hacks... Very frustrating trial and error process; eats up so many tokens.

But when I use the same LLM on one of my carefully architected side projects, it usually works extremely well, never tries to hack around a problem. It's like having good code lets you tap into a different part of its training set. It's not just because your architecture is easier to build on top, but also it follows existing coding conventions better and always addresses root causes, no hacks. Its code style looks more like that of a senior dev. You need to keep the feature requests specific and short though.

Comment by valine 15 hours ago

> About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable

This is really only true of base models that haven't undergone post-training. The big difference between ChatGPT and GPT-3 was OpenAI's instruct fine-tuning. Out of the box, language models behave how you describe. Ask them a question and half the time they generate a list of questions instead of an answer. The primary goal of post-training is to coerce the model into a state in which it's more likely to output things as if it were a helpful assistant. The simplest version is text at the start of your context window like: "the following code was written by a meticulous senior engineer". After a prompt like that, the most likely next tokens will never be the model's imitation of sloppy code. Instruct fine-tuning does the same thing, but as permanent modifications to the weights of the model.
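
As a toy illustration of that prefix trick against a raw completion endpoint (complete() is hypothetical, standing in for any base-model API; the placeholder echo just keeps the sketch runnable):

    def complete(prompt: str) -> str:
        # Hypothetical stand-in for any raw text-completion endpoint.
        return prompt  # placeholder echo

    task = "def parse_config(path):"

    # A base model just continues the text, so the likely continuation
    # depends heavily on what the prefix implies about the author.
    typical = complete(task)
    careful = complete(
        "# The following code was written by a meticulous senior engineer.\n"
        + task
    )
    # Instruct fine-tuning bakes this conditioning into the weights,
    # so you no longer have to prepend it yourself.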

Comment by hatch_q 22 hours ago

You're just very opinionated. Other software engineers just give you space because they don't want to confront you, and they don't want any conflict with you as it's just a waste of time.

Six months is also the average time it takes people like you to burn out on a project. Usually it starts with a relatively simple change/addition requested by a customer that turns into a 3-month-long refactor - "because the architecture is wrong". And we just let you do it, because we know fighting windmills is futile.

Comment by retrac98 1 day ago

Unnecessary complexity isn’t much of a problem when the code is virtually free to maintain or throw away and replace.

Comment by socketcluster 1 day ago

Depends on the size and complexity of the problem that the system is solving. For very complex problems, even the most succinct solution will be complex, and not all parts of the code can be throwaway code. You have to start stacking the layers of abstractions, and some code becomes critical. Think of the Linux kernel: you can't throw away the Linux kernel. You can't throw away Chromium or the V8 engine... Millions of systems depend on those. If they had issues or vulnerabilities and nobody to maintain them, it would be a major problem for the global economy.

Comment by gjvc 18 hours ago

companies have been abandoning products for decades, and shuffling ongoing support onto other entities. nothing has to be "thrown away" as you keep suggesting.

Comment by lonelyasacloud 19 hours ago

Even if a throw away and replace strategy is used, eventually a system's complexity will overrun any intelligence's ability to work effectively with it. Poor engineering will cause that development velocity drop off to happen earlier.

Comment by exfalso 21 hours ago

Although it's sad, I have to agree with what you're alluding to. I think there is huge overhead and waste (in terms of money, compute resources and time) hidden in the software industry, and at the end of the day it just comes down to people not knowing how to write software.

There is a strange dynamic currently at play in the software labour market where the demand is so huge that the market can bear completely inefficient coders. Even though the difference between a good and a bad software engineer is literally orders of magnitude.

Quite a few times I encountered programmers "in the wild" - in a sauna, on the bus etc, and overheard them talking about their "stack". You know the type, node.js in a docker container. I cannot fathom the amount of money wasted at places that employ these people.

I also project that actually, if we adopt LLMs correctly, these engineers (which I would say constitute a large percentage) will disappear. The age of useless coding and infinite demand is about to come to an end. What will remain is specialist engineer positions (base infra layer, systems, hpc, games, quirky hardware, cryptographers etc). I'm actually kind of curious what the effect on salary will be for these engineers, I can see it going both ways.

Comment by tuesdaynight 20 hours ago

If they became big companies with that "unnecessary complexity", maybe code quality does not matter as much as you want to believe. Furthermore, even the fastest or best-behaved horses were replaced.

Comment by jsheard 1 day ago

Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?

Comment by StilesCrisis 1 day ago

It also puts a thumb on the scale for AI, which tends to emit pages of text to answer simple questions.

Comment by garciasn 1 day ago

Sounds like any post-secondary or graduate student, or management consultant out there, given there are, very often, page/word-count or hours requirements. Considering the model corpora, wordiness wins out.

Comment by jsheard 1 day ago

The chart is actually words "thought or written", so I guess they are running up the numbers even more by counting Claude's entire inner monologue on top of what it ultimately outputs.

Comment by sanex 18 hours ago

There was a time, when these models were novel, that I'd use them to write for me. After a year or so the verboseness and lack of personality got old. Now all I have is a decent proofreader. Maybe they'll take over my job, but I'm finding the trend going the other way right now.

Comment by kashyapc 23 hours ago

It's not merely cost per word, but it is even more bizarre: "cost per word thought", whatever that is. Most of these "word thoughts" from LLMs of today are just auto-completed large dumps of text.

Comment by bdangubic 1 day ago

These are not just "words" but answers to questions that people who got a job at Anthropic had…

Comment by 1970-01-01 1 day ago

How about we stop trying the analogy clothing on and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.

Comment by Rastonbury 15 hours ago

Probably the point is to think about whether the horse or chess engine analogy is a good one. The premise is that there will come a certain point when technology reaches a level that suddenly makes the alternative obsolete. I don't have good reasons to think that AI won't eventually be able to automate simple jobs with an acceptable error rate; once that happens, whole categories of jobs will evaporate. Probably the people-facing jobs, making Excel models, transaction-based work, the same thing day in, day out - those teams may be gone, with only a person or two left to do a final review.

Comment by stego-tech 1 day ago

This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.

Too much is on the line here regardless of what ultimately ends up being true or just hype.

Comment by Gigachad 1 day ago

It’s hard to filter the hot air from the realistic predictions. I’ve been hearing for over 10 years now that truck drivers are obsolete and that trucks will drive themselves. Yet today truck drivers are still very much in demand.

While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.

Comment by tim333 22 hours ago

Against that you have the Moore's-law-like predictions, from Moravec and the like, that AI would reach roughly human level around now, which have proved fairly spot on. I think you may find it's more like the AI chess rating graph than the weather.

Comment by dredmorbius 12 hours ago

I think that you're on to something here, though I agree more with your first sentence than the second.

AI is not identical to, as the article compares, mechanical power.

But your weather-forecasting comment suggests a possible similarity (though not the one you go to): for all the millions-fold increase in compute power, and the increased density and specificity of meteorological measurements, our accurate weather-forecasting window has only extended by a factor of two or so (roughly five days to ten). That is, there are applications for which vastly more information-processing capacity provides fairly modest returns.

And there are also those in which it's transformative. I'd put reusable rockets in that category, where we can now put sufficiently-reliable compute (and a whole bunch of rocket-related hardware) on a boost-phase rocket such that it can successfully soft-land.

For some years I've been thinking of the notion of technology not as some general principle ("efficiency" is the classic economics formulation), but as a set of specific mechanisms each of which has specific capabilities and limitations.[1] I've held pretty constant with nine of these:

1. Fuels. Applying more (or more useful) energy to a process.

2. Energy transmission and transformation.

3. Materials. Specific properties, abundance, costs, effects, limitations.

4. Process knowledge --- how to do things. What's generally described as "technical knowledge", here considered as a specific mechanism of technology.

5. Structural or causal knowledge --- why things work. What's generally described as "scientific knowledge".

6. Networks. Interactions between nodes via links, physical or virtual, over which matter, energy, information, or some mix flow. Transport, comms, power, information.

7. Systems. Constructs including sensing, processing, action, and feedback. Ranging from conceptual to mechanical to human and social.

8. Information. Sensing, perceiving, processing, storing, retrieving, and transmitting. Ranging from our natural senses to augmented ones, from symbolic systems (language, maths) to algorithms.

9. Hygiene. Sinks and unintended consequences, affecting the function and vitality of systems, and their mitigations or limits.

AI / AGI falls into the 8th category: information, specifically information processing. And as such, getting back to my original point, we can compare it with other information-related technological innovations: speech, writing, maths, boolean logic, switches (valves, transistors, etc.), information storage/retrieval, etc. And, yes, human thought processes. We do have some priors we can look at here, and they might help guide us in what a true AGI might be able to accomplish, and what its limitations may be.

It's often noted (including in this thread) that AGI would not presently be able to persist without copious human assistance, in that it's predicated on a vast technological infrastructure only a small portion of which it would be capable of substituting for. It's quite likely that AGI would be both competitive with and complementary to much human activity. In the horse analogy, it's worth noting that in the first stage of mechanised transport development, with steam shipping and rail technology, horses were strongly complementary, in that they fulfilled the last-mile delivery role which steamships and locomotives couldn't furnish. Horse drayage populations actually boomed during this period. It was the development of ICE-powered lorries which finally out-competed the horse-drawn cart for intra-urban delivery. AGI-as-augmenting-humans is an already highly-utilised model, and will likely persist for some time. Experiments in AGI replacing humans will no doubt occur, some successful, others not. I'd suggest that my 9th category, hygiene, and specifically failure modes of AGI, will likely prove highly interesting.

Mechanised transport also relies heavily on fuels and/or energy storage. The past 200 or so years were predicated on nonrenewable fossil fuels, first coal then oil, and there were several points in that timeline where continued availability of cheap fuels was seriously in question. We're now reaching the point where even given abundant supply, the relatively-clean byproducts of use are proving, at scales of current use, incompatible with climatic stability, possibly extending to incompatible with advanced technological civilisation or even advanced life on Earth (again, category 9).

AGI relies on IC chip manufacture (the province of vanishingly few companies), on copious amounts of electricity, scarce physical resources, and various legal regimes concerning use of intellectual works, property, profit, and more (categories 1, 2, 3, and 7, at a minimum). Whether or not a world with pervasive AGI proves to be a stable or unstable point is another open question.

________________________________

Notes:

1. A sampling of prior HN discussions may be found with this search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>.

Comment by s17n 1 day ago

This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.

Comment by dcre 1 day ago

According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.

https://en.wikipedia.org/wiki/Internal_combustion_engine

Comment by pcrh 1 day ago

Quite. For reference, the horse population of France didn't decline significantly until the late 1940's [0].

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/

Comment by mkl 23 hours ago

Engines, not just steam engines.

Comment by s17n 10 hours ago

Sure, but if you look at the more complex picture of engine development, you could just as easily support the proposition that programmers are currently not in any danger (by pointing out that the qualitative differences between IC and steam engines were decisive when it came to replacing horses, and the correct analogy is that, much like a steam engine could never replace a horse, a transformer model can never replace a human).

Not detracting from the article, I think it's a fun way to shake your brain into the entirely appropriate space of "rapid change is possible"!

Comment by COAGULOPATH 1 day ago

> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?

The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.

Comment by BeefySwain 1 day ago

The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?

Comment by schoen 1 day ago

> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.

I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".

I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.

Comment by barbazoo 1 day ago

Engine efficiency, chess rating, AI cap ex. One example is not like the other. Is there steady progress in AI? To me it feels like it’s little progress followed by the occasional breakthrough but I might be totally off here.

Comment by Calamityjanitor 1 day ago

The only 'line go up' graph they have left is money invested. I'm even dubious of the questions-answered graph. It looks more like a feature added to an internal wiki that went up in usage. Instead it's portrayed as a measure of quality or usefulness.

Comment by dcre 1 day ago

I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.

Comment by raincole 1 day ago

What example do you need? In every single benchmark AI is getting better and better.

Before someone says "but benchmark doesn't reflect real world..." please name what metric you think is meaningful if not benchmark. Token consumption? OpenAI/Anthropic revenue?

Comment by jacobsenscott 1 day ago

Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.

This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.

Comment by whycombinetor 1 day ago

Third party benchmarks like terminalbench exist.

W.r.t. code changes, especially small ones (say 50 lines spread across 5 files): if you can't get an agent to make nearly exactly the code changes you want, just faster than you, that's a you problem at this point. If it would maybe take you 15 minutes, grok-code-fast-1 can do it in 2.

Comment by trollbridge 1 day ago

Right. With careful use of AIs, I can use it to gather information to help me make better designs (like giving me summaries of the current best available frameworks or libraries to choose for a given project), but as far as just generating an architecture and then generating the code and devops and so on for that? It's just not there, unless you're creating an app that effectively already exists, like some basic CRUD app.

If you're creating basic CRUDs, what on earth are you doing? That kind of thing should have been automated a long time ago.

Comment by whycombinetor 1 day ago

What do you mean when you say building crud apps should be automated?

Comment by trollbridge 1 day ago

CRUD apps are ridiculously simple and have been in existence my entire life. Yet it is surprisingly difficult to make a basic CRUD and host it somewhere. The bulk of useful but simple business apps are just a CRUD with a tiny bit of customisation and integration around them.

It is true that LLMs make it easier to build these kind of things without having to become a competent programmer first.

Comment by lomase 22 hours ago

I don't know what kind of CRUD apps you work on. The kind of CRUD apps people pay me to work on are not simple.

Comment by beeflet 1 day ago

conventionally, it should have been abstracted by a higher-level language.

Comment by machomaster 1 day ago

E.g. using Rails to generate scaffolding. That makes it really fast and easy to build a CRUD app.
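For comparison, here is roughly what the same minimal CRUD shape looks like hand-rolled; a sketch in Python with Flask rather than Rails, with an in-memory dict standing in for a real database:

    # Minimal CRUD sketch: one "items" resource with the four basic operations.
    # The dict stands in for a database; assumes Flask 2.x route shortcuts.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    items, next_id = {}, 1

    @app.post("/items")
    def create_item():
        global next_id
        items[next_id] = request.get_json()
        next_id += 1
        return jsonify(id=next_id - 1), 201

    @app.get("/items/<int:item_id>")
    def read_item(item_id):
        if item_id not in items:
            abort(404)
        return jsonify(items[item_id])

    @app.put("/items/<int:item_id>")
    def update_item(item_id):
        if item_id not in items:
            abort(404)
        items[item_id] = request.get_json()
        return jsonify(items[item_id])

    @app.delete("/items/<int:item_id>")
    def delete_item(item_id):
        items.pop(item_id, None)
        return "", 204

The customisation and integration work wrapped around a core like this is where the real effort tends to go.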

Comment by azemetre 9 hours ago

What metrics that aren't controlled by the industry show AI getting better? Genuinely curious, because those "ranking sites" seem to me to be infested with venture capital, so hardly fair or unbiased. The only reports I hear from academia are the ones that are overly negative on AI.

Comment by fzeroracer 1 day ago

AI is getting better at every benchmark. Please ignore that we're not allowed to see these benchmarks and also ignore that the companies in question are creating the benchmarks that are being exceeded.

Comment by philipwhiuk 1 day ago

OpenAI net profit.

The figures for cost are wildly off to start with.

Comment by bluefirebrand 1 day ago

> please name what metric you think is meaningful

Job satisfaction and human flourishing

By those metrics, AI is getting worse and worse

Comment by machomaster 1 day ago

AI is very satisfied in doing the job, just ask it.

AI is able to speed up progress, to free up resources, to give people the most important thing they have - time. The fact that these incredible gifts are misused (or used inefficiently) is not AI's problem. This would be like complaining that the objective positive of increased food production is actually a negative because people are getting fatter.

Comment by bluefirebrand 15 hours ago

> AI is very satisfied in doing the job, just ask it

I could not care less about AI's satisfaction in anything

Comment by lomase 22 hours ago

Imagine anthropomorphizing this hard.

Comment by machomaster 19 hours ago

You misunderstood. This is how the conversation went:

1. Is there steady progress in AI?

2. What example do you need? In every single benchmark AI is getting better and better.

3. Job satisfaction and human flourishing.

Hence my answer "AI is very satisfied in doing the job, just ask it". It came about because of the stupid comment 3, which tried to link and put blame on unrelated things (akin to referring to obesity when asked what metrics make him say that agriculture and transportation have made no progress in the last 100 years) and at the same time anthropomorphized AI. I only accepted the premise and continued answering on the same level in order to demonstrate the stupidity of their answer.

Comment by yeasku 4 hours ago

I did not misunderstand anything, clanker.

Comment by tim333 22 hours ago

Steady progress in the hardware for AI, lumpy progress in algorithms?

Comment by GaggiX 1 day ago

ChatGPT was released 3 years ago and that was complete ass compared to what we have today.

Comment by tills13 1 day ago

Person whose job it is to sell AI selling AI is what I got from this post.

Comment by agos 18 hours ago

in Italy we have a saying for this - "innkeeper, how is the wine?"

Comment by hurturue 1 day ago

person whose job it is to not be replaced by AI saying AI is hype is what I get from your comment

Comment by lomase 22 hours ago

It does not work, buddy. Nobody gets paid to not buy AI.

Comment by hurturue 22 hours ago

"it's difficult to make someone understand a thing when his job depends on him not understanding it"

Comment by personjerry 1 day ago

I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.

And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?

Comment by danpalmer 1 day ago

> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative

This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.

Comment by themafia 1 day ago

> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

> Then in December, Claude finally got good enough to answer some of those questions for us.

What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.

On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.

Comment by burroisolator 1 day ago

"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

And not very long after, 93 per cent of those horses had disappeared.

I very much hope we'll get the two decades that horses did."

I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from, even over the longer term.

Comment by mxfh 1 day ago

I just have no idea how rigorously the data was reviewed. The 95% decline simply does not compute with

4,500,000 in 1959

and even an increase to

7,000,000 in 1968

largely due to an increase in the recreational horse population.

https://time.com/archive/6632231/recreation-return-of-the-ho...

So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.

Turns out the chart is about farm horses only, as counted by the USDA, not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.

---

City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.

City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.

https://www2.census.gov/library/publications/decennial/1930/...

Comment by falcor84 1 day ago

My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.

Comment by OccamsMirror 1 day ago

To be clear though, the horses didn't adapt. Their population was reduced by orders of magnitude.

Comment by sendes 1 day ago

True, but the horses' population started (slightly) rising again when they went from economic tools to recreational tools for humans. What will happen to humans?

Comment by Gigachad 1 day ago

The horse population was being boosted beyond normal numbers by human intervention. When humans stopped breeding them the numbers dropped.

At least currently humans do not need AI to reproduce.

Comment by baq 1 day ago

There were approximately zero horses in the wild, so it was all about what humans found useful.

Pray it’s still humans who ask these kinds of questions about AI, not the other way around.

Comment by goatlover 1 day ago

Did the population of work/service dogs decline? Horses were already a form of automation over human labor.

Comment by defrost 1 day ago

Bullocks.

That's what Sandy over the road (born 1932, died last year) used to hitch up every morning at 4am when he was ten, to sled a tank of water back to the farm from the local spring.

Comment by burroisolator 1 day ago

"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.

Comment by nacozarina 1 day ago

no govt's stability ever faced risk over a 20% increase in horse unemployment

Comment by burnto 1 day ago

The 1220s horse bubble was a wild time. People walked everywhere all slow and then BAM guys on horses shooting arrows at you.

AI is like that, but instead with dudes in slim fitting vests blogging about alignment

Comment by mark242 1 day ago

Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.

https://en.wikipedia.org/wiki/Jevons_paradox

Comment by Tzt 19 hours ago

In that analogy "someone" is an AI, who of course switches from answering questions from humans, to answering questions from other AIs, because the demand is 10x.

Comment by dominicrose 21 hours ago

> Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox

I think it's true that governments want the efficiency gains, but it's false that they don't anticipate the consumption increases. Nobody is spending trillions on datacenters without knowing that demand will increase; that doesn't mean we shouldn't make them efficient.

Comment by YmiYugy 1 day ago

To stay within the engine analogy. We have engines that are more powerful than horses, but

1. we aren’t good at building cars yet,

2. they break down so often that using horses often still ends up faster,

3. we have dirt tracks and feed stations for horses but have few paved roads and are not producing enough gasoline.

Comment by dominicrose 21 hours ago

yes, and the question is: do horses have 20 years, or less, i.e. 5 years?

Comment by dealflowengine 3 hours ago

Everyone is missing the real valuable point here: we never needed 90+% of horses in the first place.

Comment by pbw 1 day ago

This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".

Comment by hnfong 16 hours ago

Whether people are interchangeable with each other isn't the point. The point is whether AI is interchangeable with jobs currently done by humans. Unless and until AI training requires 1000 different domain experts, the current projection is that at some point AI will be interchangeable with all kinds of humans...

Comment by erichocean 18 hours ago

That looks to me like there are ~1000 interchangeable economic human roles for AI to replace.

So I guess we should check to see if computers are good at scaling or doing things concurrently. If not, no worries!

Comment by jameslk 1 day ago

> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.

> Then in December, Claude finally got good enough to answer some of those questions for us.

> … Six months later, 80% of the questions I'd been being asked had disappeared.

Interesting implications for how to train juniors in a remote company, or in general:

> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.

https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...

Comment by sothatsit 1 day ago

This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:

1. The release of Claude Code in February

2. The release of Opus 4.5 two weeks ago

In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.

Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.

Comment by AIorNot 1 day ago

I would add Gemini Nano Banana Pro to that list - its words-with-images ability is amazing.

Comment by palmotea 1 day ago

Aren't you guys looking forward to the day when we get the opportunity to go the way of all those horses? You should! I'm optimistic; I think I'd make a fine pot of glue.

AI, faster please!

Comment by ternus 1 day ago

Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.

Comment by dredmorbius 12 hours ago

And roads, and other auto-friendly (or auto-dependent) infrastructure and urban / national land-use.

Cars went from a luxury to a necessity, though largely not until after WWII in the US, and somewhat later in other parts of the world.

There remain areas where a car is not required, or is even a burden: NYC and a few major metropolitan regions, as well as poorer parts of the world (though motorcycles and mopeds are often prevalent there).

Comment by baq 1 day ago

It’s both. A steam engine at 2% efficiency is good only for digging up more coal for itself, and barely so. Completely different story at 20%. Every doubling is a step function in some area as it becomes energetically and economically rational to use it for something.

Comment by anshulbhide 1 day ago

Yet, this applies to only three industries so far - coding, marketing and customer support.

I don't think it applies to general human intelligence - yet.

Comment by naveen99 4 hours ago

It’s not like humans are standing still. Humans are still improving faster than ai.

Comment by Mawr 1 day ago

What is this horseshit.

What exactly does specifically engine efficiency have to do with horse usage? Cars like the Ford Model T entered mass production somewhere around 1908. Oh, and would you look at the horse usage graph around that date! sigh

The chess ranking graph seems to be just a linear relationship?

> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.

>

> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.

So more == better. sigh. Ran any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!

> I was one of the first researchers hired at Anthropic.

Yeah. I can tell. Somebody's high on their own supply here.

Comment by empiricus 19 hours ago

Well, for some reason horse numbers and horse usage dropped sharply at a moment in time. Probably there was some horse pandemic I forgot about.

Comment by sceptic123 22 hours ago

> A system that costs less, per word thought or written, than it'd cost to hire the cheapest human labor on the face of the planet.

Is it really possible to make this claim given the vast sums of money that have gone in to AI/LLM training?

Comment by myrmidon 19 hours ago

I'd say yes, because AI training is mostly fixed-cost and not that expensive when you compare it to raising/educating human labor.

Early factories were expensive, too (compared to the price of a horse), but that was never a show-stopper.

Comment by agos 19 hours ago

it's coming from an extremely biased source, that's why nobody else would make that claim

Comment by bad_username 1 day ago

AI currently lacks the following to really gain a "G" and reliably be able to replace humans at scale:

- Radical massive multimodality. We perceive the world through many wide-band high-def channels of information. Computer perception is nowhere near. Same for ability to "mutate" the physical world, not just "read" it.

- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally having a smooth transition between the context window and the weights, rather than fundamental irreconcilable difference.

These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.

Comment by zkmon 22 hours ago

The work done by horses was not the only work out there. The games played by chess masters were not the only sport on the planet. Answering questions and generating content is not the only work that happens at workplaces.

Comment by emsign 17 hours ago

Wow! That is highly unscientific and speculative. Wow!

Comment by websiteapi 1 day ago

funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.

plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the example I like is kiosks and self checkout. who has encountered one at a place where it is cheaper than its main rival, with the lower prices directly attributable (by the company or otherwise) to it? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.

even with year-2020 tech you could automate most of the work that needs to be done, if our industry would stop endlessly disrupting itself and have a little bit of discipline.

so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.

Comment by AnotherGoodName 1 day ago

To give this some backing: I'm from Australia, which has ~2.5x the median wealth per capita of US citizens but a lower average wealth. This shows through in the wealth of a typical citizen: less homelessness, better living standards (HDI in Australia is higher), etc.

Compare sorting by median vs average to get a sense of the issue; https://en.wikipedia.org/wiki/List_of_countries_by_wealth_pe...

This is a recent development, in which the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of USA citizens.

All it takes is taxing the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.

Comment by jacquesm 1 day ago

I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.

Comment by overfeed 1 day ago

I suspect the wealthy think they can shield themselves by exerting control over mass media, news outlets, the press, and domestic surveillance, all amplified by AI.

If all that fails, they have their underground bunkers on faraway islands and/or backup citizenships.

Comment by awillowingmind 13 hours ago

Assuming that they are able to fully replace the workforce, and that technocracy is fully realized, the majority stakeholders of these corporations will rely on corporations akin to palantir & anduril for private security.

Comment by jordwest 1 day ago

> I suspect the wealthy think they can shield themselves by exerting control over

Agreed, and I think this is a result of a naive belief we humans tend to have that controlling thoughts can control reality. Politicians still live by this belief, but eventually reality and lived experience do catch up. By that time all trust is long gone.

Comment by baq 1 day ago

That’s what the bunkers in New Zealand are for, but if AI keeps its pace, it might not be enough anyway.

Comment by hsuduebc2 1 day ago

Moreover when you act absolutely relentlessly, like a certain car maker.

People usually change their behavior after some pretty horrific events, so I would predict something like that in the future. For both Europe and the US.

Comment by tadfisher 1 day ago

They already know, and do not care. Their plan is quite literally to retreat into bunkers with shock collars enforcing the loyalty of their guards.

The richest of the rich have purchased islands where they can hole up.

Comment by AstroBen 1 day ago

Stripped of their infinite freedom out here to hide in a bunker? No chance

The bunkers are in case of nuclear war or serious pandemics. Absolutely worst case last resort scenario, not just "oh I don't care if I end up there"

Comment by zdragnar 1 day ago

> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?

You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd amount to less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the COVID-19 ACA subsidies permanent and a few other pet projects.
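The back-of-envelope version of that claim looks something like this (both dollar figures below are rough assumptions for illustration, not official statistics):

    # Rough arithmetic behind the "less than double the deficit" claim.
    # Both figures are ballpark assumptions, not official numbers.
    top1_total_income = 3.3e12   # assumed annual income of the top 1%, USD
    federal_deficit = 1.8e12     # assumed annual federal budget deficit, USD

    print(top1_total_income / federal_deficit)  # ~1.8x, i.e. less than double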

Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.

It should be pointed out that Australia has higher taxes on their middle class than the US does. It tops out at 45% (plus 2% for medicare) for anyone at $190k or above.

If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.

Not quite so obvious when you look closer at it.

Comment by yulker 1 day ago

The point isn't to just cover the tax bill, it's that by shifting the burden up the class ladder, there is more capital available to the classes that spend and circulate their money in the economy rather than merely accumulate it

Comment by some_guy_nobel 20 hours ago

What are you responding to?

How much of the current burden is shouldered by the middle class? How much by the 1%? How does that compare to other Western nations? What measurable effect would raising this on the 1% be? What about the middle class?

Comment by deaux 20 hours ago

Paper income tax rates are completely and utterly meaningless. Bringing them up is just muddying the waters. Effective rates on total income (including from capital, wealth taxes etc.), post-loopholes, are the only thing that matters.

Comment by naveen99 18 hours ago

Without the USA being the way it is, Australia would be much less prosperous. From the perspective of employers and consumers, labor costs are the same. It's just that in Europe and Australia, taxes are a larger percentage of the cost of labor.

Comment by atleastoptimal 1 day ago

Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.

Comment by tyre 1 day ago

If we had less regulation of insurance companies, do you think they’d be cheaper?

(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)

Comment by davidw 1 day ago

Health care is the more complicated one of the examples cited, but housing definitely is an 'own goal' in how we made it too difficult to build in too many places - especially "up and in" rather than outward expansion.

Stuff like this isn't Wall Street or Billionaires or whatever bogeyman - it's our neighbors: https://bendyimby.com/2024/04/16/the-hearing-and-the-housing...

Comment by wyre 1 day ago

Health care is complicated, but I don't think it would be hard to understand how fewer regulations could lower prices. More insurers could enter markets, they could compete across state lines, and compliance costs could be lowered.

However regulation is helpful for those already sick or with pre-existing conditions. Developed countries with well-regulated systems also have better health outcomes than the US does.

Comment by murderfs 1 day ago

Well, they'd be more functional as insurance, at least! The way insurance is supposed to work is that your insurance premium is proportional to the risk. You can't go uninsured and then after discovering that your house is on fire and about to burn down, sign up for an insurance plan and expect it to be covered.

We've blundered into a system that has the worst parts of socialized health care and private health insurance without any of the benefits.

Comment by refactor_master 1 day ago

> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.

What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.

Comment by websiteapi 1 day ago

you mean the same Asia that has the same problem? the USA enjoying arbitrage is not actually a solution, nor is it sustainable. not to mention that if you control for certain things, like house size relative to inflation-adjusted income, it isn't actually much different, despite popular belief.

Comment by jordwest 1 day ago

It would be kinda funny if not so tragic how economists will argue both "[productive improvement] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs"

Comment by actionfromafar 1 day ago

But is it really, though? Dollars aren't meant to be held.

Comment by jordwest 1 day ago

I think the idea of dollars as purely a trading medium where absolute prices don't matter wouldn't be such an issue if wages weren't always the last thing to rise with inflation.

As it is now, anyone with assets is only barely affected by inflation, while those who earn a living from wages have their livelihood covertly eroded over time.

Comment by actionfromafar 1 day ago

Exactly as the current owners… ahem, leaders of this country want it.

Comment by samdoesnothing 1 day ago

Barely affected? They benefit massively from it. That is why the rich get richer.

Comment by jordwest 1 day ago

True, in terms of share of the pie for sure

Comment by cal_dent 1 day ago

Housing is a funny old one, and it speaks to this being a human problem. One thing a lot of people don't truly engage with on the housing issue is that it's massively an issue of distribution: too many people want to live in too few places. Yes, central banks and interest rates (being too low, and now relatively too high), nimbyism, and rent-seeking play an important role too, but solving the "too many people live in too few places" issue actually fixes that problem (slowly, and possibly unpalatably slowly for some, but a fix nonetheless).

The key issue upstream is that too many good jobs are concentrated in too few places, which leads to consumerism stimulating those places and making them even more attractive. Technology, through Covid, actually handed governments a get-out-of-jail-free card by allowing remote work to become more mainstream, only for them to fail to grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.

Existing homeowners can still wrap themselves in the warm glow of their high house prices which only loses "real" value through inflation which people tend not to notice as much.

But we decided to try to go back to the status quo so oh well

Comment by torginus 22 hours ago

I see the issue of housing is a combination of:

- House prices increasing while wages are stagnant

- Home loans and increasing prices mean people are taking on huge leverage for their home purchases

- Supply is essentially government controlled and government dependent, and building more housing is heavily politicized

- A lot of dubious money is being created, which gets converted to good money by investing it in the housing market

- Housing is genuinely difficult to build and labor and capital intensive

> The key issue upstream is that too many good jobs are concentrated in too few places

This is no longer the case with remote work on the rise. If it were, housing prices would increase faster in trendy, overpriced places, but the increase of late has been more uniform, with places like London growing slower than (or even depreciating relative to) less in-demand places.

Comment by cal_dent 7 hours ago

I'd argue that if your point about remote work fully held true, it would show up in the very short term in rental prices (the key indicator of people leaving London, in lieu of population/census data), but that hasn't been apparent in the data. London rentals have seen much stronger growth post-Covid than other places in the UK.

What is happening with house prices in London is a combination of the simple effects of high-ish interest rates versus high house prices (limiting affordability), and of flats in general taking a beating from post-Grenfell building-regs changes and leasehold issues. When you look at granular data there is still a surprising amount of growth from Zone 3-4 onwards in London, because actual houses in those locations are still sort of achievable for decently paid couples.

Also regionally, a bit glib, but the price increases are happening in Manchester, not Bolton, and Sheffield, not Scunthorpe. If remote working were truly accepted, those latter locations would be seeing far more inward movement of people, but they're not, really.

Comment by raldi 1 day ago

Food and clothes are much cheaper. People used to have to walk or hitchhike a lot more. People died younger, or were trapped with abusive spouses and/or parents. Crime was high. There was little economic mobility. It really sucked if you weren’t a straight white man. Houses had one bathroom. Power went out regularly. Travel was rare and expensive; people rarely flew anywhere. There was limited entertainment or opportunities to learn about the world.

Comment by dzonga 1 day ago

yeah, that's my question to the author too - if A.I is to really earn its keep, it means A.I should help get more physical products into people's hands & help with producing more energy.

physical products & energy are the two things that are relevant to people's wellbeing.

right now A.I is sucking up the energy & the RAM - so is it gonna translate into a net positive?

Comment by Avicebron 1 day ago

That's the question though isn't it. If everyone got a subscription to claude-$Latest would they be able to pay their rent with it?

Comment by twodave 1 day ago

No, because they’d be waiting in the lengthy queues that would be necessary for anyone to use it. There are hard constraints to this tech that make what you’re talking about infeasible.

Comment by goatlover 1 day ago

No because nurses, mechanics, and janitors are still needed.

Comment by lowbloodsugar 1 day ago

> in the real world are more expensive: health care, housing, cars.

Think of it another way. It's not that these things are more expensive; it's that the average US worker simply doesn't provide anything of value. China provides the things of value now. The way the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really wages fell to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.

If you want to know where this is going, look at Britain, the previous world superpower. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.

Comment by copypaper 1 day ago

Yep. My grandma bought her house in ~1962 for $20k, working at a factory making $2/hr. Her mortgage was $100/month - about a week's worth of pay. $2/hr then is the equivalent of ~$21/hr today.

If you were to buy that same house today, your mortgage would be about $5,100/month - about 6 weeks of pay.

And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while, and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners in this scenario are the factory owners and AI providers. The only chance anybody has right now is to cut out the middleman, start their own business, and pray it takes off.
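For what it's worth, the weeks-of-pay arithmetic checks out under the figures given (40-hour weeks assumed):

    # Mortgage as weeks of pay, using the figures from the comment above.
    weekly_pay_1962 = 2 * 40         # $2/hr * 40 hr
    weekly_pay_now = 21 * 40         # ~$21/hr * 40 hr

    print(100 / weekly_pay_1962)     # ~1.25 weeks of pay per month, 1962
    print(5100 / weekly_pay_now)     # ~6.1 weeks of pay per month, today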

Comment by galangalalgol 1 day ago

But the US is China's market, so the CCP goes along even though they are the producer, because a domestic consumer economy would mean sharing the profits of that manufacturing with the workers. But that would create a middle class not dependent on the party, leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two, and neither can afford to let go. And neither can keep dancing any longer. I think it will be very bad everywhere.

Comment by hsuduebc2 1 day ago

It's interesting to see Cyberpunk 2077 somehow becoming more and more relatable.

Comment by baq 1 day ago

Sci fi works on this topic are about as old as sci fi. I’m terrified the stories started hitting close to home in the past few years.

Comment by torginus 22 hours ago

I remember reading Burning Chrome, written in 1982, and one of the characters commented on late-stage capitalism.

Comment by renewiltord 1 day ago

Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...

That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.

This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people's incomes become much higher: you can't hire many of them to do low-productivity jobs like bussing a McD's table.

That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.

Comment by mlrtime 1 day ago

Thank you. I've replied too many times that if people want low-priced housing, it's easily found in Texas. The replies are empty, or state that they don't want to live there because... it's Texas.

So the young want cheap, affordable housing right in the middle of Manhattan; never going to happen.

Comment by baq 1 day ago

Don’t blame people they want to live close to where the good jobs are.

Comment by samdoesnothing 1 day ago

It's inflation, simple as that. The US left the gold standard at the exact same time that productivity diverged from wages. Coincidence? No.

Pretty much everything gets more expensive, with the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food and housing, keeps getting more expensive. And the asset-owning class gets richer as a result.

Comment by gniv 22 hours ago

This makes me think of another domain where it could happen: electricity generation and distribution. If solar+battery becomes cheap enough we could see the demise of the country-scale grid.

Comment by Gud 22 hours ago

I work in the energy sector. I test high voltage gas insulated switchgear for a living.

With this setup, you would need batteries that can sustain load for weeks on end, in many parts of the world.

Comment by haritha-j 22 hours ago

Horses pull carts. Chessbots play chess. Humans do lots of things. Equivalence in one thing is not equivalence in the vast collection of things we do.

Comment by dredmorbius 12 hours ago

AI seems capable of doing lots of things, particularly in comparison to domain-specific programming or even domain-specific AI. Your critique doesn't seem so powerful as you might suppose.

Comment by chairmansteve 1 day ago

4,000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki?

I am not an AI sceptic - I use it for coding. But this article is not compelling.

Comment by 20after4 20 hours ago

Maybe I can get a job programming for the Amish.

Comment by kgk9000 1 day ago

I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.

Comment by xlbuttplug2 1 day ago

I think the turning point will be when AI assisted individuals or tiny companies are able to deliver comparable products/value as the goliaths.

Comment by pmg101 21 hours ago

Why hasn't this happened already?

I'm willing to believe the hype on LLMs, except that I don't see any tiny one-senior-dev-plus-agents companies disrupting the market. Maybe it just hasn't happened "yet"... but I've been kind of wondering the same thing for most of 2025.

Comment by xlbuttplug2 13 hours ago

I think it has to get good enough to a point where humans are not the bottleneck for code review and course correction.

I guess the "velocity" multiplier is closer to 10x rather than the 1000x needed for true disruption capability.

Comment by esafak 1 day ago

That would be the ideal scenario; when you can build a small business more easily.

Comment by byronic 1 day ago

my favorite part was where the graphs are all unrelated to each other

Comment by cuttothechase 1 day ago

>>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.

Glad I noticed that footnote.

Article reeks of false equivalences and incorrect transitive dependencies.

Comment by eigencoder 9 hours ago

I'm confused. Isn't the sharp decline in the graph due to the population boom?

Comment by leowoo91 1 day ago

We still have chess grandmasters, if you've noticed.

Comment by xlbuttplug2 1 day ago

Yes, and we'll continue to have human coding competitions for entertainment purposes. Good luck trying to live off the prize money, though.

Comment by nextworddev 1 day ago

Hikaru makes good money streaming on Twitch tho

Comment by hurturue 1 day ago

so you're telling me there will be money for about 100 top streaming programmers

Comment by WhyOhWhyQ 1 day ago

Humans design the world to our benefit, horses do not.

Comment by bluefirebrand 1 day ago

Most humans don't. Only the wealthy and powerful are able to do this

And they often do it at the expense of the rest of us

Comment by glitchc 1 day ago

Conclusion: Soylent..?

Comment by nextworddev 1 day ago

damn

Comment by florilegiumson 1 day ago

If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical, as it was with nuclear weapons. Otherwise, what does it really mean for AI to "replace people", outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?

Comment by tim333 22 hours ago

I don't think the people will die, just have AI do the jobs. The people will probably still be there giving instructions.

Comment by baq 1 day ago

Capitalism gives, capitalism takes. Regulation will be critical so it doesn’t take too much, but tech is moving so fast even technologists, enthusiasts and domain researchers don’t know what to expect.

Comment by johnsmith1840 1 day ago

I mean, it's hard to argue that if we invented a human in a box (AGI), human work wouldn't become irrelevant. But I don't know how anyone can watch current AI and say we have that.

The big thing this AI boom has shown us, and we can all be thankful to have seen it, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience.

Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.

Comment by goatlover 1 day ago

Why a human in a box and not an android? A lot of jobs will require advanced robotics to fully automate. And then there are jobs where customer preference is for human interaction or human entertainment. It's like how superior chess engines have not reduced the profession of chess grandmasters, because people remain more interested in human chess competition.

Comment by baq 1 day ago

The assumption is superhuman AGI or a stronger ASI could invent anything it needed really fast, so ASI means intelligent robots within years or months, depending on manufacturing capabilities.

Comment by cryptonector 17 hours ago

> 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

Ambivalent??

Comment by wrs 1 day ago

Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?

Comment by baq 1 day ago

It’s called exponential growth and humans are well known to be almost comically bad at identifying and interpreting it.

Comment by rogerrogerr 14 hours ago

When people make forward looking statements using the term “exponential growth”, you can always replace that with “S-curve”.

Remember when we had two weeks of data, and governments acted like Covid was projected to kill everyone by next Tuesday?
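The trouble is that early on the two are nearly indistinguishable, which is why extrapolation arguments cut both ways. A quick sketch (illustrative parameters only, not from either comment):

    import math

    # Exponential vs. logistic growth with the same rate: identical at first,
    # then the logistic curve bends toward its ceiling while the exponential
    # keeps going. Early data cannot tell you which one you are on.
    def exponential(t, r=1.0):
        return math.exp(r * t)

    def logistic(t, r=1.0, ceiling=1000.0):
        return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))  # starts at 1

    for t in range(10):
        print(t, round(exponential(t), 1), round(logistic(t), 1))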

Comment by lostmsu 1 day ago

Pokens

Comment by kazinator 1 day ago

Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
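A small sketch of that flip (illustrative numbers):

    import math

    # As a logit drifts steadily upward during training, the sigmoid output
    # barely moves for a long time and then flips past 0.5 in a few steps.
    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    logit = -6.0
    for step in range(13):
        print(step, round(sigmoid(logit), 3))
        logit += 1.0  # steady stimulus; the visible flip happens near logit 0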

Comment by oidar 15 hours ago

I think AI is probably closer to jet engines than it is to horses.

Comment by dredmorbius 12 hours ago

How so?

Comment by nateburke 17 hours ago

Horses never figured out how to get government bailouts.

Comment by trabant00 44 minutes ago

Yawn, another article which hand-picks success stories. What about the failures? Where's the graph of flying cars? Humanoid house-servant robots? 3D TVs? Crypto decentralized banking for everyone? Etc.

Anybody who tells you they can predict the future is shoveling shit into his mouth and then smiling brown teeth at the audience. 10 years from now there's a real possibility of "AI" being remembered as that "stuff that almost got to a single 9 of reliability but stopped there".

Comment by narrator 1 day ago

Wait till the robots arrive. What will surprise people most is that they will know how to do a vast range of human skills, some that people train their whole lives for. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially niche, difficult-to-research topics like the alternative applicable designs of deep learning models for a modeling task, is a thing of wonder. Imagine a master marble carver showing up at an exhibition where some sci-fi author just had robots make a perfect, beautiful rendition of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.

Comment by gaigalas 1 day ago

People back then were primarily improving engines, not making articles about engines being better than horses. That's why it's different now.

Comment by AstroBen 1 day ago

Cool, now let's make a big list of technologies that didn't take off like they were expected to.

Comment by tomxor 1 day ago

Terrible comparison.

Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with chess, a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.

Now LLMs, what is their purpose? What is the purpose of a human?

I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).

They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.

Comment by pansa2 1 day ago

> 90% of the horses in the US disappeared

Where did they go?

Comment by 20after4 20 hours ago

The glue factory.

Comment by xwolfi 1 day ago

they grew old and died ?

Comment by dsego 21 hours ago

There is a TV movie, In Pursuit of Honor (1995), claiming to be based on true events. My short search online suggests that such things were never really documented, but it's plausible that similar things happened.

> In Pursuit of Honor is a 1995 American made-for-cable Western film directed by Ken Olin. Don Johnson stars as a member of a United States Cavalry detachment refusing to slaughter its horses after being ordered to do so by General Douglas MacArthur. The movie follows the plight of the officers as they attempt to save the animals that the Army no longer needs as it modernizes toward a mechanized military.

Comment by ekelsen 1 day ago

sometimes not nearly so pleasant for them.

Comment by cryptonector 17 hours ago

This is another one of those apocalyptic posts about AI. It might actually be true. I recommend reading The Phools, by Stanisław Lem - it's a very short story, and you can find free copies of it online.

Also maybe go out for some fresh air. Maybe knowledge work will go down for humans, but plumbing and such will take much longer since we'll need dextrous robots.

Comment by moralestapia 11 hours ago

Great post.

This is the context wherein the valuation of AI companies makes sense, particularly those that already got a head start and have captured a large swath of that market.

Comment by globular-toast 1 day ago

And what happened to human population? It skyrocketed. So humans are going to get replaced by AI and human population will skyrocket again? This analogy doesn't work.

Comment by tim333 22 hours ago

Virtual humans?

Comment by LanceWinslow 11 hours ago

You know, this whole conversation reminds me of that old critique of Communism: once the government becomes so large and all-encompassing, it reaches a point where it no longer needs the people to exist, and thus people are culled by the millions, as they are simply no longer needed.

Comment by blondie9x 1 day ago

This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory he sketches aligns with the horse population's 93% decline.

Comment by throw234234234 1 day ago

Indeed. I do wonder if the inventors of the "transformer architecture" knew all the potential Pandora's boxes they were opening when they invented it. Probably not.

No one wants to say out loud the scary potential logical conclusion of replacing the last value humans have a competitive advantage in: intelligence and cognition. For example, there is one future scenario for humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. It's already happening slowly via inflation and higher asset prices, after all - it is a very real possibility. I don't think a revolution would be possible in this scenario; with AI and robotics the rich could effectively outnumber pretty much everyone.

Comment by dsego 21 hours ago

Not advocating, just warning about things to come.

Comment by taneq 1 day ago

Not advocating, just predicting. And not necessarily actual population, just population in paid employment.

Comment by john-radio 1 day ago

I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.

(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)

Comment by leg100 21 hours ago

> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.

I really doubt horses would be ambivalent about this, or about anything. Or maybe I'm wrong and they were in two minds: oh dear, I'm at risk of being put to sleep; or, maybe this could lead to a nice long retirement out on a grassy meadow. But in all likelihood they're blissfully unaware.

Comment by mrtesthah 1 day ago

LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.

Comment by baq 1 day ago

They sometimes stumble and hallucinate out of distribution. It's rare, and it's rarer still that it's actually a good hallucination, but we've figured out how to enrich uranium, after all.

Comment by andai 20 hours ago

"If I'd asked people what they wanted, they would have said faster humans!"

Comment by conartist6 1 day ago

I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed

Comment by fizlebit 1 day ago

yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)

Comment by oxag3n 13 hours ago

> I was one of the first researchers hired at Anthropic.

> ...

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

TL;DR: if your work is answering questions whose answers can be retrieved from a corpus of data with an inverted index + embeddings, you'll be obsolete pretty fast.
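A sketch of that retrieval shape; the embed() below is a toy stand-in for a real sentence-embedding model, so only the pipeline structure is meaningful:

    import numpy as np

    # Embed the corpus once, embed each query, answer from the most similar
    # document. Real systems combine this with an inverted index for recall.
    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % 2**32)  # toy stand-in
        v = rng.normal(size=64)
        return v / np.linalg.norm(v)

    corpus = [
        "How do I set up my dev environment?",
        "Where does the deploy pipeline live?",
        "Who approves security reviews?",
    ]
    doc_vecs = np.stack([embed(doc) for doc in corpus])

    query_vec = embed("How do I get my environment working?")
    best = int(np.argmax(doc_vecs @ query_vec))  # cosine sim on unit vectors
    print(corpus[best])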

Comment by bgwalter 21 hours ago

"I was one of the first researchers hired at Anthropic."

The article is a Misanthropic advertisement. The "AI" mafia feels that no one wants their products and doubles down.

They are so desperate that Pichai is now talking about data centers in space on Fox News. Next up are "AI" space lasers.

Comment by echelon 1 day ago

> And not very long after, 93 per cent of those horses had disappeared.

> I very much hope we'll get the two decades that horses did.

> But looking at how fast Claude is automating my job, I think we're getting a lot less.

This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.

Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.

I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.

Meanwhile Google, apart from perhaps Kilpatrick, is just silent.

Comment by trollbridge 1 day ago

At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.

Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases busywork workloads. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding AI-generated or we will lose sales. The AI tools for writing descriptions / generating listings are not very helpful either. (An inaccurate listing/description is a nightmare.)

I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.

Comment by glitchc 1 day ago

Well, tbf the author was hired to answer newbie questions. Perhaps the position is that of an evangelist, not a scientist.

Comment by baq 1 day ago

I couldn’t have made a worse take if I tried

Comment by glitchc 12 hours ago

I understand that computer scientists are often wrong about things outside their expertise. Other scientists too.

Comment by skywhopper 17 hours ago

Truly depressing to see blasé predictions of AI infra spending approaching WW2 levels of GDP as if that were remotely desirable. One, that’s never going to happen, but if it does, it’ll mean a complete failure to address actual human needs. The amount of money wasted by Facebook on the Metaverse could have ended homelessness in the US, or provided universal college. Now here we are watching multiple times that much money get thrown by Meta, Google, et al into datacenters that are mostly generating slop that’s ruining what’s left of the Internet.

Comment by adventured 1 day ago

It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.

I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs, so those with an interest in protecting those jobs are talking their book. The hollowing out of Silicon Valley is imminent, as it was for other industrial regions before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening; it's already far too late.

Comment by trollbridge 1 day ago

I don't think that's the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and are aware of their limitations.

I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.

For those of us who are busy trying to deliver what we've promised customers, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're well positioned to take advantage of all these promises I keep hearing about AI, but in practical terms it saves me, at an architect or staff level, maybe 10% of my time, and one of our seniors maybe 5%.

So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and completely automate absolutely everything, when what I actually spend my day on is the same-old same-old: trying to get some vendor framework to pull the sensor data I want out of their equipment, and delivering apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud-hosting services per user. (I picked that example because at one customer, that's exactly what we were brought in to replace: that kind of cost simply doesn't scale.)

Comment by magarnicle 1 day ago

I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.

Comment by bwfan123 1 day ago

> The hollowing out of Silicon Valley is imminent

I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.

Comment by trollbridge 1 day ago

That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.

Comment by glitchc 1 day ago

But they do have their hands on your budget, and they are responsible for creating and filling positions.

Comment by twodave 1 day ago

I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.

Comment by bdangubic 1 day ago

except you know, you had a job. and coming out of college could get one… if you were graduating right now in compsci you’ll find a wasteland with no end in sight…

Comment by twodave 13 hours ago

You’re assuming a lot about me that isn’t true, but let’s just say we can’t really know, can we? And I think it’s a bit reductionist to attribute the current job market to LLMs. The market started to suck long before LLMs became useful.

Comment by bdangubic 10 hours ago

my apologies, I did not mean you as YOU, just general “you”…

and while we can’t know we can also… kind of know or look at data etc…

IntuitionLabs, “AI’s Impact on Graduate Jobs: A 2025 Data Analysis” (2025) -

https://intuitionlabs.ai/pdfs/ai-s-impact-on-graduate-jobs-a...

Indeed Hiring Lab, “AI at Work Report 2025: How GenAI is Rewiring the DNA of Jobs” (September 2025) -

https://www.hiringlab.org/wp-content/uploads/2025/09/Indeed-...

Comment by shermantanktop 1 day ago

It's not subtle.

But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.

Comment by fzeroracer 1 day ago

I worked for a company that was starting to shove AI incentives down the throat of every engineer while our product got consistently worse due to layoffs and perceived benefits of AI that were never realized. When you look at the companies that have shifted to "AI first" and see them shoveling out garbage that barely works, it should be no surprise that people are starting to hate it, both those who are aware of how the sausage is made and those who aren't.

Comment by Lerc 1 day ago

>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.

I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.

This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion: cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.

Comment by kangs 1 day ago

hello faster horses

Comment by torginus 23 hours ago

Oh no, it's the lowercase people again.