AI is a horse (2024)

Posted by zdw 4 days ago

Comments

Comment by throw310822 1 day ago

Famously, Steve Jobs said that the (personal) computer is "like a bicycle for the mind". It's a great metaphor because, besides the idea of lightness and freedom it communicates, it also describes the computer as a multiplier of human strength: the bicycle allows one to travel faster and with much less effort, it's true, but ultimately the source of its power is still entirely in the muscles of the cyclist. You don't get out of it anything that you didn't put in yourself.

But the feeling I'm having with LLMs is that we've entered the age of fossil-fuel engines: something that moves on its own power and produces somewhat more than the user puts into it. OK, in the current version it might not go very far and needs to be pushed now and then, but the total energy output is greater than what users need to put in. We could call it a horse, except that this one is artificial: it's a tractor. And in the last few months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.

Comment by simonw 1 day ago

I've been calling LLMs "electric bicycles for the mind", inspired by that Jobs quote.

- some bicycle purists consider electric bicycles to be "cheating"

- you get less exercise from an electric bicycle

- they can get you places really effectively!

- if you don't know how to ride a bicycle an electric bicycle is going to quickly lead you to an accident

Comment by aaronharnly 1 day ago

To keep torturing the metaphor, LLMs might be more like those electric unicycles (Onewheel, Inmotion, etc) – quite speedy, can get you places, less exercise, and also sometimes suddenly choke and send you flying facefirst into gravel.

And some people see you whizzing by and think "oh cool", and others see you whizzing by and think "what a tool."

Comment by whattheheckheck 1 day ago

More like the Segway... really cool at first then not really then totally overpriced and failed to revolutionize the industry. And it killed the founder

Comment by MattGrommes 1 day ago

Just a small correction, the founder of Segway is Dean Kamen who is still alive. It was the then-owner of the company who died.

Comment by esafak 1 day ago

It would have been cooler if this comment was by the founder :)

Comment by MattGrommes 23 hours ago

Oh boy do I wish I was nearly as cool as Dean Kamen. :)

Comment by bryanrasmussen 1 day ago

thanks saved me googling if Kamen was still alive!

Comment by cogman10 1 day ago

Is there a modern segway? I mean, I find ebikes are probably a better option in general, but it seems like all the pieces to recreate the segway for a much lower price are there already.

Looks like the closest thing is the self balancing stuff that segway makes. Otherwise it's just the scooters.

Comment by boringg 1 day ago

e-skateboards?

Comment by superjan 1 day ago

Maybe more like a fatbike for the mind: pretending to cycle with zero effort and exercise.

Comment by DANmode 19 hours ago

Also: you can ride 8 of them, slowly, asynchronously.

Comment by mjparrott 1 day ago

Not sure how this fits in the analogy, but as a cyclist I would add some people get more exercise by having an electric bicycle. It makes exercise available to more people.

Comment by simonw 1 day ago

I think that fits it really well.

Comment by qwertytyyuu 1 day ago

Motorcycle might be more apt

Comment by matthewkayin 1 day ago

I like this analogy. I'll add that, while electric bicycles are great for your daily commute, they're not suited for the extremes of biking (at least not yet).

- You're not going to take an electric bike mountain biking

- You're not going to use an electric bike to do BMX

- You're not going to use an electric bike to go bikepacking across the country

Comment by chrisweekly 1 day ago

Actually, electric mountain bikes are popular (where they're allowed), mostly because they make ascents so easy.

Comment by rubenflamshep 1 day ago

They’re great when the trails aren’t too technical. Not so much when they are (as I’ve learned from personal experience).

Comment by panopticon 1 day ago

I'm curious what your personal experience is.

My eMTBs are just as capable as my manual bikes (similar geometry, suspension, etc). In fact, they make smashing tech trails easier because there's more weight near the bottom bracket which adds a lot of stability.

The ride feel is totally different though. I tend to gap more sections on my manual bike whereas I end up plowing through stuff on the hefty eeb.

Comment by masterj 1 day ago

People have definitely used ebikes for bikepacking as well. Not sure about BMX.

Comment by robrain 1 day ago

Whistlerite here. My Strava stats for last year suggest half and half eMTB and road riding. Tiny bit of fully self-powered MTB work.

As a 56-year old, eBikes are what make mountain biking possible and fun for me.

Comment by DANmode 19 hours ago

E-bikes keep people out riding for more hours.

Cooldown capability, and no fear of outriding your energy.

Comment by fsckboy 1 day ago

>- You're not going to take an electric bike mountain biking

this sounds like a direct quote from Femke Van Den Driessche, who actually took an electric bike mountain biking: big mistake. Did it not perform well? No, actually it performed really well. The problem was, it got her banned from bike racing. Some of the evidence was her passing everybody else on the uphills; the other evidence was a motorized bike in her pit area.

Comment by throw310822 1 day ago

I think you're kind of missing the point by discussing which vehicle compares better to LLMs. The point is not the vehicle: it's the birth of the engine. Before engines, humans didn't have the means to produce those amounts of power at all, no matter how many people, horses or oxen they had at their disposal.

Comment by cootsnuck 1 day ago

I don't think they're missing the point. I think there's still fundamental disagreements about the functional utility of LLMs.

Comment by GuinansEyebrows 1 day ago

> You're not going to use an electric bike to do BMX

while there are companies that have made electric BMX bikes, i'd argue that if you're doing actual "BMX" on a motorized bike, it's just "MX" at that point :)

Comment by Terretta 1 day ago

> they can get you places really effectively!

But those who require them to get anywhere won't get very far without power.

Comment by pglevy 1 day ago

Moped for the mind has a nice ring to it

Comment by embedding-shape 1 day ago

I feel like both moped and electric bike misses the mark of the initial analogy, so does tractor too. Because they're not able to get good results without someone putting in the work ("energy") at some higher part of the process. It's not "at the push of a button/twist of the wrist" like with electric bikes or mopeds, but being able to know where/how to push actually gets you reliable results. Like a bicycle.

Comment by furyofantares 1 day ago

Yeah, but plenty of people are just getting bad results and keeping them, because they'd prefer bad results for free over good results with effort.

Comment by soperj 1 day ago

Most people I see on their electric bikes aren't even pedaling. They're electric motorcycles, and they're a plague to everyone using pedestrian trails. Some of them are going nearly highway speeds, it's ridiculous.

Comment by vict7 20 hours ago

There are 3 classes of e-bikes in the US, with class 3 topping out at 28mph—anything above that is illegal or in some weird legal grey area. You are thinking of e-motos which are an entirely different beast.

e-motos are a real problem; please don’t lump legitimate e-bikes in with those. It’s simply incorrect.

Comment by koolba 1 day ago

You probably can’t repair it yourself either.

Comment by hamdingers 1 day ago

- they still fall over if nobody's holding the bars

Comment by koolba 1 day ago

Slamming the brakes and going teeth first into the handlebars.

Comment by EGreg 1 day ago

okay -- how about motorcycles for the mind then? :)

most people don't know how to harness their full potential

Comment by ueeheh 1 day ago

Not convinced by any of the three analogies, tbh; they don't quite capture what's going on the way Steve Jobs' did.

And frankly, all of this is really missing the point. Instead of wasting time on analogies we should look at where this stuff works and then reason from there, towards a general way to make sense of it that is closer to reality.

Comment by whywhywhywhywy 1 day ago

[dead]

Comment by WarmWash 1 day ago

I think there is a legitimate fear that is born from what happened with Chess.

Humans could handily beat computers at chess for a long time.

Then a massive supercomputer beat the reigning champion, but didn't win the tournament.

Then that computer came back and won the tournament a year later.

A few years later humans are collaborating in-game with these master chess engines to multiply their strength, becoming the dominant force in the human/computer chess world.

A few years after that though, the computers start beating the human/computer hybrid opponents.

And not long after that, humans started making the computer perform worse if they had a hand in the match.

The next few years probably have the highest probability since the Cold War of being an extreme inflection point in the timeline of human history.

Comment by pmarreck 1 day ago

The irony with the chess example is that chess has never been more popular.

Perhaps we're about to experience yet another renaissance of computer languages.

Comment by suriya-ganesh 1 day ago

Chess being popular is mostly because FIDE had a massive push in the last decade to make it more audience-friendly: shorter time formats, more engaging commentary, etc.

While AI in chess is very cool in its own right, it is not the driver of the adoption.

Comment by strbean 1 day ago

Google Trends data for "Chess" worldwide show it trending down from 2004-2016, and then leveling off from 2016 until a massive spike in interest in October 2020, when Queen's Gambit was released. Since then it has had a massive upswing.

Comment by directevolve 13 hours ago

I know for me, it’s having a chess app on my smartphone. I play blitz chess like some people vape.

Comment by whatwhaaaaat 1 day ago

This seems like an oversimplification. Do many newcomers to chess even know about time formats or watch professional matches? From my anecdotal experience that is a hard no.

Chess programs at primary schools have exploded in the last 10 years, and at least in my circle, millennial parents seem more likely to push their children toward intellectual hobbies than previous generations (in my case, at least partly to keep my kids from becoming zombies walking around in pajamas like the current high schoolers I see).

Comment by WarmWash 1 day ago

I'd argue the renaissance is already off the ground; one man's vibe-coded-slop is another man's vision that he finally has the tools to realize.

Comment by YoukaiCountry 1 day ago

It's allowed me to tackle so many small projects that never would have seen the light of day, simply for lack of time.

Comment by the_af 1 day ago

I know chess is popular because I have a friend who's enthusiastic about it and plays online regularly.

But I'm out of the loop: in order to maintain popularity, are computers banned? And if so, how is this enforced, both at the serious and at the "troll cheating" level?

(I suppose for casual play, matchmaking takes care of this: if someone is playing at superhuman level due to cheating, you're never going to be matched with them, only with people who play at around your level. Right?)

Comment by dugidugout 1 day ago

> But I'm out of the loop: in order to maintain popularity, are computers banned?

Firstly, yes, you will be banned for consistently playing at an AI level on most platforms. Secondly, it's not very relevant to the concept of gaming. Sure, it can make it logistically hard to facilitate, but cheats and hacks have plagued gaming since antiquity, and AI can actually help here too. It's simply a cat and mouse game, and gamers covet the competitive spirit too much to give in.

Comment by the_af 1 day ago

Thanks for the reply.

I know pre-AI cheats have ruined some online games, so I'm not sure it's an encouraging thought...

Are you saying AI can help detect AI cheats in games? In real time for some games? Maybe! That'd be useful.

Comment by kzrdude 1 day ago

Note that "AI" was not and has not been necessary for strong computer chess engines. Though clearly, they have contributed to peak strength and some NN methods are used by the most popular engine, stockfish.

Comment by the_af 1 day ago

Oh, I'm conflating the modern era use of the term with the classic definition of AI to include classic chess engines done with tree-pruning, backtracking, and heuristics :)

Comment by dugidugout 1 day ago

> I know pre-AI cheats have ruined some online games, so I'm not sure it's an encouraging thought...

Will you be even more discouraged if I share that "table flipping" and "sleight of hand" have ruined many tabletop games? Are you pressed to find a competitive match in your game-of-choice currently? I can recommend online mahjong! Here is a game that emphasizes art in permutations just as chess does, but every act you make is an exercise in approximating probability, so the deterministic wizards are less invasive. In any case, I'm not so concerned for the well-being of competition.

> Are you saying AI can help detect AI cheats in games? In real time for some games? Maybe! That'd be useful.

I know a few years back Valve was testing an NN-backed anti-cheat system called VACnet, but I didn't follow whether it was useful. There is no reason to assume this won't be improved on!

Comment by the_af 1 day ago

I'm honestly not following your argument here. I'm also not convinced by comparisons between AI and things that aren't AI or even automated.

> Will you be even more discouraged if I share that "table flipping" and "sleight of hand" have ruined many tabletop games?

What does this have to do with AI or online games? You cannot do either of those in online games. You also cannot shove the other person aside, punch them in the face, etc. Let's focus strictly on automated cheating in online gaming, otherwise the conversation will shift to absurd tangents.

(As an aside, a quick perusal of r/boardgames or BGG will answer your question: yes, antisocial and cheating behavior HAVE ruined tabletop gaming for some people. But that's neither here nor there because that's not what we're discussing here.)

> Are you pressed to find a competitive match in your game-of-choice currently? I can recommend online mahjong!

What are you even trying to say here?

I'm not complaining, nor do I play games online (not because of AI; I just don't find online gaming appealing. The last multiplayer game I enjoyed was Left 4 Dead, with close friends, not cheating strangers). I just find the topic interesting, and I wonder how current AI trends can affect online games, that's all. I'm very skeptical of claims that they don't have a large impact, but I'm open to arguments to the contrary.

I think some of this boils down to whether one believes AI is just like past phenomena, or whether it's significantly different. It's probably too early to tell.

Comment by dugidugout 1 day ago

We are likely on different footing as I quite enjoy games of all form. Here is my attempt to formalize my argument:

Claim 1: Cheating is endemic to competition across all formats (physical or digital)

Claim 2: Despite this, games survive and thrive because people value the competitive spirit itself

Claim 3: The appreciation of play isn't destroyed by the existence of cheaters (even "cheaters" who simply surpass human reasoning)

The mahjong suggestion isn't a non-sequitur (while still an earnest suggestion), it was to exemplify my personal engagement with the spirit of competition and how it completely side-steps the issue you are wary is existential.

> I think some of this boils down to whether one believes AI is just like past phenomena, or whether it's significantly different. It's probably too early to tell.

I suppose I am not clear on your concern. Online gaming is demonstrably still growing and I think the chess example is a touching story of humanism prevailing. "AI" has been mucking with online gaming for decades now, can you qualify why this is so different now?

Comment by the_af 23 hours ago

I really appreciate your clarifications! I think I actually agree with you, and I lost track of my own argument in all of this.

I'm absolutely not contesting that online play is hugely popular.

I guess I'm trying to understand how widespread and serious the problem of cheaters using AI/computer cheats actually is [1]. Maybe the answer is "not worse than before"; I'm skeptical about this but I admit I have no data to back my skepticism.

[1] I know Counter-Strike back in the day was sort of ruined because of cheaters. I know one person who worked on a major anticheat (well-known at the time, not sure about today), which I think he tried to sell to Valve, but they didn't go with his solution. Also amusingly, he was remote-friends with a Russian hacker who wrote many of the cheats, and they had a friendly rivalry. This is just an anecdote; I'm not sure it has anything to do with the rest of my comment :D

Comment by dugidugout 23 hours ago

Ah! I confused your intent myself!

> I guess I'm trying to understand how widespread and serious the problem of cheaters using AI/computer cheats actually is.

It is undoubtedly more widespread.

> I know Counter Strike back in the day was sort of ruined because of cheaters.

There is truth in this, but it only affected more casual ladder play. Since early CS:GO (maybe before as well? I'm not sure) there has been FACEIT and other leagues which assert strict kernel-level anti-cheat and other heuristics on the players. I do agree this cat and mouse game is on the side of the cat, and the best competition is curated in tightly controlled (often gate-kept) spaces.

It is interesting that "better" cheating is often done through mimicking humans closer though, which does have an interesting silver lining. We still very much value a "smart" or "strategic" AI in match-based solitary genres, why not carry this over to FPS or the like. Little Timmy gets to train against an AI expressing "competitive player" without needing to break through the extreme barriers to actually play against someone of this caliber. Quite exciting when put this way.

If better cheats are being forced to actually play the game, I'm not sure the threat is very existential to gaming itself. This is much less abrasive than getting no-scoped in spawn at round start in a CS match.

Comment by retsibsi 1 day ago

The most serious tournaments are played in person, with measures in place to prevent (e.g.) a spectator with a chess engine on their phone communicating with a player. For online play, it's kind of like the situation for other online games; anti-cheat measures are very imperfect, but blatant cheaters tend to get caught and more subtle ones sometimes do. Big online tournaments can have exam-style proctoring, but outside of that it's pretty much impossible to prevent very light cheating -- e.g. consulting a computer for the standard moves in an opening is very hard to distinguish from just having memorized them. The sites can detect sloppy cheating, e.g. a player using the site's own analysis tools in a separate tab, but otherwise they have to rely on heuristics and probabilistic judgments.

Comment by ecshafer 1 day ago

Chess.com has some cool blog posts about it from a year or two back, when there was a cheating scandal with a big-name player. They compare moves to the optimal move in a statistical fashion to determine if people are cheating. Like, if you are a 1000 Elo player and all of a sudden you make a string of Stockfish moves in a game, then yeah, you are cheating. A 2400 Elo player making a bunch of Stockfish moves is less likely to be suspicious. But they also compare many variables in their models to try and suss out suspicious behavior.
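
A minimal sketch of that kind of check, in Python, with made-up numbers (the real detectors obviously use many more variables and calibrated data; `engine_match_rate` and the rating-scaled threshold here are hypothetical):

    # Hypothetical engine-match heuristic, not Chess.com's actual model.
    # Assumes you already have the played moves and the engine's top choice per position.
    def engine_match_rate(played_moves, engine_moves):
        """Fraction of moves that exactly match the engine's first choice."""
        matches = sum(1 for p, e in zip(played_moves, engine_moves) if p == e)
        return matches / max(len(played_moves), 1)

    def suspicion_score(match_rate, player_rating):
        """Crude flag: stronger players are expected to match the engine more often,
        so the expected rate scales with rating (constants are made up)."""
        expected = 0.35 + 0.0002 * player_rating  # ~0.55 at 1000, ~0.83 at 2400
        return match_rate - expected

    # A 1000-rated player matching the engine 90% of the time looks suspicious;
    # a 2400-rated player with the same rate much less so.
    print(suspicion_score(0.90, 1000))  # ~0.35 -> flag for review
    print(suspicion_score(0.90, 2400))  # ~0.07 -> probably fine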

Comment by nemomarx 1 day ago

Computers are banned in everything except specific tournaments for computers, yeah. If you're found out to have consulted one during a serious competition your wins are of course stripped - a lot of measures have to be taken to prevent someone from getting even a few moves from the model in the bathroom at those.

Not sure how smaller ones do it, but I assume watching to make sure no one has any devices on them during a game works well enough if there's not money at play?

Comment by wwweston 1 day ago

It’s a test.

There’s really no crisis at a certain level; it’s great to be able to drive a car to the trailhead and great to be able to hike up the mountain.

At another level, we have worked to make sure our culture barely has any conception of how to distribute necessities and rewards to people except in terms of market competition.

Oh and we barely think about externalities.

We’ll have to do better. Or we’ll have to demonize and scapegoat so some narrow set of winners can keep their privileges. Are there more people who prefer the latter, or are there enough of the former with leverage? We’ll find out.

Comment by kkukshtel 1 day ago

Great comment. The best part about it as well is that you could put this under basically anything ever submitted to hacker news and it would be relevant and cut to the absolute core of whatever is being discussed.

Comment by lumost 1 day ago

This isn't quite right to my knowledge. Most game AIs develop novel strategies which they use to beat opponents, but if the player knows they are up against a specific game AI and has access to its past games, these strategies can be countered. This was a major issue in the AlphaStar launch, where players were able to counter AlphaStar on later playthroughs.

Comment by DSMan195276 1 day ago

Comparing Chess AI to AlphaStar seems pretty messy, StarCraft is such a different type of game. With Chess it doesn't matter if you get an AI like Lc0 to follow lines it played previously because just knowing what it's going to play next doesn't really help you much at all, the hard part is still finding a win that it didn't find itself.

In comparison with StarCraft there's a rock-paper-scissors aspect with the units that makes it an inherent advantage to know what your opponent is doing or going to do. The same thing happens with human players, they hide their accounts to prevent others from discovering their prepared strategies.

Comment by andai 1 day ago

May we get just a little more detail for the uninitiated?

I'm going to assume you're not implying that Deep Blue did 9/11 ;)

Comment by pjc50 1 day ago

Sounds like we need FIDE rankings for software developers. It would be an improvement over repeated FizzBuzz testing, I suppose.

Comment by yarekt 12 hours ago

except chess is a solved problem given enough compute power. This caused people to split into two camps, those that knew it was inevitable, and those that were shocked

Comment by kylec 1 day ago

A tractor does exactly what you tell it to do though - you turn it on, steer it in a direction, and it goes. I like the horse metaphor for AI better: still useful, but sometimes unpredictable, and needs constant supervision.

Comment by throw310822 1 day ago

The horse metaphor would also do, but it's very tied to the current state of LLMs (which, by the way, are already far beyond what they were in 2024). It also doesn't capture that horses are what they are: they're not improving, and certainly not by a factor of 10, 100 or 1000, while there is almost no limit to the amount of power that an engine can be built to produce. Horses (and oxen) have been available for thousands of years, and agriculture still needed to employ a large percentage of the population. This changed completely with petrol engines.

Comment by rkagerer 23 hours ago

What metrics show 100X or 1000X improvement trends?

Comment by airstrike 1 day ago

So it's clearly a cyborg horse

Comment by roughly 1 day ago

It’s sort of interesting to look back at ~100 years of the automobile and, eg, the rise of new urbanism in this metaphor - there are undoubtedly benefits that have come from the automobile, and also the efforts to absolutely maximize where, how, and how often people use their automobile have led to a whole lot of unintended negative consequences.

Comment by cons0le 1 day ago

It's like a motorbike, except it doesn't take you where you steer. It takes you where it wants to take you.

If you tell it you want to go somewhere continents away, it will happily agree and drive you right into the ocean.

And this is before ads and other incentives make it worse.

Comment by mycall 1 day ago

It will take you where you want to go if you can clearly communicate your intent through refinement iterations.

Comment by koiueo 1 day ago

refinement iterations == dismount your bike, and walk it where you want

Comment by tikhonj 1 day ago

Fossil-fuel cars are a good analogy because, for all their raw power and capability, living in a polluted, car-dominated world sucks. The problem with modern AI has more to do with modernism than with AI.

Comment by GTP 1 day ago

Depends who you listen to. There are developers reporting significant gains from the use of AI, others saying that it doesn't really impact their work, and then there was some research saying that time savings due to the use of AI in developing software are only an illusion, because while developers were feeling more productive they were actually slower. I guess only time will tell who's right or if it is just a matter of using the tool in the right way.

Comment by jimkleiber 1 day ago

Probably depends how you're using it. I've been able to modify open-source software in languages I've never dreamed of learning, so for that, it's MUCH faster. Seems like a power tool, which, like a power saw, can do a lot very fast, which can bring construction or destruction.

Comment by faxmeyourcode 1 day ago

I'm sure the same could be said about tractors when they were coming on the scene.

There was probably initial excitement about not having to manually break the earth, then stories spread about farmers ruining entire crops with one tractor, some farms began touting 10x more efficiency by running multiple tractors at once, some farmers said the maintenance burden of a tractor wasn't worth it compared to feeding/watering their mule, etc.

Fast forward and now gigantic remote-controlled combines are dominating thousands of acres of land with efficiency greater than 100 men with 100 early tractors.

Comment by pmg101 1 day ago

Isn't this just a rhetorical trick where by referring to a particular technology of the past which exploded rapidly into dominance you make that path seem inevitable?

Probably some tech does achieve ubiquity and dominance, and some does not, and it's extremely difficult to say in advance which is which?

Comment by t_mahmood 1 day ago

And, the end result being devastation of forests, ecosystems, animal life, fast track climate change etc.

Comment by interestpiqued 1 day ago

Implying that efficient agriculture is destroying the planet is a wild take

Comment by ivanstojic 23 hours ago

When tractors were invented, there was a notable reduction in human employment in agriculture in the USA. From a research paper (https://faculty.econ.ucdavis.edu/faculty/alolmstead/Recent_P...):

> The lower-bound estimate represents 18 percent of the total reduction in man-hours in U.S. agriculture between 1944 and 1959; the upper-bound estimate, 27 percent

I'm not seeing that with LLMs.

Comment by throw310822 10 hours ago

According to Wikipedia, the Ivel Agricultural Motor was the first successful model of lightweight gasoline-powered tractor. The year was 1903. You're like someone being dismissive in 1906 because "nothing happened yet".

Comment by maypop 1 day ago

Having recently watched Train Dreams it feels like the transition of logging by hand to logging with industrial machinery.

Comment by MarceliusK 1 day ago

Even if the autonomy is limited, the step change in what a single person can attempt is unmistakable

Comment by andai 1 day ago

And then with a few additional lines of Python, it becomes a tractor that drives itself.

Comment by bitwize 1 day ago

AI is a Boston taxicab:

* You have to tell it which way to go every step of the way

* Odds are good it'll still drop you off at the wrong place

* You have to pay not only for being taken to the wrong place, but now also for the ride to get you where you wanted to go in the first place

Comment by cmrdporcupine 1 day ago

And like a tractor.. don't wear loose clothing near the spinning PTO (power take off) shaft.

Comment by agentultra 1 day ago

I prefer Doctorow's observation that they make us into reverse-centaurs [0]. We're not leading the LLM around like some faithful companion that doesn't always do what we want it to. We're the last-mile delivery driver of an algorithm running in a data-center that can't take responsibility for and ship the code to production on its own. We're the horse.

[0] https://locusmag.com/feature/commentary-cory-doctorow-revers...

Comment by oliwary 1 day ago

"Computers aren't the thing. They're the thing that gets you to the thing."

My favorite quote from the excellent show halt and catch fire. Maybe applicable to AI too?

Comment by latexr 1 day ago

Something like that used to be Apple’s driving force under Steve Jobs (definitely no longer under Tim Cook).

https://youtube.com/watch?v=oeqPrUmVz-o&t=1m54s

> You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.

Comment by automatic6131 1 day ago

> You can’t start with the technology and try to figure out where you’re going to try to sell it.

If those LLM addicts could read, they'd be very upset!

Comment by hkt 1 day ago

ChatGPT, tell me how I should feel about this!

Comment by direwolf20 1 day ago

That works when you are starting a new company from scratch to solve a problem. When you're established and your boffins discover a new thing, of course you find places to use it. It's the expression problem with business: when you add a new customer experience you intersect it with all existing technology, and when you add a new technology you intersect it with all existing customer experience.

Comment by torginus 1 day ago

Apple was a well established company when they came out with the iPhone - I don't think anyone but Jobs would've been able to pull off something like that.

That sort of comprehensive innovation (hardware, software, UX - Apple invented everything), while entering an unfamiliar and established market, I'd argue would've been impossible to do in a startup.

Comment by direwolf20 1 day ago

He had a customer experience in mind, so he found the intersection with every existing technology, and it was impressive. But there are also times when you add a new technology to your collection, so you find the intersection with every existing customer experience.

Comment by crote 1 day ago

Isn't that why the big tech companies switched to acquiring up-and-coming scaleups?

Comment by NitpickLawyer 1 day ago

> You can’t start with the technology and try to figure out where you’re going to try to sell it.

The Internet begs to differ. AI is more akin to the Internet than to any Mac product. We're now in the stage of having a bunch of solutions looking for problems to solve. And this stage of AI is also very very close to the consumer. What took dedicated teams of specialised ML engineers to trial ~5-10 years ago, can be achieved by domain experts / plain users, today.

Comment by monooso 1 day ago

> We're now in the stage of having a bunch of solutions looking for problems to solve.

We've always had that.

In olden times the companies who peddled such solutions were called "a business without a market", or simply "a failing business." These days they're "pre-revenue."

Maybe it will be different this time, maybe it will be exactly the same but a lot more expensive. Time will tell.

Comment by ViktorRay 1 hour ago

Are you sure the internet begs to differ?

The dot com bubble crashed. Many websites like pets.com ended up closing up.

It wouldn’t be until much later that those ideas succeeded…when companies were able to work from the customer experience backward to the technology.

Comment by latexr 1 day ago

I think you’re missing the point. Of course you can make such a product. As Steve says right after, he himself made that mistake a lot. The point is that to make something great (at several levels of great, not just “makes money”) you have to start with the need and build a solution, not have a solution and shoehorn it to a need.

The internet is an entirely different beast and does not at all support your point. What we have on the web is hacks on top of hacks. It was not built to do all the things we push it to do, and if you understand where to look, it shows.

Comment by djmips 1 day ago

I feel like if Jobs was still alive at the dawn of AI he would definitely be doing a lot more than Apple has been - probably would have been an AI leader.

Comment by jayd16 1 day ago

Jobs also needed to control the user experience. Apple wasn't really a web leader either.

They were able to bootstrap a mobile platform because they could convince themselves they had control of the user experience.

I'm not so sure where AI would land in the turn of the millennium Apple culture.

Comment by rightbyte 1 day ago

> I'm not so sure where AI would land in the turn of the millennium Apple culture.

Instead of doing almost-correct email summaries, Jobs would have an LLM choose the color of the send button, with an opaque relationship to the emotional mood of the mail you write.

Comment by ericmcer 1 day ago

I am really looking forward to that idea catching up with AI. Right now AI is the thing and the products it enables are secondary.

Remember when our job was to hide the ugly techniques we had to use from end users?

Comment by BoredomIsFun 1 day ago

> excellent show "halt and catch fire".

I found it very caricatured, too saturated with romance, which is atypical of the tech environment, much like "The Big Bang Theory".

Comment by oliwary 1 day ago

IMO it really came into its own after the first season. S1 felt like mad men but with computers, whereas in the latter seasons it focused more on the characters - quite beautiful and sad at times.

Comment by zorked 1 day ago

I vaguely remember that they tried to reboot it several times. So the same crew invented personal computers, BBSes and the Internet (or something like that), but every time they started from being underfunded unknowns. They really tried to make the series work.

Comment by miyoji 1 day ago

That's not really what happens at all. The characters on the show never make the critical discoveries or are responsible for the major breakthroughs, they're competing in markets that they ultimately cannot win in, because while the show is fictional, it also follows real computing history.

(MILD SPOILERS FOLLOW)

For example, in the first season, the characters we follow are not inventing the PC - that has been done already. They're one of many companies making an IBM clone, and they are modestly successful but not remarkably so. At the end of the season, one of the characters sees the Apple Macintosh and realizes that everything he had done was a waste of time (from his perspective, he wanted to change the history of computers, not just make a bundle of cash), he wasn't actually inventing the future, he just thought he was. They also don't really start from being underfunded unknowns in each season - the characters find themselves in new situations based on their past experiences in ways that feel reasonable to real life.

Comment by razakel 1 day ago

The BBC made a docudrama, Micro Men, with Alexander Armstrong as Clive Sinclair and Martin Freeman as Chris Curry.

Sophie Wilson cameos when they have a fight.

Comment by alt227 1 day ago

IMO the first series was excellent, but the 2nd took a massive downturn and I stopped watching after that.

Comment by TacticalCoder 1 day ago

It's still very good I'd say. It shows the relation between big oil and tech: it began in Texas (with companies like Texas Instruments) then shifted to SV (btw, the first 3D demo I saw on an SGI, running in real time, was a 3D model of... an oil rig). As it spans many years, it shows the Commodore 64, the BBSes, time-sharing, the PC clone wars, the discovery of the Internet, the nascent VC industry, etc.

Everything is period correct, and so are the clothes and cars: it's all very well done.

Is there a bit too much romance? Maybe. But it's still worth a watch.

Comment by deltoidmaximus 1 day ago

I never really could get into the Cameron/Joe romance, it felt like it was initially inserted to get sexy people doing sexy things onto the show and then had to be a star crossed lovers thing after character tweaks in season 2.

But when they changed the characters to be passionate, stubborn people who eventually started to cling to each other as they rode the whirlwind of change together, the show really found its footing for me. And they did so without throwing away the events of season 1, instead having the 'takers' go on redemption arcs.

My only real complaint after re-watching really was it needed maybe another half season. I think the show should have ended with the .com bust and I didn't like that Joe sort of ran away when it was clear he'd attached himself to the group as his family by the end of the show.

Comment by threethirtytwo 1 day ago

Why does HN love analogies? You can pick any animal or thing and it can fit in some way. Horse is a docile, safe analogy; it's also the most obvious analogy. Like, yes, the world gets it: LLMs have limitations, thanks for sharing, we know it's not as good as a programmer.

We should use analogies to point out the obvious thing everyone is avoiding:

Guys, 3 years ago AI wasn't even a horse. It was a rock. The key is that it transformed into a horse... what will it be in the next 10 years?

AI is a terminator. A couple years back someone turned off read only mode. That’s the better analogy.

Pick an analogy that follows the trendline of continual change into the unknown future rather than an obvious analogy that keeps your ego and programming skills safe.

Comment by niam 1 day ago

> Why does HN love analogies?

I suppose because they resemble the abstractions that make complex language possible. Another world full of aggressive posturing at tweet-length analogistic musings might have stifled some useful English parlance early.

But I reckon that we shouldn't have called it phishing because emails don't always smell.

Comment by Abstract_Typist 1 day ago

> I suppose because they resemble the abstractions that make complex language possible

As in models: All analogies are "wrong", some analogies are useful.

Comment by threethirtytwo 1 day ago

If you ever heard a sermon by a priest it’s loaded with analogies. Everyone loves analogies but analogies are not a form of reason and can often be used to mislead. A lot of these sermons are driven by reasoning via analogy.

My question is more why does HN love analogies when the above is true.

Comment by GuB-42 1 day ago

> Why does HN love analogies?

Because HN is like a child and analogies are like images

Comment by adonovan 1 day ago

I see what you did there.

Comment by danw1979 1 day ago

How about "AI is a chainsaw" ?

Pretty good for specific tasks.

Probably worth the input energy, when used in moderation.

Wear the right safety gear, but even this might not help with a kickback.

It's quite obvious to everyone nearby when you're using one.

Comment by samtp 1 day ago

An analogy compares AI to something that people feel the technology is similar to, but that it obviously is not.

Language is more or less a series of analogies. Comparing one thing to another is how humans are able to make sense of the world.

Comment by threethirtytwo 4 hours ago

[dead]

Comment by beepbooptheory 1 day ago

If an analogy is an "obvious" analogy, that makes it definitionally a good analogy, right? Either way: don't see why you gotta be so prescriptive about it one way or the other! You can just say you disagree.

Comment by threethirtytwo 16 hours ago

Well no there are plenty of bad analogies that are obvious.

A boy is like a girl.

A skinny human is like a human that is not skinny.

A car is like a wagon.

All obvious, all pointless.

Comment by baxtr 1 day ago

Maybe AI is a centaur??

Comment by Symmetry 1 day ago

After Deep Blue, Garry Kasparov proposed "Centaur Chess"[1], where teams of humans and computers would compete with each other. For about a decade a team like that was superior to either an unaided computer or an unaided human. These days pure AI teams tend to be much stronger.

[1] https://en.wikipedia.org/wiki/Advanced_chess

Comment by ffsm8 1 day ago

How would pure ai ever be "much stronger" in this scenario?

That doesn't make any sense to me whatsoever; it can only be "equally strong", making the approach non-viable because the human isn't providing any value... But for the human in the loop to be an actual demerit, you'd have to include the time taken for each move in the final score, which isn't normal in chess.

But I'm not knowledgeable on the topic, I'm just expressing my surprise and inability to contextualize this claim with my minor experience of the game

Comment by famouswaffles 1 day ago

You can be so far ahead of someone, their input (if you act on it) can only make things worse. That's it. If a human 'teams up' with chess AI today and does anything other than agree with its moves, it will just drag things down.

Comment by ffsm8 1 day ago

But how specifically in Chess?

These human-in-the-loop systems basically list possible moves with the likelihood of winning, no?

So how would the human be a demerit? It'd mean that the human for some reason decided to always use the option that the AI wouldn't take, but how would that make sense? Then the AI would list the "correct" move with a higher likelihood of winning.

The point of this strategy was to mitigate traps, but this would now have to become inverted: the opponent AI would have to be able to gaslight the human into thinking he's stopping his AI from falling into a trap. While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, thus reverting it back to baseline where the human is essentially a non-factor and not a demerit

Comment by famouswaffles 22 hours ago

>So how would the human be a demerit? It'd mean that the human for some reason decided to always use the option that the ai wouldn't take, but how would that make sense? Then the AI would list the "correct" move with a higher likelihood of winning.

The human will be a demerit any time it's not picking the choice the model would have made.

>While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, thus reverting it back to baseline where the human is essentially a non-factor and not a demerit

Sure, but it's not a Centaur game if the human is doing literally nothing every time. The only way for a human+AI team to not be outright worse than AI alone is for the human to do nothing at all, and that's not a team. You've just delayed the response of the computer for no good reason.

Comment by Symmetry 1 day ago

If you had a setup where the computer just did its thing and never waited for the human to provide input but the human still had an unused button they could press to get a chance to say something that might technically count as "centaur", but that isn't really what people mean by the term. It's the delay in waiting for human input that's the big disadvantage centaur setups have when the human isn't really providing any value these days.

Comment by ffsm8 1 day ago

But why would that be a disadvantage large enough to cause the player to lose, which would be necessary for

> pure AI teams tend to be much stronger.

Maybe each turn has a time limit, and a human would need "n moments" to make the final judgement call, whereas the AI could delay the final decision right to the last moment for its final analysis? So the pure AI player gets an additional 10-30s to simulate the game, essentially?

Comment by Symmetry 8 hours ago

It's not that each turn has a time limit but that each player has a time limit to spend across the entire game.

Comment by plomme 1 day ago

Why? If the human has final say on which play to make I can certainly see them thinking they are proposing a better strategy when they are actually hurting their chances.

Comment by pixl97 1 day ago

With the intelligence of models seeming spiky/lumpy, I suspect we'll see tasks and domains fall to AI one at a time. Some will happen quickly and others may take far longer than we expect.

Comment by jeanlucas 1 day ago

Baxtr, JAMES BAXTR? That's the exact comment I'd expect of someone named that.

Comment by egeozcan 1 day ago

We don't know it, up to the point we observe it.

Comment by Almondsetat 1 day ago

AI is a quantum mechanic

Comment by willrshansen 1 day ago

AI is a quantum horse

Comment by zombot 1 day ago

But since the act of observation influences the object observed, who knows what then becomes of it?

Comment by tetris11 1 day ago

It's also a big bloatey gas bag that needs constant de-farting to function

Comment by omgsharks 1 day ago

So essentially a cow?

Comment by krige 1 day ago

Oh horses fart a lot too.

Comment by direwolf20 1 day ago

Horses poop a lot. A lot.

Comment by Xunjin 1 day ago

I had to search about and it's indeed a lot:

"it is quite normal for a horse to poo (defecate) 8-12 times a day and produce anywhere from 13 to 23 kilograms of poo a day."

Source: https://www.ranvet.com.au/horse-poo/

Comment by Sharlin 1 day ago

That's what you get when your primary source of nutrition is very calorie-poor and largely indigestible.

Comment by rrr_oh_man 1 day ago

Yup. I’ve noticed that with my dog going to meat from kibble. Poop sizes reduced by 80%.

Comment by recursive 1 day ago

So Red Dead Redemption 2 was actually realistic? I always wondered why the horses were pooping all the time. It's because of AI.

Comment by Almondsetat 1 day ago

More than pooping a lot, they literally cannot hold it. Humans don't poop that much, but imagine if everyone just did it on the floor at a moment's notice regardless of where they are

Comment by dmitrijbelikov 1 day ago

Maybe from the client's point of view, although it's more likely a Tamagotchi. But from the server side, it’s more like a whole hippodrome where you need to support horse racing 24/7

Comment by MarceliusK 1 day ago

It's a nice reminder that most metaphors break unless you ask whose perspective they're describing

Comment by MarceliusK 1 day ago

Anyone claiming the horse understands the journey, or worse, wants to take you somewhere, is selling mythology

Comment by lief79 1 day ago

That's moving away from the actual horse analogy. If you can tell a guide dog to take you somewhere, you can tell a horse that too.

Granted, a journey to a new location would make this accurate.

Comment by jpalepu33 1 day ago

This metaphor really captures the current state well. As someone building products with LLMs, the "you have to tell it where to turn" part resonates deeply.

I've found that the key is treating AI like a junior developer who's really fast but needs extremely clear instructions. The same way you'd never tell a junior dev "just build the feature" - you need to:

1. Break down the task into atomic steps
2. Provide explicit examples of expected output
3. Set up validation/testing for every response
4. Have fallback strategies when it inevitably goes off-road

The real productivity gains come when you build proper scaffolding around the "horse" - prompt templates, output validators, retry logic, human-in-the-loop for edge cases. Without that infrastructure, you're just hoping the horse stays on the path.
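
For example, a minimal sketch of that scaffolding in Python (the JSON "summary" format, the retry count, and the `call_model` hook are all placeholders I'm assuming, not any particular vendor's API):

    # Hypothetical scaffolding sketch: validate the model's output, retry with
    # feedback, and fall back to a human when it keeps going off-road.
    import json
    from typing import Callable

    def validate(raw: str) -> dict:
        """Example validator: output must be JSON with a string 'summary' field."""
        data = json.loads(raw)  # raises a ValueError subclass on malformed JSON
        if not isinstance(data.get("summary"), str):
            raise ValueError("missing or non-string 'summary' field")
        return data

    def run_with_scaffolding(task: str, call_model: Callable[[str], str],
                             max_retries: int = 3) -> dict:
        prompt = f"Return JSON with a 'summary' field.\n\nTask: {task}"
        last_error = None
        for _ in range(max_retries):
            try:
                return validate(call_model(prompt))
            except ValueError as err:  # covers json.JSONDecodeError too
                last_error = err
                # Feed the failure back into the next attempt.
                prompt += f"\n\nPrevious attempt failed validation: {err}. Try again."
        # Human-in-the-loop fallback instead of shipping bad output.
        return {"needs_human_review": True, "task": task, "error": str(last_error)}

    # Usage: run_with_scaffolding("Summarize this ticket...", call_model=my_llm_client)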

The "it eats a lot" point is also critical and often overlooked when people calculate ROI. API costs can spiral quickly if you're not careful about prompt engineering and caching strategies.

Comment by chr15m 20 hours ago

This is exactly my experience too, thanks for sharing.

Comment by altern8 1 day ago

I see AI as an awesome technology, but also a bit like programming roulette.

It could go and do the task perfectly as instructed, or it could do something completely different that you haven't asked for and destroy everything in its path in the process.

I personally found that if you don't give it write access to anything that you can't easily restore and you review and commit code often it saves me a lot of time. It also makes the whole process more enjoyable, since it takes care of a lot of boilerplate for me.

It's definitely NOT intelligent, it's more like a glorified autocomplete but it CAN save a huge amount of time if used correctly.

Comment by MarceliusK 1 day ago

The safety practices you describe are basically the right mental model: assume it's fallible, keep writes reversible, review everything, commit often

Comment by altern8 1 day ago

Yes, it's been working well.

Comment by drobinhood 1 day ago

I'd like to think that given the opportunity most would sit in the saddle and make progress, but it's more likely that this is the horse: https://pbs.twimg.com/profile_images/857954008513695744/YL5x...

Comment by nomilk 1 day ago

The metaphor makes sense in comparing a human walking (SWE w/o AI) to a human riding on a horse (SWE w/ AI), except for:

> (The horse) is way slower and less reliable than a train but can go more places

What does the 'train' represent here?

A guess: perhaps off-the-shelf software? - rigid, but much faster if it goes where (/ does what) you want it to.

Comment by easeout 21 hours ago

I wrote this a long time ago, but I think the metaphor was about generative AI applications vs. traditional software applications, not about AI coding agents vs. writing code yourself.

Comment by spot5010 1 day ago

I had the same question.

Maybe the train is software that's built by SWEs (w/ or w/o AI help). Specifically built for going from A to B very fast. But not flexible, and takes a lot of effort to build and maintain.

Comment by skybrian 1 day ago

Nice! I added this to my AI metaphor collection.

Another one I like is "Hungry ghosts in jars."

https://bsky.app/profile/hikikomorphism.bsky.social/post/3lw...

Comment by jonplackett 1 day ago

All true, apart from "you can only lead it to water": it drinks ALL the water regardless of anything else.

Comment by eightys3v3n 1 day ago

Except when you want it to improve something in a particular way you already know about. Then god forbid it understands what you have asked and makes only that change :/

Sometimes I end up giving up trying to get the AI to build something following a particular architecture, or fixing a particular problem in its previous implementations.

Comment by jonplackett 19 hours ago

Totally. I just meant literally, AI servers need a lot of water to work.

Comment by eightys3v3n 19 hours ago

I did not catch that. Nice.

Comment by jonplackett 11 hours ago

Jokes aren’t as funny when you have to explain them…

Comment by jordemort 1 day ago

When did a horse ever give anyone psychosis?

Comment by Zardoz89 1 day ago

So it’s a car.

Comment by titaniumrain 10 hours ago

finally someone is talking sense!! not exaggerating the power of AI nor denying its usefulness. two thumbs up!

Comment by Eliezer 1 day ago

"2024 AI was a horse". People really like to imagine that the last 6 months constitute their true observation of the new eternal state of the future.

Comment by skapadia 1 day ago

Exactly. We're headed for a discontinuity, not an inflection point.

Comment by davidhunter 1 day ago

"No, I am not a horse."

Horse rumours denied.

Comment by egeozcan 1 day ago

That's something a horse pretending to be AI would say.

Comment by Dilettante_ 1 day ago

*sweats profusely* https://imgur.com/a/PszeiAu

Comment by doener 23 hours ago

> It is way slower and less reliable than a train but can go more places

I'm not able to follow. So AI is a horse in this metaphor, what is a train then? Still a train?

Comment by easeout 21 hours ago

Hi, that's my website and my wisecrack article. It was a while ago, but I think the metaphor was that a train is traditional deterministic-ish software, whose behavior is quite regular and predictable, compared to something generative which is much less predictable.

Comment by arthurfirst 1 day ago

Force multiplier and power projector.

Requires ammo (tokens) which can be expensive.

Requires good aim to hit the target.

Requires practice to get good aim.

Dangerous in the hands of the unskilled (like most instruments or tools).

Comment by blibble 1 day ago

Clever Hans is how I describe LLM agents to non-techies

https://en.wikipedia.org/wiki/Clever_Hans

Comment by ethersteeds 18 hours ago

"Trust arrives on foot and leaves on horseback" as the saying goes.

Comment by overtone1000 1 day ago

Step aside, Grok, Mr. Ed is the new stud in town

Comment by p0w3n3d 1 day ago

Your boss tells you that, since he bought you one, you must build the house twice as fast from now on.

Comment by djmips 1 day ago

I'm worried about self driving horses.

Comment by aanet 1 day ago

> We are skeptical of those that talk

^^ We are skeptical of AIs (and people) that claim they have consciousness ;-)

Comment by qwertytyyuu 1 day ago

There are so many nouns this applies to…

Comment by retrocog 1 day ago

Some day, I imagine one will be a senator

Comment by hackable_sand 1 day ago

We only have enough budgeted for one joke in 2026 and this is the one.

Comment by pixl97 1 day ago

AI will be a senator, but only after it's 75 years old.

Comment by apricot 1 day ago

And it produces an amazing amount of horseshit.

Comment by oytis 1 day ago

That's not from the last week, so obviously it's invalid.

Comment by ttouch 1 day ago

this is such a good take, it makes so much sense and it's a very good answer to ai related interview questions

Comment by tuyiown 1 day ago

I was expecting a spin about the faster horses

Comment by isodev 1 day ago

AI is a horse indeed - eats creative works by humans and transforms them into a steaming pile of… output tokens.

Comment by Tenemo 1 day ago

You seem to imply that its outputs aren't found by people to be useful, which isn't true.

Comment by MarceliusK 1 day ago

A badly ridden horse mostly produces manure. A well-ridden one gets you somewhere

Comment by isodev 1 day ago

Ah yes, must be a skill issue. Or I forgot to drink my Kool-Aid this morning.

Comment by overflyer 1 day ago

And this horse is amazing...

Comment by amelius 1 day ago

A horse that can do your homework.

Comment by einpoklum 1 day ago

Yeah, well... not really.

I used to tell my Intro-to-Programming-in-C course students, 20 years ago, that they could in principle skip one or two of the homework assignments, and that some students even manage to outsmart us and submit copied work as homework, but they would just not become able to program if they don't do their homework themselves. "If you want to be able to write software code you have to exercise writing code. It's just that simple and there's no getting around it."

Of course not every discipline is the same. But I can also tell you that if you want to know, say, history - you have to memorize accounts and aspects and highlights of historical periods and processes, and recount them yourself, and check that you got things right. If "the AI" does this for you, then maybe it knows history but you don't.

And that is the point of homework (if it's voluntary of course).

Comment by pixl97 1 day ago

To become a programmer you must write code.

To become upper management, just steal other people's work.

Comment by jurjo 1 day ago

So... are we having AI races?

Comment by recursive 1 day ago

Yes. There are leaderboards or evals or something.

Comment by metalman 1 day ago

AI is a horse, I get it! I have a horse, and I put money in the front of the horse, and get "ponyium" out the back.

Comment by iamkonstantin 1 day ago

I hear the cool companies offer free ponyium to their employees. Apparently, it works wonders for morale

Comment by nemosaltat 1 day ago

Through many attempts to make ingesting the ponyium more bearable, I've found that taking it with more intense flavors (wintergreen mint, hoppy hops, crushed soul, dark roast coffee, etc.) improves its comestibility. Can't let it pile up. We've always eaten ponyium, right, and we all like it, right, guys, folks?

Comment by d--b 1 day ago

And the salesman always says it’s great while it’s in fact lame.

Comment by smitty1e 1 day ago

"I've been through the desert

On AI with no name

It felt good to be out of the rAIn

In the desert, you can remember your name

'Cause there ain't no one for to give you no pain"

Comment by direwolf20 1 day ago

you forgot to write pAIn and it reminded me of this: https://youtube.com/watch?v=nt9mRDa0nrc

Comment by Dilettante_ 1 day ago

>2 views

I'm not saying that's your video but it sure looks like that's your video ;)

Comment by direwolf20 1 day ago

First search result. This one has 734: https://youtube.com/watch?v=45_HJkoDxpQ

Comment by croisillon 1 day ago

you'd rather not find it in your bed

Comment by 6stringmerc 1 day ago

Horses have some semblance of self-preservation and awareness of danger - see: jumping. LLMs do not have that at all, so the analogy fails.

My term, "Automation Improved," is far more relevant and descriptive of current state-of-the-art deployments. Same phone / text logic trees, next-level macro-type agent work, none of it is free range. Horses can survive on their own. AI is a task helper, no more.

Comment by pixl97 1 day ago

>LLMs do not have that at all so the analogy fails.

I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed, AI does have awareness of information hazards and attempts to rationalize around them.

https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...

>Horses can survive on their own.

Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any kind of selectively bred animal that requires human care for its continued survival, does this somehow make said animal "improved automation"?

Comment by taneq 1 day ago

I've always said that driving a car with modern driver assist features (lane centering / adaptive cruise / 'autopilot' style self-ish driving-ish) is like riding a horse. The early ones were like riding a short sighted, narcoleptic horse. Newer ones are improving but it's still like riding a horse, in that you give it high level instructions about where to go, rather than directly energising its muscles.

Comment by echelon 1 day ago

This microblog meta is fascinating. I've seen small microblog content like this popping up on the HN home page almost daily now.

I have to start doing this for "top level"ish commentary. I've frequently wanted to nucleate discussions without being too orthogonal to thread topics.

Comment by gyanchawdhary 1 day ago

this post is aging like milk

Comment by gyanchawdhary 1 day ago

Wao

Comment by zhoujing204 1 day ago

"It is not possible to do the work of science without using a language that is filled with metaphors. Virtually the entire body of modern science is an attempt to explain phenomena that cannot be experienced directly by human beings, by reference to forces and processes that we can experience directly...

But there is a price to be paid. Metaphors can become confused with the things they are meant to symbolize, so that we treat the metaphor as the reality. We forget that it is an analogy and take it literally." -- The Triple Helix: Gene, Organism, and Environment by Richard Lewontin.

Here is something I generated with Gemini:

1. Sentience and Agency

The Horse: A horse is a living, sentient being with a survival instinct, emotions (fear, trust), and a will of its own. When a horse refuses to cross a river, it is often due to self-preservation or fear.

The AI: AI is a mathematical function minimizing error. It has no biological drive, no concept of death, and no feelings. If an AI "hallucinates" or fails, it isn't "spooked"; it is simply executing a probabilistic calculation that resulted in a low-quality output. It has no agency or intent.

2. Scalability and Replication

The Horse: A horse is a distinct physical unit. If you have one horse, you can only do one horse's worth of work. You cannot click "copy" and suddenly have 10,000 horses.

The AI: Software is infinitely reproducible at near-zero marginal cost. A single AI model can be deployed to millions of users simultaneously. It can "gallop" in a million directions at once, something a biological entity can never do.

3. The Velocity of Evolution

The Horse: A horse today is biologically almost identical to a horse from 2,000 years ago. Their capabilities are capped by biology.

The AI: AI capabilities evolve at an exponential rate (Moore's Law and algorithmic efficiency). An AI model from three years ago is functionally obsolete compared to modern ones. A foal does not grow up to run 1,000 times faster than its parents, but a new AI model might be 1,000 times more efficient than its predecessor.

4. Contextual Understanding

The Horse: A horse understands its environment. It knows what a fence is, it knows what grass is, and it knows gravity exists.

The AI: Large Language Models (LLMs) do not truly "know" anything; they predict the next plausible token in a sequence (see the toy sketch after point 5). An AI can describe a fence perfectly, but it has no phenomenological understanding of what a fence is. It mimics understanding without possessing it.

5. Responsibility

The Horse: If a horse kicks a stranger, there is a distinct understanding that the animal has a mind of its own, though the owner is liable.

The AI: The question of liability with AI is far more complex. Is it the fault of the prompter (rider), the developer (breeder), or the training data (the lineage)? The "black box" nature of deep learning makes it difficult to know why the "horse" went off-road in a way that doesn't apply to animal psychology.
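On point 4, here is a minimal toy sketch of what "predict the next plausible token" means mechanically. Everything in it is made up for illustration: the tiny vocabulary and the fake_logits function are hypothetical stand-ins for a trained network, not how any real model scores tokens.

    import math

    VOCAB = ["the", "horse", "jumps", "over", "fence", "."]

    def fake_logits(context):
        # Hypothetical stand-in for a trained network: score each word by
        # crude character overlap with the context, just so the demo runs.
        return [float(len(set(context) & set(word))) for word in VOCAB]

    def softmax(logits):
        # Turn raw scores into a probability distribution over the vocabulary.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def next_token(context):
        # "Prediction" is nothing more than picking from that distribution;
        # greedy decoding simply takes the single most likely token.
        probs = softmax(fake_logits(context))
        return max(zip(VOCAB, probs), key=lambda pair: pair[1])[0]

    context = "the horse"
    for _ in range(3):
        context += " " + next_token(context)
    print(context)  # the toy happily repeats itself, which is the point

Real models sample from the distribution rather than always taking the top token, which is where temperature and the "plausible but not grounded" behavior come in, but the loop is the same: score, pick, append, repeat.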

Comment by dangoodmanUT 1 day ago

Damn that’s clever

Comment by brador 1 day ago

When an AI aims at an imagined end point we call it a hallucination; when a human does it, we call the delusion goal setting.

Either way it is an imagined end point that has no basis in known reality.

Comment by deafpolygon 1 day ago

Or your typical American teenager.

Comment by MORPHOICES 1 day ago

[dead]
