AI is a horse (2024)
Posted by zdw 4 days ago
Comments
Comment by throw310822 1 day ago
But the feeling I'm having with LLMs is that we've entered the age of fossil-fuel engines: something that moves on its own power and produces somewhat more than the user needs to put into it. OK, in the current version it might not go very far and needs to be pushed now and then, but the total energy output is greater than what users need to put in. We could call it a horse, except that this is artificial: it's a tractor. And in the last few months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.
Comment by simonw 1 day ago
- some bicycle purists consider electric bicycles to be "cheating"
- you get less exercise from an electric bicycle
- they can get you places really effectively!
- if you don't know how to ride a bicycle an electric bicycle is going to quickly lead you to an accident
Comment by aaronharnly 1 day ago
And some people see you whizzing by and think "oh cool", and others see you whizzing by and think "what a tool."
Comment by whattheheckheck 1 day ago
Comment by MattGrommes 1 day ago
Comment by esafak 1 day ago
Comment by MattGrommes 23 hours ago
Comment by bryanrasmussen 1 day ago
Comment by cogman10 1 day ago
Looks like the closest thing is the self balancing stuff that segway makes. Otherwise it's just the scooters.
Comment by boringg 1 day ago
Comment by superjan 1 day ago
Comment by DANmode 19 hours ago
Comment by mjparrott 1 day ago
Comment by simonw 1 day ago
Comment by qwertytyyuu 1 day ago
Comment by matthewkayin 1 day ago
- You're not going to take an electric bike mountain biking
- You're not going to use an electric bike to do BMX
- You're not going to use an electric bike to go bikepacking across the country
Comment by chrisweekly 1 day ago
Comment by rubenflamshep 1 day ago
Comment by panopticon 1 day ago
My eMTBs are just as capable as my manual bikes (similar geometry, suspension, etc). In fact, they make smashing tech trails easier because there's more weight near the bottom bracket which adds a lot of stability.
The ride feel is totally different though. I tend to gap more sections on my manual bike whereas I end up plowing through stuff on the hefty eeb.
Comment by masterj 1 day ago
Comment by robrain 1 day ago
As a 56-year old, eBikes are what make mountain biking possible and fun for me.
Comment by DANmode 19 hours ago
Cooldown capability, and no fear of outriding your energy.
Comment by fsckboy 1 day ago
This sounds like a direct quote from Femke Van den Driessche, who actually took an electric bike mountain biking: big mistake. Did it not perform well? No, actually it performed really well; the problem was that it got her banned from bike racing. Some of the evidence was her passing everybody else on the uphills; the other evidence was a motorized bike in her pit area.
Comment by throw310822 1 day ago
Comment by cootsnuck 1 day ago
Comment by GuinansEyebrows 1 day ago
while there are companies that have made electric BMX bikes, i'd argue that if you're doing actual "BMX" on a motorized bike, it's just "MX" at that point :)
Comment by Terretta 1 day ago
But those who require them to get anywhere won't get very far without power.
Comment by pglevy 1 day ago
Comment by embedding-shape 1 day ago
Comment by furyofantares 1 day ago
Comment by soperj 1 day ago
Comment by vict7 20 hours ago
e-motos are a real problem; please don’t lump legitimate e-bikes in with those. It’s simply incorrect.
Comment by koolba 1 day ago
Comment by hamdingers 1 day ago
Comment by koolba 1 day ago
Comment by EGreg 1 day ago
most people don't know how to harness their full potential
Comment by ueeheh 1 day ago
And frankly all of this is really missing the point - instead of wasting time on analogies we should look at where this stuff works and then reason from there - a general way to make sense of it that is closer to reality.
Comment by whywhywhywhywy 1 day ago
Comment by WarmWash 1 day ago
Humans could handily beat computers at chess for a long time.
Then a massive supercomputer beat the reigning champion in a game, but didn't win the match.
Then that computer came back and won the match a year later.
A few years later humans are collaborating in-game with these master chess engines to multiply their strength, becoming the dominant force in the human/computer chess world.
A few years after that though, the computers start beating the human/computer hybrid opponents.
And not long after that, humans started making the computer perform worse if they had a hand in the match.
The next few years have probably the highest probability since the cold war of being extreme inflection points in the timeline of human history.
Comment by pmarreck 1 day ago
Perhaps we're about to experience yet another renaissance of computer languages.
Comment by suriya-ganesh 1 day ago
While AI in chess is very cool in its own accord. It is not the driver for the adoption.
Comment by strbean 1 day ago
Comment by directevolve 13 hours ago
Comment by whatwhaaaaat 1 day ago
Chess programs at primary schools have exploded in the last 10 years, and at least in my circle, millennial parents seem more likely to push their children toward intellectual hobbies than previous generations did (in my case, at least, to try to prevent my kids from becoming zombies walking around in pajamas like the current high schoolers I see).
Comment by WarmWash 1 day ago
Comment by YoukaiCountry 1 day ago
Comment by the_af 1 day ago
But I'm out of the loop: in order to maintain popularity, are computers banned? And if so, how is this enforced, both at the serious and at the "troll cheating" level?
(I suppose for casual play, matchmaking takes care of this: if someone is playing at superhuman level due to cheating, you're never going to be matched with them, only with people who play at around your level. Right?)
Comment by dugidugout 1 day ago
Firstly, yes, you will be banned for playing at an AI level consecutively on most platforms. Secondly, it's not very relevant to the concept of gaming. Sure, it can make it logistically hard to facilitate, but this has plagued gaming through cheats/hacks since antiquity, and AI can actually help here too. It's simply a cat-and-mouse game, and gamers covet the competitive spirit too much to give in.
Comment by the_af 1 day ago
I know pre-AI cheats have ruined some online games, so I'm not sure it's an encouraging thought...
Are you saying AI can help detect AI cheats in games? In real time for some games? Maybe! That'd be useful.
Comment by kzrdude 1 day ago
Comment by the_af 1 day ago
Comment by dugidugout 1 day ago
Will you be even more discouraged if I share that "table flipping" and "sleight of hand" have ruined many tabletop games? Are you pressed to find a competitive match in your game-of-choice currently? I can recommend online mahjong! Here is a game that emphasizes art in permutations just as chess does, but every act you make is an exercise in approximating probability, so the deterministic wizards are less invasive! In any case, I'm not so concerned for the well-being of competition.
> Are you saying AI can help detect AI cheats in games? In real time for some games? Maybe! That'd be useful.
I know a few years back Valve was testing an NN-backed anti-cheat watch system called VACnet, but I didn't follow whether it was useful. There is no reason to assume this won't be improved on!
Comment by the_af 1 day ago
> Will you be even more discouraged if I share that "table flipping" and "sleight of hand" have ruined many tabletop games?
What does this have to do with AI or online games? You cannot do either of those in online games. You also cannot shove the other person aside, punch them in the face, etc. Let's focus strictly on automated cheating in online gaming, otherwise the conversation will shift to absurd tangents.
(As an aside, a quick perusal of r/boardgames or BGG will answer your question: yes, antisocial and cheating behavior HAVE ruined tabletop gaming for some people. But that's neither here nor there because that's not what we're discussing here.)
> Are you pressed to find a competitive match in your game-of-choice currently? I can recommend online mahjong!
What are you even trying to say here?
I'm not complaining, nor do I play games online (not because of AI; I just don't find online gaming appealing. The last multiplayer game I enjoyed was Left 4 Dead, with close friends, not cheating strangers). I just find the topic interesting, and I wonder how current AI trends can affect online games, that's all. I'm very skeptical of claims that they don't have a large impact, but I'm open to arguments to the contrary.
I think some of this boils down to whether one believes AI is just like past phenomena, or whether it's significantly different. It's probably too early to tell.
Comment by dugidugout 1 day ago
Claim 1: Cheating is endemic to competition across all formats (physical or digital)
Claim 2: Despite this, games survive and thrive because people value the competitive spirit itself
Claim 3: The appreciation of play isn't destroyed by the existence of cheaters (even "cheaters" who simply surpass human reasoning)
The mahjong suggestion isn't a non-sequitur (while still an earnest suggestion), it was to exemplify my personal engagement with the spirit of competition and how it completely side-steps the issue you are wary is existential.
> I think some of this boils down to whether one believes AI is just like past phenomena, or whether it's significantly different. It's probably too early to tell.
I suppose I am not clear on your concern. Online gaming is demonstrably still growing and I think the chess example is a touching story of humanism prevailing. "AI" has been mucking with online gaming for decades now, can you qualify why this is so different now?
Comment by the_af 23 hours ago
I'm absolutely not contesting that online play is hugely popular.
I guess I'm trying to understand how widespread and serious the problem of cheaters using AI/computer cheats actually is [1]. Maybe the answer is "not worse than before"; I'm skeptical about this but I admit I have no data to back my skepticism.
[1] I know Counter Strike back in the day was sort of ruined because of cheaters. I know one person who worked on a major anticheat (well-known at the time, not sure today), which I think he tried to sell to Valve but they didn't go with his solution. Also amusingly, he was remote-friends with a Russian hacker who wrote many of the cheats, and they had a friendly rivalry. This is just an anecdote, I'm not sure that it has anything to do with the rest of my comment :D
Comment by dugidugout 23 hours ago
> I guess I'm trying to understand how widespread and serious the problem of cheaters using AI/computer cheats actually is.
It is undoubtedly more widespread.
> I know Counter Strike back in the day was sort of ruined because of cheaters.
There is truth in this, but it only affected more casual ladder play. Since early CSGO (maybe before as well? I am not of source age) there has been FACEIT and other leagues which assert strict kernel-level anti-cheat and other heuristics on the players. I do agree this cat-and-mouse game is on the side of the cat and the best competition is curated in tightly controlled (often gate-kept) spaces.
It is interesting that "better" cheating is often done by mimicking humans more closely, though, which has an interesting silver lining. We still very much value a "smart" or "strategic" AI in match-based solitary genres, so why not carry this over to FPS or the like? Little Timmy gets to train against an AI playing like a "competitive player" without needing to break through the extreme barriers to actually play against someone of that caliber. Quite exciting when put this way.
If better cheats are being forced to actually play the game, I'm not sure the threat is very existential to gaming itself. This is much less abrasive than getting no-scoped in spawn at round start in a CS match.
Comment by retsibsi 1 day ago
Comment by ecshafer 1 day ago
Comment by nemomarx 1 day ago
Not sure how smaller ones do it, but I assume watching to make sure no one has any devices on them during a game works well enough if there's not money at play?
Comment by wwweston 1 day ago
There’s really no crisis at a certain level; it’s great to be able to drive a car to the trailhead and great to be able to hike up the mountain.
At another level, we have worked to make sure our culture barely has any conception of how to distribute necessities and rewards to people except in terms of market competition.
Oh and we barely think about externalities.
We’ll have to do better. Or we’ll have to demonize and scapegoat so some narrow set of winners can keep their privileges. Are there more people who prefer the latter, or are there enough of the former with leverage? We’ll find out.
Comment by kkukshtel 1 day ago
Comment by lumost 1 day ago
Comment by DSMan195276 1 day ago
By comparison, in StarCraft there's a rock-paper-scissors aspect with the units that makes it an inherent advantage to know what your opponent is doing or going to do. The same thing happens with human players: they hide their accounts to prevent others from discovering their prepared strategies.
Comment by andai 1 day ago
I'm going to assume you're not implying that Deep Blue did 9/11 ;)
Comment by pjc50 1 day ago
Comment by yarekt 12 hours ago
Comment by kylec 1 day ago
Comment by throw310822 1 day ago
Comment by rkagerer 23 hours ago
Comment by airstrike 1 day ago
Comment by roughly 1 day ago
Comment by cons0le 1 day ago
If you tell it you want to go somewhere continents away, it will happily agree and drive you right into the ocean.
And this is before ads and other incentives make it worse.
Comment by tikhonj 1 day ago
Comment by GTP 1 day ago
Comment by jimkleiber 1 day ago
Comment by faxmeyourcode 1 day ago
There was probably initial excitement about not having to manually break the earth, then stories spread about farmers ruining entire crops with one tractor, some farms began touting 10x more efficiency by running multiple tractors at once, some farmers said the maintenance burden of a tractor wasn't worth it compared to feeding/watering their mule, etc.
Fast forward, and now gigantic remote-controlled combines are dominating thousands of acres of land with efficiency greater than 100 men with 100 early tractors.
Comment by pmg101 1 day ago
Probably some tech does achieve ubiquity and dominance and some does not and it's extremely difficult to say in advance which is which?
Comment by t_mahmood 1 day ago
Comment by interestpiqued 1 day ago
Comment by ivanstojic 23 hours ago
> The lower-bound estimate represents 18 percent of the total reduction in man-hours in U.S. agriculture between 1944 and 1959; the upper-bound estimate, 27 percent
I'm not seeing that with LLMs.
Comment by throw310822 10 hours ago
Comment by maypop 1 day ago
Comment by MarceliusK 1 day ago
Comment by andai 1 day ago
Comment by bitwize 1 day ago
* You have to tell it which way to go every step of the way
* Odds are good it'll still drop you off at the wrong place
* You have to pay not only for being taken to the wrong place, but now also for the ride to get you where you wanted to go in the first place
Comment by cmrdporcupine 1 day ago
Comment by agentultra 1 day ago
[0] https://locusmag.com/feature/commentary-cory-doctorow-revers...
Comment by oliwary 1 day ago
My favorite quote from the excellent show Halt and Catch Fire. Maybe applicable to AI too?
Comment by latexr 1 day ago
https://youtube.com/watch?v=oeqPrUmVz-o&t=1m54s
> You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.
Comment by automatic6131 1 day ago
If those LLM addicts could read, they'd be very upset!
Comment by direwolf20 1 day ago
Comment by torginus 1 day ago
That sort of comprehensive innovation (hardware, software, UX - Apple invented everything), while entering an unfamiliar and established market, I'd argue would've been impossible to do in a startup.
Comment by direwolf20 1 day ago
Comment by crote 1 day ago
Comment by NitpickLawyer 1 day ago
The Internet begs to differ. AI is more akin to the Internet than to any Mac product. We're now in the stage of having a bunch of solutions looking for problems to solve. And this stage of AI is also very very close to the consumer. What took dedicated teams of specialised ML engineers to trial ~5-10 years ago, can be achieved by domain experts / plain users, today.
Comment by monooso 1 day ago
We've always had that.
In olden times the companies who peddled such solutions were called "a business without a market", or simply "a failing business." These days they're "pre-revenue."
Maybe it will be different this time, maybe it will be exactly the same but a lot more expensive. Time will tell.
Comment by ViktorRay 1 hour ago
The dot com bubble crashed. Many websites like pets.com ended up closing up.
It wouldn’t be until much later that those ideas succeeded…when companies were able to work from the customer experience backward to the technology.
Comment by latexr 1 day ago
The internet is an entirely different beast and does not at all support your point. What we have on the web is hacks on top of hacks. It was not built to do all the things we push it to do, and if you understand where to look, it shows.
Comment by djmips 1 day ago
Comment by jayd16 1 day ago
They were able to bootstrap a mobile platform because they could convince themselves they had control of the user experience.
I'm not so sure where AI would land in the turn of the millennium Apple culture.
Comment by rightbyte 1 day ago
Instead of doing almost correct email summaries Jobs would have a LLM choose color of the send button with an opaque relationship with the emotional mood of the mail you write.
Comment by ericmcer 1 day ago
Remember when our job was to hide the ugly techniques we had to use from end users?
Comment by BoredomIsFun 1 day ago
I found it very caricatured, too saturated with romance - which is atypical for a tech environment - much like "The Big Bang Theory".
Comment by oliwary 1 day ago
Comment by zorked 1 day ago
Comment by miyoji 1 day ago
(MILD SPOILERS FOLLOW)
For example, in the first season, the characters we follow are not inventing the PC - that has been done already. They're one of many companies making an IBM clone, and they are modestly successful but not remarkably so. At the end of the season, one of the characters sees the Apple Macintosh and realizes that everything he had done was a waste of time (from his perspective, he wanted to change the history of computers, not just make a bundle of cash), he wasn't actually inventing the future, he just thought he was. They also don't really start from being underfunded unknowns in each season - the characters find themselves in new situations based on their past experiences in ways that feel reasonable to real life.
Comment by razakel 1 day ago
Sophie Wilson cameos when they have a fight.
Comment by alt227 1 day ago
Comment by TacticalCoder 1 day ago
Everything is period-correct, the clothes and cars too: it's all very well done.
Is there a bit too much romance? Maybe. But it's still worth a watch.
Comment by deltoidmaximus 1 day ago
But when they changed the characters to be passionate, stubborn people who eventually started to cling to each other as they rode the whirlwind of change together, the show really found its footing for me. And they did so without throwing away the events of season 1, instead having the 'takers' go on redemption arcs.
My only real complaint after re-watching was that it needed maybe another half season. I think the show should have ended with the .com bust, and I didn't like that Joe sort of ran away when it was clear he'd attached himself to the group as his family by the end of the show.
Comment by threethirtytwo 1 day ago
We should use analogies to point out the obvious thing everyone is avoiding:
Guys, 3 years ago AI wasn't even a horse. It was a rock. The key is that it transformed into a horse… what will it be in the next 10 years?
AI is a terminator. A couple years back someone turned off read only mode. That’s the better analogy.
Pick an analogy that follows the trendline of continual change into the unknown future rather than an obvious analogy that keeps your ego and programming skills safe.
Comment by niam 1 day ago
I suppose because they resemble the abstractions that make complex language possible. Another world full of aggressive posturing at tweet-length analogistic musings might have stifled some useful English parlance early.
But I reckon that we shouldn't have called it phishing because emails don't always smell.
Comment by Abstract_Typist 1 day ago
As in models: All analogies are "wrong", some analogies are useful.
Comment by threethirtytwo 1 day ago
My question is more why does HN love analogies when the above is true.
Comment by GuB-42 1 day ago
Because HN is like a child and analogies are like images
Comment by adonovan 1 day ago
Comment by danw1979 1 day ago
Pretty good for specific tasks.
Probably worth the input energy, when used in moderation.
Wear the right safety gear, but even this might not help with a kickback.
It's quite obvious to everyone nearby when you're using one.
Comment by samtp 1 day ago
Language is more or less a series of analogies. Comparing one thing to another is how humans are able to make sense of the world.
Comment by threethirtytwo 4 hours ago
Comment by beepbooptheory 1 day ago
Comment by threethirtytwo 16 hours ago
A boy is like a girl.
A skinny human is like a human that is not skinny.
A car is like a wagon.
All obvious, all pointless.
Comment by georgestrakhov 1 day ago
Comment by baxtr 1 day ago
Comment by Symmetry 1 day ago
Comment by ffsm8 1 day ago
That doesn't make any sense to me whatsoever; it can only be "equally strong", making the approach non-viable because the human isn't providing any value... For the human in the loop to add an actual demerit, you'd have to include the time taken for each move in the final score, which isn't normal in chess.
But I'm not knowledgeable on the topic; I'm just expressing my surprise and my inability to contextualize this claim with my minor experience of the game.
Comment by famouswaffles 1 day ago
Comment by ffsm8 1 day ago
These human-in-the-loop systems basically list possible moves with likelihood of winning, no?
So how would the human be a demerit? It'd mean that the human for some reason decided to always pick the option the AI wouldn't take, but how would that make sense? Then the AI would list the "correct" move with a higher likelihood of winning.
The point of this strategy was to mitigate traps, but this would now have to become inverted: the opponent AI would have to be able to gaslight the human into thinking he's stopping his AI from falling into a trap. While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, thus reverting it back to baseline where the human is essentially a non-factor and not a demerit
Comment by famouswaffles 22 hours ago
The human will be a demerit any time they don't pick the choice the model would have made.
>While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, thus reverting it back to baseline where the human is essentially a non-factor and not a demerit
Sure, but it's not a Centaur game if the human is doing literally nothing every time. The only way for a human+AI team to not be outright worse than AI alone is for the human to do nothing at all, and that's not a team. You've just delayed the response of the computer for no good reason.
Comment by Symmetry 1 day ago
Comment by ffsm8 1 day ago
> pure AI teams tend to be much stronger.
Maybe each turn has a time limit, and a human would need "n moments" to make the final judgement call, whereas the AI could delay the final decision right to the last moment for its final analysis? So the pure AI player essentially gets an additional 10-30s to simulate the game?
Comment by Symmetry 8 hours ago
Comment by plomme 1 day ago
Comment by pixl97 1 day ago
Comment by jeanlucas 1 day ago
Comment by GlobalFrog 1 day ago
Comment by Sharlin 1 day ago
[1] https://locusmag.com/feature/commentary-cory-doctorow-revers...
Comment by egeozcan 1 day ago
Comment by Almondsetat 1 day ago
Comment by willrshansen 1 day ago
Comment by zombot 1 day ago
Comment by tetris11 1 day ago
Comment by omgsharks 1 day ago
Comment by krige 1 day ago
Comment by direwolf20 1 day ago
Comment by Xunjin 1 day ago
"it is quite normal for a horse to poo (defecate) 8-12 times a day and produce anywhere from 13 to 23 kilograms of poo a day."
Comment by pixl97 1 day ago
Comment by Sharlin 1 day ago
Comment by rrr_oh_man 1 day ago
Comment by recursive 1 day ago
Comment by Almondsetat 1 day ago
Comment by dmitrijbelikov 1 day ago
Comment by MarceliusK 1 day ago
Comment by MarceliusK 1 day ago
Comment by lief79 1 day ago
Granted, a journey to a new location would make this accurate.
Comment by jpalepu33 1 day ago
I've found that the key is treating AI like a junior developer who's really fast but needs extremely clear instructions. The same way you'd never tell a junior dev "just build the feature" - you need to:
1. Break down the task into atomic steps
2. Provide explicit examples of expected output
3. Set up validation/testing for every response
4. Have fallback strategies when it inevitably goes off-road
The real productivity gains come when you build proper scaffolding around the "horse" - prompt templates, output validators, retry logic, human-in-the-loop for edge cases. Without that infrastructure, you're just hoping the horse stays on the path.
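A minimal sketch of that kind of scaffolding, assuming a hypothetical call_model() function that stands in for whatever LLM client you actually use; the prompt template, validator, retry loop, and human fallback below are illustrative, not any particular library's API:

    import json
    from typing import Callable, Optional

    # Prompt template: the doubled braces keep the JSON example literal when .format() runs.
    PROMPT_TEMPLATE = (
        "Extract the fields 'name' and 'priority' from the ticket below.\n"
        "Respond with JSON only, e.g. {{\"name\": \"...\", \"priority\": 1}}.\n\n"
        "Ticket:\n{ticket}"
    )

    def validate(raw: str) -> Optional[dict]:
        """Output validator: accept only well-formed JSON with the expected fields."""
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            return None
        if isinstance(data, dict) and "name" in data and "priority" in data:
            return data
        return None

    def run_task(ticket: str, call_model: Callable[[str], str], max_retries: int = 3) -> dict:
        """Retry loop with a human-in-the-loop fallback for edge cases."""
        prompt = PROMPT_TEMPLATE.format(ticket=ticket)
        for _ in range(max_retries):
            raw = call_model(prompt)  # call_model is a stand-in for your actual LLM call
            parsed = validate(raw)
            if parsed is not None:
                return parsed
            # Feed the failure back so the next attempt can self-correct.
            prompt = PROMPT_TEMPLATE.format(ticket=ticket) + (
                f"\n\nYour previous reply was not valid JSON with the required fields:\n{raw}"
            )
        # Human-in-the-loop fallback: hand the case to a person instead of guessing.
        answer = input(f"Model failed {max_retries} times. Enter JSON for ticket {ticket!r}: ")
        return json.loads(answer)

Something like run_task(ticket_text, call_model=my_llm_client) then gives you a validated dict or a human-reviewed answer, rather than whatever the model happened to emit.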
The "it eats a lot" point is also critical and often overlooked when people calculate ROI. API costs can spiral quickly if you're not careful about prompt engineering and caching strategies.
Comment by chr15m 20 hours ago
Comment by altern8 1 day ago
It could go and do the task perfectly as instructed, or it could do something completely different that you haven't asked for and destroy everything in its path in the process.
I personally found that if you don't give it write access to anything that you can't easily restore and you review and commit code often it saves me a lot of time. It also makes the whole process more enjoyable, since it takes care of a lot of boilerplate for me.
It's definitely NOT intelligent, it's more like a glorified autocomplete but it CAN save a huge amount of time if used correctly.
Comment by MarceliusK 1 day ago
Comment by altern8 1 day ago
Comment by drobinhood 1 day ago
Comment by nomilk 1 day ago
> (The horse) is way slower and less reliable than a train but can go more places
What does the 'train' represent here?
A guess: perhaps off-the-shelf software? - rigid, but much faster if it goes where (/ does what) you want it to.
Comment by easeout 21 hours ago
Comment by spot5010 1 day ago
Maybe the train is software that's built by SWEs (w/ or w/o AI help). Specifically built for going from A to B very fast. But not flexible, and takes a lot of effort to build and maintain.
Comment by skybrian 1 day ago
Another one I like is "Hungry ghosts in jars."
https://bsky.app/profile/hikikomorphism.bsky.social/post/3lw...
Comment by jonplackett 1 day ago
Comment by eightys3v3n 1 day ago
Sometimes I end up giving up trying to get the AI to build something following a particular architecture, or fixing a particular problem in its previous implementations.
Comment by jonplackett 19 hours ago
Comment by eightys3v3n 19 hours ago
I did not catch that. Nice.
Comment by jonplackett 11 hours ago
Comment by jordemort 1 day ago
Comment by Zardoz89 1 day ago
Comment by titaniumrain 10 hours ago
Comment by Eliezer 1 day ago
Comment by skapadia 1 day ago
Comment by davidhunter 1 day ago
Horse rumours denied.
Comment by egeozcan 1 day ago
Comment by Dilettante_ 1 day ago
Comment by doener 23 hours ago
I'm not able to follow. So AI is a horse in this metaphor - what is a train then? Still a train?
Comment by easeout 21 hours ago
Comment by arthurfirst 1 day ago
Requires ammo (tokens) which can be expensive.
Requires good aim to hit the target.
Requires practice to get good aim.
Dangerous in the hands of the unskilled (like most instruments or tools).
Comment by blibble 1 day ago
Comment by polmuz 1 day ago
Comment by ethersteeds 18 hours ago
Comment by overtone1000 1 day ago
Comment by p0w3n3d 1 day ago
Comment by djmips 1 day ago
Comment by aanet 1 day ago
^^ We are skeptical of AIs (and people) that claim they have consciousness ;-)
Comment by qwertytyyuu 1 day ago
Comment by retrocog 1 day ago
Comment by hackable_sand 1 day ago
Comment by pixl97 1 day ago
Comment by apricot 1 day ago
Comment by oytis 1 day ago
Comment by ttouch 1 day ago
Comment by tuyiown 1 day ago
Comment by isodev 1 day ago
Comment by Tenemo 1 day ago
Comment by MarceliusK 1 day ago
Comment by isodev 1 day ago
Comment by overflyer 1 day ago
Comment by amelius 1 day ago
Comment by einpoklum 1 day ago
I used to tell my Intro-to-Programming-in-C course students, 20 years ago, that they could in principle skip one or two of the homework assignments, and that some students even manage to outsmart us and submit copied work as homework, but - they would just not become able to program if they don't do their homework themselves. "If you want to be able to write software code you have to exercise writing code. It's just that simple and there's no getting around it."
Of course not every discipline is the same. But I can also tell you that if you want to know, say, history - you have to memorize accounts and aspects and highlights of historical periods and processes, and recount them yourself, and check that you got things right. If "the AI" does this for you, then maybe it knows history but you don't.
And that is the point of homework (if it's voluntary of course).
Comment by pixl97 1 day ago
To become upper management, just steal other peoples work.
Comment by jurjo 1 day ago
Comment by recursive 1 day ago
Comment by metalman 1 day ago
Comment by iamkonstantin 1 day ago
Comment by nemosaltat 1 day ago
Comment by d--b 1 day ago
Comment by smitty1e 1 day ago
On AI with no name
It felt good to be out of the rAIn
In the desert, you can remember your name
'Cause there ain't no one for to give you no pain"
Comment by direwolf20 1 day ago
Comment by Dilettante_ 1 day ago
I'm not saying that's your video but it sure looks like that's your video ;)
Comment by direwolf20 1 day ago
Comment by croisillon 1 day ago
Comment by 6stringmerc 1 day ago
My term of “Automation Improved” is far more relevant and descriptive in current state of the art deployments. Same phone / text logic trees, next level macro-type agent work, none of it is free range. Horses can survive on their own. AI is a task helper, no more.
Comment by pixl97 1 day ago
I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed AI does have awareness of information hazards and attempts to rationalize around them.
https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...
>Horses can survive on their own.
Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any kind of selectively bred animal that requires human care for its continued survival, does this somehow make said animal "improved automation"?
Comment by taneq 1 day ago
Comment by echelon 1 day ago
I have to start doing this for "top level"ish commentary. I've frequently wanted to nucleate discussions without being too orthogonal to thread topics.
Comment by gyanchawdhary 1 day ago
Comment by dana321 1 day ago
Comment by gyanchawdhary 1 day ago
Comment by zhoujing204 1 day ago
But there is a price to be paid. Metaphors can become confused with the things they are meant to symbolize, so that we treat the metaphor as the reality. We forget that it is an analogy and take it literally." -- The Triple Helix: Gene, Organism, and Environment by Richard Lewontin.
Here is something I generated with Gemini:
1. Sentience and Agency
The Horse: A horse is a living, sentient being with a survival instinct, emotions (fear, trust), and a will of its own. When a horse refuses to cross a river, it is often due to self-preservation or fear.
The AI: AI is a mathematical function minimizing error. It has no biological drive, no concept of death, and no feelings. If an AI "hallucinates" or fails, it isn't "spooked"; it is simply executing a probabilistic calculation that resulted in a low-quality output. It has no agency or intent.
2. Scalability and Replication
The Horse: A horse is a distinct physical unit. If you have one horse, you can only do one horse’s worth of work. You cannot click "copy" and suddenly have 10,000 horses.
The AI: Software is infinitely reproducible at near-zero marginal cost. A single AI model can be deployed to millions of users simultaneously. It can "gallop" in a million directions at once, something a biological entity can never do.
3. The Velocity of Evolution
The Horse: A horse today is biologically almost identical to a horse from 2,000 years ago. Their capabilities are capped by biology.
The AI: AI capabilities evolve at an exponential rate (Moore's Law and algorithmic efficiency). An AI model from three years ago is functionally obsolete compared to modern ones. A foal does not grow up to run 1,000 times faster than its parents, but a new AI model might be 1,000 times more efficient than its predecessor.
4. Contextual Understanding
The Horse: A horse understands its environment. It knows what a fence is, it knows what grass is, and it knows gravity exists.
The AI: Large Language Models (LLMs) do not truly "know" anything; they predict the next plausible token in a sequence. An AI can describe a fence perfectly, but it has no phenomenological understanding of what a fence is. It mimics understanding without possessing it.
5. Responsibility
The Horse: If a horse kicks a stranger, there is a distinct understanding that the animal has a mind of its own, though the owner is liable.
The AI: The question of liability with AI is far more complex. Is it the fault of the prompter (rider), the developer (breeder), or the training data (the lineage)? The "black box" nature of deep learning makes it difficult to know why the "horse" went off-road in a way that doesn't apply to animal psychology.
Comment by dangoodmanUT 1 day ago
Comment by brador 1 day ago
Either way it is an imagined end point that has no bearing in known reality.
Comment by deafpolygon 1 day ago
Comment by MORPHOICES 1 day ago