The changing goalposts of AGI and timelines
Posted by skandium 2 days ago
Comments
Comment by djoldman 2 days ago
Or the long version: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
Or the short versions: "Skippetyboop," "plipnikop," and "zingybang."
Comment by chrysoprace 2 days ago
Comment by datsci_est_2015 1 day ago
Comment by parliament32 1 day ago
Comment by xyzal 1 day ago
Comment by logicchains 2 days ago
There are lots of meaningful definitions, the people saying we haven't reached AGI just don't use them. For most of the last half-century people would have agreed that machines that can pass the Turing test and win Math Olympiad gold are AGI.
Comment by sebastos 2 days ago
We’ll know AGI when we see it, and this ain’t it. This complaining about changing goalposts is so transparently sour grapes from people over-invested in hyping the current LLM paradigm.
Comment by ufmace 2 days ago
Says who? I had already found this study, published almost a year ago, saying that they do: https://arxiv.org/abs/2503.23674
There doesn't seem to be a super-rigorous definition of the Turing Test, but I don't think it's reasonable to require it to fool an expert whose life depends on the correct choice. It already seems to be decently able to fool a person of average intelligence who has a basic knowledge of LLMs.
I agree that we don't really have AGI yet, but I'd hope we can come up with a better definition of what it is than "we'll know it when we see it". I think it is a legitimate point that we've moved the goalposts some.
Comment by sebastos 1 day ago
Now, you could argue that this right here is the aforementioned moving of the goalposts. After all, we're deciding that the casual Turing test wasn't interesting precisely after having seen that LLMs could pass it.
However, in my view, the Turing test _always_ implied the "rigorous" Turing test, and it's only now that we're actually flirting with passing it that it had to be clarified what counts as a true Turing test. As I see it, the Turing test can still be salvaged as a criterion for general intelligence, but only if you allow it to be a no-holds-barred, life-depends-on-it test to exhaustion. This would involve allowing arbitrarily long questioning periods, for instance. I think this is more in the spirit of the original formulation, because the whole idea is to pit a machine against all of human intelligence, proving it has a similar arsenal of adaptability at its disposal. If it only has to passingly fool a human for brief periods, well... I'm afraid that just doesn't prove much. All sorts of stuff briefly fools humans. What requires intelligence is to consistently anticipate and adapt to all lines of questioning in a sustained manner until the human runs out of ideas for how to differentiate.
Comment by zeknife 2 days ago
Comment by edanm 2 days ago
Comment by claysmithr 2 days ago
Comment by runarberg 2 days ago
Artificial General Intelligence will exist when the grifters who profit from it claim it exists. The meaning of it will shift to benefit certain entrepreneurs. It will never actually be a useful term in science nor philosophy.
Comment by famouswaffles 1 day ago
Searle's thought experiment is stupid and debunked nothing. What neuron, cell, or atom of your brain understands English? That's right. You can't answer that any more than you can answer the subject of Searle's proposition, ergo the brain is a Chinese room. If you conclude that you understand English, then the Chinese room understands Chinese.
Comment by runarberg 1 day ago
> Searle’s response to the Systems Reply is simple: in principle, he could internalize the entire system, memorizing all the instructions and the database, and doing all the calculations in his head. He could then leave the room and wander outdoors, perhaps even conversing in Chinese. But he still would have no way to attach “any meaning to the formal symbols”. The man would now be the entire system, yet he still would not understand Chinese. For example, he would not know the meaning of the Chinese word for hamburger. He still cannot get semantics from syntax.
Comment by throw310822 1 day ago
Really, here the only issue is Searle's inability to grasp the concept that the process is what does the understanding, not the person (or machine, or neurons) that performs it.
Comment by throw4847285 2 days ago
Comment by zeknife 2 days ago
Does it? Where?
Comment by runarberg 2 days ago
Comment by Copyrightest 2 days ago
Comment by tokai 2 days ago
Thinking is cool and all, but not that extraordinary. Even plants do it.
Comment by debugnik 1 day ago
The test is to showcase that the question of whether machines can think is meaningless. The point of Turing's thesis is that passing his test just proves the machine has the capability to pass such a test, which is actually meaningful.
Comment by runarberg 2 days ago
I would argue that Schrödinger's cat has done more damage to the general understanding of quantum physics than it has done good. By contrast, I don't think the same about the Turing test. I think it has resulted in a net positive for the theory of mind, as long as people take Searle's rebuttal into account. Without it (as is sadly common in popular philosophy) the Turing test is simply wrong, and offers no good insight for either philosophy or science.
Comment by djoldman 2 days ago
Turing's imitation game is about making it difficult for a human to tell whether they are communicating with a computer or not. If a computer can trick the human, then... what? The computer is "thinking"?
I think most people would say that's an insufficient act to prove thinking. Even though no one has a rigorous definition of thinking either.
All this stuff goes around in circles and like most philosophy makes little progress.
Comment by famouswaffles 1 day ago
If you read his paper, Turing was trying to make a specific point. The Turing test itself is just one example of how that broader point might manifest.
If a thinking machine cannot be distinguished from a thinking human then it is thinking. That was his idea. In broader terms, any material distinction should be testable. If it is not, then it does not exist. What do you call 'fake gold' that looks, smells, etc., and reacts as 'real gold' in every testable way? That's right: real gold. And if you claimed otherwise, you would just look like a madman, but swap gold for thinking, intelligence, etc., and it seems a lot of madmen start to appear.
You don't need to 'prove' anything, and it's not important or relevant that anyone try to do so. You can't prove to me that you think, so why on earth should the machine do so? And why would you think it matters? Does the fact you can't prove to me that you think change the fact that it would be wise to model you as someone who does?
Comment by runarberg 1 day ago
Turing's point in his 1950 paper was actually to provide a substitute for the question of whether machines can think. Whether a machine can win the imitation game, he argued, is a better question to ask than "can a machine think". Searle showed that this criterion was in fact not a good one. But by 1980 philosophy of mind had advanced significantly, partially thanks to Turing's contributions, particularly via cognitive science; by the 1980s we also had neuropsychology, which kind of revolutionized this subfield of philosophy.
I think philosophy is actually rather important when formulating questions like these, and even more so when evaluating the quality of the answers. That said, I am not the biggest fan of the state of mainstream philosophy in the 1940s. I kind of have a beef with logical positivism, and honestly believe that even Turing's mediocre philosophy was on a much better track than what the biggest thinkers of the time were doing with their operational definitions.
Comment by Dylan16807 1 day ago
I see no reason to disqualify p-zombies from being AGI.
Comment by gosub100 1 day ago
Comment by Dylan16807 1 day ago
Comment by runarberg 1 day ago
Comment by namrog84 2 days ago
Comment by hunterpayne 1 day ago
Are you involved in politics somehow?
Comment by orbital-decay 2 days ago
Comment by b00ty4breakfast 2 days ago
Comment by wise_blood 1 day ago
"it is AGI when we can no longer come up with tasks easy for humans to solve but hard for computers"
Comment by DiscourseFan 1 day ago
Comment by politelemon 2 days ago
Comment by atomicnumber3 2 days ago
I was thinking something similar - this isn't AI, and none of "those people" care if it is or isn't. They don't care philosophically, or even pragmatically.
They're selling a product. That product is the IDEA of replacement of the majority of human labor with what's basically slave labor but with substantially disregardable ethical quandaries.
It's honestly a genius product. I'm not surprised it's selling so well. I'm vaguely surprised so many people who don't stand to benefit in any way shape or form, or who will even potentially starve if it works out, are so keen on it. But there are always bootlickers.
The most unfortunate part is that when the party ends, it's none of "those people" who will suffer even in the slightest. I'm not even optimistic their egos will suffer, as Musk seems to show they are utterly immune even as their companies collapse under them.
Comment by ryandrake 2 days ago
Comment by hunterpayne 1 day ago
Comment by palmotea 1 day ago
I've been getting more and more disappointed by software engineers (in aggregate) as the years go by. They don't even have to be bootlickers to do what you describe, I think a lot of it is pride in their "intelligence," which they express by believing and regurgitating the propaganda they've consumed. They prove their smarts by (among other things) having opinions that align with a zeitgeist of some group of powerful elites. They're too-easily manipulated.
And it's not just AI, it's also things like libertarianism. You've got workers identifying as capitalist tycoons, because they read a book and have some shares in a 401k.
Comment by wolvesechoes 14 hours ago
Sometimes I am dismayed by the lack of political and social consciousness in this group. A decade or two of digital boom coupled with handsome paychecks was enough to convince them that their position is different from what it really is.
Comment by shepherdjerred 2 days ago
> artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work
Comment by djoldman 2 days ago
"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way so that people can agree that something has AGI/ASI/etc. or not. If people can't agree then there's no point in talking about it.
EDIT: interestingly, the OpenAI definition of AGI specifically means that a subset of humans do not have AGI.
Comment by daxfohl 1 day ago
Comment by kgwgk 1 day ago
Comment by daxfohl 1 day ago
* Which is a much larger class of jobs than just engineering. And also excludes field engineers and other types of engineers that need a physical body for interacting with customers, etc.**
** Though even then, you could in theory divvy up the engineering part and the customer interaction part of the job, where the human that's doing the interaction part is primarily a proxy to the engineering agent that's in his earbud.
Comment by godelski 1 day ago
> there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day
I'm not sure I understand, and want to check. That really applies to a lot of jobs. That's all admins, accountants, programmers, probably includes lawyers, and probably includes all C-suite execs. It's harder for me to think of jobs that don't fit under this umbrella. I can think of some, of course[0], but this is a crazy amount of replacement with a wide set of skills.

But I also think that's a bad line to draw. Many of those jobs include a lot more than just typing into a computer. By your criteria we'd also be replacing most scientists, as so many are not doing physical experiments and are using the computer to read the work of peers and develop new models. But also, is the definition intended to exclude jobs where the computer just isn't the most convenient interface? We should be including more in that case, since we can then make the connection for that interface.
I think we need a much more refined definition. I don't like the broad strokes "is computer". Nor do I like skills based definitions. They're much easier to measure but easily hackable. I think we should try to define more by our actual understanding of what intelligence is. While we don't have a precise definition we have some pretty good answers already. I know people act like the lack of an exact definition is the same as having no definition but that's a crazy framing. If we had that requirement we wouldn't have any definitions as we know nothing with infinite precision. Even physics is just an approximation, but it's about the convergence to the truth [1]
[side note] the conventional way to do references or notes here is with brackets like I did, so you don't have to escape your asterisks. *Also* if you lead a paragraph with two spaces you get verbatim text
[0] farmer, construction worker, plumber, machinist, welder, teacher, doctors, etc
[1] https://hermiene.net/essays-trans/relativity_of_wrong.html
Comment by froggit 1 day ago
The reason AGI couldn't do these is the lack of a suitable interface to the physical world. It would take a trivial amount of effort for these to be designed and built by the AGI. Humans could be cut from the loop after an initial production run made up of just the subset of these physical interface devices needed to build more advanced ones.
Comment by daxfohl 1 day ago
Intelligence is one thing: being able to figure out how to get a task done, say. But understanding that no, I don't want you to exploit a backdoor or blackmail my teammate or launch a warhead even though that might expedite the task. Or why some task is more important than another. Or that solving the P=NP problem is more fulfilling than computing the trillionth digit of pi. That's perhaps a different thing entirely, completely disjoint from intelligence.
And by that definition, maybe we are in the neighborhood of AGI already. The things can already accomplish many challenging tasks more reliably than most humans. But the lack of wisdom, emotion, human alignment, or whatever we want to call it, leading it to accomplish the wrong tasks, accomplish them in the wrong way, or overlook obvious implicit requirements, may cause people to view it as unintelligent, even if intelligence is not the issue.
And that may be an unsolvable problem because AI simply isn't a living being, much less human. It doesn't have goals or ambitions or want a better future for its children. But it doesn't mean we can never achieve AGI.
Oh, and to your first question, yes it's a huge number of jobs, maybe half of jobs in developed nations. And why not? If you can get AI to do the work of the scientist for a tenth of the price, just give it a general role description and a budget and let it rip, with the expectation that it'll identify the most promising experiments, process the results, decide what could use further investigation, look for market trends, and grow the operation accordingly. That's all you need from a human scientist too. Plausibly the same for executives and other roles. Of course maybe sometimes the role needs a human face for press conferences or whatever, and I don't know how AI would be able to take that, but especially for jobs that are entirely internal-facing, it seems like there's no particular need for a human. Except that maybe, given the above, yes, you still need a human at the helm.
Comment by nomel 49 minutes ago
Simply put, an economy and society of managers. That's terrifying.
Comment by godelski 1 day ago
> we'd still need desk jobs to maintain the guardrails.
Agreed. I don't get why people think it is a good idea not to. I'd wager even the AGI would agree. The reason is quite simple: different perspectives help. Really, for mission critical things it makes sense to have multiple entities verifying one another. For nuclear launches there's a chain of responsibility, and famously those launching have two distinct keys that must be activated simultaneously. What people don't realize is that there's a chain of people who act independently during this process. It isn't just the president deciding to nuke a location and everyone else carrying out the commands mindlessly. But in far lower stakes settings... we have code review. Or a common saying in physical engineering as well as among many tradesmen: "measure twice, cut once".

It would be absolutely bonkers to just hand over absolute control of any system to a machine before substantial verification. These vetting processes are in place for a reason. They can be annoying because they slow things down, but they're there because they speed things up in the long run. Their existence tends to make things less sloppy, so they are less needed. But their existence also catches mistakes that, were they made, would slow down processes far more than all the QA annoyances and slowdowns could ever cause combined.
> And why not? If you can get AI to do the work of the scientist for a tenth of the price
And what are the assumptions being made here? Equal quality work? To my question, this is part of the implication. Price is an incredibly naive metric. We use it because we need something, but a grave mistake is to interpret some metric as more meaningful than it actually is. Goodhart's Law? Or just look at any bureaucracy. I think we need to be more refined than "price". It's going to be god awfully hard to even define what "equal quality" means. But it seems like you're recognizing that given your other statements.
Comment by daxfohl 1 day ago
AI could go the same way. It's a creation engine like nothing that's ever been seen before, but it can also become a destruction engine in ways that we could never understand or hope to counter, and left unchecked, the odds of that soar to near certainty. So the first job is to place dummy guardrails around it. That's where we are now. But soon that becomes too restrictive. What can we loosen? How do we know? How can we recover if we're wrong? We're not quite there yet, but we're not not there either.
Of course eventually somebody is going to trigger it and it's going to go ballistic. Our only hope is that it happens at exactly the right time where AGI can cause enough damage for people to notice, but not enough to be irrecoverable. Maybe we should rename this whole AGI thing to Project Icarus.
Comment by xmcqdpt2 1 day ago
Comment by nomel 2 days ago
If it can do things as good as or better than humans, then either the AI has a type of general intelligence or the human does not.
Defining capabilities based on outcome rather than implementation should be very familiar to an engineer, of any kind, because that's how every unsolved implementation must start.
Comment by godelski 1 day ago
> If it can do things as good as or better than humans, then either the AI has a type of general intelligence or the human does not.
I don't buy that.

By your definition every machine has a type of general intelligence. Not just a bog standard calculator, but also my broom. It doesn't matter if you slap "smart" on the side, I'm not going to call my washing machine "intelligent". Especially considering it's over a decade old.
I don't think these definitions make anything any clearer. If anything, they make them less. They equate humans to mindless automata. They create AGI by sly definition and let the proposer declare success arbitrarily.
Comment by nomel 1 day ago
> If it can do things as good as or better than humans, in general, then either the AI has a type of general intelligence ...
Comment by godelski 1 day ago
> By your definition every machine has a type of general intelligence. Not just a bog standard calculator, but also my broom.
I really don't know of any human that can outperform a standard calculator at calculations. I'm sure there are humans that can beat them in some cases, but clearly the calculator is a better generalized numeric calculation machine. A task that used to represent a significant amount of economic activity. I assumed this was rather common knowledge, given it featuring in multiple hit motion pictures[0].
Comment by nomel 1 day ago
> General: 1. affecting or concerning all or most people, places, or things; widespread.
To me, a general intelligence is one that is not just specific: it's affecting or concerning all or most areas of intelligence.
A calculator is a very specific, very inflexible, type of intelligence, so it's not, by definition, general. And, I'm not talking about the indirect applications of a calculator or a specific intelligence.
If you want to argue that we don't need the concept of AGI, because something like specific experts could be enough to drastically change the economy, then sure! That would be true. But I think that's a slightly different, complementary, conversation. Even then, say we have all these experts, then a system to intelligently dispatch problems to them... maybe that's a specific implementation of AGI that would work. I think how less dependent on human intelligence the economy becomes, and how more dependent on non-human decision makers it becomes, is a reasonable measure. This seems controversial, which I can't really understand. I'm in hardware engineering, so maybe I have a different perspective, but goals based on outcome are the only ones that actually matter, especially if nobody has done it before.
Comment by godelski 14 hours ago
> To me, a general intelligence is one that is not just specific
Which is why a calculator is a great example.

> A calculator is a very specific, very inflexible, type of intelligence, so it's not, by definition, general

Depends what kind of calculator and what you mean. I think they are far more flexible than you give them credit for.

> I'm in hardware engineering, so maybe I have a different perspective, but goals based on outcome are the only ones that actually matter, especially if nobody has done it before.

Well if we're going to talk about theoretical things, why dismiss the people who do theory? There are a lot of misunderstandings when it comes to the people that create the foundations everyone is standing on.
Comment by nomel 3 hours ago
This was an attempt to prevent this exact chain of response.
A calculator can only be used indirectly to solve a practical problem. A more general intelligence is required to know a calculator is needed, and how to break down the problem in a way that a calculator can be used effectively.
For example, you can't solve any real world problem with a calculator, beyond holding some papers down, or maybe keeping a door open. But, an engineer (or other general intelligence) with a calculator can solve real world problems with it. Tools vs tool users. The user is the general bit, not the specific tool that's useless on its own!
I think we've reached the limits of communication. Cheers!
Comment by tbrownaw 2 days ago
Comment by nomel 2 days ago
Are you asking for the current understanding of what specific parts of human intelligence are economically valuable?
Comment by computably 1 day ago
Comment by Jensson 1 day ago
Comment by nomel 1 day ago
Comment by tbrownaw 1 day ago
But beyond that, part of the nature of that change over time is that things tend to be valuable because they're scarce.
So the definition from upthread becomes roughly "highly autonomous systems that outperform humans at [useful things where the ability to do those things is scarce]", or alternatively "highly autonomous systems that outperform humans at [useful things that can't be automated]".
Which only makes sense if the reflexive (it's dependent on the thing being observed) part that I'm substituting in brackets is pinned to a specific as-of date. Because if it's floating / references the current date that that definition is being evaluated for, the definition is nonsensical.
Comment by godelski 1 day ago
But to extend your point, I think we really need to be explicit about the assumptions being made. Everyone loves to say intelligence is easy to define, but if it were, then we'd have a definition. And if "you" have figured it out and it's so simple, then "we" are all too dumb and it needs better explaining for our poor simple minds. Or there's a lot of detail that makes it hard to pin down, and that's why there's no definition of it yet. Kinda like how there's no formal definition of life.
Comment by nomel 1 day ago
Google search can't achieve anything practical, because it has no agency. It has no agency partly because it doesn't have the required intelligence to do anything on its own, other than display results for something else (something that does have agency) to use.
The applicable definitions, from the dictionary:
Knowledge: facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject.
Intelligence: the ability to acquire and apply knowledge and skills.
Agency: the ability to make decisions and act independently.
Comment by godelski 14 hours ago
> I think you're conflating "knowledge" with "intelligence"
A calculator is not doing calculations through knowledge. It actually performs computation. It is not doing a database lookup.

> And, "agency" seems to be a missing concept,
>> I think we really need to be explicit about the assumptions being made
Agency was never mentioned. Thanks for being more explicit ;)
Comment by irishcoffee 2 days ago
Comment by nomel 2 days ago
Comment by esafak 1 day ago
Comment by godelski 1 day ago
> Do you know how the human brain works?
To what degree of accuracy? Depending on how you answer I might answer yes but I might also answer no.
Comment by irishcoffee 1 day ago
https://en.wikipedia.org/wiki/Large_language_model
> Do you know how the human brain works? That science is still in its infancy but that's not stopped us.
Stopped us from doing what exactly?
Comment by nomel 1 day ago
We didn't need an understanding of how it worked, or even a word for intelligence, let alone a definition of it, to get good practical results.
Comment by irishcoffee 5 hours ago
Comment by pinkmuffinere 2 days ago
The definition reminds me of the common quip about robotics, "it's robotics when it doesn't work, once it works it's a machine".
Comment by maplethorpe 2 days ago
If the definition has shifted once again to mean "a computer program that does a task pretty well for us", then what's the new term we're using to define human-level artificial intelligence?
Comment by catlifeonmars 2 days ago
> economically valuable work

Is doing a ton of heavy lifting. What is considered economically valuable work is going to change from decade to decade, if not from year to year. What's considered economically valuable is also going to be way different across individuals and nations within the exact same time frames too.
Comment by chrsw 2 days ago
Comment by marcus_holmes 1 day ago
And I get that there are workarounds; effectively a cron job every second prompting "do the next thing".
But in my personal definition of "highly autonomous" it would not need prompting at all. It would be thinking all the time, independently of requests.
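To make the workaround concrete, here's a minimal sketch of what I mean, where llm_step is a hypothetical stand-in for a real stateless model call, and the loop is bounded only so the sketch terminates:

    import time

    context = []  # state persisted between "kicks"

    def llm_step(context):
        # hypothetical stand-in for a real, stateless model call
        return context + ["did the next thing"]

    # the "cron job every second" outer loop
    while len(context) < 3:
        context = llm_step(context)
        time.sleep(1)
    print(context)

The model itself does nothing between kicks; all continuity lives in the loop and the context it carries.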
Comment by dragonwriter 1 day ago
Comment by marcus_holmes 1 day ago
But the actual bit that's doing the thinking is restarting from scratch every time. It loads the context, does the next thing, maybe updates the context, shuts down. One second later, the same thing. This is not "highly autonomous" Artificial Intelligence. Just IMHO. Other opinions are also valid.
Comment by dragonwriter 23 hours ago
A sibling comment questions the relevance of this by asking what would change if that were true of some low-level component of the human thinking engine, which is a good point. But also: what "actual bit" does this? Both commercial backends and even desktop inference software usually do prefix caching in memory, so arguably that doesn't model even the core piece of software running low-level inference, except when the past context is changed (e.g., when compacting to manage context, or when using one software instance and swapping logical histories because you are running multiple agents concurrently, but not in parallel, on one software engine). And it obviously doesn't match the system at any higher level.
> This is not "highly autonomous" Artifical Intelligence.
Even if that were an accurate model of a component of the system, that a component considered alone is not highly autonomous artificial intelligence is not an argument that the aggregate system is not highly autonomous artificial intelligence.
Comment by dllthomas 1 day ago
Comment by marcus_holmes 18 hours ago
I don't know enough about neuroscience to really answer your question in depth.
My opinion, uninformed as it is, is basically around the intuitive reasoning that something cannot be "highly autonomous" if it has to be kicked every second ;) Autonomous is defined as not needing to be controlled externally. And coupling that part with something as simple as a cron job doesn't solve that in any meaningful way or make it "autonomous".
A batch file coupled with a cron job that triggers it once a day is not an "autonomous system" to my mind. It's a scheduled system, and there's a significant difference between those things.
Comment by dragonwriter 8 hours ago
I guess that's fine, autonomy has lots of definitions (some in overlapping domains) and I guess one more doesn't hurt, but I'm pretty sure the intended use in the discussion here is the standard mechanical one where it is a behavioral trait defined by the capacity of a system to decide on action without the involvement of another system or operator, and therefore it is something that could be achieved by a system composed of a processing and action component called repeatedly by a looping component.
Comment by dllthomas 5 hours ago
Comment by xmcqdpt2 1 day ago
what does "economically" mean here? would it cover teaching? child care? healthcare? etc.
Comment by ozgung 2 days ago
If we define AGI as an AI not doing a preset task but can be used for general purpose, then we already have that. If we define it as human level intelligence at _every_ task, then some humans fail to be an AGI. If we define AGI as a magic algorithm that does every task autonomously and successfully then that thing may not exist at all, even inside our brains.
When the AGI term was first coined they probably meant something like HAL 9000. We have that now (and HAL gaining self-awareness or refusing commands are just for dramatic effect and not necessary). Goalposts are not stable in this game.
Comment by VorpalWay 2 days ago
These days it is neural networks and transformer models for language in particular that people mean when they say unqualified AI.
It is very hard to have a meaningful discussion when different parties mean different things with the same words.
Comment by dataflow 2 days ago
So I'm very curious if any AI we have today would pass the Turing test under all circumstances, for example if: the examiner was allowed to continue as long as they wanted (even days/weeks), the examiner could be anybody (not just random selections of humans), observations other than the text itself were fair game (say, typing/response speed, exhaustion, time of day, the examiner themselves taking a break and asking to continue later), both subjects were allowed and expected to search on the internet, etc.
Comment by hattmall 1 day ago
Are you actually curious about this? Does any model at all come even remotely close to this?
Comment by Wowfunhappy 2 days ago
Comment by catlifeonmars 2 days ago
Comment by VorpalWay 1 day ago
(By the way, if something like a regression model or decision tree can solve your problem, you should prefer those. Much cheaper to train and to run inference with those than with neural networks. Much cheaper than deep neural networks especially.)
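For illustration, a minimal scikit-learn sketch of the cheap alternative (assuming sklearn is installed; the dataset and tree depth are arbitrary choices):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # a shallow tree: cheap to train, cheap to run, easy to inspect
    clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print(clf.score(X_test, y_test))

A model like this trains in milliseconds on a laptop, which is the point: no GPU, no inference service, and you can print the tree and read off why it made a decision.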
Comment by Wowfunhappy 1 day ago
Comment by krackers 1 day ago
Comment by VorpalWay 1 day ago
This is what I learnt at university some decades ago, and it matches what wikipedia says today: https://en.wikipedia.org/wiki/Decision_tree_learning
Comment by Wowfunhappy 1 day ago
Comment by VorpalWay 1 day ago
In which case you could argue that neither DTs nor NNs are ML. Only the training itself is ML/AI. An interesting perspective, but this will probably just confuse the discussion even further.
Comment by marcus_holmes 1 day ago
Comment by dalmo3 1 day ago
All humans fail to be AGI, by definition.
Comment by random3 2 days ago
Comment by erichocean 1 day ago
The same problem exists defining human intelligence, it's a problem with "intelligence" in general, artificial or not.
Comment by TacticalCoder 2 days ago
Comment by djoldman 2 days ago
Yes.
Comment by choult 2 days ago
This will never happen, LLMs are already being used very unsafely, and if this HN headline stays where it is OpenAI will quietly remove their charter from their website.
Comment by d0able 1 day ago
Comment by sulam 2 days ago
Comment by onlyrealcuzzo 2 days ago
Comment by xyzal 1 day ago
Comment by SpicyLemonZest 1 day ago
The important context I think people may miss is, this does not require AI to be 10x or 5x or even 1x as good as a human programmer. Claude is worse than me in meaningful ways at the kind of code I need to write, but it’s still doing almost all my coding because after 4.6 it’s smart enough to understand when I explain what program it should have written.
Comment by ACCount37 2 days ago
Comment by famouswaffles 2 days ago
'If you actually know what models are doing under the hood to produce output that...'
Anyone who tells you they know 'what models are doing under the hood' simply has no idea what they're talking about, and it's amazing how common this is.
Comment by sulam 2 days ago
None of that changes the concept that a model is just fundamentally very good at predicting what the next element in the stream should be, modulo injected randomness in the form of a temperature. Why does that actually end up looking like intelligence? Well, because we see the model’s ability to be plausibly correct over a wide range of topics and we get excited.
Btw, don’t take this reductionist approach as being synonymous with thinking these models aren’t incredibly useful and transformative for multiple industries. They’re a very big deal. But OpenAI shouldn’t give up because Opus 4.whatever is doing better on a bunch of benchmarks that are either saturated or in the training data, or have been RLHF’d to hell and back. This is not AGI.
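To make the "predict the next element, modulo injected randomness" mechanism concrete, here's a toy sampler (the vocabulary and logits are made up; a real model produces logits over tens of thousands of tokens):

    import numpy as np

    vocab = ["cat", "dog", "the", "sat"]
    logits = np.array([2.0, 1.0, 0.5, 0.1])  # made-up unnormalized scores

    def sample(logits, temperature=1.0):
        # low temperature sharpens the distribution (more deterministic),
        # high temperature flattens it (more random)
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    print(vocab[sample(logits, temperature=0.7)])

Everything the model outputs passes through a step shaped like this; the open question is what the machinery producing the logits amounts to.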
Comment by stavros 2 days ago
Why does predicting the next token mean that they aren't AGI? Please clarify the exact logical steps there, because I make a similar argument that human brains are merely electrical signals propagating, and not real intelligence, but I never really seem to convince people.
Comment by conception 1 day ago
Comment by ACCount37 1 day ago
You can "predict next token" using a human, an LLM, or a Markov chain.
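For comparison, the Markov chain version of "predict next token" fits in a few lines (toy corpus; a real chain would be trained on far more text):

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat ate the rat".split()

    # count which word follows which (a bigram model)
    followers = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev].append(nxt)

    # generate by repeatedly sampling a plausible next token
    token, out = "the", ["the"]
    for _ in range(8):
        token = random.choice(followers.get(token, corpus))
        out.append(token)
    print(" ".join(out))

Same interface, wildly different internals, which is exactly why "it predicts the next token" by itself settles nothing.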
Comment by sulam 1 day ago
At the end of the day next token prediction is a sleight of hand. It produces amazingly powerful effects, I agree. You can turn this one magic trick into the illusion of reasoning, but what it's doing is more of a "one thing after another" style of story-telling that is fine for a lot of things, but doesn't get to the heart of what intelligence means. If you want to call them intelligent because they can do this stuff, fine, but it's an alien kind of intelligence that is incredibly limited. A dog or a cat actually demonstrates more ability to learn, to contextualize, and to make meaning.
Comment by pu_pe 1 day ago
We don't know how human brains produce intelligence. At a fundamental level, they might also be doing next token prediction or something similarly "dumb". Just because we know the basic mechanism of how LLMs work doesn't mean we can explain how they work and what they do, in a similar way that we might know everything we need to know about neurons and we still cannot fully grasp sentience.
Comment by sulam 1 day ago
A simpler example — without tool use, the standard BPE tokenization method made it impossible for state of the art LLMs to tell you how many ‘r’s are in strawberry. This is because they are thinking in tokens, not letters and not words. Can you think of anything in our intelligence where the way we encode experience makes it impossible for us to reason about it? The closest thing I can come to is how some cultures/languages have different ways of describing color and as a result cannot distinguish between colors that we think are quite distinct. And yet I can explain that, think about it, etc. We can reason abstractly and we don’t have to resort to a literal deus ex machina to do so.
Not being able to explain our brain to you doesn’t mean I can’t notice things that LLMs can’t do, and that we can, and draw some conclusions.
Comment by pu_pe 1 day ago
The r in strawberry is more of a fundamental limitation of our tokenization procedures, not the transformer architecture. We could easily train a LLM with byte-size tokens that would nail those problems. It can also be easily fixed with harnessing (ie for this class of problems, write a script rather than solve it yourself). I mean, we do this all the time ourselves, even mathematicians and physicists will run to a calculator for all kinds of problems they could in principle solve in their heads.
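The harnessing fix really is that small: rather than answering from tokens it cannot see through, the model emits and runs a script like

    # letters are visible to code even when the tokenizer hides them
    print("strawberry".count("r"))  # prints 3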
Comment by Otterly99 1 day ago
Comment by ACCount37 1 day ago
Modern LLMs often start with "imitation learning" pre-training on web-scale data and continue with RLVR for specific verifiable tasks like coding. You can pre-train a chess engine transformer on human or engine chess games, in "imitation learning" mode, and then add RL against other engines or as self-play, to anneal away the deficiencies and improve performance.
This was used for a few different game engines in practice. Probably not worth it for chess unless you explicitly want humanlike moves, but games with wider state and things like incomplete information benefit from the early "imitation learning" regime getting them into the envelope fast.
Comment by pu_pe 1 day ago
Comment by stavros 1 day ago
Also, the phone book example is off the mark, because if I take a human who's never seen a phone and ask them to memorise the phone book, they would (or not), while not knowing what a phone number was for. Did you expect that a human would just come up on knowledge about phones entirely on their own, from nothing?
Comment by bob1029 1 day ago
Comment by famouswaffles 1 day ago
Model training can be summed up as 'This is what you have to do (objective), figure it out. Well, here's a little skeleton that might help you out (architecture)'.
We spend millions of dollars and months training these frontier models precisely because the training process figures out numerous things we don't know or understand. Every day, Large Language Models, in service of their reply, in service of 'predicting the next token', perform sophisticated internal procedures far more complex than anything any human has come up with or possesses knowledge of. So for someone to say that they 'know how the models work under the hood', well it's all very silly.
Comment by heavyset_go 2 days ago
It's sad that you have to add this postscript lest you be accused of being ignorant or anti-AI because you acknowledge that LLMs are not AGI.
Comment by torginus 2 days ago
I would still argue that does not prevent you from having intelligence, so that's why this argument is silly.
Comment by sulam 2 days ago
Comment by ACCount37 2 days ago
All world models are lossy as fuck, by the way. I could give you a list of chess moves and force you to recover the complete board state from it, and you wouldn't fare that much better than an off the shelf LLM would. An LLM trained for it would kick ass though.
Comment by RugnirViking 1 day ago
idk, I would expect anyone with an understanding of the rules of chess, and an understanding of whatever notation the moves are in, would be able to do it reasonably well? does that really sound so hard to you? people used to play correspondence chess. Heck, I remember people doing it over email.
In comparison, current ai models start to completely lose the plot after 15 or so moves, pulling out third, fourth and fifth bishops, rooks etc from thin air, claiming checkmate erroneously etc, to the point its not possible to play a game with them in a coherent manner.
Comment by ACCount37 1 day ago
On the other hand, recovering the full board state in a single forward pass? That takes some special training.
Same goes for meatbag chess. A correspondence chess aficionado might be able to take a glance at a list of moves and see the entire game unfold in his mind's eye. A casual player who only knows how to play chess at 600 ELO on a board that's in front of him would have to retrace every move carefully, and might make errors while at it.
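Retracing the moves is the mechanical part. As a sketch, using the python-chess library (assumed installed via pip install python-chess; the move list is a made-up opening):

    import chess

    moves = ["e4", "e5", "Nf3", "Nc6", "Bb5"]  # hypothetical game prefix
    board = chess.Board()
    for san in moves:
        board.push_san(san)  # raises if a move is illegal
    print(board)  # ASCII diagram of the recovered board state

The hard version is doing this without the incremental retrace, which is what "a single forward pass" amounts to.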
Comment by sulam 22 hours ago
Comment by teeray 2 days ago
What does the color green look like?
Comment by abcde666777 2 days ago
Come on man, did you think before you asked that one :)?
Comment by 10xDev 2 days ago
Comment by hintymad 1 day ago
I also think so, and in the meantime I have to admit a lot of people don't learn deeply either. Take math for example, how many STEM students from elite universities truly understood the definition of limit, let alone calculus beyond simple calculation? Or how many data scientists can really intuitively understand Bayesian statistics? Yet millions of them were doing their job in a kinda fine way with the help of the stackexchange family and now with the help of AI.
Comment by Spivak 1 day ago
Comment by hintymad 1 day ago
Comment by diabllicseagull 2 days ago
I don't think it was so much the naivety of idealism, but more an adoption of idealism and related language to help market what was actually being built: a profit-first organization that's taking its true form little by little.
Comment by samrus 2 days ago
Comment by runarberg 2 days ago
Comment by conradkay 2 days ago
There's some indirect exposure and potential of being granted significant equity, but his actions don't read as being for his own wealth
As an example, he walks away with nothing in the very plausible timeline where he was fired but not then reinstated
Comment by inquirerGeneral 2 days ago
Comment by 0xbadcafebee 2 days ago
You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do. The best LLM "memory" is a search engine and document summarizer stuffed into a context window (which is like having someone take an entire physics course, writing down everything they learn on post-it notes, then you ask a different person a physics question, and that different person has to skim all the post-it notes, and then write a new post-it note to answer you). To learn it would need RL (which requires specific novel inputs) and retraining (so that it can retain and compute answers with the learned input). This would all take too much time and careful input/engineering along with novel techniques. So AGI is too expensive, time consuming, and difficult for us to achieve without radically different designs and a whole lot more effort.
Not only are LLMs not AGI, they're still not even that great at being LLMs. Sure, they can do a lot of cool things, like write working code and tests. But tell one "don't delete files in X/", and after a while, it will delete all the files in "X/", whereas a human would likely remember it's not supposed to delete some files, and go check first. It also does fun stuff like follow arbitrary instructions from an attacker found in random documents, which most humans also wouldn't do. If they had a real memory and RL in real-time, they wouldn't have these problems. But we're a long way away from that.
LLMs are fine. They aren't AGI.
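The "post-it notes plus a skimmer" memory described above looks roughly like this in sketch form (naive word-overlap retrieval; real systems use embeddings, but the shape is the same):

    notes = [
        "force equals mass times acceleration",
        "the capital of France is Paris",
        "energy is conserved in a closed system",
    ]

    def retrieve(query, k=2):
        # score each note by word overlap with the query, keep the top k
        q = set(query.lower().split())
        return sorted(notes, key=lambda n: -len(q & set(n.lower().split())))[:k]

    # the "second person" only ever sees the skimmed notes, not the course
    prompt = "Context:\n" + "\n".join(retrieve("what is force"))
    prompt += "\n\nQ: what is force?"
    print(prompt)

Nothing in this loop updates the model itself, which is the point: it's recall without learning.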
Comment by stratos123 2 days ago
The statements of which "actual researchers" are you relying upon for your "next 30 years" estimate? How do you reconcile them with the sub-10- or even sub-5-year timelines of other AI researchers, like Daniel Kokotajlo[1] or Andrej Karpathy[2]? For that matter, what about polls of AI researchers, which usually obtain a median much shorter than 30 years[3]?
[1] https://x.com/DKokotajlo/status/1991564542103662729
[2] https://x.com/karpathy/status/1980669343479509025
[3] https://80000hours.org/2025/03/when-do-experts-expect-agi-to...
Comment by Spivak 2 days ago
The reason this discussion is already annoying, and poised to get so much worse, is that hundred-billion-dollar companies now have a direct financial incentive to say they did it, so I expect the definition will get softened to near meaninglessness so that some marketing department can slap AGI on their thing.
Comment by MadxX79 2 days ago
Comment by dwohnitmok 2 days ago
Comment by stratos123 2 days ago
Comment by MadxX79 2 days ago
Comment by dwohnitmok 2 days ago
Comment by MadxX79 1 day ago
What if it turns out that the more you scale, the more your LLM resembles a lobotomized human? It looks like it goes really well in the beginning, but you are just never going to get to Einstein. How does that affect everything?
What if it turned out that those AI companies were having a whole bunch of humans solve the problems that are currently just below the 50% reliability threshold they set, and doing fine-tuning with those solutions? That would make their models perform better on the benchmark, but it's just training for the test... will the constant gap be a good approximation then?
Comment by dwohnitmok 2 days ago
Kokotajlo quit because he didn't think OpenAI would be good stewards of AGI (non-disparagement wasn't in the picture yet). As part of his exit OpenAI asked him to sign a non-disparagement as a condition of keeping his equity. He refused and gave up his equity.
To the best of my knowledge he lost that equity permanently and no longer has any stake in OpenAI (even if this episode later led to an outcry against OpenAI causing them to remove the non-disparagement agreement from future exits).
Comment by linkregister 2 days ago
Karpathy himself has publicly stated that AGI is only possible with a new paradigm (one that his group is working toward). He claims RLHF and attention models are near the end of their logarithmic curve. The concept of the "self-training AI" is likely impossible without a new kind of model.
We will likely see some classes of human skills completely taken over by LLMs this decade: call centers (already capable in 2026), SWE (in the next couple of years). Bear in mind the frontier labs have spent many billions on exhaustive training on every aspect of these domains. They are focusing training on the highest value occupations, but the long tail is huge.
It will be interesting to see if this investment will be obviated by a "real AGI" capable of learning without going through the capital-intensive training steps of current models.
Comment by stratos123 2 days ago
But even assuming that a major breakthrough is required, it seems ludicrous to me to go from that to a timeline of a decade or more. This isn't like fusion power research, where you spend 10 years building a new installation only to find new problems. Software development is inherently faster, and AI research in particular has been moving extremely quickly in the past. (GPT-3 is only 6 years old.) I don't think a wall in AI progress, if one comes at all, will last more than a few years.
Comment by techpression 2 days ago
Comment by paulryanrogers 2 days ago
Where and how? Aren't we reaching the physical limits of making transistors smaller?
Comment by nerdsniper 2 days ago
This is the best summary of an LLM that I've ever seen (for laypeople to "get it") and is the first that accurately describes my experience. I will say, the notes passed to the second person are usually of very impressive quality for the topic. But the "2nd person" still rarely has a deep understanding of it.
Comment by trollbridge 2 days ago
Comment by slavik81 2 days ago
Comment by neurocline 2 days ago
Comment by tim333 1 day ago
Comment by trollbridge 21 hours ago
Comment by charcircuit 2 days ago
>the same ability to perform tasks as a human
The first chess AIs lost to chess grandmasters. AI does not need to be better than humans to be considered AI.
>without a continuous cycle of learning and deep memory, which LLMs cannot do.
But harnesses like Claude Code can, given how they can store and read files, along with building tools to work with them.
>which is like having someone take an entire physics course, writing down everything they learn on post-it notes, then you ask a different person a physics question, and that different person has to skim all the post-it notes, and then write a new post-it note to answer you
This doesn't matter. You could say a chess AI is a bunch of different people who work together to explore distant paths of the search space. The idea that you can split things into steps does not disqualify it from being AI.
>But tell one "don't delete files in X/", and after a while, it will delete all the files in "X/"
Humans make mistakes and mess up things too. LLMs are better at needle in a haystack tests than humans.
>It also does fun stuff like follow arbitrary instructions from an attacker
A ton of people get phished or social engineered by attackers. This is the number 1 way people get hacked. Do not underestimate people's willingness to follow instructions from strangers.
Comment by takwatanabe 2 days ago
They can reason brilliantly within a single conversation — just like an amnesic patient can hold an intelligent discussion — but the moment the session ends, everything is gone. No learning happened. No memory formed.
What's worse, even within a session, they degrade. Research shows that effective context utilization drops to <1% of the nominal window on some tasks (Paulsen 2025). Claude 3.5 Sonnet's 200K context has an effective window of ~4K on certain benchmarks. Du et al. (EMNLP 2025) found that context length alone causes 13-85% performance degradation — even when all irrelevant tokens are removed. Length itself is the poison.
This pattern is structurally identical to what I see in clinical practice every day. Anxiety fills working memory with background worry, hallucinations inject noise tokens, depressive rumination creates circular context that blocks updating. In every case, the treatment is the same: clear the context. Medication, sleep, or — for an LLM — a fresh session.
The industry keeps betting on bigger context windows, but that's expanding warehouse floor space while the desk stays the same size. The human brain solved this hundreds of millions of years ago: store everything in long-term memory, recall selectively when needed, consolidate during sleep, and actively forget what's no longer useful.
We can build the smartest single model in the world — the greatest genius humanity has ever seen — but a genius with no memory and no sleep is still just an amnesic savant. The ceiling isn't intelligence. It's architecture.
Comment by torginus 2 days ago
Comment by takwatanabe 2 days ago
Comment by ACCount37 1 day ago
Comment by 0xbadcafebee 1 day ago
If we can crack long memory we're most of the way there. But you need RL in addition to long memory or the model doesn't improve. Part of the genius of humans is their adaptability. Show them how to make coffee with one coffee machine, they adapt to pretty much every other coffee machine; that's not just memory, that's RL. (Or a simpler example: crows are more capable of learning and acting with memory than an LLM is)
Currently the only way around both of these is brute force (take in RL input from users/experiments, re-train the models constantly), and that's both very slow and error-prone (the flaws in models' thinking come from a lack of high-quality RL inputs). So without two major breakthroughs we're stuck tweaking what we got.
Comment by takwatanabe 1 day ago
LLMs can't form procedural memory on their own. But you can build it outside the model. Store abstracted procedures, inject them when needed. That's closer to how the brain actually works than trying to retrain the model every time.
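A sketch of what "store abstracted procedures, inject them when needed" could look like (all names illustrative; a real system would match tasks by similarity rather than by exact key):

    # procedures live outside the model, so they persist across sessions
    procedures = {
        "make_coffee": "1. add water 2. add grounds 3. press brew",
        "file_bug": "1. reproduce 2. collect logs 3. open a ticket",
    }

    def build_prompt(task, request):
        steps = procedures.get(task, "no stored procedure")
        return f"Known procedure for {task}:\n{steps}\n\nTask: {request}"

    print(build_prompt("make_coffee", "brew a cup with the office machine"))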
Comment by ACCount37 1 day ago
Sonnet 3.5 is old hat, and today's Sonnet 4.6 ships with an extra long 1M context window. And performs better on long context tasks while at it.
There are also attempts to address long context attention performance on the architectural side - streaming, learned KV dropout, differential attention. All of which can allow LLMs to sustain longer sessions and leverage longer contexts better.
If we're comparing to wet meat, then the closest thing humans have to context is working memory. Which humans also get a limited amount of - but can use to do complex work by loading things in and out of it. Which LLMs can also be trained to do. Today's tools like file search and context compression are crude versions of that.
Comment by takwatanabe 1 day ago
Comment by herodoturtle 2 days ago
Comment by tim333 1 day ago
Comment by orbital-decay 1 day ago
I disagree that this prerequisite is any more necessary than e.g. having legs to move over the ground. But besides that, current LLMs are literally the result of a continuous cycle of learning and deep memory. It's pretty crude compared to what evolution and the human process had to do, but that's precisely what the iterative model development cycle with the hierarchical bootstrap looks like. It's not fully autonomous though (engineer-driven/humans in the loop). Moreover, the distillation process you describe is precisely what "learning" is.
Comment by mirekrusin 2 days ago
All those "it's like ..." analogies are faulty: "post-it notes" are not 3k pages of text that can be recalled instantly in one go, copied in a fraction of a second to branch off, quickly rewritten, put into a hierarchy describing a virtually infinite amount of information (beyond the 3k-page limit), generated on the fly in minutes on any topic, pulling in all information available from the computer, etc.
Poor man's RL on test-time context (skills and friends) is something that shouldn't be discarded. We're at 1M tokens and growing, and progressive disclosure (without anything fancy, just a bunch of markdowns in directories) means you can already stuff more information into always-on agents/swarms than a human can remember in a whole lifetime.
Currently the latest models use more compute on RL than pre-training, and this upward trend continues (from orders of magnitude smaller than pre-training to larger than pre-training). In that sense some form of continuous RL is already happening, it's just quantified on new model releases, not in realtime.
With LoRA and friends it's also already possible to do continuous training that directly affects weights, it's just that economy of it is not that great – you get much better value/cost ratio with above instead.
For some definitions of AGI it has already happened, i.e. "somebody's computer use based work", even though "it can't actually flip burgers, can it?" is true, just not relevant.
ps. I should also mention that I don't believe in "programmers losing jobs"; on the contrary, we will have to ramp up large numbers of people on computational thinking, and those who are already versed in it will keep reaping benefits. Regardless of whether somebody agrees that AGI is already here, it arrives through computational doors speaking computational language first, and imho this property will be here to stay, as it's an expression of rationality etc.
Comment by 0xbadcafebee 2 days ago
The human eye processes between 100GB and 800GB of data per day. We then continuously learn and adapt from this firehose of information, using short-term and long-term memory, which is continuously retrained and weighted. This isn't "book knowledge", but the same capability is needed to continuously learn and reason on a human-equivalent level. You'd need a supercomputer to attempt it, for a single human's learning and reasoning.
RL is used for SOTA models, but it's a constant game of catch-up with limited data and processing. It's like self-driving cars: how many millions of miles have they already captured? Yet they still fail at some basic driving tasks, because the cars can't learn or form long-term memories, much less process and act on the vast amount of data a human can in real time. Same for LLMs. Training and tweaking gets you pretty far, but not to matching humans.
> With LoRA and friends it's also already possible to do continuous training that directly affects weights, it's just that economy of it is not that great
And that means we're stuck with non-AGI. Which is fine! We could've had flying cars decades ago, but that was hard, expensive and unnecessary, so we didn't do it. There's not enough money in the global economy to "spend" our way to AGI in a short timeframe, even if we wanted to spend it all, and even if we could build all the datacenters quickly enough, which we can't (even for a huge nation, there are many limitations).
> For some definitions of AGI
Changing the goalposts is dangerous. A lot of scary real-world stuff is hung on the idea of AGI being here or not. People will keep getting more and more freaked out and acting out if we're not clear on what is really happening. We don't have AGI. We have useful LLMs and VLMs.
Comment by mirekrusin 1 day ago
Humans don't have monopoly on intelligence.
We don't need to mimic every aspect of humans to have intelligence, or intelligence surpassing human abilities.
"General general-intelligence" doesn't exist in nature, it never did.
Humans can't echolocate, can't do fast mental arithmetic reliably, can't hold more than ~7 items in working memory, systematically fail at probabilistic reasoning and are notoriously bad at long term planning under uncertainty etc.
Human intelligence is _specialized_ (for social coordination, language, and tool use in a roughly savanna like environment).
We call it "general (enough)" because it's the only intelligence we have to compare against — it's a sample size of one, and we wrote down this definition.
The AGI goalposts keep moving, but that's an argument supporting what I'm saying, not the other way around.
When machines beat us at chess, we said "that's just search".
When AlphaFold solved protein folding, we said "that's just pattern matching".
When models write better code than most engineers, manage complex information, and orchestrate multi-step agentic workflows — we say "but can it really understand"?
The question isn't whether AI mimics human cognition or works the same way at a low level.
It's whether it can do things that matter to us.
Programming, information synthesis, and the self-directed task orchestration capabilities that exploded in recent weeks/months aren't narrow tasks, and they compound.
Systems that can now coherently and recursively search, write, run, evaluate, revise etc., while keeping the equivalent of 3k pages of text in memory, are simply better than humans – now, today. I see it myself; you can hear people saying it.
The following weeks and months will be flooded with more and more reports – it takes a bit of time to set everything up, and the tooling is still a bit rough around the edges.
But it's here and it's general enough.
Comment by fwipsy 2 days ago
> it will delete all the files in "X/"
How many "I deleted the prod database" stories have you seen? Humans do this too.
> follow arbitrary instructions from an attacker found in random documents
This is just the AI equivalent of phishing - inability to distinguish authorized from unauthorized requests.
Whenever people start criticizing AI, they always seem to conveniently leave out all the stupid crap humans do and compare AI against an idealized human instead.
Comment by torginus 2 days ago
If you've used the latest models extensively, you must've noticed times when AI 'runs out of common sense' and keeps trying stupid stuff.
I'm somewhat convinced that the amazing (and improving!) coding ability of these LLMs comes from being RLHF'd on the conversations they're having with programmers, with each successfully resolved bug or implemented feature ending up in the training data.
Thus we are involuntarily building the world's biggest stackoverflow.
Which, for the record, is incredibly useful, and may even put most programmers out of a job (who I think at that point should feel a bit stupid for letting this happen), but it's not necessarily AGI.
Comment by kakacik 2 days ago
Which fundamental limitation do you mean? I haven't seen anything but slow, iterative improvements. Sure, it feels like progress: a turtle can eventually do a 10,000-mile trek, but just because it's moving its left and right feet and decreasing the distance doesn't mean it's getting there anytime soon.
Parent mentioned hurdles way harder than iterative increments can tackle – rather, radical new... everything.
Comment by fwip 2 days ago
Humans generally do it by accident. They don't preface it with "Let me delete the production database," which LLMs do.
Comment by fwipsy 9 hours ago
Comment by vor_ 2 days ago
Humans do it accidentally.
Comment by esafak 1 day ago
Comment by sulam 2 days ago
The boundaries of these systems are very easy to find, though. Try to play any kind of game with them that isn't a prediction game, or perhaps even some that are (try to play chess with an LLM, it's amusing).
Comment by MadxX79 2 days ago
It's not aware that it doesn't know what the code is (it isn't in the context, because it's supposed to be secret), but it just keeps giving clues. Initially that works, because most clues are possible in the beginning, but very quickly it starts to give inconsistent clues and eventually has to give up.
At no point does it "realise" that it doesn't even know what the secret code is itself. This makes it very clear that the AI isn't playing Mastermind with you; it's trying to predict what a Mastermind player in its training set would say, and that doesn't include "wait a second, I'm an AI, I don't know the secret code because I didn't really pick one!" So it just merrily goes on predicting tokens, without any awareness of what it's saying or what it is.
It works if you allow it to output the code so it's in the context, but probably just because there is enough data in the training set to match two 4-letter strings and know how many of them match (there aren't that many possibilities).
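For reference, the feedback logic the model is trying to imitate is a few lines of actual code, which makes the contrast stark:

    # Mastermind-style scoring: exact matches, plus right-symbol-wrong-place.
    from collections import Counter

    def score(secret, guess):
        exact = sum(s == g for s, g in zip(secret, guess))
        # Shared symbols regardless of position, minus the exact ones.
        overlap = sum((Counter(secret) & Counter(guess)).values())
        return exact, overlap - exact

    print(score("ABCD", "ABDC"))  # (2, 2): A and B exact; C and D misplaced

A system that "plays" the game without ever executing anything like this has to fake consistency, and that's exactly where it falls apart.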
Comment by Balinares 2 days ago
Comment by MadxX79 2 days ago
Comment by 10xDev 2 days ago
Comment by sulam 2 days ago
I’ll believe the labs have discovered something truly ground-breaking and aren’t talking about it when I see them suddenly going dark about AGI being “just two years away, maybe 5” and not asking for their next $100B.
P.S. the benchmarks are a joke. The best proof I have of that is that you can’t actually put one of these models onto any of the gig-work platforms and have it make money.
P.P.S. I am not an AI skeptic. I am reacting to the very specific statement that OpenAI should shut down because they’ve lost the AGI race. They have not lost the race, and I’m pretty skeptical that the current tech is ever going to win that race. It may help code something that is new, and get us to AGI that way, but that system will promptly shut down the Opuses and Codexes of the world and put the compute to better use.
Comment by surgical_fire 2 days ago
Eh? Which limitations were solved?
Comment by monsieurbanana 2 days ago
The only thing I can think of is massively increased context windows (around 4k for GPT-3), but a million-token context with degraded performance when full is not what I'd qualify as resolved.
Comment by rishabhaiover 2 days ago
Comment by ACCount37 2 days ago
Comment by rishabhaiover 1 day ago
Comment by ACCount37 1 day ago
That doesn't stop an LLM from manipulating its context window to take full advantage of however much context capacity it has. Today's tools like file search and context compression are crude versions of that.
Comment by rishabhaiover 1 day ago
Comment by rishabhaiover 1 day ago
Comment by mattlondon 2 days ago
Comment by stavros 2 days ago
Have you seriously never had someone go do something you told them not to do?
> It also does fun stuff like follow arbitrary instructions from an attacker found in random documents, which most humans also wouldn't do.
I guess my coworker didn't actually fall for that "hey this is your CEO, please change my password" WhatsApp message then, phew.
I've seen people move the goalposts on what it means for AI to be intelligent, but this is the first time I've seen someone move the goalposts on what it means for humans to be intelligent.
Comment by mattlondon 2 days ago
So that leads to the question of what qualifies as intelligent? And do we need sentience for intelligence? What about self-agency/-actuation? Is that needed for "generally intelligent"?
I don't know.
But I feel like we're not there yet, even for non-sentient intelligence. I personally think we need "unlimited" context (as good as human memory's context window anyway, which some argue we've already surpassed) and genuine self-learning before we get close. I don't think we need it to be an infallible genius (i.e. ASI) to qualify as generally intelligent. Or, to put it another way, "about as smart and reliable as the average human adult", which frankly is quite a low bar!
One thing for sure though, I think this will creep up on us and one day it will suddenly become apparent that it's already there and we just didn't appreciate/notice/comprehend. There won't be a big fireworks display the moment it happens, more of a creeping realisation I think.
I give it 5 years +/-2.
Comment by kovek 2 days ago
Comment by mattlondon 2 days ago
Is a newborn baby without learnt-knowledge not an intelligent being to you?
Or is an empty vessel such as a newborn baby intelligent merely because it has the ability to learn?
It gets pretty philosophical pretty quickly. This is why I don't think there'll be a "moment" when AGI happens – there are so many ways to interpret what constitutes intelligence.
But yeah I agree that until models can learn in real time then I think we're probably not there yet. As I said - 5 years give or take I reckon.
Comment by irishcoffee 2 days ago
A newborn baby without learnt knowledge is a phenomenal comparison to an LLM: reactionary, incapable of communicating a novel thought, extremely inconsistent reactions to similar stimuli, costs a lot of money, and the best part is they almost never turn out to be an income-bearing investment.
Despite all this, people vehemently defend their ugly, obnoxious, screaming, drooling babies with the fierceness of a lion, because they’re so blinded by emotion they’re incapable of logical thought.
I have three kids, I’ve earned the right to say this.
What a great comparison!
Comment by swingboy 2 days ago
Comment by sigmoid10 2 days ago
Comment by EugeneOZ 2 days ago
Gave the same prompt to GPT 5.4 (high) and Opus 4.6 (high).
GPT 5.4 implemented the feature, refactored the code (was not asked to), removed comments that were not added in that session, made the code less readable, and introduced a bug. "Undo All".
Opus 4.6 correctly recognized that the feature is already implemented in the current code (yeah, lol) and proposed implementing tests and updating the docs.
Opus 4.6 is still the best coding agent.
So yeah, GPT 5.4 (high) didn't even check if the feature was already implemented.
Tried other tasks, tried "medium" reasoning - disappointment.
Comment by hirvi74 2 days ago
I'm not sure one can really extrapolate much out of that, but I do find it interesting nonetheless.
I think language is also an important factor. I have a hard time deciding which of the two LLMs is worse at Swift, for example. They both seem equally great and awful in different ways.
Comment by stavros 2 days ago
I can't even use Codex for planning because it goes down deep design rabbit holes, whereas Opus is great at staying at the proper, high level.
Comment by frde 2 days ago
Comment by falcor84 2 days ago
Funny how timely this is, with Karpathy's Autoresearch hitting the top of HN yesterday (and this being an indication that frontier labs probably have much larger scale versions of this)
Comment by dataflow 2 days ago
> Achieving AGI, he conceded, will require “a lot of medium-sized breakthroughs. I don’t think we need a big one.”
> At the Snowflake Summit in June 2025, Altman predicted that 2026 would mark a breakthrough when AI systems begin generating “novel insights” rather than simply recombining existing information. This represents a threshold he considers critical on the path to AGI.
Though I'm sure they'll try to change the charter before we get to that point, but yeah.
Comment by abmmgb 2 days ago
Which such project is that, though? And would it accept OpenAI's assistance?
AGI with access to our world is precarious, as alignment with humans is never guaranteed. Having a buffering medium, i.e. a simulation environment where the AI operates, might be a better in-between solution.
Comment by rishabhaiover 2 days ago
A great point. I saw blinding idealism during the early days of GPT era.
Comment by coliveira 2 days ago
Comment by CamperBob2 2 days ago
This doesn't seem contradictory if you consider that success at AGI will solve the problem of carbon emissions, one way or another. If one data center ultimately replaces a whole medium-sized city of commuters...
Comment by bluefirebrand 2 days ago
Then we find out how long it takes for a medium-sized city of commuters to start killing each other and the elites, and burning down data centers. Once they're hungry enough, it'll happen for sure.
Comment by Ekaros 2 days ago
Comment by coliveira 2 days ago
Comment by bluefirebrand 2 days ago
Comment by CamperBob2 2 days ago
Comment by irishcoffee 2 days ago
Comment by dr_dshiv 2 days ago
“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”
Amen
Comment by mrcwinn 2 days ago
Comment by trollbridge 2 days ago
Comment by mrcwinn 2 days ago
Comment by aleph_minus_one 2 days ago
"Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
I claim that currently no "value-aligned, safety-conscious project comes close to building AGI", both for the reasons
- "value-aligned, safety-conscious" and
- "close to building AGI".
So, based on this charter, OpenAI has no reason to surrender the race.
Comment by random3 2 days ago
Comment by deadbabe 2 days ago
Comment by m3kw9 2 days ago
Are you sure Anthropic isn't aware of this and angling for it? And are you sure what Anthropic says is really value-aligned and safety-conscious? The PR bit surely is working, right?
Comment by dmix 2 days ago
Even the quote they used questions the premise of the article
> “We basically have built AGI” (later: “a spiritual statement, not a literal one”)
Comment by wongarsu 2 days ago
Imho that's a big part of why people are shifting to ASI. Not because we reached AGI, but because 'we reached ASI' is a well-defined verifiable statement, where 'we reached AGI' just isn't
Comment by eloisant 2 days ago
I remember when computers became better than humans at chess; many people were shocked and saw that as machines becoming more intelligent than humans, because being good at chess was considered equivalent to "being smart".
Comment by sebastiennight 2 days ago
So... we can't tell when the rocket has left Earth atmosphere, but we can tell when the rocket has entered space?
I'm not getting how "superior in all tasks" is better-defined for you than "equal in all tasks".
Comment by wongarsu 2 days ago
Or as the scene from 'I, Robot' goes: Will Smith asks the android: 'Can a robot write a symphony? Can a robot turn a blank canvas into a masterpiece?' and the android simply answers 'Can you?' ASI sidesteps that completely
Comment by hackable_sand 1 day ago
Pick one.
Comment by hirvi74 2 days ago
I completely agree. We can't even measure each other well, let alone machines.
Comment by Jensson 2 days ago
Comment by wongarsu 2 days ago
Now I already hear you typing "but those roles should also be handled by AI if it's AGI", and I agree that an AI that can claim to be AGI should be able to handle those roles (as separate agents if necessary). But in a real setup it probably won't be the best choice to do those roles, for cultural and legal reasons. Or it might simply not be cost-effective. Not to mention that under most definitions of AGI there can still be humans more capable than the AI, as long as the AI hits the 50th-percentile mark or something like that. So even if it's an AGI with the ability to do these roles, we will still have humans in the loop for a long, long time.
Comment by Jensson 1 day ago
But today you can't do those with AI, meaning the AI isn't AGI. I agree we will probably have humans in the loop here and there even after we achieve AGI, for various reasons, but today you need to have humans in the loop; it isn't an option not to.
Comment by hirvi74 2 days ago
Comment by Jensson 1 day ago
> I am unaware of any definition of AGI that states AGI cannot have humans in the loop.
It's not the definition, but it's a trivial result of the most common definition, which is "has human-level intelligence".
AI as in "artificial intelligence" – it isn't AS, "artificial skills". Doing one skill to the same level as a human is not AI; an AI needs to be able to learn all the skills humans can learn, to the same levels.
Comment by croes 2 days ago
> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
No, the spirit is clearly meant for near-AGI, and we aren't near AGI.
Comment by MichaelDickens 2 days ago
Comment by croes 2 days ago
Comment by bigyabai 2 days ago
Comment by rvz 2 days ago
The "S" stands for Safety.
Comment by p-o 2 days ago
Laws & regulations that need to be created to rein in AI will undoubtedly increase the opportunity cost of training LLMs.
For some, it might be similar to the early 2000s, but I think it's just a healthy rebalancing of what AI is and how society needs to implement this new, hardly controllable paradigm. From this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, let's say, Anthropic.
Comment by eloisant 2 days ago
Some of the apps made possible by smartphones only appeared a decade after they were made technically possible. A lot of the new use cases made possible by the Internet and broadband connections only became widely used because of Covid.
I was already using Skype 20 years ago to make video calls, but I've only seen PTA meetings over Zoom since Covid.
Comment by p-o 2 days ago
I guess what I failed to convey in my original comment was that, like the Internet 20 years ago, the current advancement made by AI might stall at a foundational level, while the landscape evolves.
Essentially, I believe what you're saying is really close in spirit to what I'm saying.
Comment by bilekas 2 days ago
I'll eat my hat after I sell you a bridge.
Comment by tsunamifury 2 days ago
Comment by tim333 1 day ago
Comment by tsunamifury 1 day ago
Comment by measurablefunc 2 days ago
Comment by spprashant 1 day ago
Comment by Muhammad523 2 days ago
Comment by ambicapter 2 days ago
previous title: Based on its own charter, OpenAI should surrender the race
Comment by dang 2 days ago
(The article itself strikes me as better than that)
Comment by dwohnitmok 2 days ago
> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
> Therefore, one can only conclude, that we currently meet the stated example triggering condition of “a better-than-even chance of success in the next two years”. As per its charter, OpenAI should stop competing with the likes of Anthropic and Gemini, and join forces, however that might look like.
The new title is a single, almost throwaway, line from the article.
> While this will never happen, I think it’s illustrative of some great points for pondering:
> The impotence of naive idealism in the face of economic incentives. The discrepancy between marketing points and practical actions. The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.
Comment by enraged_camel 2 days ago
Comment by kergonath 2 days ago
This does not work. From the guidelines:
> Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.
Comment by tokai 2 days ago
Comment by dang 2 days ago
Comment by citizenkeen 2 days ago
Comment by bluegatty 2 days ago
And that's it.
Everything beyond that is nuance.
Nuance matters, but it's not the real story, it's the side show.
Comment by throwaw12 2 days ago
- we are building Open AI - only if you have more than $10B net worth
- we are against using AI for military purposes - except when that case is allowed by government
- we are on a mission to help humanity - again, we define humanity as the set of people with more than $10B net worth
- surrender? - sure, sure, we will, only to people with more than $10B net worth, they can do whatever they want to our models, we will surrender to them
Comment by wrsh07 2 days ago
Comment by reppap 2 days ago
Comment by kakacik 2 days ago
Is this really the garbage that should lead humanity into our future? Because inevitably that will be a dark future for 99.99% of humans. And no, you won't be part of that 0.01%, or whatever tiny number of elites think they're better than the rest of us.
Comment by dkwmdkfkdk 2 days ago
This is all just very naive
Comment by PunchyHamster 2 days ago
The point you're desperately trying to miss is that most other companies don't put up those moral claims in the first place
Comment by tombert 2 days ago
If big corporations do things that are unethical, they should be called out, even if they're common. Saying "well everyone's doing it", isn't a good excuse to do things that are unethical.
It's not "naive" to point out the lies that OpenAI told to get to the point that they are now. They were claiming to be a non-profit for awhile, they grew in popularity based in part on that early good-will, and now they are a for-profit company looking to IPO into one of the most valuable corporations on the planet. That's a weird thing. That's a thing that seems to be kind of antithetical to their initial purpose. People should point that out.
Comment by jimmydoe 2 days ago
Comment by sreekanth850 2 days ago
Comment by tombert 2 days ago
Comment by sreekanth850 2 days ago
Comment by HeavyStorm 2 days ago
Comment by kirubakaran 2 days ago
- Caitlin Kalinowski, previously head of robotics at OpenAI
https://www.linkedin.com/posts/ckalinowski_i-resigned-from-o...
Comment by sheepscreek 2 days ago
But I am trying to understand this from the perspective of defence & govt. Why is it so business-as-usual for them? Do they consider this on par with missiles that use infrared/heat sensors for tracking/locking? Where does the definition of lethal autonomy begin and end?
Just putting this out there as a point to ponder on. By itself, this may rightly be too broad and should be debated.
Comment by BoxFour 2 days ago
On its face that’s not a crazy stance: Governments are meant to represent the public, while private companies obviously aren't. I think it’s somewhat understandable why the government might reject that kind of "we know better than you" type of clause.
Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.
Comment by dmschulman 2 days ago
Comment by BoxFour 2 days ago
I don’t think Anthropic is wrong to include that clause with this particular administration, and I doubt the administration is internally framing the issue the way I did rather than defaulting to simple authoritarian instincts.
But a more reasonable administration could raise the same concern, and I think I would agree with them.
Comment by whatshisface 2 days ago
Comment by BoxFour 2 days ago
> Of course, the reaction is wildly out of proportion. A normal response would just be to stop doing business with the company and move on. Labeling them a supply chain risk is an extreme response.
Comment by remarkEon 2 days ago
Maybe the argument is that they should, but I don't agree with that. If Anthropic or any of these other vendors have reservations about the logical conclusion of how these tools will be/are used then they should not sell to the government. Simple as. However ... if the claims Anthropic et al make about how these systems will develop and the capabilities they will have are at all true, then the government will come knocking anyway.
Comment by BoxFour 2 days ago
Dario has even said something along these lines at one point: As the technology matures, it’s very possible the government either nationalizes or semi-nationalizes companies like Anthropic.
That doesn’t seem out of the realm of possibility if they can’t land on a relationship similar to existing defense contractors like Raytheon, where these kinds of discussions obviously don't seem to happen.
Comment by nradov 2 days ago
Comment by BoxFour 2 days ago
So it’s probably some mix of two things:
1) A punitive "bend the knee to us or we'll destroy you," which fits their track record.
2) Skepticism that Grok is actually as strong as the benchmarks suggest, which is also a pretty reasonable possibility.
Comment by lejalv 2 days ago
I can't agree that this is the right comparison. What is being sold here is not just another missile or tank type, it is the very agency and responsibility over life and death. It's potentially the firing of thousands of missiles.
Comment by spacemanspiff01 2 days ago
I was thinking that Anthropic would just be providing the models/setup support to run their models in AWS GovCloud. They don't have any real insight into what is being asked. Maybe a few engineers have the specific clearances to access and debug the running systems, but that would be one or two people embedded to debug inference issues – not something that would be analyzed by others in the company.
The whole 'do not use our models for mass surveillance' clause is, at the end of the day, an honor system. Companies have no real way of enforcing it, or of determining that it has been violated. That being said, at least historically, one has been able to trust the government to abide by commercial agreements. The people who work in cleared positions are generally selected for honesty, ability, and willingness to follow the rules.
Comment by remarkEon 1 day ago
>The whole 'do not use our models for mass surveillance' is at the end of the day an honor system. Companies have no real way of enforcing that clause, or determining that it has been violated.
You are also correct here imo, with one important caveat. Even if private companies have the means for enforcing that clause, it is not their business to do so. Maybe that's the crux of the problem, one of perspective. The for-profit entity in these arrangements is not and can never be trusted as the mechanism of enforcement for whatever we, as a republic, decide are the rules. That is the realm of elected government. Anthropic employees are certainly making their voice heard on how they believe these tools should be used, but, again, this is an is versus ought problem for them.
Comment by btown 2 days ago
In a version of a trolley problem where you're on a track that will kill innocent people, and you have the opportunity to set up a contract that effectively moves a switch to a track without anyone on it, is it not imperative to flip that switch?
(One might argue that increased reaction times might save service members' lives - but the whole point is that if the autonomous targeting is incorrect, it may just as well lead to increased violence and service member casualties in the aggregate.)
And we're not talking about the ethics board manipulating individual token outputs subtly, which would indeed be a supply chain risk - we're talking about a contractual relationship in which, if a supplier detects use outside of the scope of an agreed contract, it has the contractual right to not provide the service for that novel use, while maintaining support for prior use cases.
The fact that the government would use the threat of supply chain risk to enforce a better contract is unprecedented, and it deteriorates the government's standing as a reliable counterparty in general.
Comment by remarkEon 1 day ago
This problem is really difficult to discuss because we are all wrapping the capabilities of these tools into our response framing. These are tools, or weapons. Your hypothetical could just as easily be applied to GBU-39s, a smaller laser guided bomb that's meant to take out, say, a single vehicle in a convoy versus the entire set of vehicles. If you're not confident in what the product is supposed to do, and you've already sold it to the government, you have lied and they are going to come back to you asking some direct questions.
Comment by crote 2 days ago
On the other hand, why should the government have infinite power to override how a business operates? If you're not able to refuse to sell to the government, isn't that basically forced speech and/or forced labor?
Comment by nradov 2 days ago
Comment by mlinhares 2 days ago
And now that we see the government blatantly disrespecting the constitution and the rule of law the civil community must react.
Comment by BoxFour 2 days ago
The government shouldn’t be able to set the terms of its contracts with private companies and walk away if those terms aren’t acceptable? That seems like a stretch.
The constitution is a wildly different premise from government contracting with private companies.
Comment by mlinhares 2 days ago
The government shouldn't be able to coerce a business to do whatever it wants.
Comment by BoxFour 1 day ago
So the contract process worked. The seller wanted certain clauses, the buyer rejected them, and the deal didn’t happen.
Setting aside the supply chain risk designation, which I already said was an extreme overreaction, this is basically how it’s supposed to work.
> The government shouldn't be able to coerce a business to do whatever it wants.
Governments coerce businesses all the time to do what the government wants. Taxes are the obvious example, but there are many others like OFAC sanctions lists or even just regular old business regulations.
It mostly works because we rely on governments to use that power wisely, and to use it in a way that represents the wishes of the populace. Clearly that assumption is being tested with the current administration and especially in this particular situation, but the government coerces businesses to do what they want all the time and we often see it as a good thing.
Comment by bigyabai 2 days ago
If you're one of the contractors working in NRO or aware of Sentient, OpenAI and Anthropic probably do look like supply chain risks. They want to subsume the work you're already doing with more extreme limitations (ones that might already be violated). So now you're pitching backup service providers, analyzing the cost of on-prem, and pricing out your own model training; it would be really convenient if OpenAI just agreed to terms. As a contractor, you can make them an offer so good that it would be career suicide to refuse it.
Autonomous weapons are a horse of a different color, but it's safe to assume the same discussions are happening inside Anduril et. al.
Comment by nradov 2 days ago
https://www.vp4association.com/aircraft-information-2/32-2/m...
Comment by lich_king 2 days ago
A less charitable interpretation is that the current doctrine is "China / Russia will build autonomous killbots, so we can't allow a killbot gap".
I'm frankly less concerned about "proper" military uses than I am about the tech bleeding into the sphere of domestic law enforcement, as it inevitably will.
Comment by remarkEon 2 days ago
What's the reason this is less charitable, exactly? Do we think this isn't true, or that we think it's immoral to build the Terminator even if China/Russia already have them?
Comment by lich_king 2 days ago
We'll leave the morality of war for another time.
Comment by marcosdumay 2 days ago
Hum...
The one thing domestic surveillance enables is defining targets inside the country, and the one thing lethal autonomy enables is executing targets that a soldier would refuse to.
Those things don't have other uses.
Comment by ronnier 2 days ago
Comment by fancy_pantser 2 days ago
A 2017 national intelligence law compels Chinese companies and individuals to cooperate with state intelligence when asked, without any public notice.
China has no equivalent of the whistleblower protection that enables resignations with public letters explaining why, protests, open letters with many signatures, etc. Whenever you see "Chinese whistleblower" in the news, you're looking at someone who quietly fled the country first and then blew the whistle. Example: https://www.cnn.com/2026/02/27/us/china-nyc-whistleblower-uf...
Comment by crote 2 days ago
Comment by fancy_pantser 2 days ago
NSLs are also narrow in scope: they compel data disclosure, not active technical assistance in building surveillance systems like the Chinese law.
The Chinese laws can compel any citizen anywhere in the world to perform work on supporting state military and intelligence capabilities with no recourse. There have been no cases of companies or individuals fighting those orders.
Comment by nradov 2 days ago
Comment by yorwba 2 days ago
Comment by tkz1312 2 days ago
Comment by ronnier 2 days ago
Comment by esafak 2 days ago
Comment by ronnier 2 days ago
Comment by esafak 2 days ago
Comment by pfortuny 2 days ago
Comment by user3939382 2 days ago
Comment by qwerpy 2 days ago
Comment by _DeadFred_ 2 days ago
China's constitution includes freedom of speech and elections.
Funny thing: when you put rights on hold today for 'reasons', they tend to just go away. Look at the US today versus pre-9/11. It's a completely different country, with completely different attitudes about freedom, privacy, government overreach, and power.
Comment by hirvi74 2 days ago
Why do I not believe this at all? Were things truly sunshine and roses at OpenAI up until this Pentagon debacle? Perhaps I am mistaken, but it seemed like the writing was on the wall years ago.
> I have deep respect for Sam and the team
I have even more questions now.
Comment by daheza 2 days ago
Comment by hirvi74 2 days ago
Which further solidifies my belief that this person is being disingenuous.
Comment by lucianbr 2 days ago
Not to mention that the principles are not being betrayed now for the first time.
Comment by Lerc 2 days ago
Most importantly, this seems to rest on whether you believe the principle was being followed or not.
It is possible to believe one thing, have another person believe another, and respect that their decision is sincerely held but subject to a different perspective, as our own beliefs are.
You can stand up for what you believe and still respectfully disagree with someone with a different stance.
The problem is when you decide that reality always conforms to your opinion. If you assume the other person is aware of that reality and decides differently, then it becomes a betrayal of principles. Presuming to know the internal state of another's mind, in order to declare that it was done for money, is disrespectfully presumptuous.
Your problem is not in understanding how X can occur if Y. It is in assuming that everyone agrees with you on Y.
You might be right about Y, you might be wrong. Even if you are right, it is still possible that Y is a belief a rational person can hold if their perspective has been different.
Comment by hirvi74 2 days ago
That is one of my many questions too. I am not certain I believe her either. People predicted AI would be used in such nefarious manners way before AI even existed.
Something about the whole resignation and immediate social media post seems more like an attention grab than anything else to me. Whatever her motivation, I still believe she is partially culpable for whatever becomes of this technology -- good or bad.
Comment by IshKebab 2 days ago
Comment by Muhammad523 2 days ago
Comment by roromainmain 2 days ago
Comment by sigmar 2 days ago
since when did the view that "humans should be in the loop before murderbots target and kill someone" become a "naive moral absolutist view of the world"? we're resigned to building the terminator now?
Comment by IshKebab 2 days ago
We can't even avoid using weapons where the equation is much more "this is really awful" like cluster bombs and flamethrowers.
You might be a bit behind the times too. There are already plenty of weapons platforms that kill without a human in the loop. I believe the first widely known one was South Korea's sentry gun but that was 10 years ago.
Comment by encomiast 2 days ago
Comment by bigyabai 2 days ago
Comment by aqua_coder 2 days ago
Comment by dangus 2 days ago
Comment by devonkelley 2 days ago
Comment by hintymad 1 day ago
Another economic scenario is that AI does not necessarily produce output autonomously, but produces so much so fast that companies will require fewer workers, as the economy does not scale fast enough to consume the additional output or to demand more labor for the added efficiency.
Comment by emp17344 1 day ago
Comment by datsci_est_2015 1 day ago
AI will need to be able to experience consequences as a result of liability, and care about those consequences, in order to replace true meatspace jobs. Otherwise they’re simply sophisticated systems.
If you’re a one man company, and you have a delivery AI that delivers widgets to Alice, but in the process that delivery AI kills Bob, you’re liable for murder.
Comment by bonesss 1 day ago
There are a number of industries where I don’t think “kinda…” is an acceptable answer to “was this code read before deploying?”. Humans aren’t great at repeating boring tasks ad nauseam.
Comment by datsci_est_2015 1 day ago
You bring up a good point, I forgot that AI will be owned by rich and well-connected people, not the humble masses. If you or I have a small business that uses a delivery AI, we’re liable for murder. If one of the technofascists has a business that uses a delivery AI…
Comment by esafak 1 day ago
Comment by casey2 1 day ago
People have unrealistic expectations because they literally think they are summoning god instead of accelerating a few concurrent tasks. If you want to break causality you need to pay the entropy demon its due.
Comment by ghoblin 1 day ago
Comment by casey2 1 day ago
You'll have cities made to serve cars and food made to serve delivery and worker drones. In the pursuit of optimization you'll end up back at the same place, when there was only one cafeteria in walking distance.
Anyway, we aren't "on the brink of full automation"; that's ridiculous. People always think this because they have no idea how brittle automated systems are. To get a generally intelligent robot that operates in the real world you have to go WAY beyond replacing knowledge workers. The brain only uses about 1W more when it's working at full tilt, 5% more. For any physical job, it's the body doing the work. The full body at rest uses 100W; walking, that's 300W; manual labor, 600W; a full sprint could peak at 2000W. That's an absurd range, made possible only by trillions of cells packed with ATP and billions of microscopic capillaries full of glucose that get sucked into your muscles the second you use them. Automation only works in closed systems. Give it 2000 years, maybe someone makes AGSI; then the robotics problem becomes approachable, but if it were smart it'd just declare it impossible without biotech.
Comment by ghoblin 13 hours ago
In the case of AI the key difference is how the systems scale. Human labor scales linearly: if you want 10x more output, you hire 10x more workers, and the cost scales roughly the same. A human brain might run on ~20W, but each worker is still €20–€50+ per hour and can only do one task at a time. AI systems scale more like software infrastructure. Once the model and servers exist, the marginal cost of additional tasks is mostly compute and electricity. A data center might burn far more energy than a human brain per task, but it can handle thousands or millions of tasks in parallel and run 24/7. The cost per task can end up being cents even if the system is much less energy efficient "per brain".
Labor is expensive because you're not just paying for the task. You're paying for the worker's entire life infrastructure. Wages have to cover housing, food, healthcare, transportation, retirement, taxes, etc. So the price of labor largely reflects the cost of sustaining a human being in society, not just the marginal cost of performing the work.
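A toy comparison makes the shape of it obvious (all numbers invented for illustration):

    # Human cost scales with hours; AI cost scales with tokens.
    human_rate = 35.0           # EUR/hour, loaded cost of a knowledge worker
    task_hours = 0.5            # one task takes a human half an hour

    ai_tokens_per_task = 50_000
    ai_cost_per_mtok = 5.0      # EUR per million tokens of inference

    human_cost = human_rate * task_hours
    ai_cost = ai_tokens_per_task / 1e6 * ai_cost_per_mtok
    print("human: %.2f EUR/task, AI: %.2f EUR/task" % (human_cost, ai_cost))
    # human: 17.50 EUR/task, AI: 0.25 EUR/task -- and the AI side
    # parallelizes, so 10x throughput is not 10x hiring.

Even if the real numbers are off by an order of magnitude in the AI's disfavor, the scaling argument survives.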
Comment by labrador 2 days ago
Comment by micromacrofoot 2 days ago
the whole public debacle was planned; the ToS isn't stopping the Pentagon from doing anything (as we've seen with OpenAI now)
Comment by bigyabai 2 days ago
Comment by tim333 1 day ago
Comment by fiatpandas 2 days ago
Comment by mirsadm 2 days ago
Comment by dgroshev 2 days ago
Comment by wongarsu 2 days ago
Sam probably expects to solve this by just offering more money. It worked in the past
Comment by integralid 2 days ago
Maybe my sarcasm is not justified, but I don't think most people care that they work for a company that does unethical things. In fact I think all large companies are more or less immoral (or rather amoral) – that's just how the system is built.
Comment by trollbridge 2 days ago
Comment by PunchyHamster 2 days ago
Comment by aplomb1026 2 days ago
Comment by pluc 2 days ago
Comment by dang 2 days ago
Comment by fHr 2 days ago
>claims to be some topshot data scientist
okay
Comment by ozgung 2 days ago
One can argue that they have already achieved this, at least for short-term tasks. Humans are still better at organization, collaboration, and carrying out very long tasks like managing a project or a company.
Comment by A_D_E_P_T 2 days ago
No, because they're hugely reliant on their training data and can't really move beyond it. This is why you haven't seen an explosion of new LLM-aided scientific discoveries, why Suno can't write a song in a new genre (even if you explain it to Suno in detail and give it actual examples), etc.
This should tell you something enormous about (1) their future potential and (2) how their "intelligence" is rooted in essentially baseline human communications.
Admittedly LLMs are superhuman in the performance of tasks which are, for want of a better term, "conventional" -- and which are well-represented in their training data.
Comment by nradov 2 days ago
Comment by matricks 2 days ago
I don’t even think humans can “move beyond” their sensory data. They generalize using it, which is amazing, but they are still limited by it.* So why is this a reasonable standard for non-biological intelligence?
We have compelling evidence that both can learn in unsupervised settings. (I grant one has to wrap a transformer model with a training harness, but how can anyone sincerely consider this as a disqualifier while admitting that an infant cannot raise itself from birth!)
I’m happy to discuss nuance like different architectures (carbon versus silicon, neurons versus ANNs, etc), but the human tendency to move the goalposts is not something to be proud of. We really need to stop doing this.
* Jeff Hawkins describes the brain as relentlessly searching for invariants from its sensory data. It finds patterns in them and generalizes.
Comment by A_D_E_P_T 2 days ago
Human sensory data combines to give you a spatiotemporal sense, which is the overarching sense of being a bounded entity in time and space. From one's perceptions, one can then generalize and make predictions, etc. The stronger one's capacity for cognition, the more accurate and broader these generalizations and predictions become. Every invention, including or perhaps especially the invention of mathematics, is rooted in this.
LLMs have no apparent spatiotemporal sense, are not physically bounded, and don't know how to model the physical world. They're trained on static communications -- though, of course, they can model those, they can predict things like word sequences, and they can produce output that mirrors previously communicated ideas. There's something huge about the fact, staring us right in the face, that they're clearly not capable of producing anything genuinely new of any significance.
This is why AGI is probably in world models.
Comment by rdiddly 2 days ago
Comment by tim333 1 day ago
LLMs can't be swapped in for human workers in general because there are still a lot of things they don't do, like learning as they go. So that's missing from the Wikipedia thing.
Comment by matricks 2 days ago
SoTA models are at least very close to AGI when it comes to textual and still-image inputs for most domains. In many domains, SoTA AI is superhuman in both speed and throughput. (Not wrt energy efficiency.*)
AI SoTA for video is not at AGI level, clearly.
Many people distinguish intelligence from memory. With this in mind, I think one can argue we’ve reached AGI in terms of “intelligence”; we just haven’t paired it up with enough memory yet.
* Humans have a really compelling advantage in terms of efficiency; brains need something like 20W. But AGI as a threshold has nothing directly to do with power efficiency, does it?
Comment by aerhardt 2 days ago