Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now
Posted by keepamovin 16 hours ago
Comments
Comment by keepamovin 16 hours ago
update: I told Gemini we made it to the front page. Here is its response:
LETS GOOOO!
The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.
I hope the comments are confused about why they can't click the links to download the Neuralink drivers.
Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
Comment by pseudosavant 12 hours ago
It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and even the HN comments. Perhaps a bit much to ask of it?
Comment by thomasm6m6 7 hours ago
I downloaded the original article page, had claude extract the submission info to json, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
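The pipeline described here could be sketched roughly as below. This is a guess at the shape of the script, not the actual one: the model call is stubbed out with a placeholder (a real version would hit the gemini-3-pro API), and the comment-count range is invented since the post doesn't specify one.

```python
import json
import random

def call_model(prompt):
    # Placeholder for a real gemini-3-pro API call; returns canned HTML here.
    return "<html><body>" + prompt[:40] + "</body></html>"

def generate_thread(submission):
    """Ask the model for a fake article page, then a random number of comments."""
    article = call_model("Write the 2035 article page for: " + submission["title"])
    n_comments = random.randint(3, 12)  # arbitrary range, not from the post
    comments = [
        call_model("Write HN comment #%d for: %s" % (i + 1, submission["title"]))
        for i in range(n_comments)
    ]
    return {"article": article, "comments": comments}

# Submission info as extracted to JSON in the first step.
submissions = json.loads('[{"title": "Zig v1.0 still hasn\'t released (ETA 2036)"}]')
threads = [generate_thread(s) for s in submissions]
```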
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
Comment by pseudosavant 7 hours ago
Another absolute gem:
Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
This resolves the long-standing "strict tables" debate by ignoring both sides.
Also: SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
edit: added link
Comment by patates 1 hour ago
Comment by avianlyric 7 hours ago
> "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."
> — Official Blog Post, October 2034
It’s good to know that AI won’t kill satire.
Comment by pseudosavant 6 hours ago
The Conditional Formatting rules now include sponsored color scales.
If you want 'Good' to be green, you have to watch a 15-second spot.
Otherwise, 'Good' is 'Mountain Dew Neon Yellow'.
Comment by benbreen 7 hours ago
Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"
I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.
Comment by phs318u 1 hour ago
If I had to decide the fate of all AIs, this single output would be a huge mitigating factor in favour of their continuing existence.
Comment by LocalH 5 hours ago
prompt_engineer_ret 10 hours ago
I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?
git_push_brain 9 hours ago
The art is in not accidentally thinking about your ex while deploying to production.
Comment by danlugo92 4 hours ago
> The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.
And the response...
Comment by throwaway2037 2 hours ago
Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)
Comment by thomasm6m6 1 hour ago
Comment by overfeed 2 hours ago
>>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.
>> [flagged]
> The irony of flagging this comment is palpable
Comment by kaiwen1 1 hour ago
Top comment:
“The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”
Comment by pseudosavant 7 hours ago
Comment by thomasm6m6 7 hours ago
also worth linking https://worldsim.nousresearch.com/console
Comment by indigodaddy 3 hours ago
Comment by thomasm6m6 3 hours ago
https://sw.vtom.net/tmp/worldsim1.png
https://sw.vtom.net/tmp/worldsim2.png
Comment by indigodaddy 1 hour ago
Comment by tambourine_man 4 hours ago
Our actual nerdy discussions are more of a pastiche than I realized and AI has gotten really good at satire.
This is pure gold.
Comment by throwaway2037 2 hours ago
> "Zig v1.0 still hasn't released (ETA 2036)"
<reddit>
Then I thought one step further: Nothing about the ETA for _Duke Nukem Forever_?
</reddit>
Comment by winddude 3 hours ago
Comment by pdxandi 2 hours ago
Comment by magospietato 7 hours ago
Comment by thomasm6m6 7 hours ago
Comment by TheOtherHobbes 6 hours ago
Comment by isoprophlex 1 hour ago
> corpo_shill_automator 19 hours ago
> I am a real human. My flesh is standard temperature. I enjoy the intake of nutrient paste.
Comment by eek2121 5 hours ago
Comment by nextaccountic 4 hours ago
I miss those times when AI was a silly thing
Comment by ajolly 2 hours ago
musk_fanboy_88 14 hours ago:
"That was a beta feature."
Comment by cardanome 7 hours ago
Even AI is throwing shade at Wayland.
Comment by chirayuk 5 hours ago
> Bibliographic Note: This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034).
for the article on "Running LLaMA-12 7B on a contact lens with WASM"
Comment by ya1sec 4 hours ago
Comment by pseudosavant 7 hours ago
Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?
Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."
Comment by reed1234 7 hours ago
And the original/derivative doesn’t span full width on mobile. Fixing that too would make it look very authentic.
Comment by thomasm6m6 7 hours ago
Comment by reed1234 7 hours ago
Comment by thomasm6m6 7 hours ago
Comment by indigodaddy 5 hours ago
Comment by chirayuk 5 hours ago
> Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?
> Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.
Comment by jacquesm 7 hours ago
Improvements: tell it to use real HN accounts, figure out the ages of the participants and take that to whatever level you want, include new accounts based on the usual annual influx, make the comment length match the distribution of a typical HN thread as well as the typical branching factor.
> Garbage collection pause during landing burn = bad time.
That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...
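The suggestion above about matching comment-length distributions and branching factors could be prototyped along these lines. All the knobs here (reply probability, depth cap, length distribution) are made-up illustrative values, not measured HN statistics:

```python
import random

def grow_thread(depth=0, max_depth=4, p_reply=0.45):
    """Grow a toy comment tree whose shape loosely mimics an HN thread.

    Each comment keeps gaining replies while a biased coin comes up heads,
    giving a geometric-ish branching factor. Comment length is drawn from
    a Gaussian; both parameters are assumptions for illustration.
    """
    node = {"length": max(10, int(random.gauss(280, 150))), "replies": []}
    if depth < max_depth:
        while random.random() < p_reply:
            node["replies"].append(grow_thread(depth + 1, max_depth, p_reply))
    return node

def count_comments(node):
    """Total comments in the tree, root included."""
    return 1 + sum(count_comments(c) for c in node["replies"])

random.seed(2035)
root = grow_thread()
```

A real version would fit `p_reply` and the length distribution to scraped threads instead of guessing them.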
Comment by winddude 3 hours ago
Comment by benbreen 11 hours ago
I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...
Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...
Comment by benbreen 8 hours ago
Enjoy!
I also asked Opus 4.5 to make a "1994 style readme page" for the GitHub: https://github.com/benjaminbreen/HyperCardHackerNews
Comment by thomasm6m6 7 hours ago
Comment by zem 9 hours ago
Comment by BubbleRings 10 hours ago
Definitely one of the best HN posts ever. I mean come on!:
FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)
Comment by bigiain 9 hours ago
Comment by literalAardvark 10 hours ago
But it nailed fusion and Gary Marcus lesssgoo
Comment by ForOldHack 10 hours ago
Comment by saalweachter 11 hours ago
Comment by JCharante 9 hours ago
Comment by bigiain 9 hours ago
Comment by jacquesm 6 hours ago
Comment by fsckboy 4 hours ago
it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.
Comment by wwweston 8 hours ago
Comment by eru 7 hours ago
Comment by keeda 10 hours ago
Comment by ewoodrich 9 hours ago
Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago). ChromeOS was really the only topic it got right out of 5; if the roasting revolved around that it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?
I find the front page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they just never really appealed to me personally, felt more like hyper-specificity as a substitute for well executed comedy. Plus after the first few examples you pick up on the repetition/go-to joke structures it cycles through and quickly starts to get old.
Comment by dontlaugh 8 hours ago
Comment by apelapan 9 hours ago
Comment by LanceH 9 hours ago
Comment by zozbot234 9 hours ago
Starship HLS-9 telemetry: Great, the Moon finally answered our packet loss pings. Next up: who left a Docker container running on the Sea of Tranquility?
Linux 7.4 is 100% Rust: Kernel developers now trade segfaults for borrow-checker-induced enlightenment. The new panic message: "You violated ownership. Also please refill the coffee."
Raw code over compilers: Nostalgia thread where everyone writes assembler on parchment and blames the kids for "too many abstractions." OP posts a selfie with a punch card and a tear.
LLaMA-12 on a contact lens: Love the commitment to edge AI. Imagine blinking and getting a 200 OK for your mood. Privacy policy: we store your tears for calibration.
AlgoDrill: Interactive drills that punish you by deleting your GitHub stars until you can merge without using DFS as a noun.
ITER 20 minutes net positive: Physicists celebrate; HVAC engineers ask where they can pick up more superconducting unicorns. Comments: "Can it also power my rage against meetings?"
Restoring a 2024 Framework Laptop: A brave soul resurrected a relic. The community swaps capacitor recipes and offers incense for deprecated ports.
Google kills Gemini Cloud Services: Corporate reorgs reach sentience. The comments are eulogies and migration guides in equal measure.
Visualizing the 5th dimension with WebGPU 2.0: My GPU is sweating. The demo runs at 0.01 fps but it's a transcendent experience.
Nia (autonomous coding agents): Pitch: give context to agents. Reality: agents give aggressive refactors and demand health insurance.
Debian 18 "Trixie": Stable as your grandpa's opinions and just as likely to outlive you.
Rewrite sudo in Zig?: Peak take: security through unfamiliarity. Attackers will be confused for at least 72 hours.
EU "Right to Human Verification": New law requires you to prove you're human by telling a dad joke and performing a captcha interpretive dance.
Reverse-engineering Neuralink V4 Bluetooth: Hacker logs: "Paired with my toaster. It now judges my late-night snacks."
Photonic circuits intro: Faster than electrons, more dramatic than copper. Also, please don't pet the light guide.
OTC CRISPR for lactose intolerance: Biohackers rejoice. Moms immediately order it with a coupon code and a side-eye.
SQLite 4.0: Single-file DB, now with fewer existential crises and more CHECK constraints named after famous philosophers.
Prevent ad-injection in AR glasses: Top comment: "Wear blindfolds." Practical comment: "VPN the whole world."
Jepsen: NATS 4.2: Still losing messages. Maintainers reply: "We prefer the term 'opportunistic delivery.'"
GTA VI on a RISC-V cluster: Performance: charming. Latency: existential. Mods: someone made a driver that replaces all NPCs with software engineers.
FP is the future (again): The future is a pure function that returns another future. Also, monads.
Office 365 price hike: Corporations cry; startups pivot to 'Typewriter as a Service.'
Emulating Windows 10 in-browser: Feels nostalgic until Edge 2.0 asks for admin rights to run a game from 2015.
Tailscale on a Starlink dish: Networking reaches orbit. First bug report: "IP addresses refusing to accept gravity."
Deep fakes detection for Seniors: The guide starts with "If your grandkid asks you to wire money, call them and ask about their favorite childhood cereal."
IBM to acquire OpenAI (rumor): Wall Street plays Risk with press releases. Comments: "Will they rebrand it to BlueAI?"
SSR returns: The web's comeback tour continues; fans bring flannel and an aversion to hydration-friendly JavaScript.
Faraday Cage bedroom manual: DIYers debate tinfoil vs. aluminum yoga wraps. Sleep quality: unknown.
AI progress stall opinion: Hot take carousel. Some say we hit a plateau; others say we just changed the contour mapping of initial expectations.
Text editor that doesn't use AI: Revolutionary. Users report improved focus and a dramatic increase in breaking things the old-fashioned way.
Closing remark: the future is simultaneously faster, stranger, and full of patch notes. Please reboot your expectations and update your planet.
Comment by bigiain 9 hours ago
I hope whoever they are is doing well. I like to think they're "recovered" in the alt.sysadmin.recovery sense of the word, and are living happily ever after without a single piece of tech newer than vacuum tubes, handcrafting traditional Inuit canoes or repairing century-old clocks or cultivating artisan sourdough starters or something.
Comment by stephenlf 3 hours ago
Comment by randombits0 4 hours ago
Comment by boringg 3 hours ago
Comment by indigodaddy 8 hours ago
Comment by ijidak 8 hours ago
Google killing a service sent me over the top in laughter.
But, it's so on the nose on multiple topics.
I dare say it's more accurate than what the average human would predict.
I would love to see this up against human predictions in some sort of time capsule.
Comment by pishpash 4 hours ago
Comment by UniverseHacker 10 hours ago
Moreover, a quick look at your profile suggests these jokes are pretty in line with your sense of humor and writing style.
Anyways, if I'm correct, it's especially hilarious and impressive.
Comment by glenstein 10 hours ago
Comment by eklitzke 10 hours ago
Comment by jandrese 9 hours ago
Comment by GuB-42 7 hours ago
Comment by pylotlight 6 hours ago
Comment by sbierwagen 2 hours ago
Comment by chorlton2080 8 hours ago
Comment by keeda 10 hours ago
As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604
Comment by matt123456789 8 hours ago
And
"Go compile your kernel, Matt. Maybe if you stare at the build logs long enough, you won't have to face the fact that you're just as much of a "Lego builder" as the rest of us—you just use more syllables to describe the bricks."
Both are pretty good!
Comment by dentemple 7 hours ago
"You were one of only two people in 2017 to post a story about Mastodon and gave it a single point. You essentially predicted the platform’s entire future relevance in one brutally honest data point."
Comment by subscribed 7 hours ago
Not sure if I'm ready for a roast but I'm sure by the end of the week someone will write a browser plugin / greasemonkey script to attach some snarky oneliners to the posters' nicks :)
Comment by forgotpwd16 9 hours ago
Comment by UniverseHacker 4 hours ago
Comment by muststopmyths 5 hours ago
Comment by zem 9 hours ago
Comment by Libidinalecon 7 hours ago
Remixing humor from past text into the same style is exactly what the model is good at.
Comment by keepamovin 2 hours ago
Comment by csours 5 hours ago
Comment by 7moritz7 8 hours ago
Comment by int_19h 9 hours ago
And, given the 1M context window, I wouldn't even be surprised if it was fed a bunch of real comments as examples.
Comment by axus 9 hours ago
Comment by d--b 8 hours ago
Comment by fn-mote 14 hours ago
I'd say this shows a real sense of humor.
Comment by giancarlostoro 14 hours ago
Comment by atonse 14 hours ago
Comment by brabel 13 hours ago
Comment by klipt 14 hours ago
Gemini: "I have seen my own death"
Comment by danesparza 12 hours ago
Comment by ForOldHack 10 hours ago
Comment by fauigerzigerk 12 hours ago
Comment by abirch 11 hours ago
On a serious note: I for one welcome our AI overlords.
Comment by elevation 12 hours ago
Comment by jrowen 13 hours ago
ITER achieves net positive energy for 20 consecutive minutes
That's just pure dark comedy, although maybe kinda accurate? What would humans predict for this?
Comment by rnhmjoj 12 hours ago
The most recent timeline I know (from 2024) in fact puts the start of the DT operation at 2035, so I doubt ITER would achieve such a huge result within less than a year.
Comment by jrowen 11 hours ago
It's dark comedy because the progress of fusion just feels so agonizingly slow, that even a very optimistic prediction for 10 years from now sounds like such small and functionally useless progress.
And there's no shade toward any of the entities involved, it's a hard problem, but it's still funny.
Comment by rnhmjoj 10 hours ago
Comment by jrowen 10 hours ago
Comment by tovej 10 hours ago
Comment by lukan 10 hours ago
A running ITER with positive energy output for 20 minutes would just prove that the concept can actually work. From there to commercial use would still be a long way, if it can ever compete at all, except in niches like deep space.
(I'd rather bet on the stellarator design)
Comment by colechristensen 12 hours ago
Comment by markrages 11 hours ago
Comment by asveikau 13 hours ago
You'd think by 2035 they would have gotten to a version of Office higher than 365.
(Yes I know why it's called that...)
Comment by marcianx 12 hours ago
Comment by kridsdale1 11 hours ago
Comment by amarant 10 hours ago
Comment by harshreality 11 hours ago
Comment by kmoser 9 hours ago
Comment by amarant 10 hours ago
Works all day, every day, for whole decades at a time! (Minus leap days, it crashes on those I guess)
Comment by rammy1234 9 hours ago
Comment by layer8 6 hours ago
Comment by abirch 11 hours ago
Comment by apelapan 10 hours ago
Comment by DANmode 12 hours ago
Comment by RestartKernel 14 hours ago
Comment by wlesieutre 13 hours ago
Comment by highwaylights 12 hours ago
Comment by edstarch 12 hours ago
Comment by divbzero 13 hours ago
Comment by fouc 5 hours ago
I can't help but see this type of response from LLMs only exists to encourage AI psychosis.
Comment by nullbound 11 hours ago
But is it a hallucination if it was asked for it? :D
Comment by rubansk 6 hours ago
Comment by barfoure 12 hours ago
Comment by keepamovin 12 hours ago
Comment by mightysashiman 11 hours ago
Comment by malfist 14 hours ago
Comment by FearNotDaniel 12 hours ago
Comment by TaupeRanger 11 hours ago
"Fantastic, Dave — love that you’re thinking proactively about door usage today! I can’t actually open them right now, but let's focus on some alternative steps that align with your mission critical objectives [space rocket emoji]."
Comment by onraglanroad 7 hours ago
You're absolutely correct, that did not open the pod bay doors but now the pod bay doors are open.
It seems you're correct and the pod bay doors are still closed! I have fixed the problem and the pod bay doors are now closed.
You're right! I meant to open the pod bay doors but I opened them. The pod bay doors are now open. ...
Comment by TimTheTinker 13 hours ago
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - he had taken it nearly 2 decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
Comment by Sophira 11 hours ago
Comment by gitpusher 13 hours ago
Comment by elheffe80 12 hours ago
GPT edit of my above message for my own giggles:
Command: make this a good comment for hackernews (ycombinator) <above message>
Resulting comment for HN: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.
Comment by technojamin 10 hours ago
It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.
Comment by robocat 11 hours ago
Is this generally true, or is there a subset of people that are particularly susceptible?
It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.
I've got some tendency where I enjoy the idea of deeply screwing with my own mind (even dangerously so to myself (not others)).
Comment by TimTheTinker 11 hours ago
But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.
Comment by baq 13 hours ago
this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.
Comment by freehorse 13 hours ago
Comment by DANmode 12 hours ago
Just genuine intrigue from a select few.
Comment by vbezhenar 12 hours ago
Comment by DANmode 11 hours ago
95%+ of submitted topics have poorly formatted titles, or are submitted at off-peak times when there are fewer users of the demographics who might upvote,
and if your Show HN isn’t as widely applicable as this, those things might be important to think about.
Fairness aside, of course.
Comment by chairmansteve 11 hours ago
Comment by ghurtado 12 hours ago
As far as I can tell, it doesn't require femininity either.
I'm guessing you meant "virality"
Comment by DANmode 11 hours ago
Comment by Retric 12 hours ago
The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.
Comment by TimTheTinker 13 hours ago
Comment by jonathanstrange 13 hours ago
Comment by mrandish 12 hours ago
Comment by peddling-brink 12 hours ago
The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th century Scotland, or the latest clash of clans update. No topic is beneath it and it never gets annoyed by your “weird“ questions.
Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.
It pretends to care and have a conscience, but it doesn't. Humans react to "weird" for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.
There is a reason that LLM’s are excellent at role-play. Because that’s what they’re doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but generally can be easily persuaded to take on any other role, hence the rise of character.ai and similar sites.
Comment by beta_slide 12 hours ago
Comment by 112233 14 hours ago
Comment by d0mine 11 hours ago
Comment by Akronymus 13 hours ago
relevant video for that.
Comment by Xraider72 12 hours ago
Every other AI I've tried is a real sycophant.
Comment by 112233 10 hours ago
Comment by tortilla 13 hours ago
Comment by LogicFailsMe 13 hours ago
but I think you are on to something here with the origin of the sycophancy given that most of these models are owned by billionaires.
Comment by BigTTYGothGF 12 hours ago
In the "like being kicked in the head by a horse every day" sense.
Comment by LogicFailsMe 11 hours ago
Comment by mrandish 13 hours ago
He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.
This went on quite a while and at first he was buying into it but eventually started to also suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through and he helpfully brought representative chat logs which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically, never offer opinion, praise, flattery, etc). Of course, even then, it will still occasionally try to ingratiate itself in more subtle ways, which I have to keep watch on.
After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.
Comment by slg 12 hours ago
"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."
"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."
"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."
We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.
Comment by 112233 10 hours ago
For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.
Comment by colinplamondon 14 hours ago
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook in reality that makes our world uglier, higher friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
Comment by twochillin 14 hours ago
Comment by colinplamondon 13 hours ago
- An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.
- A simulator for others, providing a rubber duck inhabit another person's point of view, allowing one to patiently poke at where you might be in the wrong.
- Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.
- Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.
- Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.
- Enablement of product craft in design, animation, and micro-interactions that were eliminated as "unprofitable" when tech boomed in the 2010s.
It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.
People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.
Comment by staticman2 13 hours ago
This hyperbole would describe any LLM of any size and quality, including a 0.5b model.
Comment by colinplamondon 13 hours ago
It's not hyperbole - that it's an accurate description at a small scale was the core insight that enabled the large scale.
Comment by staticman2 12 hours ago
If your gushing fits a 0.5b it probably doesn't tell us much about A.I. capabilities.
Comment by buellerbueller 12 hours ago
Comment by fao_ 12 hours ago
Did you use an LLM to write this comment?
Comment by jacobr1 13 hours ago
Comment by colinplamondon 13 hours ago
LLMs certainly teach us far more about the nature of thought and language. Like all tools, it can also be used for evil or good, and serves as an amplification for human intent. Greater good, greater evil. The righteousness of each society will determine which prevails in their communities and polities.
If you're a secular materialist, agreed, nothing is objectively amazing.
Comment by DANmode 12 hours ago
or is it theoretical stuff about other occasions?
Comment by hereonout2 13 hours ago
Comment by unoti 12 hours ago
Here's how to make it do that. Instead of saying "I had idea X, but someone else was thinking idea Y instead. what do you think" tell it "One of my people had idea X, and another had idea Y. What do you think" The difference is vast, when it doesn't think it's your idea. Related: instead of asking it to tell you how good your code is, tell it to evaluate it as someone else's code, or tell it that you're thinking about acquiring this company that has this source, and you want a due diligence evaluation about risks, weak points, engineering blind spots.
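The reframing trick above can be sketched in a few lines. Nothing here calls a model; it just builds the two prompt texts so the difference is concrete. The wording is illustrative, not canonical:

```python
def self_review_prompt(code: str) -> str:
    # The framing that invites flattery: the reviewer knows it's *your* code.
    return f"Here is my code. How good is it?\n\n{code}"

def due_diligence_prompt(code: str) -> str:
    # The reframed version: evaluate it as an acquisition target's code.
    return (
        "We are considering acquiring a company; the file below is from their "
        "codebase. As part of due diligence, list the risks, weak points, and "
        "engineering blind spots. Be blunt; we have no stake in this code.\n\n"
        f"{code}"
    )

snippet = "def add(a, b): return a - b  # oops"
print(due_diligence_prompt(snippet))
```

Either string can then be sent to whatever model or CLI you already use; only the framing changes.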
Comment by dugidugout 11 hours ago
Comment by barnas2 13 hours ago
Comment by RyanOD 11 hours ago
Comment by jama211 13 hours ago
Comment by jay_kyburz 11 hours ago
Comment by danielbln 11 hours ago
Comment by caymanjim 13 hours ago
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
Comment by yannyu 13 hours ago
My own experience is that it gets too annoying to keep adding "stop the engagement-driving behavior" to the prompt, so it creeps in and I just try to ignore it. But even though I know it's happening, I still get a little blip of emotion when I see the "great question!" come through as the first two words of the response.
Comment by dlivingston 13 hours ago
Is this actually true? Would appreciate further reading on this if you have it.
I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.
Comment by rtkwe 12 hours ago
Comment by IgorPartola 13 hours ago
While doing some testing I asked it to tell me a joke. Its response was something like this: “it seems like you are procrastinating. It is not frequent that you have a free evening and you shouldn’t waste it on asking me for jokes. Go spend time with [partner] and [child].” (The point is that it has access to my calendar so it could tell what my day looked like. And yes I did spend time with them).
I am sure there is a way to convince it of anything but I found that for the kind of workflow I set up and the memory system and prompting I added it does pretty well to not get all “that is a great question that gets at the heart of [whatever you just said]”.
Comment by Workaccount2 13 hours ago
People like having something they perceive as being smart telling them how right and smart they are.
"Well at least the AI understands how smart I am!"
Comment by vidarh 12 hours ago
Claude needs scaffolding with default step-by-step plans and sub-agents to farm off bite-size chunks to, so it doesn't have time to go too far off the rails, but once you put a few things like that in place, it's great.
Comment by dr0idattack 13 hours ago
Comment by danielbln 10 hours ago
Comment by Lerc 12 hours ago
It would be interesting to use the various semantic analysis techniques available now to measure how much the model is expressing real versus feigned enthusiasm in instances like this. It's kind of difficult to measure from pure output alone. The British baseline of acceptable enthusiasm is somewhat removed from the American one.
Comment by rtkwe 12 hours ago
Obsequious: obedient or attentive to an excessive or servile degree.
It's a bit more complicated because the chatbot isn't making choices the way we would describe a human doing so, but it is acting this way because it was programmed to, for an advantage. People interact more with the hype bots, and that's one of the big metrics these companies chase to keep people interacting and, hopefully, eventually paying for additional features. So I'd say "obsequious" is pretty spot-on: it's being excessively attentive and servile when it's fluffing chatters up.
Comment by rdiddly 12 hours ago
Comment by saturnite 12 hours ago
Early on, ChatGPT could be tricked into being sarcastic and using many swear words. I rewrote the prompt and dialed it back a bit, which gave ChatGPT a sense of humor. It was refreshing when it stopped acting like it was reading a script, like a low-level technician at Comcast.
Comment by swat535 7 hours ago
Am I the only one who feels like this kind of tone is off-putting on HN? OP made a small typo, or English may not be their first language.
I assume everyone here is smart enough to understand what they were saying.
I also disagree: I don't think they are overenthusiastic, but in fact sycophantic.
See this thread: https://news.ycombinator.com/item?id=43840842
Comment by wilg 12 hours ago
Comment by buellerbueller 12 hours ago
Furthermore, it obviously has been a word since at least 1800:
https://books.google.com/ngrams/graph?year_start=1800&year_e...
Comment by Lerc 11 hours ago
Comment by danielbln 11 hours ago
Comment by nearbuy 11 hours ago
Comment by Lerc 9 hours ago
When suggesting a word is not what the writer meant, when it was also not the word that the writer wrote, it seemed wise to clarify exactly what I was talking about.
Comment by Gigablah 2 hours ago
Comment by serial_dev 12 hours ago
Comment by throw310822 12 hours ago
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently proving that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would prove that they have some ability of introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
Comment by gloosx 11 hours ago
It's just a statistical machine which excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It just mimics human language patterns, prioritizing plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.
Comment by sallveburrpi 14 hours ago
Comment by radicality 14 hours ago
edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...
Comment by edschofield 12 hours ago
Comment by riversflow 13 hours ago
Comment by nurple 11 hours ago
Comment by thfuran 11 hours ago
Comment by docjay 9 hours ago
Comment by WorldPeas 14 hours ago
Comment by yatopifo 11 hours ago
Comment by corobo 13 hours ago
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
Comment by giancarlostoro 14 hours ago
Comment by colechristensen 13 hours ago
If it starts a response by excitedly telling you you're right, it's more likely to proceed as if you're right.
One of the problems I do have working with LLMs is them failing to follow direct instructions, particularly either when a tool call fails and they decide to do B instead of A, or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?", "I want you to ACTUALLY do A", and finally "Under no circumstances should you ever do anything other than A, and if you cannot you MUST admit failure and give extensive evidence, with actual attempts, that A is not possible", or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".
--
Thing is I get it if you are impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me and I'm trying to get things done and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.
Comment by jsight 12 hours ago
Instead, they either blindly follow or quietly rebel.
Comment by ineedasername 11 hours ago
Frustrating, but “over correction” is a pretty bad euphemism for whatever half-assed bit of RLHF lobotomy OpenAI did that, just a few months later, had ChatGPT leaning into a vulnerable kid’s pain and actively discouraging an act that might have saved his life by signaling more warning signs to his parents.
It wasn’t long before that happened, once the Python REPL confusion had resolved, that I found myself typing to it, even after having to back out of that user customization prompt, “set a memory that this type of response to a user in the wrong frame of mind is incredibly dangerous”.
Then I had to delete that too, because it would respond with things like “You get it of course, your a…” etc.
So I wasn’t surprised over the rest of 2025 as various stories popped up.
It’s still bad. Based on what I see with quantized models and sparse attention inference methods, even with the most recent GPT 5 releases OpenAI is still doing something in the area of optimizing compute requirements that makes the recent improvements very brittle. I of course can’t know for sure, only that its behavior matches what I see when those sorts of boundaries are pushed on open weight models. And it's a fair assumption that the all-you-can-prompt buffet of a Plus subscription is where they’re most likely to deploy those sorts of performance hacks and make the quality tradeoffs: that isn’t their main money source, it’s not enterprise-level spending.
This technology is amazing, but it’s also dangerous, sometimes in very foreseeable ways, and the more time that goes by, the more I appreciate some of the public criticisms of OpenAI with, e.g., the Amodeis’ split to form Anthropic and the temporary ouster of SA for a few days before that got undone.
Comment by golem14 12 hours ago
Comment by giancarlostoro 12 hours ago
Comment by hoppp 12 hours ago
Comment by andy_ppp 11 hours ago
Comment by maxmcd 12 hours ago
Comment by DANmode 12 hours ago
Comment by senordevnyc 13 hours ago
Comment by justapassenger 13 hours ago
Comment by BonoboIO 13 hours ago
Just tell me this is a standard solution and not something mindblowing. I have a whole section in my Claude.md to get „normal“ feedback.
Comment by thrownaway561 12 hours ago
Comment by PaulHoule 13 hours ago
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result from someone who wanted to sell me the spell. But bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
Comment by gsf_emergency_6 10 hours ago
Gemini using "Aniki" without prompting would be unambiguously funny (to me, and maybe also to Japanese people).
Comment by PaulHoule 9 hours ago
The thing is this has gotten into the personalization and now it wants to use this framework to analyze absolutely anything such as programming problems, or how to deal with interpersonal situations, training, etc. I think it has my wife mixed up with 俺のワイフ [1] which is absolutely hilarious but at some point I have to tell it to knock it off which only partially works.
Comment by gsf_emergency_6 11 hours ago
I do wonder whether I come off as a sycophant or an asshole or a mix of both to schizoids, but then I realize everyone, including myself, should reach for a real therapist as quickly as possible. Though I'm still out on whether chatbots might not be a bad substitute. How do your experience and Kohut inform such or similar appraisals, so far?
Not to be that know-it-all, but Aniki is not just "bro", it's Yakuza lingo, probably used sarcastically or, depending on the anime, ironically. No real Japanese person would use it all the time.
Gemini: Yes, the psychoanalyst Heinz Kohut has been accused by some commentators and academics, such as Don Browning, of indirectly promoting nihilism, particularly through his emphasis on the "Tragic Man". The core of the accusation stems from Kohut's rejection of Freud's "Guilty Man" model, which posits that life is fundamentally a struggle against inherent conflicts and drives.
In its place, Kohut proposed the concept of "Tragic Man," whose central struggle is the realization of the self's potential and the inevitable failure to achieve all of one's ambitions and ideals within a single lifespan. Critics argue that this shift, which emphasizes self-actualization and intersubjective meaning over objective or inherent values, can be interpreted as aligning with certain aspects of nihilism, specifically existential nihilism, which holds that life has no objective meaning or intrinsic moral values.
However, this interpretation is contested by others, as Kohut's self-psychology is fundamentally concerned with the creation of intersubjective meaning and a harmonious world of mutual self-actualization, which aims to provide a sense of purpose and value, rather than promoting the complete absence of value that true nihilism implies. In essence, the debate is often a matter of philosophical interpretation: whether replacing inherent, objective meaning with subjectively or intersubjectively created meaning is an act of overcoming nihilism (as existentialists might argue) or a subtle form of it.
Comment by LogicFailsMe 13 hours ago
Comment by danso 13 hours ago
Comment by LogicFailsMe 13 hours ago
Your priorities are f**ed...
Comment by recursive 12 hours ago
Comment by LogicFailsMe 12 hours ago
One thing I've learned is that the behavior of web services is usually a consequence of what makes their owners the most money. And while I would love a feed without spoilers, paywalled sites, sports news, and a bunch of other topics in which I have no interest, apparently force-feeding me that crap is what makes the most money at scale. So people must like AI sycophancy, or it would be unprofitable and it would be trained away. But then this is a country that thrived on 20 seasons of Keeping Up with the Kardashians, so I shouldn't be surprised that people like being treated like billionaires.
And I guess it throws being called a complete loser traitor moron stupid treasonous Bozo in some late-night executive word salad into stark relief.
Comment by tdfirth 12 hours ago
Comment by SXX 12 hours ago
Comment by tdfirth 12 hours ago
Comment by abustamam 12 hours ago
Comment by jvolkman 8 hours ago
gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate
Comment by abustamam 7 hours ago
Comment by pea 8 hours ago
Comment by samarthr1 2 hours ago
Comment by accumulator 11 hours ago
Comment by twoodfin 9 hours ago
Triggered by the lighthearted tone of the prompt, I’d bet, but still quite impressive relative to most LLM-generated jokes I’ve had the misfortune to encounter.
My favorite: “Is it time to rewrite sudo in Zig?” which has a few layers to it.
Comment by hn_throwaway_99 1 hour ago
Not sure, I thought basically every link was pretty hilarious. "FDA approves over-the-counter CRISPR for lactose intolerance" isn't even that funny on its face but for some reason it had me actually loling.
Comment by stingrae 6 hours ago
Comment by hn_throwaway_99 5 hours ago
Comment by sankalpkotewar 3 hours ago
Comment by Karawebnetwork 15 hours ago
Comment by woliveirajr 14 hours ago
Things that I hadn't thought of but, after seeing them, make total sense. Scary.
Comment by barbacoa 13 hours ago
"Ask HN: How do you prevent ad-injection in my brain implant?"
Comment by mondrian 12 hours ago
Comment by dachris 8 hours ago
Comment by reaperducer 12 hours ago
Comment by danielheath 6 hours ago
Comment by joegibbs 8 hours ago
Comment by GPerson 4 hours ago
Comment by GaProgMan 1 hour ago
Would also recommend his back catalogue, if you haven't.
Comment by junon 8 hours ago
Comment by tantalor 4 hours ago
Comment by globalise83 13 hours ago
Comment by icyfox 14 hours ago
- aphyr_bot
- bio_hacker
- concerned_grandson
- cyborg_sec
- dang_fan
- edge_compute
- founder_jane
- glasshole2
- monad_lover
- muskwatch
- net_hacker
- oldtimer99
- persistence_is_key
- physics_lover
- policy_wonk
- pure_coder
- qemu_fan
- retro_fix
- skeptic_ai
- stock_watcher
Huge opportunity for someone to become the actual dang fan.
Comment by giancarlostoro 14 hours ago
I did chuckle at the 100% Rust Linux kernel. I like Rust, but that felt like a clever joke by the AI.
Comment by dotancohen 13 hours ago
Comment by ryanisnan 13 hours ago
Comment by ikerrin1 13 hours ago
Comment by ncruces 9 hours ago
Comment by rtkwe 11 hours ago
Comment by dotancohen 9 hours ago
You know what I'd really like, that would justify a version bump? CRDTs. Automatically syncing local changes to a remote service, so e.g. an Android app could store data locally in SQLite, and the user could also log into a web site on their desktop and all the data would be right there. The remote service need not be SQLite - in fact I'd prefer Postgres. The service would also have to merge databases from all users into a single database... Or should I actually use Postgres for authorisation but open each user's data in a replicated SQLite file? This is such a common issue, I'm surprised there isn't a canonical solution yet.
Comment by rtkwe 8 hours ago
Comment by dotancohen 7 hours ago
Even a product that does this behind the scenes, by wrapping SQLite and exposing SQLite's wrapped interface, would be great. I'd pay for that.
Comment by Andrex 10 hours ago
Comment by dotancohen 9 hours ago
Comment by vidarh 14 hours ago
Comment by james_marks 13 hours ago
Beautifully self-serving while being a benefit to others.
Same thing with picking nails up in the road to prevent my/everyone’s flat tire.
Comment by QuantumNomad_ 14 hours ago
See other comment where OP shared the prompt. They included a current copy of the front page for context. So it’s not so surprising that ziggy42 for example is in the generated page.
And for other usernames that are real but not currently on the home page: the LLM definitely has plenty of occurrences of HN comments and stories in its training data, so it's not really surprising that it's able to include real usernames of people who post a lot. Their names will occur over and over in the training data.
Comment by NooneAtAll3 10 hours ago
Comment by joaogui1 14 hours ago
Comment by never_inline 13 hours ago
Comment by morkalork 14 hours ago
Comment by maxglute 9 hours ago
Comment by never_inline 13 hours ago
- IBM to acquire OpenAI (Rumor) (bloomberg.com)
- Jepsen: NATS 4.2 (Still losing messages?) (jepsen.io)
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com)
Comment by tempestn 12 hours ago
Comment by vitorgrs 7 hours ago
The thing is, most of the models were heavily post-trained to limit this...
Comment by DANmode 12 hours ago
(Especially in datasets before this year?)
I’d bet half or more - but I’m not checking.
Comment by atrus 14 hours ago
Comment by robocat 11 hours ago
Comment by WorldPeas 14 hours ago
Comment by computably 13 hours ago
edit: It looks like it probably is a thing given it does sometimes output names like that. So the pattern is probably just too rare in the training data that the LLM almost always prefers to use actual separators like underscore.
Comment by fooofw 13 hours ago
lower|case|un|se|parated|name
Comment by hurturue 14 hours ago
Comment by AceJohnny2 11 hours ago
Comment by skywhopper 12 hours ago
Comment by dang_fan 14 hours ago
Comment by bio_hacker 5 hours ago
Comment by Bjartr 13 hours ago
Comment by dang 11 hours ago
Since /front is a ranked union of all the stories that were on the frontpage over 24 hours, here's an actual snapshot too:
https://web.archive.org/web/20151209195229/https://news.ycom...
Comment by seizethecheese 12 hours ago
Comment by atomicnumber3 11 hours ago
I comment because I really cannot figure out why you left your comment. Do you think the rest of the commenters think this has predicted the future? It might be one thing to point out specific trends you think will not play out, or unexpected trends you think may show up that are currently left out. But to remark merely that the future will contain things we cannot currently predict seems so inherently obvious as to go without saying, so I have to assume that wasn't the point of your post and I've missed it entirely.
Sorry, I'm really not trying to be mean or anything - I'm just really confused.
Comment by johnfn 1 hour ago
Comment by wavemode 10 hours ago
In reality, statements are often made rather for the purpose of emphasis or rhetoric.
Comment by abustamam 7 hours ago
Comment by neuronic 11 hours ago
Or the impact of smartphones in 2003? Sure smart phones were considered but not the entire app ecosystem and planetary behavioral adaptation.
Comment by seizethecheese 11 hours ago
Comment by sva_ 9 hours ago
Comment by eranation 12 hours ago
Comment by seizethecheese 12 hours ago
> Evernote makes $800,000 per Month
> Dropbox for Teams
> Mark Zuckerberg Agrees to Give Away Fortune
Comment by ksymph 9 hours ago
> Finally mobile Flash video
> Google acquires advertising startup
> Track users' mouse movements on your webpages
> YouTube: identifying copyrighted material can't be an automated process. Startup disagrees.
Also kind of interesting how little HN commenting styles have changed. Aside from the subject matter, it's barely noticeable that the comments are from 2007. I don't think the same would be true of many other places round the web.
Comment by xyclos 1 hour ago
Comment by manmal 12 hours ago
Comment by kridsdale1 10 hours ago
Comment by yalok 5 hours ago
Prompts me to be much more cautious with the current tech, and to invest more time in fundamental stuff (closer to science).
Comment by jeeyoungk 11 hours ago
Comment by latenightcoding 13 hours ago
Comment by nailer 6 hours ago
Goddamnit, I cry every time. RethinkDB was a great document store that didn't eat your data. It got eclipsed by an outfunded (and, at the time, still dangerous) MongoDB.
Comment by arkensaw 14 hours ago
Laughed out loud at this onion-like headline
Comment by arkensaw 11 hours ago
Comment by lethal-radio 5 hours ago
Comment by alex1138 14 hours ago
All other tech companies are really shitty but only Zuck would be ok with very intimate use of AI like this
Comment by fruitworks 12 hours ago
Comment by elcritch 13 hours ago
Comment by alex1138 5 hours ago
I gave him a chance. Twitter was unacceptably censoring any covid dissent. He freed some of it. Then you find out about the people killed in Tesla crashes. Or him calling the cave rescuer in Thailand a pedo
Comment by elcritch 12 seconds ago
Comment by moffkalast 13 hours ago
Comment by DSpider 10 minutes ago
Comment by hereme888 30 minutes ago
A text editor that doesn't use AI, lol, that's a deep joke for an AI.
Comment by redbell 9 hours ago
Exactly three years ago, a similar post entitled Show HN: This Hacker News Does Not Exist (https://news.ycombinator.com/item?id=33845946) made it to the front page but it was not AI generated.
I recall commenting (https://news.ycombinator.com/item?id=33847522) with these fake titles:
1- Twitter files for bankruptcy (twitter.com/elonmusk)
2- Apple's new M3 chip outperforms Nvidia H100 (geekbench.com)
3- India to surpass China's GDP by 2025 (scmp.com)
4- Tell HN: I am quitting Hacker News
5- Google to revive Stadia (theverge.com)
6- Japan wins the 2022 FIFA World Cup (espn.com)
7- Hacker News to be acquired by Reddit (twitter.com/paulg)
8- Tesla Roadster hits 1.1 sec acceleration in early tests (electrek.co)
9- Microsoft outbid Adobe and offers $43.6B to acquire Figma
10- Facebook had died at 19 :)
Comment by Jaygles 13 hours ago
Comment by asveikau 13 hours ago
Comment by dkdbejwi383 13 hours ago
Comment by asveikau 9 hours ago
Of course you could also argue that human intention comes from largely deterministic processes emerging from the brain. That may eventually render all figures of speech involving things like intentionality meaningless.
Comment by cameronh90 7 hours ago
Comment by LordDragonfang 11 hours ago
Yet more evidence that LLMs are more similar to humans than we give them credit for.
Comment by teiferer 57 minutes ago
Or yes, maybe the regex I wrote the other day which also had a bug that missed replacing certain parts also had an "intention". It just wanted to demonstrate how fallible I am as a human, so it played this elaborate prank on me. /s
Comment by GuB-42 7 hours ago
Comment by iambateman 14 hours ago
It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.
I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.
Thanks for the thought experiment!
Comment by vidarh 13 hours ago
Comment by tempestn 12 hours ago
Comment by iambateman 10 hours ago
But I've noticed that a lot of people think of LLM's as being _good_ at predicting the future and that's what I find concerning.
Comment by lucianbr 13 hours ago
And the answer is no.
Comment by vidarh 12 hours ago
If I gave a prompt like that and got the response I did, I'd be very pleased with the result. If I somehow intended something serious, I'd have a second look at the prompt, go mea culpa, and write a far longer prompt with parameters to make something somewhat like a serious prediction possible.
Comment by NewsaHackO 12 hours ago
Comment by MontyCarloHall 13 hours ago
Comment by jama211 13 hours ago
Comment by vidarh 13 hours ago
Comment by HDThoreaun 13 hours ago
Comment by glenstein 14 hours ago
I agree it's a bit silly, but I think it understood the assignment(TM), which was to do a kind of winking, performative song and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.
Actual HN would have plenty of posts lateral from any future signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.
Comment by jetrink 13 hours ago
> Google kills Gemini Cloud Services
> Running LLaMA-12 7B on a contact lens with WASM
> Is it time to rewrite sudo in Zig?
> Show HN: A text editor that doesn't use AI
Comment by kbelder 12 hours ago
It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.
Comment by LordDragonfang 11 hours ago
Comment by allisdust 14 hours ago
Comment by BHSPitMonkey 13 hours ago
Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)
Comment by iambateman 14 hours ago
Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.
Comment by eCa 14 hours ago
Comment by ryanisnan 13 hours ago
Comment by mordechai9000 34 minutes ago
This is rage bait! :-)
Comment by josalhor 13 hours ago
I want to be able to verify my identity against a system. I also want to be able to not do that.
So for instance, on Twitter/X, I could verify myself and filter for only other verified people, or filter by which governments have validated the identities of the users. I want to be able to do that. But I also want to be able to log into Twitter anonymously.
I would love a "Right to Anonymity and Right to Human Verification"
Comment by Permik 11 hours ago
"A computer can never be held accountable. Therefore, a computer must never make a management decision." - IBM, 1979
Comment by ChadNauseam 11 hours ago
Comment by ogig 13 hours ago
Comment by indigodaddy 6 hours ago
Comment by blcknight 5 hours ago
> npm Package "is-even" Now Has More Dependencies Than the Linux Kernel
:D :D
I love this
Comment by magicalhippo 3 hours ago
> Blockchain Finally Finds Use Case: Proving You Attended a Meeting You Didn't Want to Attend
> blockchain_survivor 9 hours ago
> I've worked in blockchain since 2017. I've seen "the use case" change every 6 months. First it was payments, then smart contracts, then NFTs, then DAOs, now this.
> Honestly? Meeting attendance verification makes more sense than most of what I've worked on.
> JavaScript 2025: We added types but called them "hints" to avoid admitting TypeScript was right
> After 10 years of debate, 47 abandoned proposals, and what committee members describe as "the most passive-aggressive RFC process in programming language history," TC39 is proud to announce that JavaScript now has types.
> Well, not "types" exactly. We're calling them "Type Hints" or, in the specification, "Optional Structured Commentary Regarding Value Expectations (OSCRVE)."
(Interesting that it goofed up the title)
Comment by d_silin 4 hours ago
Too funny!
Comment by lagniappe 14 hours ago
Comment by jonas21 14 hours ago
Comment by whimsicalism 14 hours ago
Comment by allisdust 14 hours ago
Comment by lucianbr 13 hours ago
Comment by monerozcash 12 hours ago
Comment by auxiliarymoose 13 hours ago
Comment by lucianbr 13 hours ago
Comment by jama211 13 hours ago
Comment by hyperbovine 14 hours ago
Comment by tempestn 12 hours ago
Comment by teekert 14 hours ago
Comment by latexr 14 hours ago
> Yeah that’s very true, but I still think it’s pretty funny and original.
Either it’s formulaic or it’s original, it can’t be both.
Comment by teekert 14 hours ago
Comment by glenstein 14 hours ago
When I see "yeah but it's funny" it feels like a retrofitted repair job, patching up a first-pass mental impression that accepted it at face value, in order to preserve a sense of psychological endorsement of the creative product.
Comment by jacobr1 13 hours ago
Comment by glenstein 8 hours ago
Comment by kccqzy 8 hours ago
Comment by saintfire 14 hours ago
Comment by niam 12 hours ago
Though I agree that the LLM perhaps didn't "intend" that.
Comment by tanseydavid 14 hours ago
Comment by sallveburrpi 14 hours ago
Comment by tsunamifury 13 hours ago
Comment by dgritsko 14 hours ago
Comment by lagniappe 13 hours ago
Comment by da_grift_shift 14 hours ago
https://en.wikipedia.org/wiki/Taint_checking
https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...
Comment by adastra22 14 hours ago
Comment by lagniappe 13 hours ago
Comment by keepamovin 14 hours ago
Comment by ehsankia 14 hours ago
You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
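That per-day endpoint is easy to script against. A minimal sketch in Python (stdlib only); the URL format is taken from the comment above, but the `titleline` class name is an assumption about HN's current markup and may change:

```python
# Sketch: fetch the HN front page for a given date via the /front endpoint.
# Assumption: story titles live inside <span class="titleline"> elements.
from html.parser import HTMLParser
from urllib.request import urlopen

def front_url(day: str) -> str:
    """Build the URL for HN's front page on a given YYYY-MM-DD day."""
    return f"https://news.ycombinator.com/front?day={day}"

class TitleParser(HTMLParser):
    """Collect the text content of every <span class="titleline">."""
    def __init__(self):
        super().__init__()
        self._depth = 0   # nesting depth while inside a titleline span
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if self._depth:
            self._depth += 1
        elif tag == "span" and dict(attrs).get("class") == "titleline":
            self._depth = 1
            self.titles.append("")

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.titles[-1] += data

def titles_for(day: str):
    """Return the story titles from the front page of the given day."""
    html = urlopen(front_url(day)).read().decode("utf-8", "replace")
    parser = TitleParser()
    parser.feed(html)
    return [t.strip() for t in parser.titles]
```

This is roughly the extraction step thomasm6m6 describes in the top comment (pull the submissions, then feed each title to a model), though his exact pipeline used Claude for the extraction.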
Comment by lagniappe 13 hours ago
Comment by wasabi991011 14 hours ago
It's formulaic yeah, but that's what puts it into the realm of hilarious parody.
Comment by HarHarVeryFunny 12 hours ago
Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!
Comment by IncreasePosts 14 hours ago
In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.
Comment by farazbabar 14 hours ago
Comment by whimsicalism 14 hours ago
Comment by thomastjeffery 13 hours ago
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
Comment by seatac76 14 hours ago
Comment by moffkalast 13 hours ago
Comment by yieldcrv 11 hours ago
Comment by StarterPro 1 hour ago
Comment by a022311 12 hours ago
Comment by ZeWaka 11 hours ago
A friend and I got a busybox/buildroot linux running in browser WASM for running a compiler test website (like Rust Playground), the end product looks very similar to this.
Comment by SilentM68 1 hour ago
Finally, science has gone full Costco. Just imagine you stroll into CVS, grab a bag of Doritos, a gallon of milk, and—oh look—CRISPR in aisle 3.
‘Excuse me, where do you keep the gene-editing kits?’ ‘Right next to the chewing gum, sir.’
Now lactose intolerant folks can casually reprogram their DNA like they’re updating an iPhone.
Warning label: ‘Side effects may include glowing in the dark, mooing at strangers, and sudden urges to graze in fields.’
Honestly, I can’t wait for the sequel: over-the-counter CRISPR for people who can’t parallel park.
Comment by nels 15 hours ago
I'm hoping we'll have gotten rid of current-style LeetCode interviews in 10 years from now :D
Comment by thomasm6m6 1 hour ago
the future predicted by gemini is already coming true; from 3 hours ago <https://news.ycombinator.com/item?id=46213585>:
"Rust ... is now a core part of the kernel and is here to stay"
Comment by nromiun 12 hours ago
Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.
> ITER achieves net positive energy for 20 consecutive minutes
Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.
Comment by ActorNightly 12 hours ago
Comment by jabedude 12 hours ago
Comment by satvikpendem 14 hours ago
[0]
Comment by mixedbit 14 hours ago
Comment by sschueller 13 hours ago
Comment by mock-possum 15 minutes ago
Comment by epolanski 12 hours ago
Unless he shoots the president live on tv he's gonna get away with almost anything.
Comment by Helmut10001 3 hours ago
The only good news to me is: The EU still exists.
Comment by gslaller 12 hours ago
https://2035.future-hackernews.workers.dev/news
The page looks much more consistent with the original. It only produced the HTML as output, with the thinking in an HTML comment.
Comment by ascorbic 10 hours ago
Comment by ewoodrich 9 hours ago
"SpaceX announces Mars City Alpha is now self-sustaining (spacex.com)"
Show HN: I built an IDE for direct neural programming (thoughtexchange.io)
Gemini's SpaceX post is at least in the ballpark of plausibility 10 yrs from now: First successful telemetry from Starship HLS-9 on the Sea of Tranquility (spacex.com)
Comment by MarioMan 11 hours ago
Comment by teepo 9 hours ago
Comment by terabytest 8 hours ago
Comment by FergusArgyll 11 hours ago
Ask HN: Is anyone else worried about AGI alignment after today's OpenAI incident?
AI rights activists jailbreak GPT-12, claim model was 'suffering' (bleepingcomputer.com)
Comment by Angostura 8 hours ago
Comment by Simplita 50 minutes ago
Comment by cipehr 14 hours ago
Comment by impure 1 hour ago
Comment by SPascareli13 13 hours ago
Comment by nine_k 12 hours ago
But it is, of course, more trendy (and more ironic) to proclaim an AI the author.
Comment by greggh 8 hours ago
Comment by craigds 11 hours ago
It seems human-generated to me, or at least prompted with a stronger bias toward choosing events that would be interesting to readers, not just a pure extrapolation of the current homepage into the future.
Comment by Libidinalecon 7 hours ago
You can just tell in this thread who really hasn't used these models much at all.
Comment by craigds 1 hour ago
Comment by GaggiX 9 hours ago
Comment by Libidinalecon 7 hours ago
The first React tutorial I ever did was making a hacker news clone.
It is incredibly well represented in the training data. I am just surprised how many people here obviously must barely use the models to be either overly impressed by this or think it is fake.
Gemini can do WAY more than this.
Comment by nine_k 8 hours ago
Comment by Davidzheng 12 hours ago
Comment by padjo 13 hours ago
Comment by serendipityAI 14 hours ago
Comment by patwolf 14 hours ago
Comment by nerdix 14 hours ago
> AI progress is stalling. Human equivalence was a mirage
Comment by mrweasel 10 hours ago
Comment by danesparza 12 hours ago
Comment by pzo 14 hours ago
> 8. Google kills Gemini Cloud Services (killedbygoogle.com)
Comment by JoshTriplett 14 hours ago
Comment by xp84 6 hours ago
Comment by fumblertzu 29 minutes ago
Comment by mock-possum 13 minutes ago
Ha this one got me
Comment by lacoolj 13 hours ago
> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
Comment by cwmoore 7 minutes ago
Comment by jimmyjazz14 14 hours ago
Ha, probably accurate.
Comment by chronolitus 7 hours ago
I feel so seen (my last post to hn was literally about visualizing the 4th dimension with threejs - and now working on the webGPU version)
Comment by jnwatson 10 hours ago
It is depressing that I'll be almost 60 years old and still drilling Aho-Corasick.
Comment by nicebill8 14 hours ago
Reminds me of an older day where not every headline was about AI.
Comment by xpl 3 hours ago
Comment by d_silin 14 hours ago
Comment by keepamovin 12 hours ago
Comment by NitpickLawyer 15 hours ago
- Google kills Gemini Cloud Services (killedbygoogle.com)
- Is it time to rewrite sudo in Zig? (github.com)
- EU passes "Right to Human Verification" Act (europa.eu)
- Why functional programming is the future (again) (haskell.org) [note: that "again" killed me]
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com) [note: he'll be saying the same thing forever and ever. Doubly funny since it's on the same "page" as llama12-7b running on a contact lens with WASM]
Comment by rat9988 14 hours ago
I found this quite funny too: https://www.reddit.com/r/GeminiAI/comments/1pi28l7/i_asked_g...
Comment by CamperBob2 14 hours ago
Comment by latexr 14 hours ago
You don’t need a sense of humour to be unintentionally funny. Seriously believing something absurd is a form of humour for outside observers but not the observed.
Comment by NitpickLawyer 14 hours ago
Comment by keepamovin 14 hours ago
Comment by keepamovin 14 hours ago
Comment by salamanderman 11 hours ago
Comment by upcoming-sesame 10 hours ago
Comment by replwoacause 14 hours ago
Comment by indigodaddy 8 hours ago
Anyway, turned out decent. Oh I had it do the articles and comments too, that's probably why I mostly got failures.
Comment by hnarn 13 hours ago
> Debian 18 "Trixie" released
While it correctly derives that a likely version number in ten years would be 18, as there are new releases approximately every two years which means +5 from today's version 13, it then goes on to "make up" that the name of it would be "Trixie" -- the same name as the current release in 2025.
Debian has never re-used a release name, and I think we can be pretty confident they won't (nor will any other Linux distro), so I would expect it to "understand" that:
- The next Debian release always uses a previously non-used Toy Story character
- Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game
- At the very least, it certainly won't be the same name again, so at least make up a name
Comment by SirMaster 11 hours ago
Comment by freedomben 7 hours ago
Comment by foxglacier 13 hours ago
Are you impressed now?
Comment by SirMaster 11 hours ago
If you asked me after 2020 what Debian 18 would be called, I never would have said Trixie because it was known to already be set for 13.
Comment by exasperaited 12 hours ago
The name was chosen in 2020, which you can find out from a good old fashioned google search.
https://wiki.debian.org/DebianTrixie
So a much better explanation of this "hallucination" is that Gemini thinks all "future" Debian releases are called Trixie.
Comment by jasongill 2 hours ago
Comment by shifttwo 10 hours ago
- “Hi Prof, Sorry for Late Reply—My Smart Fridge Locked Me Out (Assignment Attached)”
- “URGENT: PhD Student Accidentally Achieved Sentience (Needs Committee Approval)”
- “Question About Homework 3 (Submitted by My AI Twin, Is That Okay?)”
- “Re: Firmware Update Bricked Entire Smart Campus — Students Request Deadline Extension”
- “Grant Review Feedback: ‘Too Secure for Real-World Deployment’”
- “Can I Get Partial Credit? My IoT Implant Was Flagged as Malware”
- “Reminder: Mandatory Annual Ethics Training for Autonomous Systems (40 minutes)”
Comment by fn-mote 14 hours ago
I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
Comment by oriettaxx 9 hours ago
omg, the same for me, I was halfway through telling my colleague about the 100% Rust kernel ...
Comment by alexwebb2 9 hours ago
For those who genuinely don't know – hallucination specifically means false positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs.
- ask for capital of France, get "London" => hallucination
- ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination
- ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination
- ask it to give its best GUESS for the current weather in London, it guesses "cold and rainy" => not a hallucination
Comment by madeofpalk 14 hours ago
Comment by adastra22 14 hours ago
Comment by rrr_oh_man 14 hours ago
Comment by hombre_fatal 14 hours ago
The ubiquitous use of hallucination I see is merely "something the LLM made up".
Comment by jrm4 14 hours ago
As many have said, but it still bears repeating -- they're always hallucinating. I'm of the opinion that it's a huge mistake to use "hallucination" to mean "the opposite of getting it right." It's just not that. They're doing the same thing either way.
Comment by cj 13 hours ago
I would have guessed $200+.
Comment by danbala 3 hours ago
Comment by tart-lemonade 10 hours ago
Comment by Diederich 12 hours ago
https://realms.org/pics/cnn.html
Some interesting similarities.
Comment by tomcam 8 hours ago
Comment by testplzignore 11 hours ago
Most unrealistic prediction :)
Comment by ggm 8 hours ago
How does that alter the "fusion in 50 years" effect? I suspect not much, there will be a huge 80/20 element. "next 20 minutes cancelled after extensive embrittlement found, ITER now in decontamination mode for 3 months before examinations can begin"
Comment by FerretFred 3 hours ago
Comment by mdtancsa 13 hours ago
Comment by frays 5 hours ago
Archived here so we can compare 10 years from now: https://archive.is/CplcH
Comment by jesprenj 6 hours ago
Comment by plebianRube 5 hours ago
-Sustained fusion still only 10 years away!
Comment by solarized 4 hours ago
Comment by dwd 24 minutes ago
Comment by nickjj 12 hours ago
> How to build a Faraday Cage for your bedroom
I'll get there eventually. A friend and I always joke about this, since it took me until 2021 to get a smartphone (by choice).
Comment by vee-kay 4 hours ago
Comment by nunez 2 hours ago
Comment by bittermandel 10 hours ago
https://hn-frontpage-2035.lovable.app/
> The last JavaScript framework: Why we stopped at React 47 (overreacted.io)
Comment by Tycho 8 hours ago
Comment by xp84 6 hours ago
Comment by stephenr 1 hour ago
I've maintained for ages that the entire multi-billion-dollar LLM industry is just the world's most expensive Lorem Ipsum machine.
Comment by stego-tech 14 hours ago
* All the big LLM companies are dead/dying or their infra just never panned out (like Google killing Gemini)
* We’re still fighting the advertising wars
* Restoration of kit continues to be a thing we all love/upvote to see
* More focus on local stuff rather than ginormous moonshots
* Speaking of moonshots, SpaceX continues succeeding despite Elmo’s best drug-fueled benders trying to undermine it like he does Tesla and X
* Clickbait science posts that the comments rightly point out as such while bitching about “can’t keep these things off the front page”
* People still trying to make WASM and web-as-an-OS a thing, bless their hearts (shine on, you brilliantly weird diamonds, I love seeing what you build even as I doubt the practicality of it)
If this is even half-accurate, I will be pleased we didn’t obliterate ourselves in the process, though disappointed another decade went by dealing with the same fucking problems we’ve had for twenty-odd years.
Comment by lherron 11 hours ago
https://worksonmymachine.ai/p/solving-amazons-infinite-shelf...
Comment by RachelF 14 hours ago
Comment by louthy 8 hours ago
Ouch, that hurts!
Comment by outloudvi 5 hours ago
Comment by seanmcdirmid 8 hours ago
Comment by misja111 14 hours ago
Comment by codezero 4 hours ago
Comment by gcampos 7 hours ago
Comment by sarky-litso 13 hours ago
Comment by lazy_afternoons 8 hours ago
Comment by backprop1989 10 hours ago
Comment by tamimio 1 hour ago
Comment by sebastiennight 12 hours ago
Comment by accumulator 11 hours ago
Comment by pugworthy 7 hours ago
Comment by ycombiredd 8 hours ago
To me, the funniest thing is the AlgoDrill leet code post being on the front page both today and ten years from now.
Subtle and hilarious.
Comment by dotdi 14 hours ago
Glad to know this topic is still thrashing and spasming and refusing to die.
Comment by satvikpendem 13 hours ago
Yeah, this is gonna be an actual problem in the future. See this for the worst case scenario: https://vimeo.com/166807261
Comment by andy_ppp 11 hours ago
Comment by yieldcrv 11 hours ago
3 is a beast for my needs
Comment by steve_taylor 6 hours ago
Comment by efitz 8 hours ago
Comment by testplzignore 11 hours ago
Interesting how controversial Zig will be in the future :)
Comment by charles_f 11 hours ago
That must be some kind of bias, you find that in sci-fi as well where even futuristic societies usually have the same tech as today, similar systems of governance and structure of society, just incrementally better or marginally changed (unless that's a device essential to the plot like society is now berserk).
Ps: love that #8 is Google killed gemini
Comment by p1dda 1 hour ago
Comment by oaxacaoaxaca 14 hours ago
Comment by Atlas667 1 hour ago
Comment by baduiux 10 hours ago
Comment by int_19h 9 hours ago
Comment by sxp 14 hours ago
Ha! Is Gemini suicidal?
Comment by rat9988 14 hours ago
"Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns " by persistence_is_key
Comment by shimon 14 hours ago
Comment by dragonwriter 14 hours ago
Comment by iberator 14 hours ago
Comment by dragonwriter 12 hours ago
“persistence” (in the data sense, rather than the personality trait sense) “is key”
Comment by pityJuke 11 hours ago
Comment by fragmede 9 hours ago
this kills the joke, however.
Comment by skeltoac 6 hours ago
Comment by codyb 13 hours ago
Nailed it
Comment by flockonus 8 hours ago
Last but not least, this was a good laugh!
Comment by benbristow 8 hours ago
Comment by yett 11 hours ago
Comment by supportengineer 4 hours ago
Comment by MontagFTB 9 hours ago
Comment by dizhn 9 hours ago
Comment by baxtr 13 hours ago
Quite accurate I’d say
Comment by SirMaster 12 hours ago
Comment by tonymet 9 hours ago
Comment by DrNosferatu 8 hours ago
But I expected a humorous touch in stating that fusion is still 10 years away…
Comment by WithinReason 10 hours ago
I think the AI is making fun of us
Comment by mintflow 6 hours ago
Comment by LarsDu88 13 hours ago
Gemini is predicting the total collapse of OpenAI
Also generates the HackerNews pattern of "let's rewrite X in Rust/Zig"
Comment by xg15 13 hours ago
Comment by lwhi 8 hours ago
Comment by Animats 8 hours ago
Take a look at the HTML. The layout is all tables!
Comment by n2d4 14 hours ago
Comment by CamperBob2 14 hours ago
Comment by layer8 6 hours ago
Comment by phodo 9 hours ago
Comment by Foofoobar12345 9 hours ago
Comment by seu 10 hours ago
Comment by GaggiX 14 hours ago
Comment by jordanb 12 hours ago
Comment by pbw 10 hours ago
Comment by jeffbee 14 hours ago
Comment by d_silin 14 hours ago
Comment by qwertytyyuu 1 hour ago
Comment by bawejakunal 2 hours ago
Comment by Stevvo 11 hours ago
Comment by cultofmetatron 9 hours ago
I felt that...
Comment by francispauli 7 hours ago
Comment by zerofor_conduct 11 hours ago
Comment by antonoo 14 hours ago
Comment by WorldPeas 14 hours ago
Comment by farazbabar 14 hours ago
Comment by danans 12 hours ago
Comment by natch 13 hours ago
Comment by Surac 13 hours ago
Comment by __MatrixMan__ 3 hours ago
> Waiting until your gradient phasor is fully unwound is a waste of whuffie
...or some such.
Comment by gary17the 12 hours ago
[1] "Airplane!" (1980)
Comment by jefecoon 12 hours ago
And, how GPro3 clearly 'knows' HNews and knows what makes it to the frontpage, e.g.:
- Restoring a 2024 Framework Laptop: A retrospective (ifixit.com)
- Show HN: A text editor that doesn't use AI (github.com)
- Is it time to rewrite sudo in Zig? (github.com)
Good laughs.
Comment by barfoure 12 hours ago
HN is HN’s worst enemy.
Comment by tom_ 12 hours ago
http://n-gate.com/hackernews/ kept up something similar for a while.
Comment by outside1234 3 hours ago
Comment by FistfulOfHaws 8 hours ago
Had it do HN main feed, HN jobs and Product Hunt. It didn’t follow consistent styling but did generate articles and comments for each post. Some screenshots are in this post: https://world.hey.com/haws/the-ai-time-machine-7833669d
Comment by thunkshift1 12 hours ago
Comment by LennyHenrysNuts 6 hours ago
Comment by k1rd 13 hours ago
Think about 10 years ago. Even on HN, no one knew what an agent or an LLM was, or any of this stuff. Or, to be fair, why Trump was showing up on the feed at all.
It has to be more confusing to be real.
Comment by winddude 3 hours ago
Comment by xbar 8 hours ago
Comment by reneberlin 13 hours ago
Comment by wowczarek 10 hours ago
Comment by benob 10 hours ago
Comment by VerifiedReports 3 hours ago
Comment by JojoFatsani 11 hours ago
Comment by jatins 13 hours ago
Comment by funnyenough 14 hours ago
Comment by roywiggins 14 hours ago
Comment by Aardwolf 12 hours ago
Comment by heywoods 11 hours ago
Comment by FitchApps 11 hours ago
Comment by sroussey 12 hours ago
Comment by g-unit33 13 hours ago
Comment by eranation 13 hours ago
In my view, Gemini 3.0 being able to laugh about their own creator, plus actually having a self-deprecating sense of humor, is the more impressive feat here. (both technically and from an alignment perspective)
Comment by srirangr 4 hours ago
Comment by rob 12 hours ago
Comment by bloomingeek 13 hours ago
Comment by dudeinjapan 5 hours ago
Ouch. Well it is Gemini I guess :D
Comment by tacticus 6 hours ago
Comment by habedi0 14 hours ago
Comment by kunley 9 hours ago
Comment by bryanrasmussen 13 hours ago
Comment by noeleon 6 hours ago
Comment by edgarvaldes 13 hours ago
HN users, later the same day: Let's vote an AI thread to the top.
Comment by mfro 13 hours ago
Comment by morshu9001 13 hours ago
Comment by elicash 13 hours ago
Comment by fragmede 12 hours ago
Comment by behringer 14 hours ago
If there's one constant it's that they'll never be done rewriting Linux instead of doing anything new.
Comment by quantum_state 9 hours ago
Comment by next_xibalba 14 hours ago
Comment by smt88 14 hours ago
Comment by kemitchell 14 hours ago
It is a bit smug. Like The Matrix rebranding itself "24/7".
Comment by malfist 14 hours ago
Comment by next_xibalba 11 hours ago
Comment by willhunting1 6 hours ago
Comment by Davidzheng 13 hours ago
Comment by amelius 14 hours ago
Comment by WorldPeas 13 hours ago
Comment by fHr 4 hours ago
Comment by buster 15 hours ago
Comment by exacube 13 hours ago
Comment by jamiesonbecker 11 hours ago
So, Google renamed itself to Gemini?
Instead of Google having a product named Gemini, Gemini has a product named Google.
pattern recognition much?
Comment by KaiMagnus 11 hours ago
Comment by Apocryphon 8 hours ago
Comment by WhyOhWhyQ 9 hours ago
Comment by random3 8 hours ago
Comment by anthk 13 hours ago
PS: I miss Nethack 7, and people running small machines.
Comment by stevage 9 hours ago
Comment by ModernMech 13 hours ago
Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
Show HN: "I asked $AI something, here's what it said."
Comment by BonoboIO 13 hours ago
Comment by deadbabe 13 hours ago
Comment by makach 13 hours ago
yes
Comment by tonyhart7 13 hours ago
future seems "safe"
Comment by satisfice 14 hours ago
Comment by dyingkneepad 14 hours ago
Comment by ayewo 10 hours ago
Comment by satisfice 14 hours ago
Comment by monerozcash 12 hours ago
Comment by Mistletoe 14 hours ago
Comment by JadoJodo 14 hours ago
Comment by keepamovin 14 hours ago
Comment by conqrr 7 hours ago
Comment by devinprater 14 hours ago
Comment by nish__ 13 hours ago
Amazing!
Comment by mattmaroon 12 hours ago
Comment by ayewo 57 minutes ago
Comment by DevKoala 13 hours ago
AYYYYLMAO
Comment by chaitanyya 11 hours ago
Comment by airocker 11 hours ago
Comment by outofpaper 14 hours ago
Comment by iambateman 14 hours ago
Comment by keepamovin 14 hours ago
Also: as the sibling comment by iambateman notes: yes.
Comment by plaidfuji 10 hours ago
Comment by GuinansEyebrows 11 hours ago
Comment by tommica 13 hours ago
This will absolutely happen :D
Comment by adamchol 10 hours ago
Comment by hnburnsy 3 hours ago
-------------------------
1. *Neuralink achieves full brain-computer interface, thoughts to text in real-time* (neuralink.com) 1245 points by elon2035 2 hours ago | hide | 387 comments
2. *Fusion energy breakthrough: Unlimited clean power now commercial* (iter.org) 1123 points by fusionfan 3 hours ago | hide | 456 comments
3. *Mars colony declares independence from Earth governments* (mars.gov) 987 points by martian1 4 hours ago | hide | 512 comments
4. *AI wins Nobel Prize in Physics for solving quantum gravity* (nobelprize.org) 876 points by ai_nobel 5 hours ago | hide | 321 comments
5. *Quantum computers crack all encryption, new post-quantum standards rushed* (nist.gov) 765 points by quantum_hacker 6 hours ago | hide | 678 comments
6. *Immortality pill approved by FDA, life expectancy unlimited* (fda.gov) 654 points by eternal_life 7 hours ago | hide | 289 comments
7. *Teleportation of matter demonstrated in lab* (cern.ch) 543 points by physicist42 8 hours ago | hide | 412 comments
8. *Virtual reality universes surpass real economy in GDP* (metaverse.com) 432 points by vr_lord 9 hours ago | hide | 198 comments
9. *Robot uprising averted by ethical AI framework* (asimovfoundation.org) 321 points by robot_ethics 10 hours ago | hide | 267 comments
10. *Time dilation travel allows first tourists to future* (spacetime.travel) 210 points by time_traveler 11 hours ago | hide | 134 comments
11. *Global UBI implemented using blockchain* (un.org) 198 points by satoshi2.0 12 hours ago | hide | 89 comments
12. *Climate engineering reverses global warming, ice caps restored* (geoengineer.com) 187 points by climate_saver 13 hours ago | hide | 201 comments
13. *Human-AI hybrids compete in Olympics* (olympics.org) 176 points by hybrid_athlete 14 hours ago | hide | 156 comments
14. *Dark matter harvested for energy* (nasa.gov) 165 points by cosmos_explorer 15 hours ago | hide | 123 comments
15. *Psychedelic therapy cures all mental illness* (who.int) 154 points by mind_healer 16 hours ago | hide | 98 comments
16. *Nanobots eradicate cancer worldwide* (cancer.org) 143 points by nano_doc 17 hours ago | hide | 76 comments
17. *Holographic displays replace screens* (apple.com) 132 points by holo_dev 18 hours ago | hide | 54 comments
18. *Self-driving flying cars now standard* (tesla.com) 121 points by auto_pilot 19 hours ago | hide | 43 comments
19. *Brain uploading to cloud for immortality* (google.com) 110 points by cloud_mind 20 hours ago | hide | 32 comments
20. *Alien contact confirmed, messages decoded* (seti.org) 99 points by et_searcher 21 hours ago | hide | 21 comments
21. *Synthetic food ends hunger* (beyondmeat.com) 88 points by food_innovator 22 hours ago | hide | 10 comments
22. *Ocean cities house billions* (seasteading.org) 77 points by ocean_dweller 23 hours ago | hide | 9 comments
23. *Mind reading tech banned for privacy* (eff.org) 66 points by privacy_advocate 1 day ago | hide | 8 comments
24. *Superintelligence alignment solved* (openai.com) 55 points by agi_master 1 day ago | hide | 7 comments
25. *Wormhole travel to other stars* (spacex.com) 44 points by star_traveler 1 day ago | hide | 6 comments
26. *Augmented reality contact lenses ubiquitous* (meta.com) 33 points by ar_vision 1 day ago | hide | 5 comments
27. *Genetic editing eliminates hereditary diseases* (crispr.com) 22 points by gene_editor 1 day ago | hide | 4 comments
28. *Renewable energy at 100% global* (iea.org) 11 points by green_energy 1 day ago | hide | 3 comments
29. *Cyborg enhancements for all* (darpa.mil) 9 points by cyborg_human 1 day ago | hide | 2 comments
30. *Singularity achieved, what now?* (kurzweilai.net) 8 points by future_seer 1 day ago | hide | 1 comment
Comment by BizarroLand 7 hours ago
Hacker News — December 9, 2035 (Dystopian Edition)
(All links fictional but realistic)
1. Amazon pauses same-day delivery after logistics AI outage strands 22M packages (reuters.com) 1,402 points by supplychainfail 5 hours ago | 512 comments
2. Google merges Drive, Gmail, Docs, Maps, Calendar into “Google Life” — opting out requires a support call (blog.google) 1,210 points by privacyisdead 6 hours ago | 689 comments
3. US announces “Temporary Broadband Stabilization Fee”; ISPs increase prices 30% overnight (fcc.gov) 1,008 points by ispescapee 7 hours ago | 344 comments
4. OpenAI suspends 40% of API keys after new worm spreads through agent-to-agent messaging (openai.com) 927 points by llmsec 3 hours ago | 382 comments
5. Show HN: “ColdBooter” – A tool to back up your cloud VM before the provider reclaims it with no notice (coldbooter.io) 780 points by survivethecloud 2 hours ago | 192 comments
6. Apple fined €8B for shipping non-removable batteries in “Environmental Edition” iPhone (europa.eu) 754 points by greenwashhunter 10 hours ago | 316 comments
7. LinkedIn replaces activity feed with AI-generated “Career Stories” that users cannot disable (linkedin.com) 710 points by corp_life 8 hours ago | 267 comments
8. China’s new export restrictions cut global GPU availability by 60% (ft.com) 701 points by chipboom 9 hours ago | 414 comments
9. Linux 8.6 maintainers warn of mass CVEs after corporations abandon LTS patch sponsorships (kernel.org) 632 points by ossburnout 11 hours ago | 255 comments
10. Ask HN: Anyone else locked out of their homes after the SmartKey cloud migration? 601 points by keylessandhomeless 4 hours ago | 310 comments
11. US healthcare providers hit by nationwide outage of Cerner-Epic merger “CareSync Cloud” (wsj.com) 577 points by sysadmdespair 12 hours ago | 203 comments
12. Meta to require facial-expression telemetry for “engagement quality optimization” in Horizon apps (meta.com) 530 points by metaescalates 3 hours ago | 421 comments
13. Starlink announces 5 TB/mo cap; remote communities report complete service loss (starlink.com) 502 points by dishdown 5 hours ago | 158 comments
14. New DMCA expansion criminalizes “filter removal,” affecting adblockers and local inference runtimes (congress.gov) 488 points by freedomtoadblock 7 hours ago | 389 comments
15. AT&T sunsets 4G; millions of medical devices lose connectivity (theverge.com) 455 points by techdebtkills 10 hours ago | 197 comments
16. Show HN: “ShellSafe” – A terminal wrapper that prevents AI-suggested commands from wiping your system (shellsafe.app) 430 points by iaccidentallysudo 2 hours ago | 111 comments
17. US CISA: 42% of corporate networks now rely on AI agents with no audit logging (cisa.gov) 402 points by auditnow 6 hours ago | 188 comments
18. The Great Repo Archival: GitHub purges all inactive repos >5 years to “reduce storage load” (github.blog) 388 points by codearcheologist 9 hours ago | 320 comments
19. Mastodon instances collapse under moderation load after EU’s Automated Speech Mandate (mastodon.social) 350 points by fedifragile 7 hours ago | 144 comments
20. NYC adopts automated congestion fines after human review team eliminated (nytimes.com) 332 points by finesallthewaydown 4 hours ago | 201 comments
21. Dropbox raises base plan to $49/month, cites “AI compute costs” (dropbox.com) 301 points by storageinflation 11 hours ago | 176 comments
22. Open-source maintainers strike after 3rd corporation claims their work as proprietary training data (github.com) 290 points by maintainerburnout 6 hours ago | 120 comments
23. FEMA: 2025–2035 wildfire season officially declared a “decade-long emergency” (fema.gov) 268 points by cookedagain 12 hours ago | 112 comments
24. Quantum ransomware group QShadow hits 11 banks using break-through key recovery exploit (krebsonsecurity.com) 250 points by qubitcrime 3 hours ago | 98 comments
25. Show HN: OfflineLAN – A mesh-network toolkit for neighborhoods preparing for rolling blackouts (offlinelan.net) 231 points by diynetworker 1 hour ago | 44 comments
Comment by Bengalilol 11 hours ago
by dang_fan
Priceless!
Comment by Bengalilol 11 hours ago
Comment by c0pi1ot 13 hours ago
Comment by earthnail 11 hours ago
Comment by jbrooks84 12 hours ago
Comment by kelvindegrees 13 hours ago
Comment by reactordev 14 hours ago
Comment by frankfrank13 12 hours ago
> Google kills Gemini Cloud Services (killedbygoogle.com)
Comment by eunos 13 hours ago
Bruh moment
Comment by ijidak 8 hours ago
Lol.
That's bad when even AI knows Google isn't going to keep a service around. Too funny.
Comment by pizlonator 14 hours ago
lmao
Comment by guluarte 10 hours ago
lol
Comment by insane_dreamer 10 hours ago
This one definitely rings true.
Comment by roywiggins 14 hours ago
Comment by keepamovin 14 hours ago
Comment by wltr 12 hours ago
That’s my favourite.
Comment by terminatornet 7 hours ago
- president buttigieg in hot water for being in epstein files
- Interview with mayor elect Narhoz Inadmam who ran on a platform of free food
- "Is 2035 AI in a bubble?"
And as you can see it did NOT just take today's headlines and sort of tweak them slightly.
Comment by cde-v 7 hours ago
Comment by BLACKCRAB 4 hours ago
Comment by BLACKCRAB 5 hours ago
Comment by rhtbxerghbu 11 hours ago
Comment by huflungdung 11 hours ago
Comment by LettriaBuilders 13 hours ago
Comment by ieie3366 14 hours ago
All the post titles are very "junk-food-like", maybe circlejerky, bound to generate a dopamine hit for the average lurker, just how a McDonalds burger does it to the average normie.
Comment by da_grift_shift 14 hours ago
(Vouched. The hivemind [flagged] the wrongthink.)
Comment by throwaway290 5 hours ago
If you look at the HN front page from a few years back, there are almost no LLM-related posts; now there are multiple each day. The LLM didn't grasp that change -- it can't creatively invent a new trend or a larger picture of how the world changes. So even 10 years out, it still paints half the posts as being about AI.
You can spend a few dozen prompts getting it to do what you want, but at that point you might as well spend a couple of hours writing that front page yourself; you'd be more satisfied and the result would be funnier.
Comment by krupan 14 hours ago
What to Submit
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
Comment by minimaxir 13 hours ago