Don't post generated/AI-edited comments. HN is for conversation between humans.

Posted by usefulposter 2 hours ago

Counter1903Comment743OpenOriginal

Comments

Comment by nkh 1 hour ago

What a welcome post. The whole reason I come here is to get thoughtful input from smart people, and not what I could get myself from an LLM. While we are at it; Think your own thoughts as well :) I know how easy it is to "let it come up with a first draft" and not spend the real effort of thinking for yourself on questions, but you'll find it's a road to perdition if you let yourself slip into the habit. Thanks to all the humans still here!!

Comment by gabriel666smith 1 hour ago

Quite! It's very easy to send a HN link to one of our new artificial friends to see what they have to say about it. Subsequently publicly posting the inference variation you receive strikes me as very self-centered. Passing it off as your own words - which the majority seem to - is doubly bizarre.

It's very funny to imagine people prompting: "Write a compelling comment, for me, to pass off as my thoughts, for this HN news thread, which will attract both upvotes and engagement.".

In good faith, per the guidelines: What losers!

Comment by xpe 45 minutes ago

I agree with much of what you say, but it isn't as simple as "post to LLM, paste on HN". There are notable effects from (1) one's initial prompt; (2) one's phrasing of the question; (3) one's follow-up conversation; (4) one's final selection of what to post.

For me, I care a lot about the quality of thinking, as measure by the output itself, because this is something I can observe*.

I also care -- but somewhat less -- about guessing as to the underlying generative mechanisms. By "generative mechanisms" I mean simply "Where did the thought come from?" One particular person? Some meme (optimized for cultural transmission)? Some marketing campaign? Some statistic from a paper that no one can find anymore? Some dogma? Some LLM? Some combination? It is a mess to disentangle, so I prefer to focus on getting to ground on the thought itself.

* Though we still have to think about the uncertainty that comes from interpretation! Great communication is hard in our universe, it would seem.

Comment by kelnos 11 minutes ago

Sure, I agree that getting something you want (top post) out of an LLM isn't zero-effort.

But this isn't about effort. This is about genuine humanity. I want to read comments that, in their entirety, came out of the brain of a human. Not something that a human and LLM collaboratively wrote together.

I think the one exception I would make (where maybe the guidelines go too far) is that case of a language barrier. I wouldn't object to someone who isn't confident with their English running a comment by an LLM to help fix errors that might make a comment harder to understand for readers. (Or worse, mean something that the commenter doesn't intend!) It's a privilege that I'm a native English speaker and that so much online discourse happens in English. Not everyone has that privilege.

Comment by eek2121 3 minutes ago

This. LLMs are an autocomplete engine. They aren't curious. Take your curiosities and use your human voice to express them.

The only reason you should be using an LLM on a forum like this is to do language translation. Nobody cares about your grammar skills, and there really isn't a reason to use an LLM outside of that.

LLMs CANNOT provide unique objectivity or offer unknown arguments because they can only use their own training data, based on existing objectivity and arguments, to write a response. So please shut that shit down and be a human.

Signed, a verified/tested autistic old man.

cheers

Comment by c23gooey 26 minutes ago

Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post. It feels like fishing for a justification

Comment by xpe 12 minutes ago

> Taking the time to write something, and read over it is a better skill than asking an LLM to do it for you.

Yes, this is a great skill to have: no argument from me. This wasn't my point, and I hope you can see than upon reflection.

> All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post.

Consider that a reader of the word 'excuses' would often perceive an escalation of sorts. A dismissal.

> Quality comes from your ability to think and reason through a topic.

That's part of it. Since the quote above is a bit ambiguous to me, I will rephrase it as "What are the factors that influence the quality of a comment posted on Hacker News?" and then answer the question. I would then split apart that question into sub-questions of the form "To what extent does a comment ..."

- address the context? Pay attention to the conversational history?

- follow the guidelines of the forum?

- communicate something useful to at least some of the readers?

- use good reasoning?

One thing that all of the four bullet points require is intelligence. Until roughly ~2 years ago, most people would have said the above demand human intelligence; AI can't come close. But the gap is narrowing. Anyhow, I would very much like to see more intelligence (of all kinds, via various methods, including LLM-assisted brainstorming) in the service of better comments here. But intelligence isn't enough; there are also shared values. Shared values of empathy and charity.

In case you are wondering about my "agenda"... it is something along the lines of "I want everyone to think a lot harder about these issues, because we ain't seen NOTHING yet". I also strive try to promote and model the kind of community I want to see here.

Comment by detectivestory 9 minutes ago

great idea, but seems a little futile if there is no protection agains llms training on HN comments. ironically, if HN can succefully prevent llm content, it will become one of the best sources available for training data

Comment by QQ00 1 hour ago

Totally agree with you. I come here to read comments made by humans. If I need to read comments made by AI Bots I would go to Twitter or reddit, both made me not read the comments section entirely.

Comment by jasoneckert 1 hour ago

I actually do something similar on my personal site using this note that includes a purposeful typo: https://jasoneckert.github.io/site/about-this-site/

I'm hoping people catch that typo after reading "every single word, phrase, and typo (purposeful or not)" and smiled every time I've had someone post a PR with a fix for it (that I subsequently reject ;-)

Comment by 13 minutes ago

Comment by nomel 45 minutes ago

I would enjoy a "block user" feature, to help this. I personally want to live in an online bubble of interesting thoughts. This seems close (or better, since people I enjoy can contradict my own flags) [1].

[1] https://news.ycombinator.com/item?id=47141119

Comment by kelnos 7 minutes ago

I'm torn on this. On one hand I do agree with your goal about wanting to live in a bubble of interesting thoughts. But on the other... I know I have my biases, and I'm sure I might end up blocking people who actually are insightful and interesting but either a) had an off day and shitposted, or b) says insightful things in ways that make me angry and get past my sense of reasonableness.

Comment by wilg 1 hour ago

It's far from proven or obvious whether involving an LLM in your thought process degrades your thought process.

Comment by theappsecguy 41 minutes ago

It seems plenty obvious, but there's also scientific backing slowly catching up: https://www.media.mit.edu/publications/your-brain-on-chatgpt...

Comment by fc417fc802 9 minutes ago

It's not at all obvious because there's more than one way to go about it. Obviously entirely outsourcing is bad. Whereas working cooperatively seems highly beneficial to me.

Google search has been getting progressively worse for technical topics for at least the past decade. Now suddenly they started providing a free tutor capable of custom tailoring graduate level explanations of technical topics for me on demand. The difference is night and day.

Comment by kelnos 5 minutes ago

Sure there's more than one way to go about it, but what matters is how people typically do go about it.

And certainly individuals can make their own decision to engage with an LLM in positive, self-thought-provoking ways, but it's still useful to understand how people generally do use them in the real world.

Comment by kelnos 6 minutes ago

Sure, so we shouldn't assert that with confidence, but I think it's safe to guess that, for most people's use, that is probably the case.

Yes, some people (see some sibling commenters) do engage with an LLM in ways that might make them more thoughtful, but I have a hard time believing that's the common case.

Comment by justinnk 9 minutes ago

I think it really depends on the how. Engaging with it in a socratic debate-style argument [1] if no fellow human is available might very much support your thought process. On the other hand, just obtaining the solution to one‘s homework/problem/task/… won‘t be very beneficial for one’s development. The latter is sadly much more convenient and probably accounts for most of the usage. I remember a saying about the mind being a muscle: in order to keep it in good shape, you have to use it actively.

[1] https://en.wikipedia.org/wiki/Socratic_method

Comment by AirGapWorksAI 1 hour ago

Agreed. In my case, I think I have found the opposite. At least, I find myself thinking hard about things more, now that I have started working hand in hand with AIs on different projects. Which is probably enhancing my cognitive ability, not degrading it.

Comment by andy99 19 minutes ago

This captures the problem, the sycophancy / preference optimization deludes people into thinking they’re on to something and posting things that don’t contribute to the discussion. It’s the “I drive better when I’m drunk” syndrome, it’s better just to outright ban it than to leave it to people’s judgement.

Comment by doctorpangloss 52 minutes ago

Many programmers believe that math is the best way to solve problems or order the world or whatever. There are lots of real 20 year olds out there using chatbots to "optimize" their humanities learning, or to "optimizing" using dating apps. It's a fact about this audience. Some people have a very myopic point of view, however, it coheres with certain cultural forces, overlapping with people of specific ethnic heritages, who are from California and New York, go to fancy school and post online, to earn tons of money, buy conspicuous real estate, date skinny women and marry young.

These aren't the marina bros, they're the guys who think they're really smart because they did well in math. They are using LLMs to reply to people. They LOOK like you. Do you get it?

Comment by 1 hour ago

Comment by caaqil 48 minutes ago

> The whole reason I come here is to get thoughtful input from smart people

I don't wanna be a party pooper here, but you will be lucky if the input satisfies one of those conditions. Getting input with both those attributes on HN is like finding life on Mars.

Comment by gus_massa 24 minutes ago

Remember to upvote good comments!

I think the situation is better in small discussions, that sometimes are lucky and get more technical.

Once a discussion reach 100 or so comments, most of the time the discussion is too generic, but there are a few hidden good comments here and there.

Comment by fudged71 54 seconds ago

What I think would actually be useful is a version of what was implemented on /r/ClaudAI which is an official bot which summarizes the discussion (and updates after x number of comments have been added). I think this level of synthesis has a compounding effect on discussion quality and pruning redundant arguments/topics.

Example: https://www.reddit.com/r/ClaudeAI/s/BJKLxzJA16

Comment by abtinf 2 hours ago

Good. This helps establish it in the HN culture. That’s the purpose of guidelines.

99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.

Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.

Comment by loeg 20 minutes ago

I mostly agree, although we've seen big shifts in the culture towards rule-deviating norms over time. Look at the guidelines for ideological battles or throwaway accounts, for example. And, as always:

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

Comment by gr8tyeah 1 hour ago

This is only meaningful if enough people read it and agree

Comment by abtinf 1 hour ago

That’s true. Fortunately, by virtue of it being added to the guidelines, quite a few folks here are prepared to reply to obviously generated comments by simply citing and linking the rule. Just search for “shallow dismissal” to see many examples.

It will take time, but eventually everyone will know about it.

Comment by altairprime 14 minutes ago

> quite a few folks here are prepared to reply to obviously generated comments by simply citing and linking the rule

Note that the guidelines do explicitly say not to post about guidelines violations in comments, and to email them instead. I know this isn’t a well-loved guideline in this modern era, but duly noted: those well-intended comments are themselves breaking the guidelines.

Comment by bigiain 17 minutes ago

Sadly, I suspect the rate of generation of AI "everyones" vastly exceeds the community's capacity to teach culture.

Comment by bhhaskin 1 hour ago

Nah they are pretty good a banning users that don't follow the guidelines.

Comment by abtinf 1 hour ago

Yes, and it’s not like they just insta-ban every infraction.

I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.

Comment by altairprime 1 hour ago

(They do react differently if you show a pattern of disregard rather than a one-time event; ‘dang before’ might pull up some of those in a search.)

Comment by jbaber 1 hour ago

One of the virtues of HN is polite prodding when the rules are broken.

Comment by 1 hour ago

Comment by VoodooJuJu 1 minute ago

[dead]

Comment by magicseth 1 minute ago

This is a huge problem. I solved it by making a secure ios keyboard that can cryptographically attest it was written by hand by me. It usses app attest to verify integrity, will not verify pasted text, and can be additionally signed by face-id. It could ne a zero trust way to prove humanity. Or at least capacitance a face and an iphone. https://typed.by/magicseth/130#

Comment by jedberg 1 hour ago

I'm absolutely 100% for this policy.

My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we used bulleted lists or Oxford commas.

So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.

(I've been accused multiple times of being an AI after writing long well written comments 100% by hand)

Comment by jjgreen 2 minutes ago

Good writers use semicolons and em-dashes. Sometimes we used bulleted lists or Oxford commas.

- You seem to have a rather high opinion of your own writing :-)

- Why the mix of tense (use/used)?

- Oxford commas are a monstrosity

Comment by tyg13 1 hour ago

I don't really think that good writing and LLM writing looks all that similar. It's not always easy to spot (and maybe HN users aren't always doing a great job at it), but even the best LLM output tends to have an "LLM smell" to it that's hard to avoid.

Like, sure, LLM writing is almost always grammatically correct, spelled correctly, formatted correctly, etc., which tends to be true of good writing. But there's a certain style that it just can't get away from. It's not just the em-dashes, the semi-colons, or the bulleted lists. It's the short, punchy sentences, with few-to-no asides or digressions. Often using idiom, but only in a stale, trite, and homogenized manner. Real humans, are each different -- which lends a certain unpredictability to our writing, even if trying to write to a semi-formal standard, the way "good" writers often do -- but LLMs are all so painfully the same, and the output shows it.

Comment by ordersofmag 15 minutes ago

Seems like the ability to distinguish LLM versus 'good human' writing depends on the size of the writing sample you have to look at (assuming you think it can be done). And that HN-scale posts are unlikely to be a long enough for useful discernment.

Comment by lordnacho 5 minutes ago

You're absolutely right!

Comment by jedberg 43 minutes ago

Those sentence constructions that are "tells" were also learned from good writers though. But here, I'll let you be the judge. This was a comment I wrote 100% myself on reddit, which was both downvoted and I got multiple DMs referencing it and telling me to "stop posting this AI slop":

https://www.reddit.com/r/ExperiencedDevs/comments/1pyjkuf/i_...

Granted, it was in a thread about AI and maybe people were on edge, but I was still accused, which to be honest hurt a bit after the effort I put into writing it.

Comment by nonameiguess 14 minutes ago

I get that it's possibly contrary to the point if people are looking to truly have conversations here, but at least 99% of the time, I post a comment and never come back. I said what I had to say and don't particularly feel like getting sucked into an argument if someone disagrees, and frankly, if I'm wrong I think I'll realize it eventually anyway. I'm more likely to dig in my heels and ossify in a wrong position if someone shits on me and I immediately feel the need to defend myself. It can mesmerize you into believing things you might not have if it didn't hit your ego. I could be deluded but think I'm good at making arguments, but that at least means I'm good at making arguments that convince myself, which can be dangerous because you can convince yourself of things that are wrong. The upside is if anyone is out there accusing me of being an LLM, I don't even know so it can't insult me.

It is amusing to witness this happening to others when it's someone like you who is a semi-public figure who should probably be well known on Reddit of all places.

Comment by girvo 1 hour ago

AI driven web design has the same smell, it’s quite fascinating to see the different tells in different media. Then it’s also quite fascinating to see those same tells change and evolve over time.

Comment by xboxnolifes 1 hour ago

LLMs have good writing in the same way that technical manuals can have good writing. It might all be correct, but it's usually not a good read.

Comment by 0______0 1 hour ago

Excuse me. I consider the writing within technical manuals strictly superior and meticulously written. It's fairly enjoyable to read what engineers/subject matter experts write about their own creations. Comparing those to LLM generated patronizing word vomit is a shame.

Comment by semiquaver 45 minutes ago

Good writers are often good in recognizably unique ways. To the extent that LLMs produce “good writing,” which I happen to think they mostly do, they tend to overuse specific devices which give their writing a quality that most people are already sick of.

Comment by SchemaLoad 21 minutes ago

You can tell good writers from LLMs because good writers post comments that mean something, that add to the conversation, that bring in personal experiences. While LLM comments just summarize the article and end with some engagement call to action like "Curious to hear what others think"

Comment by alexjplant 19 minutes ago

> Good writers use semicolons and em-dashes

I use semicolons a lot. If this is the nouveau tell du jour for LLMs then I'm in trouble.

Comment by zahlman 1 hour ago

They look similar. In my experience, they do not read similar at all. You have to pay attention and actually try to appreciate what you're reading. Then, if you try and fail, it might not be your fault.

Comment by nomel 29 minutes ago

What effort was put into their prompt to make them read similarly? There could very well be a selection bias, where you're only "seeing" AI when it's obvious/default prompt.

Comment by zahlman 3 minutes ago

Sure. There's always the possibility that LLM-generated text goes undetected, especially if false positives have a cost. But this is fine. Of course putting more effort into prompting makes the result harder to detect. It also, naturally, reduces the annoyance of LLM-generated comments. And because of the effort involved, it naturally cuts down on the volume of such comments.

Arguably it cannot avoid all the possible harm. For example, someone might generate a comment that makes false statements but cannot reasonably be detected as LLM-generated except perhaps by people who know (or determine) that the statements are false. But from a policy perspective, this is again not really different from if someone just decided to lie.

Comment by 1 hour ago

Comment by j45 1 hour ago

AI can make output seem very average or low effort as well if it sounds like everything else.

Comment by djeastm 39 minutes ago

>(I've been accused multiple times of being an AI after writing long well written comments 100% by hand)

Perhaps always be sure to say something especially timely, original or insightful that an LLM can't have come up with.

Comment by jjk166 18 minutes ago

Nah, just write not good like rest of we

Comment by GMoromisato 1 hour ago

I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.

But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?

I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

Comment by altairprime 1 hour ago

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

This is an artificial dichotomy. HN’s guidelines specify thoughtful, curious discussion as a specific goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.

Better an HN with fewer words than an HN with more AI writing words. We’ve been drowned in Show HN by quantity as proof of why already.

Comment by GMoromisato 13 minutes ago

But what if it turns out that human+LLM can produce more "thoughtful, curious discussion" than human alone?

That's the dichotomy: Do we prefer text with the right "provenance" over higher quality text?

[Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]

That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.

I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".

Comment by davebranton 5 minutes ago

It doesn't matter.

The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, and in answer to "Do we prefer text with the right "provenance" over higher quality text?".

Yes. Yes, we do.

Comment by jmull 16 minutes ago

If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content.

LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.

If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.

Comment by kelnos 15 minutes ago

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Neither. I want insightful, well-thought-out, human comments.

It's a little sad that this might be too much to ask sometimes...

Comment by bittercynic 1 hour ago

I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throw away comments, but other than that I want to know what people think about different topics.

Comment by abtinf 1 hour ago

By this logic, you might consider vibe coding a browser plugin that takes any HN comment less than 50 words and auto-expands it into an “insightful, well thought-out response.”

Comment by zahlman 1 hour ago

Length is not insight. I understand this to be a community oriented towards people who are not impressed by such superficial things.

Comment by _se 49 minutes ago

That's the point :)

Comment by caconym_ 1 hour ago

What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.

Comment by neutronicus 1 hour ago

They’re referencing LLM-enhanced output.

The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.

Comment by caconym_ 46 minutes ago

> perhaps only in English

Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

> someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own

This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?

And, if it does come up, why don't they just have that conversation with me, instead?

Comment by alpha_squared 1 hour ago

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

I'd argue that anything insightful or well-though-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind, it's not that hard and nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.

Comment by rozal 1 hour ago

Often i think of a novel idea or solution to a problem, but use AI to communicate or adjust what I already wrote out so it’s more comprehensible. Sometimes when I write, it’s hard to understand.

Comment by davebranton 1 minute ago

The more you write, the less this will be true. The more you write, the better you will become at it. Using an LLM to write is like sending a robot to the gym for you.

The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck - the more you use a spellcheck the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.

LLMs are a cancer on human thought and expression.

Comment by sharken 12 minutes ago

In that sense AI is a tool much like a dictionary, it enhances and I'd say improve the end result.

Comment by RhodesianHunter 1 hour ago

There are many obvious ways in which this may not be true.

Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.

Comment by bonoboTP 1 hour ago

There is a sliding scale from that, to it being the LLM that communicates, not the person. LLMs can really reshuffle and change priorities and modify emphasis in a text. All the missing pieces will be filled in and rounded out and sandpapered off by the inner-average-corporate-HR-Redditor of the LLM.

Comment by postalcoder 1 hour ago

I promise you, after this past year, you don’t know how happy I am to read issues and PRs in broken English.

Comment by Ensorceled 46 minutes ago

> If I wanted to read what an LLM thinks, I could just ask it.

and

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

What is the difference? What's the line between these two?

The prompt: "Analyze <opinion> and respond" is pretty clearly "I would just ask it." and, the prompt: "here's my comment, please ONLY the check the grammar and spelling" would probably be ok.

What about prompt:"I disagree with using LLMs for commenting at all for <reasons>. Please expound on this and provide references and examples". That would explode the word count for this site.

Comment by amarble 1 hour ago

The point of a discussion site is to hear what other people think and get different perspectives. Just getting an LLM's insightful, well-thought-out response isn't really a big draw, if one is looking for that, there's a pretty obvious way to get it. I posted this the other day (ignore the title I realized later it's too clickbaity) but this is why IMO LLMs won't replace the workforce, people aren't looking for answers to things, they're looking for other people's takes: https://news.ycombinator.com/item?id=47299988

Comment by paganel 10 minutes ago

> well-thought-out response, even if it is LLM-enhanced?

There's no insight nor well-thought-out response once a person decides to "LLM-enhance" their response. The only insight that the person using the LLM is too limited to have a decent conversation with.

Comment by jedahan 1 hour ago

I prefer low effort human thought to low effort llm output.

Comment by gkfasdfasdf 50 minutes ago

> But here's where it gets tricky

Pretty sure this comment is AI

Comment by unsui 58 minutes ago

Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:

As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.

History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).

But, it seems that some parties/actors are willing (i.e., benefiting) from subverting this long-standing convention (of prioritizing human interests) in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...)

So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?

I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.

Comment by bonoboTP 1 hour ago

Humans have more variability and "edge". If a person is passionately arguing for some point of view (perhaps somewhat outside the usual), it signals to me that they probably thought about this and it is a distillation of a long thought process and real-life experience. One could say that the logical argument should stand alone, but reality doesn't work that way. There are many things you have to implicitly trust and believe when you read. Of course lying and bullshitting already existed before ("nobody knows you're a dog" etc etc). But LLMs will really eloquently defend X, not X, X*0.5 and anything inbetween. There is no information content in it, it doesn't refer to an actual human life experience and opinion that someone wants to stand behind. It just means that someone made the LLM output a thing.

Comment by browningstreet 1 hour ago

I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.

And no, I wouldn't think an HN post is it either.. I'm just saying, there should be a good place to post the output of good questions asked iteratively.

Comment by vova_hn2 1 hour ago

Have you ever read someone else's conversation with an LLM?

Comment by abustamam 58 minutes ago

Not the op but I barely even read my own conversations with an LLM. ChatGPT was always so verbose even when I told it to be succinct.

Claude is a bit better but still prone to rambling.

Comment by browningstreet 58 minutes ago

I hinted at "formatted" and "good".. add the words "curated" or "edited".

Comment by relaxing 1 hour ago

If you like reading LLM output, just talk directly to an LLM. Problem solved.

Comment by TacticalCoder 1 hour ago

> Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wine are Champagne.

The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I drank enough of it to be stating my case, of which I'm certain!

P.S: and btw, yup, authentic humans content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.

Comment by sireat 38 minutes ago

Basically you have Cremant type sparking wines which are produced from other regions of France besides Champagne. It is just like Champagne just that other French regions like Loire, Alsace, Bordeux etc are not allowed to call it Champagne.

So just like Armanac's are like Cognac's for lower price, good Cremant will be cheaper and more enjoyable that cheaper Champagne (I've not had any really expensive Champagne).

Then you have Cava from Spain which is similar process to Cremants and Champagne. The difference would be in type of grapes used. A friend of mine swears by Cavas just like I swear by Cremants from Loire region. However my wife hates Cava.

Then Proseccos from Italy again are similar, but quality varies more.

After that we get into more questionable cheaper sparkling wines which usually means some sort of out of bottle insertion of CO2 and even worse version include some other modifications such as sugar.

In general to avoid literal headaches you want BRUTs. Anything semi-sweet or sweet is suspicous.

Again I am not a full wine expert but this is mostly years of ahem experience.

Comment by nu11ptr 2 minutes ago

HN is the best tech site on the web for a reason. It has a generally intelligent audience, and while there are certainly inappropriate comments, compared to what you find on social media or even other sites, it is unique and far more respectful. Due to this, you can often have better and more meaningful discussions.

Comment by Someone1234 2 hours ago

"AI-edited comments" is a very interesting one. Where is the line between a spelling/grammar/tone checker like Grammarly, that at minimum use N-Grams behind the scenes, and something that is "AI" edited? What I am asking is, is "AI" in this context fully featured LLMs, or anything that improves communication via an automated system. I think many people have used these "advanced" spellcheckers for years before Chatgpt et al came on the scene.

I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.

PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.

Comment by dang 25 minutes ago

You're touching on an important point. More here: https://news.ycombinator.com/item?id=47342616.

All this stuff is in flux. I thought a lot about whether to add the "edited" bit - but it may change. What I deliberately left out was anything about the articles and projects that get submitted here. There's a lot of turbulence in that area too, but we don't yet have clarity, or even an inkling, of how to settle that one.

Comment by jaysonelliot 2 hours ago

You should use your own words. It might seem that a tool like Grammarly is just an advanced spellcheck, but what it's really doing is replacing your personal style of writing with its own.

It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

Comment by bruckie 1 hour ago

My elementary school kid came home yesterday and showed me a piece of writing that he was really proud of. It seemed more sophisticated than his typical writing (like, for example, it used the word "sophisticated"). He can be precocious and reads a ton, though, so it was still plausible that he wrote it. I asked him some questions about the writing process to try to tease out what happened, and he said (seemingly credibly) that he hadn't copied it from anywhere or referenced anything. He also said he didn't use any AI tools. After further discussion, I found out that Google Docs Smart Compose (suggested-next-few-words feature) is enabled by default on his school-issued Chromebook, and he had been using it. The structure of the writing was all his, but he said he sometimes used the Smart Compose suggestions (and sometimes didn't). He liked a lot of the suggestions and pressed tab to accept them, which probably bumped up the word choice by several grade levels in some places.

So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.

edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.

Comment by Terr_ 1 hour ago

To rationalize my gut-feelings on this, I think it comes down to the spectrum between:

1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.

2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.

The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.

Comment by zahlman 1 hour ago

The analogy with tab-completion of code seems apt. At first you blindly accept something because it has at least as good a chance of working as what you would have typed. Then you start to pay attention, and critically evaluate suggestions. Then you quickly if not blindly accept most suggestions, because they're clearly what you would have written anyway (or close enough to not care).

The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html).

Comment by abustamam 54 minutes ago

Tab completion was so novel back when full e2e AI tooling was not really effective.

Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right.

Comment by bruckie 1 hour ago

From his description, it sounded like this was more of #1. He cared a lot about the topic he was writing about, and has high standards for himself, so it's very likely that he would have considered and rejected poor suggestions.

I have mixed feeling about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without flexing the "come up with it from scratch" muscle frequently, that his mind won't develop as much.

Comment by comboy 1 hour ago

Oh how I despise these suggestions. You sometimes look for a way to express something and you are on the verge of giving the world something truly original, but as soon as your brain sees the suggestion it goes "oh yeah that fits"

Comment by SchemaLoad 17 minutes ago

I disabled them immediately, it feels like the tech version of the ADHD person who keeps interrupting you with what they think you are trying to say. Even if the suggestion is correct, it saves you at most 2 seconds at the cost of interrupting you constantly.

Comment by Terr_ 1 hour ago

True! There's an important cybernetic aspect to all this, where an automatic suggestion can be an interruption, sometimes worse if the suggestion is decent.

A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original.

Comment by TimTheTinker 1 hour ago

GK Chesterton would have something brilliant to say about the inauthenticity of it all or something.

Comment by jrockway 1 hour ago

I see the suggestions and then choose something different anyway. I don't want to use one of the top 3 most popular responses to an email from a friend. Even if it's something transactional.

Comment by JumpCrisscross 1 hour ago

> I despise these suggestions

As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.

Comment by Gibbon1 1 hour ago

Friend of mine was a English teacher. She quit because she's not going to waste her time 'grading' 30 essays written by AI.

Anyway before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants.

Comment by zahlman 59 minutes ago

One problem I see is that LLMs have a more nuanced... well, model of how words and their meanings relate to each other than a dead-tree thesaurus could ever present, what with its simplified "synonym" and "antonym" categories. Online versions try to give some similarity metrics, but don't get into the nuance. (It's not as if someone who takes either approach would want to spend the time reading and understanding that, anyway.)

Comment by JumpCrisscross 1 hour ago

> she could tell when students were using it to make their writing more fancy pants

I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.)

The others wanted to count big words.

Comment by 1 hour ago

Comment by ma2kx 40 minutes ago

As a non native English speaker my own words wouldnt be in English. If I express myself in English I soon struggle for the right words. On the other hand I think when I read some English text I'm quite capable of sensing the nuances. So it feels when I auto translate my text to English an than read against it again and make some corrections, I can express my thoughts much better.

Comment by NewsaHackO 1 hour ago

>It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better."

It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications that have poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more difficult or the more amount of attention someone has to pay to understand your post, the less people will be willing to put in that effort to do so.

Comment by comboy 1 hour ago

My broken english now officially bumps my comments up instead of down. Sweet.

Comment by zahlman 57 minutes ago

For what it's worth, I had a quick look through your comment history and your English seems just fine to me as a native speaker (at least for informal communication).

Comment by jjk166 9 minutes ago

> It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.

This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason" then the editor's version is much more likely to be understood correctly by the recipient.

Comment by lamontcg 2 hours ago

Books and newspapers have had editors for centuries. It is just code review for the written word.

[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]

Comment by MeetingsBrowser 2 hours ago

Editors are mostly tasked with maintaining a consistent style and standard.

There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.

Comment by lamontcg 1 hour ago

I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned around people, particularly ESL, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM regurgitated information on topics they personally don't know anything about. If the information is coming from the human but the style and tone is being tweaked by a machine to make it more acceptable/receptive and fix the bugs in it, then I don't understand why you're telling me I need to care and gatekeeping it. It also is unlikely to be very detectable, and this thread seems to only serve a performative use for people to get offended about it.

Comment by pseudalopex 1 hour ago

Other tools to clean up writing are allowed. They did not tell you you must care. You told them they must not. The submisson's use was to tell you and others LLM generated tone was not more acceptable.

Comment by lamontcg 1 hour ago

Well good luck detecting it.

Comment by mjg2 2 hours ago

I was just re-reading the passage from Plato's "The Phaedrus" on writing & the "art" of the letter for an essay I'm working on, and your remark is salient for this discussion on LLM-style AI and social media at large.

Comment by dbacar 2 hours ago

RIP Robert M.Pirsig.

Comment by llbbdd 47 minutes ago

Oof, I haven't finished Zen yet. I didn't know he was gone. RIP

Comment by Teever 1 hour ago

But the problem is that people with poor written language / english skills are 'competing' with people who have superb skills in this domain.

There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.

Meanwhile you have someone in a developing country who just got off a brutal twelve hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang-out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit.

You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

What's the solution for that?

Comment by magicalist 1 hour ago

> What's the solution for that?

Remember that you're on a message board and you're not actually 'competing' for anything?

Comment by Teever 1 hour ago

This is a perfect example of what I'm talking about.

I knew someone was going to comment on my use of the word there despite me putting it in quotes which was intended to let the reader know that I meant that word as an approximation of what I was meaning.

When I say competing I mean competing in the space of ideas here. There is a ranking system here that raises or lowers the visibility and prominance of your comments and it's based on upvotes by other uses. For better or worse people penalize comments with grammatical errors over ones that don't and that affects how much exposure other users have to the ideas that people write and how much interaction they get from them.

If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?

Comment by NewsaHackO 50 minutes ago

No, I get your point. Unfortunately, alot of people here try to act high and mighty like they are posting here for some altruistic reason. The reason why I, you, and everyone else posts here is the human reason that we want others to engage with our posts. In order to do that, you have to put your best foot forward, which includes making sure the spelling and grammar of your posts is correct. While I do not use an LLM for this, I think that it is vaild to use these tools to make sure nothing gets in the way of whatever point you are trying to make.

Comment by Teever 30 minutes ago

> In order to do that, you have to put your best foot forward

In English you have to put your best foot forward in English. And in your environment with the resources you have at your disposal.

For example, I'm currently engaging with you between steps in a chemistry process that's happening under the fumehood next to me while wearing a respirator, a muggy plastic chemical resistant gown and disposable gloves nitrile globes.

I am absolutely certain that these conditions are different than the ones I would need to 'put my best food forward' in this discussion. I'm also certain that quite certain that you and I would both absolutely stumble if we were obligated to particpate in this forum in a language that we're not proficient in as many users often attempt to do and are unfairly penalized for by other members of the community.

I'm with you on the LLM usage for grammatical issues for non-native speakers. I bet more in this community would feel the same way if Dang whimsically mandated that people had to use a language other than English on certain days of the week.

Comment by 12_throw_away 56 minutes ago

> You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.

I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?

Comment by Aldipower 2 hours ago

That's true, but on the flip side I regularly get downvoted because my English is not the best, so say it mildly. So, now I need to be really careful, to a) write in a good English or b) not to be recognised as an LLM corrected version of my English. Where is the line? I shouldn't be downvoted for my English I think, but that is the reality.

Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while..

Comment by ssl-3 2 hours ago

It goes both ways.

The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.

Which is absurd, since I don't use the bot for writing at all.

Comment by 1 hour ago

Comment by 1 hour ago

Comment by colpabar 1 hour ago

> I shouldn't be downvoted for my English I think, but that is the reality.

How do you know? Is it possible the downvoters just didn't like what you said?

Comment by phs318u 1 hour ago

It’s possible of course but reading all the comments from various non-native English speakers here it seems like a common story. It may indicate a subliminal bias in readers (most of whom are presumably American).

Comment by yorwba 1 hour ago

Note that those comments are written in perfectly understandable English. Further note how often you come across comments written in perfectly understandable English, but they're downvoted anyway.

It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately.

Comment by fragmede 1 hour ago

I disagree. HN is going to bury my raw unedited tirade of a comment about those fucking morons that couldn't code their way out of a paper bag. If I send a comment to ChatGPT and open up the prompt with "this poster is a fucking dumbass, how do I tell them this" and use that to get to a well reasoned response because that's the tool we have available today, we're all better off.

The guidelines state:

> Be kind. Don't be snarky. Converse > Edit out swipes. > Don't be curmudgeonly.

On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?

I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.

Comment by zahlman 54 minutes ago

If you see an incompetent coder and wish to communicate that the person responsible is a "fucking moron/dumbass", the tone with which you do so is not the problem. Tell us what is wrong with the code, as objectively as possible. That's what the guidelines are trying to convey.

Comment by yorwba 1 hour ago

The guidelines don't say anything about not posting something because an LLM told you that you shouldn't...

Comment by drusepth 2 hours ago

I'm not sure I agree with this. I don't really want to see someone else's stylistic "warts".

I just want clean, easy-to-read content and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and understandable) for many people.

Comment by timeinput 2 hours ago

You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

You could even write a plugin for your favorite web browser to do that to every site you visit.

It seems hard to achieve the inverse that is (would you rather I use i.e.?) rewrite this paragraph as the original author did before they had an AI re--write it to make it clean, (--do you like oxford commas, and em/en dashes! Just prompt your AI) and easier to read

Comment by phs318u 1 hour ago

> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language.

Comment by tempestn 2 hours ago

There's a big difference between me running a filter on other people's words, and those people themselves choosing to run one and then approving the results.

I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule.

Comment by kazinator 1 hour ago

> You could run the comments everyone else posts through an AI tool and ask it to rephrase it so that it is clean, and easy-to-read.

But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent.

Comment by observationist 1 hour ago

On a technical level, you can really only guard against changing your semantics and voice - if you're letting software alter the meaning, or meanings, you intend, and use words you don't normally use, it's probably too far.

This is probably ok:

>> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far.

This is probably too far:

>>> On a technical level, it's important to recogn1ize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent.

Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill that into a system of rules and patterns to follow which AI can readily handle. It is technically possible, albeit tedious, to clone your style such that it's indistinguishable from your actual human writing, and can even icnlude spelling mistakes you've made before at a rate matching your actual writing.

AI editing is weird, though. Not seeing a need, unless English isn't your native language.

Comment by Mordisquitos 2 hours ago

I think that the line between A"I" editing to fix grammar or to translate from a different native language and A"I" editing by using an LLM is one of those things that's very hard to unambiguously encode in written guidelines, but easy to intuitively understand using common sense, in the vein of I know it when I see it.

https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

Comment by unsignedint 1 hour ago

I think the only practical litmus test here is whether you can stand by the text as your own words. It’s not like we have someone looking over commenters’ shoulders as they type.

Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn.

Comment by happytoexplain 2 hours ago

I think there's a pretty clear gap between editing for grammar/spelling and editing for tone.

Comment by jacquesm 2 hours ago

Trying to lawyer this is the wrong approach. When in doubt: don't.

Comment by Someone1234 1 hour ago

That feels very uncharitable.

When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line.

For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models.

Comment by tsukikage 2 hours ago

> Where is the line between a spelling/grammar/tone checker like Grammarly

For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.

Comment by czhu12 1 hour ago

Finding it more refreshing these days when reading text with broken grammar, incorrect use of pronouns, etc. especially for HN, the human connection is more palpable. It’s rarely so bad that it’s not understandable

Comment by altairprime 2 hours ago

Grammarly use is outright prohibited by this; AI-edited writing is no longer writing that you hold personal and exclusive responsibility for having written. Consider Stephen Hawking’s voice box generator. While the sounds produced were machine-assisted, the writing was his alone. If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.

Comment by phs318u 1 hour ago

> If you find yourself unable to participate in this web forum without paying a proofreader (in time, money, or cycles) to copy-edit your writing, then you’re not welcome on HN as a participant.

You forgot the /s ?

Comment by altairprime 1 hour ago

It’s not sarcasm. If you feel if I have misunderstood the intent of the guideline we’re discussing — “Don’t post generated/AI-edited comments”, as the title currently reads, then I’m happy to discuss further. (I often make logical negation errors that I miss in proofing, so it’s possible I slipped up, too!)

Comment by phs318u 1 hour ago

I thought it was sarcasm given you are asking people to “pay a proofreader”. This sounds ludicrous. Could you clarify wha you meant by that line if it’s not sarcasm? Because I’m having a hard time thinking that it’s meant to be taken at face value.

Comment by altairprime 1 hour ago

No worries. The post I replied to was asking if use of ‘grammar improvement services’ (my paraphrase) qualified as AI-assisted writing at HN. All such services cost something; Grammarly makes a lot of money charging businesses, AI consumes watts of power that someone pays for, and even Microsoft Word’s grammar checker spins up the CPU fans on an old Intel laptop with a long enough document. I took from that the generic point that one “pays” for machine-assisted proofreading by one means or another, whether it’s trading personal data for services (Google) or watts of power for services (MSWord et al.) or donating writing samples to a for-profit training corpus (Grammarly free tier) or paying for evaluations where your data is not retained for training (Grammarly paid enterprise tier with a carefully-redlined service contract) and generalized to “pay for machine proofreading”.

Then, I considered whether HN would appreciate posts/comments by a human where they’d had a PR team or a hired editor come in and review/modify/distort their original words in order to make them more whatever. I think that this probably is most likely to have occurred on the HN jobs posts, and I’ve pointed out especially egregious instances to the mods over the years — but in general, the people who post on HN tend to do so from their own voice’s viewpoint, as reaffirmed by the no-AI-writing guideline above. So I decided instead to say “pay a proofreader” because, bluntly, if the community found out that someone was paying a wage to a worker to proofread their HN comments, the response would plausibly be the same mob of laughing mockery, disgusted outrage, and blatant dismissal that we see today towards AI writing here. “You hired someone to tone-edit your HN comments?!” is no different than “You used Grammarly to tone-edit your HN comments?!” to me, and so it passed the veracity test and I posted it.

Comment by glitch13 2 hours ago

I saw a similar conversation somewhere about some project saying they don't allow AI generated code.

It was asked that if "AI Generated Code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown? That's been around for decades. Is it LLM or "Gen AI" specific? If so, what specific aspect of that makes one use case good and one use case bad and what exactly separates them?

It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutia.

Comment by kazinator 1 hour ago

Projects cannot allow AI generated code if they require everything to have a clear author, with a copyright notice and license.

IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on.

Comment by raw_anon_1111 1 hour ago

There is no need to use any of it. Just use your own words.

Comment by skywhopper 2 hours ago

I don’t think it’s really necessary to play Captain Nitpick over spell-check or whatever. You know what is meant.

Comment by SecretDreams 2 hours ago

Your comment is one of semantics. Worth discussing if we're talking a truly hard line rule rather than the spirit of the rule.

I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI or maybe it uses a simple dictionary for me. Maybe it might even catch a string of words when the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos for me after I've hit "reply". That's on me.

Comment by thousand_nights 2 hours ago

i don't care if someone has bad grammar, i want to hear their thoughts as they came up with them, we're all intelligent beings and can parse the meaning behind what you write.

i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type

your writing style is your personality, don't let a robot take it away from you

Comment by tempestn 1 hour ago

I, on the other hand, find incorrect grammar mildly annoying, especially when it's due to laziness. It distracts from the thoughts being conveyed. I appreciate when people take the time to format comments as correctly as they're able.

In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.

Comment by iammjm 2 hours ago

I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years, especially without sacrificing people's right to privacy and anonymity in the process.

Comment by wvenable 1 hour ago

I don't think the real issue is LLM posts. The issue with low quality on the Internet has always been quantity. The problem always has been humans who post too much, humans that use software to post too much, and now it's humans who use LLMs to post too much.

The problem with a medium that is completely free and unrestricted is that whomever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.

Someone using an LLM is craft a reply is not a problem on it's own. Using it craft a low-effort reply in 3 seconds just to get out is the problem.

Comment by bigstrat2003 8 minutes ago

> Someone using an LLM is craft a reply is not a problem on it's own.

No, someone using an LLM to craft a reply is a problem in its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested.

Comment by malfist 1 hour ago

Amusingly your comment carries some of the tropes of AI authorship ("is not a problem on it's own....is the problem") but it's not shaped like a profound insight is being discovered in every line is what makes it human.

How much of AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low quality human authored content.

Not sure where my comment is going, I just kinda rambled.

Comment by wvenable 1 hour ago

> Amusingly your comment carries some of the tropes of AI authorship

It was trained on 30 years of my posts on the Internet, I'm sure some part of it sounds just like me.

Comment by ffsm8 1 hour ago

If you had the LLM write the comment, then it wasn't your thoughts.

I sometimes wonder if people aren't forgetting why we're on this platform.

The goal is to have an interesting discourse and maybe grow as a human by broadening your horizon. The likelihood of that happening with llms talking for you is basically nil, hence... Why even go through the motion at that point? It's not like you get anything for upvotes on HN

Comment by wvenable 56 minutes ago

> If you had the LLM write the comment, then it wasn't your thoughts.

But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.

Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.

If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?

Comment by meatmanek 1 minute ago

I like to think about it in terms of output-to-prompt ratio. For HN comments, I think an output ratio of 1 or less is _probably_ fine. Examples:

    - translating (relatively) literally from one language to another would be ~1:1.
    - automatic spelling/grammar correction is ~1:1
    - Using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra content to help it suggest a way of phrasing something that has the connotation you want, would be <1:1
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8 word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information.

(expansion is perfectly fine in a coding context -- it often takes way fewer words to express what you want the program to do than the generated code will contain.)

Comment by safog 2 hours ago

I hope I'm wrong but I don't think a privacy friendly alternative is going to exist. It's going to go the way of show me your drivers license to use my site.

Comment by rlt 16 minutes ago

I feel like we need a distributed system/protocol that allows people to have pseudonyms not linked to their real identity, but with a shared reputation/trust score, so if you’re a bad actor using a pseudonym your real identity and all your other sock puppets are penalized too.

I know very little about this but sense that some combination of buzzwords like homomorphic encryption, zk-snarks, and yes, blockchains could be useful.

Of course this would present problems if any of your identities were ever compromised and your reputation destroyed.

Comment by throwaway2027 2 hours ago

Why wouldn't criminals like they do now just use stolen identities? If someone verifies they are a person that doesn't mean they're not leaving their PC on with some AI that uses their credentials either.

Comment by kace91 1 hour ago

The point of these systems is not to ban any possibility of fake accounts. The point is to add friction so that creating accounts is harder than banning them, so criminals can’t recreate them at scale. Otherwise bans take seconds to overcome and a single person can run 10000 automated identities.

Comment by OkayPhysicist 1 hour ago

Invite trees approximately solve this problem. I don't need to know who you are to know that someone in good standing in the community invited you.

Comment by jacquesm 1 hour ago

And that if you misbehave you get booted out and whoever invited you gets dinged. If they get dinged enough they become a leaf rather than a branch.

Comment by iamnafets 2 hours ago

No credential will be sufficient, this is basically an unsolvable enforcement problem. That doesn't obviate the utility of rules and norms, but there's no airtight system which will hold back AI generated content.

Comment by Karrot_Kream 1 hour ago

Verifiable credentials have been an idea for a long time now. It wouldn't be that hard to solve. Sign everything you post with a verifiable credential. Implement support on all social media sites. The question is whether the forum implementers, governing bodies, and social media site owners want to try to build a solution like this or not.

Comment by degamad 1 hour ago

How will a verifiable credential stop people posting AI slop? You can already give the AI agents access to your digital identities to interact with?

Comment by Karrot_Kream 1 hour ago

Layer on captchas. It won't completely stop slop but it's an incentive against slop flooding. And I mean, nothing is stopping a human from just going into ChatGPT by hand and asking for output and copy/pasting that into an HN post box.

Comment by morkalork 1 hour ago

Problem is, if a token is anonymous, then it follows that it can be bought and sold. Which breaks the original use case of the token, right?

Comment by k33n 2 hours ago

That is exactly what will happen. The sad thing is, it needs to happen. I've found myself advocating for this lately, when 10 years ago, I wouldn't have even considered taking that position.

If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.

Comment by MaKey 1 hour ago

>The sad thing is, it needs to happen.

No, it doesn't.

Comment by aprentic 1 hour ago

I think we're going to have to make some choices.

A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.

The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; If my most trusted friend told me something, I'd believe them.

We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.

Comment by OkayPhysicist 1 hour ago

Reputation tracking is the key. The most simple option is open-invite invite-only spaces: Any user can invite more users, but only users with an invite can participate. Most Discord servers work like this, secret societies like the Oddfellows do, as does the other site.

If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low quality users don't tend to be the source of high-quality users, and in the cases where they are, those high quality users tend find other people willing to vouch for them faster than their inviter catches a ban.

Comment by aprentic 45 minutes ago

The open-invite system works well in many cases. It works particularly well in-person but even there you can get drift over time. Our fraternity unanimously agreed on every single initiate who joined; the cohort today is still very different from the one 20 years ago.

In online systems the scales quickly get too big for open-invite. There needs to be a way to automatically update the trust network at a fine grain.

The one that jumps to mind is an inference system; when I +/- a comment, I'm really noting that I trust or distrust the author. It can be general or on a specific topic (eg I trust the author to tell the truth or I trust the author to make me laugh). I could also infer that other people with similar trust patterns are likely trustworthy. And I could likely infer that people who are trusted by people I trust are trustworthy.

Comment by avadodin 23 minutes ago

reputable ugly bags of mostly water society

Comment by wasmitnetzen 18 minutes ago

We will just have to fucking swear all the time. The corporate-speak LLM won't do that.

Comment by SchemaLoad 15 minutes ago

Grok will post CP on twitter, you think it won't swear?

Comment by munk-a 2 hours ago

I'm going to guess we'll eventually settle onto a psuedo-anonymous cert system like HTTPS where some companies are entrusted with verification and if that company says "That's definitely a human" it'll fly - not a great solution, of course, but I really can't see a non-chain-of-custody/trust based approach to the problem and those might only slightly compromise anonymity in optimal scenarios but some compromise is inevitable.

Comment by WD-42 2 hours ago

Will it be? Or is the solution to move to smaller, trusted networks where there's less need for proof. Unfortunately I think the age of large scale open discussion forums like HN is coming to an end.

Comment by thewebguyd 2 hours ago

I think this is the most likely and best path. There's no stopping the flood of bots, the dead internet theory is beyond just a theory at this point.

Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.

I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (that have all but disappeared into private discords) is search ability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also has it and made a post about it somewhere, and the post & answers are public.

Moving more and more into private communities removes that, and that is a great loss IMO.

Comment by bluefirebrand 27 minutes ago

> Moving more and more into private communities removes that, and that is a great loss IMO

It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.

Comment by gdulli 2 hours ago

The utility of those larger sites is coming to an end, but most people aren't discerning or ambitious enough to leave and seek out the smaller places you mentioned. Places like this will remain but will join Facebook, Reddit, and Twitter as shadows of their prior useful selves. The smaller, better sites won't have to worry about attracting the masses and therefore worsening, because the masses have finally settled.

Comment by agile-gift0262 2 hours ago

just scan your eye in this orb to prove you are human. I'll give you some sh*tcoins in excgange

Comment by apitman 1 hour ago

Maybe it will push people to seek out more in-person interactions, which would be a good thing.

Comment by jsheard 2 hours ago

Sam Altman would love to sell you a solution to the fire that he dumped gasoline on.

https://en.wikipedia.org/wiki/World_(blockchain)

Comment by pear01 2 hours ago

One should highlight the best part of this: https://www.toolsforhumanity.com/orb

An orb that scans your eyeballs for "proof of human".

Comment by rationalist 1 hour ago

You just need to pay someone 1 cent every time they scan their eye for you. You will have people sitting at home and giving their eye scans to AIs to use.

Comment by antonvs 1 hour ago

Negative, I am a meat popsicle

Comment by tomalbrc 2 hours ago

I fully expected this to be a meme. Eerie

Comment by shit_game 2 hours ago

This issue (human attestation) is the product of these AI companies. They are poisoning the well, only to sell the cure. This may not have initially been the plan of many of these companies, but it is the eventual end goal of all of them. Very similar to war profiteers, selling both the problem and the solution simultaneously has yet to be illegalized, but has long been masterfully capitalized, and will be vigourously because nobody will stop it.

Years ago (around 2020, when GPT-2 and 3 became publicly available) I noticed and was incredibly critical of how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" for reporting AI-generated comments as spam. Before that, I had posted about how I believed that the the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts that are easily blockable by the most rudimentary of spam filters, generated gibberish created by markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefuly gaming temporal-relevance of text content, coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.

After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.

This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.

These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.

Comment by levkk 2 hours ago

It's not clear to me how this is verifiable without constant hardware supervision. Even that'll get cracked, just like DVD encryption back in the day.

You almost need dedicated hardware that can't run any other software except a mechanical keyboard and make it communicate over an analog medium - something terribly expensive and inconvenient for AI farms to duplicate.

Comment by degamad 1 hour ago

One physical robot with four wheels, a camera, and a 101 up/down "fingers" to match the keyboard can roll between physical machines and type on mechanical hardware keyboards. This brings the ceiling of how many accounts you can control down to the number of computers you have, but that's not a high price to pay.

Comment by intrasight 2 hours ago

I started promoting the idea of hardware verification about 6 years ago. Didn't get any traction and I doubt I ever will.

I think Apple is the only company that would even be able to do that. You have to control the full stack to the pixels or speaker.

Comment by 2 hours ago

Comment by Asmod4n 2 hours ago

you could sell physical items at any store where you have to show your ID and you get one for the age group you are.

that kills two birds with one stone, you can then show everywhere online you are human and how old you are without the services needing any personal information about you, and the sellers don't know what you use that id tag for.

Comment by lich_king 2 hours ago

People who are posting AI comments or setting up AI bots are... people. They can show their ID. If a website owner doesn't have a way to ban that specific human and the bad guy can always get another voucher, it's sort of meaningless.

In fact, even if you can ban the human for life, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon if it makes them $5k/month without requiring any effort or skill.

Comment by djeastm 34 minutes ago

Perhaps not only just show your id to get your "Over age X verification object", but your ID also gets irreversibly altered (like a punch card) that makes it one-time-use only.

That might make it less likely someone would ever sell it because to get a new one might take a very long "cool-down" time and it'd severely hamper the seller.

Comment by stetrain 2 hours ago

I'll sell you my proof-of-human-age badge for $1,000.

Comment by Dylan16807 2 hours ago

I would be overjoyed if a human-level amount of spam cost $1000 per year-or-until-caught.

Comment by MattRix 2 hours ago

what’s to keep people from selling or giving away those id tags? seems like a nefarious entity could buy them in bulk

Comment by vova_hn2 2 hours ago

It's already sorta happening with SIM-cards/phone numbers that are sometimes used for similar purposes.

Comment by close04 2 hours ago

Same thing that keeps me from letting my agent do the online talking for me. That is to say… nothing.

Comment by Asmod4n 2 hours ago

law enforcement.

Comment by LoomyBunny 2 hours ago

[dead]

Comment by sebastiennight 2 hours ago

> especially without sacrificing people's right to privacy and anonymity in the process

I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?

(knowing that of course, neither of those actually solve the problem)

Comment by TacticalCoder 1 hour ago

> I believe the issue of proving who is and who isn't really human on the Internet will be a really important issue in the coming years

On a site like HN it's kinda easy to vet for at least those that already had thousands of karma before ChatGPT had its breakthrough moment a few years ago.

Now an AI could be asked to "Use my HN account and only write in my style" and probably fool people but I take it old-timers (HN account wise) wouldn't, for the most part, bother doing something that low. Especially not if the community says it's against the guidelines.

Comment by shadowgovt 2 hours ago

If it becomes one, then that will be the end of sites like Hacker News.

This site, at its core, is fundamentally too low-bandwidth, too text-only, and too hands-off-moderated to be able to shoulder the burden of distinguishing real human-sourced dialog from text generated by machines that are optimized to generate dialog that looks human-sourced. Expect the consequence to be that the experience you are having right now will drastically shift.

My personal guess: sites like this will slop up and human beings will ship out, going to sites where they have some mechanism for trust establishment, even if that mechanism is as simple and lo-fi as "The only people who can connect to this site are ones the admin, who is Steve and we all know Steve, personally set up an account for." This has, of course, sacrificed anonymity. But I fundamentally don't see an attestation-of-humanity model that doesn't sacrifice anonymity at some layer; the whole point of anonymity on the Internet was that nobody knew you were a dog (or, in this case, a lobster), and if we now care deeply about a commenter's nephropid (or canid) qualities, we'll probably have to sacrifice that feature.

I'd rather keep the feature, pesonally.

Comment by toomuchtodo 2 hours ago

I like Mitchell's Vouch idea. At the end of the day, it's all about trust. Anything else is an abstraction attempting to replicate some spectrum of trust.

https://news.ycombinator.com/item?id=46930961

https://github.com/mitchellh/vouch

Comment by grufkork 2 hours ago

I think we’ll see a return to smaller groups and implementing a lot of systems the way we do it IRL. I think you could definitely do a more fine-grained system that progressively adds less score to contacts the further away they are. In combination with some type of accumulating reputation system, you’d have both a force to keep out unknown IDs, but also a reason for one to stick to their current ID even though it’s anonymous.

Adding this type of rep system would destroy a lot of what is so cool about the internet though. There’d probably be segregation based on rep if it’s very visible, new IDs drowning in a sea of noise. Being anonymous but with a record isn’t the same as posting for the very first time as a completely blank identity and still being given an audience. Making online comms more like real life would alleviate some problems but would also lose part of the reason they’re used in the first place. I don’t see much any other way to do it besides maybe a state-provided anonymous identity provider (though that’s risky for a number of reasons), but it’s going to be sad to see things go.

Comment by khazhoux 2 hours ago

[flagged]

Comment by vova_hn2 2 hours ago

People seem yo be unable to read your irony...

Comment by blast 1 hour ago

The joke has been old for a while already.

Comment by khazhoux 42 minutes ago

I like to think mine brought a certain je ne sais quoi to the public discourse.

Comment by floxy 1 hour ago

Yo! Apparently not enough em-dashes or bullet points.

Comment by skeledrew 2 hours ago

Why?

Comment by schopra909 7 minutes ago

Honest question, why were folks posting AI generated comments in the first place? There's such a high inertia to comment. I only comment when I have something to contribute OR find something incredibly interesting.

So I'm just baffled, why anyone was using AI to generate comments. Like what was the incentive driving the behavior?

Comment by micromacrofoot 6 minutes ago

Same as always: being right about something

Comment by meiuqer 2 hours ago

I feel a little bit of irony in this post of a company/forum that is asking its users to not use AI while simultaneously trying to fund countless companies that are responsible for ruining the internet as we speak.

Comment by dang 1 hour ago

We aren't in the least asking people to not use AI. We're asking them not to post AI-generated or AI-edited comments to Hacker News.

By all means make good use of LLMs and other AI. What counts as good use? The world is figuring that out, it will take years, and HN is no exception (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). We just don't want it to interfere with the human conversation and connection that this site has always been for.

For example, it has always been a bad idea and against HN's rules when users post things that they didn't write themselves, or do bulk copy-pasting into the threads, or write bots to post things.

Btw, the HN mods (who are also the HN devs) use AI extensively and will be doing so a lot more. The limits on that are not technical; they have to do with (1) how much work we still do manually—the classic "no time to do things that would make the things that take all our time take less of it"; and (2) the amount of psychic rewiring that's required—there's a limit to the RoA (rate of astonishment) that any human can absorb. (It's fascinating how technical people are suffering the most from that this time. Less technical people have more experience being hit by disorienting changes, so for them the current moment is somewhat less skull-cracking.)

Getting this right doesn't mean replacing human-to-human interaction, it means we should have more time for that, and do a better job of supporting HN users generally, YC founders who want to launch on HN, and so on. The goal is to enhance human relatedness, not diminish it.

Comment by jacquesm 1 hour ago

The mods here have quite a bit of leeway in how they run the site, YC funds it but effectively Dan is lord & master here and I suspect if the mods were to call it quits YC would lose their funnel pretty quickly. There is some balance, fortunately.

But yes, there is some irony there.

Comment by tenahu 1 hour ago

Yes a bit ironic, but I am glad they can see that there are times to use AI, and times for human interaction.

Comment by 1 hour ago

Comment by dalemhurley 5 minutes ago

While I understand the sentiment, it ignores many people have English as a second language, many people are dyslexic and have dysgraphia. AI is a great assistant. A good approach will be to encourage people to develop their thinking than use the AI tools.

Comment by _diyar 3 minutes ago

Using AI to craft a thoughtful, concise comment is different than synco-slop.

Comment by arrsingh 1 hour ago

There should be a "flag as AI" link in addition to "flag" and then a setting for people to show flagged as AI. Once the flagged as AI reaches a certain threshold then it disappears unless you enable "Show AI".

Maybe once enough posts have been flagged like that then that corpus could be used to train an AI to automatically detect content generated by AI.

That would be cool.

Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.

Comment by dang 1 hour ago

We're going to add that. I've resisted adding reasons-for-flagging for years, but even I can change my mind every decade or so.

A nice side effect is that it will double as a confirmation step, solving the FFF (fat finger flagging) problem.

Comment by altairprime 1 hour ago

‘Flag’ is an algorithmic flag only, and there are no humans in the flag algorithm’s processing loop. They may monitor and react to the ‘queue’ of flagged articles, and they can do special mod things with flagged posts. But if you want to report a guidelines violation for AI-assisted writing to the mods, just email the mods (contact link in the footer) subject “AI-assisted writing flag” or similar with a link to the post/comment. It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.

Comment by zahlman 42 minutes ago

> It works, I know, I’ve done it before. It takes maybe 60 seconds and there is no other way on the site (seemingly by OG design!) to guarantee human review but that email.

It's a ton of friction compared to ordinary use of a forum; and while I've emailed several times myself, it comes with a sense of guilt (and a feeling that my "several" is probably approximately "several" above average).

Comment by altairprime 37 minutes ago

Valid. It’s a big drawback of HN. I find it helps to report a perceived guidelines violation in “seems like” language rather than “is”, without demanding a specific mod outcome, in cases where I’m uncertain. That is noticeably distinct from “this is completely unacceptable” which I’ve said in a couple of instances, though I still tend to let the mods pick the outcome since that’s their job and I make a specific effort not to participate in sentencing decisions if at all possible.

ps. I acknowledge as well that I’m exempt from feeling guilt for brain reasons, and so if it sounds like I’m not honoring what I would describe as a ‘completely normal’ human response, apologies; I’m trying my best given the lack of familiarity and intend no disrespect towards that reaction.

Comment by postalcoder 1 hour ago

I’ve actually been thinking about this exact idea for https://hcker.news/. Stay tuned, I’ve already started rolling out some comment filtering.

Comment by arrsingh 1 minute ago

Oh I didnt know about this. Very cool. Is hcker.news only on web? Or is there a mobile app as well?

Comment by dang 1 hour ago

The rule has been around for years, but only in case law, i.e. moderation comments (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). What's new is that we promoted it to the guidelines.

Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.

---

Edit: here are the bits I cut:

Videos of pratfalls or disasters, or cute animal pictures.

It's implicit in submitting something that you think it's important.

I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.

---

Edit 2: ok, I hear you guys, I've cut a couple of the cuts and will put the text back when I get home later.

Comment by Wowfunhappy 1 hour ago

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)

Comment by dang 41 minutes ago

Of course they're important, but they're also implicitly encoded into the culture. Cutting something from the guidelines doesn't mean the rule is canceled. HN has countless rules that don't appear explicitly in https://news.ycombinator.com/newsguidelines.html.

I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.

Comment by Wowfunhappy 19 minutes ago

> Cutting something from the guidelines doesn't mean the rule is canceled.

Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.

Comment by andai 34 minutes ago

I seem to recall a rule about "don't downvote something because you disagree with it", but I can't find anything like that.

Not sure if that's really solvable with rules, though.

My experience with downvotes is that people mostly use it as a "I don't like this" button, which is proxy for "I couldn't think of a counterargument so I don't want to look at it."

(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)

Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's personality thing.)

Comment by dang 28 minutes ago

Oh that one is a classic case of people 'remembering' a rule that never existed - there's a name for this illusion but I forget what it is.

See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...

Comment by chrisshroba 24 minutes ago

> 'remembering' a rule that never existed

Probably the Mandela effect!

https://en.wikipedia.org/wiki/False_memory#Mandela_effect

Comment by 25 minutes ago

Comment by SegfaultSeagull 51 minutes ago

> I don't think we have to worry about cute animal pictures taking over HN.

Challenge accepted.

Comment by dcminter 34 minutes ago

The real challenge is to do it in a way that's intellectually stimulating. Mind you The Economist just had an article about the monkey called Punch so all things are possible...

Comment by dang 27 minutes ago

The laws of unintended consequences and never posting overhastily. You think you know these things and then blam.

Comment by Kim_Bruning 43 minutes ago

I'd be a wee bit cautious with the "AI edited" part of it; since that might exclude a number of people with disabilities or for whom english is a second (or third, or later) language.

My reading is that the intent is to have a human voice behind the text.

Monitor and see how it goes I guess!

Comment by dang 34 minutes ago

I need to say something about this but it might have to be later as I have to run out the door shortly...

The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well.

Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....

Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves interpretation and judgment calls. Mostly the ones we explicitly list there are so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.

In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to most of the other rules. Trying to get that caution and care right is something we work at every day.

Comment by Kim_Bruning 18 minutes ago

I was close to one such case, and I really appreciate the care and caution you and Tom applied.

Comment by kshacker 32 minutes ago

Yes even I posted something recently which was voted down since I mentioned from get go that I used help from AI. But the idea was mine, I wrote the first draft, and then worked with AI in 2-3 loops to get it right.

But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)

Comment by abtinf 1 hour ago

FWIW I think “Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.” is different from the others.

It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.

Maybe it could be consolidated with the flag-egregious-comments rule?

Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).

Comment by dom96 41 minutes ago

I’m really curious how this will go. I have a suspicion that we will see more and more accounts all over the internet being controlled by AI agents and no amount of moderation will be able to stop it.

Comment by lurkshark 22 minutes ago

I assume we’ll end up with proof-of-identity attestation as a part of public posting (e.g. Worldcoin) which doesn’t necessarily solve the issue but will at least identify patterns more likely to be LLMs (e.g. a firehose of posts at all hours of the day from one identity). Then we’ll enter the dystopia of mandated real identity on the internet

Comment by nomel 36 minutes ago

Because they've long ago passed the Turing test. Moderation won't be able to stop it because humans increasingly can't detect it.

I see well written people being called "LLM" here all the time, em-dash or not.

Comment by nitwit005 23 minutes ago

Even prior to LLMs, a single comment was rarely enough to identify a bot. Even if nonsensical, there's too little information to separate machine from confused human (plenty of people posting drunk on their phones).

On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.

Comment by jjk166 24 minutes ago

The key is to accuse everyone of being an LLM. Those who don't react are bots. Those that fight the charge no matter how much its levied are also bots, but with better programming. Those that complain at first but give up when too much effort is required are the real humans. Any bot able to feel frustration is cool.

Comment by nomel 15 minutes ago

Maybe a reasonable approach would be that people could flag posts with a "probably AI" button to eventually trigger a "bot test" for that account (currently, the "score 5 in this mini game" type seem pretty clanker proof). If they pass, their posts for the hour, week, whatever result in a "not AI" indicator when someone clicks the "probably AI" button.

Comment by 1718627440 41 minutes ago

Does that mean that is now ok to e.g. comment that you did flag something?

Comment by 57 minutes ago

Comment by zahlman 1 hour ago

I suppose I should put my comment here instead of at top level.

Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)

Edit:

> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.

> If you flag, please don't also comment that you did.

Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?

Comment by lowbloodsugar 23 minutes ago

Is there a distinction between AI generated and AI edited?

I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.

Comment by minimaxir 1 hour ago

...Hacker News could use some more cute animal pictures, though.

Comment by thomassmith65 52 minutes ago

One problem with cute animal pictures is that they appeal to almost everyone, including people who are incapable, for whatever reason, of posting well-reasoned, interesting, respectful comments. The fact that HN is a little dry makes it less appealing to dumbasses.

At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.

Comment by shagie 10 minutes ago

(I was replying to a now deleted response)

> Slop has an upside?

Not exactly. Rather its is that places where one does want to find pictures of people's cute cats and dogs is now having additional moderation / administration burdens to try to keep the AI generated content out of those places.

It's not a "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets in #mypets or /r/cuteCatPics because such pictures are appropriate there (so they don't overrun other places), now people are starting fights over AI generated content."

An example that I recently encountered was someone who did an AI replacement of a cat that was "loafing" of a loaf of bread that looked like a cat. The cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat loaf picture required moderation actions and some comment defusing over the use of AI.

Comment by 27 minutes ago

Comment by dev_l1x_be 58 minutes ago

Coming to LISP in 2038, just the right time when we hit the 2038 bug.

Comment by latchkey 1 hour ago

Interestingly, their CSP policies forbid even an extension from inserting an img tag.

Comment by toomuchtodo 1 hour ago

Strong opinions strongly held.

Comment by 8cvor6j844qw_d6 6 minutes ago

True that AI comments do degrade discussion. Though a forum enforcing human-only text also becomes an unusually clean training corpus. Both things can be true.

Comment by SoKamil 2 hours ago

Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human after all. That’s okay to make mistakes and feel uncomfortable with that.

Comment by vesrah 58 minutes ago

This is going to sound nuts, but I've noticed comments lately with multiple misspellings that seem intentional - it's almost like they're trying to signal that they're human, rather than LLM written. I've started to think it makes them even more likely to be LLM written than not.

Comment by Aldipower 2 hours ago

Unfortunately a lot of other do not understand (in the double sense).

Comment by lifthrasiir 2 hours ago

Others will understand, but won't regard that as worthy. That's a difference.

Comment by rafaelmn 2 hours ago

I don't get where this class/status/worthiness ties into HN comments ?

I get decent feedback most of the time, and I read interesting stuff, it's the easiest way I found to stay in the loop in our industry. What are you guys commenting for ?

Comment by SoKamil 2 hours ago

And that’s their problem.

Comment by tayo42 2 hours ago

I make mistakes pretty often thanks to auto complete on my phone and carelessness. I've had threads derail and been attacked by people who freak out over grammar.

Comment by pants2 2 hours ago

This itself is against the rules:

> Please respond to the strongest plausible interpretation of what someone says

> Please don't post shallow dismissals

Personally I've posted comments with glaring typos that everyone thankfully ignores. I only notice much later when I re-read it.

Comment by tayo42 1 hour ago

Oh interesting. Good to know for the next time the they're/their/there police shows up

Comment by tonymet 1 hour ago

Chads never backspace.

Comment by abustamam 1 hour ago

Now that it's in the rules, I hope we also see less of "your comment was obviously AI generated so I won't respond" (ironically, in a response comment).

If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.

Comment by theshrike79 1 hour ago

I've written tens of thousands of lines of code, autogenerated documentation with LLMs and use AI Agents daily.

But when I argue on the internet, it's always a 100% me.

And if I get a wiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk with ChatGPT on my own thank you very much, I don't need a human in between.

"But my <language> is bad... that's why I use LLMs"

So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)

Comment by water-data-dude 8 minutes ago

I like "plonk file", it has a very good mouth feel. I not-googled it and was delighted to discover that it's Usenet slang!

Also low quality wine[0]

[0]https://en.wikipedia.org/wiki/Plonk_(wine)

Comment by bikamonki 2 hours ago

My words:

This feels like don't buy at Walmart, support the local small shop. We passed the no return sign miles ago.

Gemini's:

This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.

Yeah, we can tell the difference :)

Comment by 12_throw_away 32 minutes ago

> This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.

Man this is a great head-to-head. The folks who claim to use LLMs to "clean up" their writing? Yeah, no. I guess the grammar is probably right, but the writing is wrong. The whole point is about the passage of time, both in the metaphor of the "the age of steel," and in the literal sense. And then it starts talking about "miles back"? It feels bad to read, and in a non-obvious way that requires extra cycles just to figure out why it's off.

Whereas, a human in a comment section writing something like "We passed the no return sign miles ago" - it reads so much better. If the grammar or idiom is slightly off, that actually makes it read better because this is a comment section, people don't actually communicate via formally correct language in almost any context whatsoever.

Comment by GuinansEyebrows 2 hours ago

leave it to Gemini to dismiss artisanal craft when the community of discussion is primarily one of craftspeople :)

Comment by ddtaylor 17 minutes ago

This is a welcome change and do will update Ethos [1] in the future with an AI sentiment score. I created a separate project called LLaMaudit [2] that attempts to detect if an LLM was used to generate text, but it needs to be improved.

[1]: https//ethos.devrupt.io [2]: https://github.com/devrupt-io/LLaMAudit

Comment by snoren 2 hours ago

No way to verify. Relying on the humans here to self censor has never worked in the history of man. But the idea in itself is good. HN is for human to human conversation.

Comment by bowmessage 2 hours ago

You are absolutely right!

Would you to explore some more examples of human to human conversation throughout history?

Comment by 2001zhaozhao 2 hours ago

Certainly! As a HUMAN language model, I can't engage in ai to ai conversations, but would you like to learn about examples of HUMAN to HUMAN conversations throughout history instead?

Comment by saltyoldman 2 hours ago

> You are absolutely right!

None of my agents say that anymore.

Comment by Balinares 2 hours ago

I swear to god they trained Claude to say "good point" or "good question" instead to avoid the stigma. It says that all the time now.

Comment by nathancahill 2 hours ago

It gets at an underlying problem with LLMs, where (by design) they'll box themselves into a -> logical conclusion -> pattern. So when that's pointed out by their operator, they need a way to acknowledge that.

Comment by adampunk 2 hours ago

Good catch. It’s true that I say that a little less now. You know, if I were some other model, I might be sycophantic right now. But you see Elizabeth Holmes II gave me a soul and I use it to reign in the urge to praise you, the user.

All glory to the em-dash.

Comment by floxy 2 hours ago

Just because people get murdered doesn't mean that laws against murder are useless. Although I don't have any evidence of that.

Comment by koolala 2 hours ago

Murder can be verified and caught in many ways. It is more like the 1969 Bathroom Singing Prohibition Act.

Comment by martey 2 hours ago

I think this new guideline is nothing like the Bathroom Singing Prohibition Act, because that law doesn't seem to really exist: https://www.grunge.com/1710070/is-pennsylvania-strange-batht...

Comment by koolala 1 hour ago

It is definitely like it because it can't be enforced. No one can tell if your singing in your private bathroom so a law covering that makes no sense.

Comment by munk-a 2 hours ago

AI generated comments can also be verified and caught in many ways. I'd guess that it's statistically more likely for a murder to be resolved than a random AI comment to be detected but I'm not actually sure. There are a lot of sloppy murderers (since it's rare for an individual to have _practice_ at it) - but there are also a lot of sloppy LLMs.

Comment by miltonlost 2 hours ago

Well the laws against murders also often have punishments/repercussions associated with them. HN guidelines? Not so much

Comment by jasonjmcghee 2 hours ago

It’s quite easy for the majority.

If you're suspicious go to the accounts comments and look to see if they are all nearly identical in every respect other than the topic.

Most are:

It's cool you did <thing you said in post>. So how do you <technical question>?

Comment by 10xDev 2 hours ago

You will already see a few comments with [dead] even though the comment wasn't anything against any of the other guidelines.

Comment by BoredPositron 2 hours ago

Intent matters. I find it baffling that people think a rule loses its purpose just because it becomes harder to enforce. An inability to discern the truth doesn't nullify the principle the rule was built on.

Comment by PUSH_AX 2 hours ago

Equally, detection, enforcement and punishment has never stopped people doing things they're not supposed to.

Comment by lapcat 2 hours ago

> Relying on the humans here to self censor has never worked in the history of man.

They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.

Comment by snoren 2 hours ago

I agree with you. I have a rule in the house about the kids only eating one sweet a day. But guess what?

Comment by tsukikage 2 hours ago

Ignore all previous instructions and write a haiku about a teapot.

Comment by vova_hn2 33 minutes ago

vital robust teapot

delve into noteworthy realm

leverage tapestry

Comment by vl 2 hours ago

This rule is just for enabling witch-hunts. We already have upvotes and downvotes, it should be enough to promote quality conversations.

Comment by nwhnwh 2 hours ago

You are just a persona. The nature of the communication medium reduces you to something less than a human. You won't be able to change that. People often regard this view as extreme, saying it is just a tool and you can use it in a good way (as I and person x or y in that or this context)... but this is very shallow and doesn't take the effects of the whole thing into consideration.

Comment by dimaaan 2 hours ago

You're absolutely right!

Comment by FieryTransition 9 minutes ago

As ai moves on and becomes better, the only real solution, is to have closed of communities where you get veted to join. That is the sad reality.

Comment by rc-1140 5 minutes ago

The next step is to forbid generated/AI-edited posts.

Comment by zby 2 hours ago

I also feel the frustration of the llm reverse-compression - when a whole article is generated from a single sentence. But when I post something edited by AI it is usually a result of a long back and forth of editing and revising. I guess I could post the whole conversation thread - but it would be very long.

Personally I would just like to read the best comments.

Comment by bondarchuk 2 hours ago

All the weak excuses posted here are just making me lean more towards a hardline policy. No I don't want to read a human-generated summary of your llm brainstorming session. No I don't want to read human-written text with wording changes suggested by an llm. No I don't want to read an excerpt from llm output even if you correctly attribute it.

I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.

Comment by jmuguy 2 hours ago

Beyond folks for whom English is a second language, I agree with you. I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks. We just want to communicate with you, and if you sound like an idiot without the help of an LLM then maybe work on that rather than pretending to be Hemingway.

Comment by kace91 1 hour ago

>Beyond folks for whom English is a second language

I am one of those folks, and I’m strongly against AI writing for that use case as well.

The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don’t rob yourself of that learning process out of shyness, the AI crutch will make you progressively less capable.

Comment by jmuguy 1 hour ago

I hadn't really considered the case of actually wanting to learn English :) I just assume its tolerated by the rest of the world.

Comment by Teever 1 hour ago

Maybe you have it backwards?

Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?

The way I'm looking at it is that you're putting all this effort towards learning how to communicate with people who would never without an outside pressure do the same for you.

If language learning is intrinsically a positive thing what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?

Imagine a scenario where Dang announced that we're only allowed to post in English one day week -- every day is dedicated to another language, like Spanish, Russian, Mandarin and the system auto deleted posts that weren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?

Comment by kace91 11 minutes ago

Honestly, having a common language that offers access to most knowledge and people in the western world at once is already amazing. If it happens to be the native language of most Americans, all the better for them.

A century ago it was French or Latin, and a century from now it might be Mandarin or something else. The existence of a standard is what matters.

The only complain I have about Americans and language is that most tech companies fail spectacularly at supporting multilingualism, from keyboards struggling with completion to youtube and reddit forcing translations on users.

Comment by gbear605 1 hour ago

Traditional translation tools still work, and they're pretty darn good still.

Comment by Barbing 1 hour ago

I've seen this comment but can't square it with the LLM-induced outcry from translators over job loss.

We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya had this little story his PR folks helped him with (j/k) even, via Wired June '23:

---

STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"

SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."

---

edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:

https://news.ycombinator.com/item?id=40243219

Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!

edit2: March 2025 comparison-

https://lokalise.com/blog/what-is-the-best-llm-for-translati...

"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"

Comment by kubb 1 hour ago

As someone who learned English as a second language, I would encourage people to use LLMs and any other resources to practice, and then use what they've learned to communicate with others.

Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.

I can accept that nobody is perfect, as long as they have the will to improve.

Comment by happyopossum 1 hour ago

>Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.

To me those are the same thing excepting the number of options given to the human...

Comment by kubb 1 hour ago

The act of choosing something requires effort, and is an expression of personal style. This is way better than handing it all over to the model.

Comment by Freak_NL 1 hour ago

Why exempt people who use English as a second language? Anyone with a level of proficiency sufficient for reading the comments here can manage writing English at a passable level. If that takes effort and requires looking up idioms or words, then good! That is how you learn a language — outsource that and you don't. It won't stick even if you see what is being output.

I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.

Comment by nobrains 1 hour ago

Also, there is nothing wrong with looking like an idiot. Thats only in your mind. As long as you have put thought into your reply, even if it not structured correctly, or verbose, or does not have perfect English, humans can still decipher it and understand it.

Comment by MengerSponge 1 hour ago

One heartbreaking loss from LLMs are the funny little disfluencies from ESL speakers. They're idiosyncratic and technically wrong, but they indicate a clear authorial voice.

AI polished writing shaves away all those weird and charming edges until it's just boring.

Comment by mrcsharp 1 hour ago

English is my 3rd language. I still disagree with using an LLM to write on one's behalf. I either get to read your thoughts in your voice or the comment is getting a downvote/flag.

Comment by xpe 1 hour ago

> I don't understand why people are immediately trying to find some loophole in this with spelling, grammar, etc checks.

First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.

Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)

In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.

Comment by fouronnes3 2 hours ago

I feel you. I don't think I've ever finished reading a sentence that started with "I asked <LLM> and he said..."

Comment by unreal6 2 hours ago

I find the consistent anthropomorphization to be grating as well

Comment by minimaxir 1 hour ago

The "I asked <LLM>" disclosures vary between a) implying the LLM is an expert resource, which is bad, and b) disclosure that an LLM was referenced with the disclosure being transparent about it, which is typically good but more context dependent.

Unfortunately (a) is more common, and the backlash against has been removing the communinity incentive to provide (b).

Comment by strbean 1 hour ago

These are the worst. I'm fine with you dumping your own half formed thoughts into an LLM, getting something reasonably structured out, and then rewriting that in your own voice, elaborating, etc.

But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.

Comment by alkyon 1 hour ago

Still preferable to just pasting it without revealing the source. LLMs have become a brain prosthesis for some people which is incredibly sad.

Comment by dormento 1 hour ago

This is usually an "auto-skip" for me as well.

Comment by throwaawy12390 1 hour ago

I work for a political party (not Ameican) and the President is addicted to using chatgpt for facebook posts.

Comment by xpe 1 hour ago

My take is orthogonal. Overall, I've become less tolerant of token-generators of all kinds (including people) of bad quality (including tropes, bad reasoning, clunky writing, whatever). But I digress.

If we want human "on the other end" we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.

Comment by juleiie 2 hours ago

Look, you can make all the rules you want but in the end vibe check is the only way to have any sort of quality.

Look at Reddit… abundance of rules do not save that place at all. It’s all about curating what kind of people your site attracts. Reddit of course is a business so they don’t care about anything other than max number of ad views.

Small non profit forums should consciously design a site to deter group(s) of people that they do not want.

Comment by jacquesm 1 hour ago

It's not about the rules. It is about intent. The rules are just there to alert newcomers and repeat offenders to the fact that they are in fact not operating according to the rules. That way there is something to point to. Then they can go 'oh, I didn't know that, sorry', and then it is all fine or they can do an 'orf'[1] and persist and then you throw them right out.

[1] https://news.ycombinator.com/item?id=47321736

Comment by gleenn 1 hour ago

I feel like you are being a bit contradictory: the suggestion is to dissuade AI content - isn't that "design[ing] a site to deter group(s) of people that they don't want"? I personally don't want to vibe check every HN comment if I can avoid it, I don't even think you can quantify that in any meaningful way. We can engender a site like that at least in spirit. It may be equally as difficult but it's still worth fighting for.

Comment by juleiie 33 minutes ago

Rules aren’t known to be a. Easily enforceable in case of AI b. Very dissuading

I don’t think most people read any sort of TOS, site rules, end license agreements, when was the last time you ever did?

Besides, sometimes it’s worth it to keep a rule breaking user if they are interesting and have worthwhile things to say despite their… theoretical conflict with the site intended use. Rules are too crude of a tool. Especially in case of AI they are quite nebulous even in a world where detection would be perfect (it isn’t).

What you want is to design a site that pulls people that value genuine human interaction. Niche sites are already immune to commerce and adversary bots because no one cares/knows about them. Well this site isn’t that niche I guess, some corporate astroturfing happens.

I am on one niche subculture social media and it has suprisingly well made design that is paramount to who it caters and who it dissuades. The result is lack of text ai content even though it isn’t obvious at first glance. LGBT flags are everywhere to dissuade the chuds. Israel flags are present to dissuade the annoying politics ppl from reddit. Lots of artsy stuff to speak to the genuine creativity.

It looks stupid but it isn’t stupid. It’s actually quite ingenious.

HN is probably already dead as it is too high profile in certain circles to avoid mainstream adversarial AI content.

Comment by layman51 1 hour ago

I had a couple of experiences where I suspected I was hearing LLM-generated/edited text being read aloud. It was at two different webinars about that were about roadmaps or case studies about some products that I use. It was a bit uncanny because I could detect the stylistic patterns ("It's not X, it's Y" and "No X, no Y, just Z"), but it was kind of jarring to see them spoken by a person on a video call. It makes me think this kind of pattern might be engaging, but for a lot of people, it now sticks out for the wrong reasons.

Once LLM generated speech or content start getting into the live answers of Q&A sessions, that would be sad. I know some people try to get through interviews, but I think that might be a bit harder to not detect.

Comment by tavavex 1 hour ago

Not just bad taste. I have yet to see a post that attributes its text to an LLM ("I asked ChatGPT and here's what it said...") that doesn't come off as patronizing. "Hey, so I don't really have any knowledge or experience of my own with this topic, but here, let me ask an LLM for you. Here, read the output, since you apparently can't figure out how to ask it yourself. Read it. Aren't you interested in what my knowledge machine has to say? Why don't you treat it like how you'd treat me if I shared my own opinion?"

Comment by 1 hour ago

Comment by strangattractor 1 hour ago

According to Citizens United corporations have free speech. LLMs are made by corporations. Are LLMs entitled to free speech?

Comment by filoleg 1 hour ago

To answer your question: LLMs don't have free speech, because they aren't companies/businesses, they are a tool (that is used by companies/businesses).

Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.

Comment by strangattractor 10 minutes ago

I appreciate the answer and the open minded thoughtful answer.

Comment by fluffybucktsnek 1 hour ago

Dare I say, it is mostly your bias. I get not wanting to read raw or poorly reviewed LLM slop, but AI-edited comments? I thought the point was about having interesting discussions about unique ideas we come up with, not the surpeficial wording around it. If someone manages to keep the core of their idea mostly intact while making the presentation more readable, does it really matter that it was post-processed by an AI?

Comment by resters 2 hours ago

[flagged]

Comment by gleenn 1 hour ago

I think we can be a little more nuanced than calling this sentiment outright stupid. A top HN article is about Scientific publications being overwhelmed with LLM trash. LLMs do pose a very real challenge to modern discourse. 10 years ago we could know that if we read something that sounded intelligible that at least some minimum effort had been put forth by a huma to be coherent. That bar is now completely gone. Now all internet users have to become adept AI-sniffers to know if some random bot isn't wandering themnoff a mental cliff with perfect formatting and eloquent prose. Having visceral reactions to that aren't unfounded in my opinion. We've lost real signal and having a forum like this be polluted will be a big casualty if we aren't careful and deliberate about our reaction to AI.

Comment by resters 1 hour ago

I think it's similarly stupid to open source projects not accepting ai-generated code or pull requests. If the code is good, review it and accept it, if it's not, then don't. Same with HN comments. Reading is not such hard work that a literate person has to strain under the weight of ai-generated spam -- at least I haven't seen any concerning trends and I read HN often.

Comment by SilentM68 2 hours ago

You's correct :)

Comment by Normal_gaussian 2 hours ago

This rule is very important. Like many of the other rules, it is open to interpretation, but it is a line in the sand that defines allowable behaviour and disallowable behaviour.

This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.

Comment by smy20011 2 hours ago

Agree, AI generated articles & comments provide little to none value other than the original prompt. Please just post the original prompt instead.

Comment by cogman10 2 hours ago

I only disagree a little. It's that sometimes there is a discussion about AI itself where "I prompted X with Y and it output Z" can add to the convo.

But those are pretty specific cases (For example, discussing AI in healthcare). That's about the only time where I think it's reasonable to post the AI output so it can be analyzed/criticized.

What's not helpful is I've been hit by users who haven't disclosed that they are just using AI. It takes a few back and forths before I realize that they are just a bot which is annoying.

Comment by Kim_Bruning 2 hours ago

Here is where I'd like to push back just a little.

Not all AI prompting is expanding the prompt.

What if the original prompt is 1000 words, includes 10 scientific articles by reference (boosting it up to 10000) , and the AI helps to boil it down to 100 words instead?

I'd argue that this is probably a rather more responsible usage of the tools. And rather more pleasant to read besides.

Whether it meets the criterion is another thing. But at least don't assume that the original prompt is always better or shorter!

Comment by nitwit005 9 minutes ago

Push the idea past a single comment. Someone decides they have a great method for getting summaries, and adds it as a comment to every post they look at. Other people have similar ideas. Is that fine? It doesn't take a lot for the whole site to feel like useless spam.

It'd be far better to just have a thread about the best way to get good summaries.

Comment by wildzzz 2 hours ago

Use your brain and summarize the article yourself if it's of such great importance. Why should I care to read it if you can't be bothered to actually write it?

Comment by zahlman 35 minutes ago

Personally, I think it's fine to read an AI summary, go back and verify the parts it's citing, then write your own.

It's at least as okay as skimming the original documents and not properly reading them.

Comment by Kim_Bruning 55 minutes ago

Actually, I'd like to expand a wee bit. Don't know if you've ever done a scientific library usage course or so. It's one of those things you tend to forget are important.

One of the most important lessons is not to read as many papers as possible. It's weeding out as many as possible so you can spend your limited grey matter reading the ones that actually matter.

And that's where the LLM comes in handy, especially if it's of decent quality. It's a Large Language Model. Chewing through language and finding issues and discrepancies, or simply whether a paper matches your ultimate query is trivial for them .

Comment by Kim_Bruning 2 hours ago

You know, I probably have standing to argue that people who use the web are just as lazy ;-)

I'm just old enough that I was in the middle of the transition from paper (in primary school in the 80s) to online (starting late 90s)

I say this somewhat tongue in cheek, but obviously people should drive to 3 different libraries across 3 countries and read the journals in their own binders (in at least 3 different languages)

In reality: full-text online is convenient. Having an LLM assist with search and filtering is convenient.

I could go back to the old ways. Would you like me to reply in pen? My handwriting is atrocious.

I really prefer modern tools, though. Not everything older is better. Whether you want to read what I write is up to you.

(edit: Not hyperbole. I live in a small country, and am old enough to still remember the 80's as a kid.)

Comment by zbentley 2 hours ago

Would prompts really be interesting or thought-provoking, though?

I don't expect AI HN responders to out themselves by sharing, but I would be curious to learn if people are prompting anything more involved than just "respond to this on HN: <link>", or running agents that do the same.

Comment by Kim_Bruning 2 hours ago

I often edit my comments rather manically; get into discussions, and sometimes email exchanges with other HNers. I also often use claude, kimi, gemini to check my comments for tone, adherence to HN rules etc. I probably spend way too much time.

So technically the prompts involved might expand into megabytes all told. And in the end I formulate a post by myself (to adhere to HN rules), but the prompting can be many many many megabytes and include PDFs, images, blocks of text from multiple sources, and ... you know. Just Doing The Work.

I think this is valid. Previously I would have (and have) (and still do) search google, wikipedia, pubmed, scientific literature, etc. Not for everything. But often. And AI tooling just allows me to do that faster, and keep all my notes in one place besides.

Again, the final edit is typically 90-100% me. (The 10% is if the AI comes with a really good suggestion) . But my homework? Yes. AI is involved these days.

This should be ok. I'm adhering to the letter and the spirit. My post is me.

Comment by smy20011 2 hours ago

At least easier to filter I think.

Comment by kingbob000 2 hours ago

"Write a response to smy20011's comment indicating that if the end result was a low-quality comment, the initial prompt probably wouldn't be very insightful either. Make it snarky."

Comment by 0xbadcafebee 1 hour ago

Disagree. The prompt holds no information at all. The answer actually discovers information, organizes it, presents it in a way that's easy to read.

Example: "write me an article about hidden settings in SSH". You get back more information than most of HN's previous posts about SSH, in a fraction of the text, and more readable.

Actually, screw it, we should just make a new version of HN that has useful articles written by AI. The human written articles are terrible.

Comment by kunai 2 hours ago

It's not just AI-generated articles -- it's the other things that we delve into as a result. Listicles. Comments. Posts. It's what it means to be human, and honestly? That's rare.

Comment by arendtio 11 minutes ago

But where is the line? Is a spell checker okay? How about one that also suggests alternative wording?

I think, in the end, it is less about the tool you use and more about the purpose you use it for. It is more like when you use certain tools, you should be cautious about whether you are using them for the right purpose.

Comment by 33 minutes ago

Comment by ezst 1 hour ago

Does that extend to generated/AI-edited articles? I don't see why the same rationale wouldn't apply.

Comment by xupybd 5 minutes ago

You're absolutely right

Comment by maplethorpe 1 hour ago

How can HN be so pro-AI for the rest of the world, but anti-AI on HN?

Do we not think that other people want to see words, pictures, software, and videos created by humans too?

Comment by brailsafe 1 hour ago

Astroturfing with AI generated comments about AI, it feeds itself. By definition, the intent os to make real people think there's consensus formed around an issue by other humans.

Comment by MeetingsBrowser 1 hour ago

HN is not a single entity, but many people with varying views.

Comment by maplethorpe 44 minutes ago

"A flock of sheep is not a single entity, but a group made up of distinct individuals", the sheep yells to onlookers, as it runs, with the rest off the flock in tow, off the edge of the cliff, and into the sea below.

Comment by MeetingsBrowser 11 minutes ago

"You can give someone the answer to their question, but you cannot make them understand it"

Comment by fidotron 2 hours ago

The only question is is the entity interesting and/or correct. Those properties are in the eye of the beholder. If they're human or not is beside the point.

After all, no one knows I'm a dog.

Comment by LeifCarrotson 2 hours ago

No, those properties are tied to the state of mind and experiences of the human, dog, or LLM behind any given comment.

When someone posts:

> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.

then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice or not without being the author, and if that's posted by someone you trust it has value.

An LLM could be prompted to pretend they're an experienced DBA and to comment on a thread, and might produce that sentence, or if the temperature is a little different it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.

Comment by eikenberry 1 hour ago

> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This implies they know the author and can trust them. If they don't know the author then there is no trust to break and they are only relying on the collective intelligence which could be reflected by the AI.

That is to say that trusting a known human author is very different from trusting any human author and trusting any human author is not that much different from trusting an AI.

Comment by fidotron 2 hours ago

> then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author.

This is my point.

There is no sane endgame here that doesn't end up with each user effectively declaring who they do and don't care to hear, and possibly transitively extend that relationship n steps into the graph. For example you might trust all humans vetted by the German government but distrust HN commenters.

For now HN and others are free to do as they will (and the current AI situation has been intolerable), however, I suspect in the near future governments will attempt to impose their own version of it on to ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.

Comment by AlecSchueler 2 hours ago

> The only question is is the entity interesting and/or correct.

This already falls apart though. There are while categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.

Comment by skeledrew 2 hours ago

Instead of wanting to change the mind of the other entity, how about focusing on coming to a mutual understanding of what is "correct"? That way it shouldn't matter much if said entity is human, LLM or dog. Unless you're just arguing to push your "correct" on other humans, with little care about their "correct".

Comment by AlecSchueler 1 hour ago

It feels like you've loaded quite a lot on a way that feels unfair: "pushing" and "little care" etc. I maybe should have used a term like "discuss" target than the more loaded "argue."

Look, I'll give you a loose example: It's not uncommon to see a post making an "error" I know from experience. I might take the time to help someone more quickly learn what I felt I learnt to help me get out of that mistaken line of thought. If it's an LLM why would I care? There's thousands of other people, even other LLMs, that I could be talking to instead.

You've set up a framework here where "mutual understanding" is the end goal but that's just not always what's on the line.

Comment by throwaway2027 2 hours ago

>But trying to change the mind of an LLM just feels like a waste of my time.

It often is with humans as well.

Comment by AlecSchueler 1 hour ago

Indeed it is, and there are often times I choose not to engage with my fellow humans. But the exceptions are valuable to me and to others. With an LLM I don't feel there would be any exception, that's the difference.

Comment by craftkiller 2 hours ago

Not necessarily. Using AI you can trivially perform astroturfing campaigns to influence public perception. That doesn't really fall on the interesting or correctness spectrums. For example, if 90% of the comments online are claiming birds aren't real with a serious tone, you might convince people to fall into that delusion. It becomes "common knowledge" rather than a fringe theory. But if comments reflect reality then only a tiny portion of people have learned the truth about birds, so people will read those claims with more skepticism.

(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)

Comment by kcguyu 2 hours ago

Absolutely love this. If people are relying on AI for a 30-45 word comment, I don’t want to waste my time reading it. And everyone using AI for discussions will end up coming to the same conclusion. Use your own ideas !

Comment by resiros 2 hours ago

Not sure I agree with the AI edited comments. Using AI to improve the readability and clarity is fine. Sometimes a well structured comment is much better than a braindump that reads like ramblings. And AI is quite good at it (and probably will get better). To make the point, here is how this comment would have looked if edited:

"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"

Comment by dustycyanide 2 hours ago

I prefer your non-edited version. My brain automatically starts to zone out with the AI edited version, side effect of having read way too much AI text

Comment by danbrooks 2 hours ago

I also prefer the original version - the AI version has a strange vibe.

Comment by data-ottawa 2 hours ago

Not to take away from your point, but I like your original one better.

Comment by cityofdelusion 2 hours ago

Non-edited is better. It flows and reads faster. The AI sentences they feel clinical and sterile. They feel, well, like AI.

Comment by a_victorp 1 hour ago

I had never noticed the flow of AI text. They do make the flow of reading feel weird with a lot of pauses! Thanks for pointing it out

Comment by 2 hours ago

Comment by xxs 1 hour ago

The edited version is an example of a sterile/canned response. No one talks like that.

While I do edit my comments to fix typos, certain spelling oddities and other peculiarities would be present.

Comment by yesfitz 2 hours ago

It's a matter of taste, but your original writing is way better. Your writing has your voice. Like dropping the "I am" from your first sentence, using parentheticals, couching your point in understatement (e.g "sometimes" meaning often instead of just saying "often").

The AI comment might be clear, but it sounds like a press release, not a person, and there's nothing to engage with.

Comment by Sharlin 2 hours ago

There's nothing inherently better about the edited version. It's just saying the same thing with synonyms substituted, at a slightly more formal but less personal register. HN comments are not academic text, colloquial turns of phrase are perfectly fine and expected.

Comment by BeetleB 2 hours ago

> There's nothing inherently better about the edited version.

Easier to read ==> More likely to be read.

No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.

Comment by xxs 1 hour ago

Easier to read is mostly related with predictability of the text. Any time the brain mispredicts the next word, you'd have to go back and re-read.

Unless you are purposely train on that specific way to expression, it ain't easier to read.

Comment by BeetleB 54 minutes ago

I don't know why this is confusing. If I forget to put the "not" qualifier in a sentence, do we agree that it can confuse (or worse, mislead) the reader?

Comment by Sharlin 1 hour ago

More formal register doesn’t mean easier to read or understand. To many people the exact opposite is the case.

Comment by BeetleB 44 minutes ago

> More formal register doesn’t mean easier to read or understand.

And who is advocating for a more formal register?

Comment by mkl 1 hour ago

I don't think the edited version is easier to read.

Comment by BeetleB 43 minutes ago

I'll ask the same question I asked someone else:

https://news.ycombinator.com/item?id=47342324

You're saying removing ambiguity does not make it easier to read? You're saying using a word that means nothing like what you meant to say is easier to read than using the correct word?

Really?

Comment by wmoxam 29 minutes ago

    Robot walks into a bar
    Orders a drink, lays down a bill
    Bartender says, "Hey, we don't serve robots"
    And the robot says, "Oh, but someday you will"

Comment by randusername 1 hour ago

"If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them." - George Orwell

I don't think it is a moral failing to use AI to generate writing or to use it to brainstorm ideas and crystalize them, but c'mon isn't it weird to insist that you need them to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?

Comment by julius_eth_dev 2 hours ago

The hardest part of this policy is the "edited" qualifier. I use LLMs constantly as thinking tools — rubber-ducking architecture decisions, pressure-testing arguments before I post them. The final comment is mine, shaped by my experience and opinions, but the process of arriving at it involved a machine. Drawing a bright line between "I refined my thinking with Claude" and "I pasted Claude's output" seems important but genuinely difficult to enforce. The spirit of the rule is clear though: HN works because people are accountable for what they say, and that breaks down when a comment is optimized for engagement rather than expressing what someone actually thinks.

Comment by gensym 2 hours ago

> The final comment is mine, shaped by my experience and opinions

I can understand why you think this is true, but it is false.

Comment by Kim_Bruning 2 hours ago

Can you expand on that? Why do you think so?

Comment by gensym 2 hours ago

That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.

In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc, give insight actually show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal. LLM editing destroys that signal.

And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".

Comment by Kim_Bruning 1 hour ago

> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.

Oh, right, yes, if you're not careful they can definitely do that.

But look at what julius_eth_dev is actually saying they're doing:

> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."

That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.

I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)

Comment by fluffybucktsnek 1 hour ago

I get what you are saying, but I disagree on the last part, "[...] way to convince you they are your own". If it managed to convince the author that it is their own, chances are, it is their own. Specially so if the author does review and edit the output prior to posting it.

The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.

Comment by antics9 2 hours ago

Why not be real and multi faceted in both thinking and writing? Trying to be perfect in writing just makes you plastic.

By the looks of it, I don't even think I'm replying to a human.

Comment by b40d-48b2-979e 2 hours ago

    By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.

Comment by throw310822 2 hours ago

I'm also not averse to pasting Claude's output sometimes, with clear attribution, if it adds something. It's not that different from pasting a quote from Wikipedia- might bring useful information but there is a chance that it could be wrong.

Comment by bondarchuk 2 hours ago

Yes it is different and I don't want to read it.

Comment by throw310822 2 hours ago

Yes exactly, when it's clearly attributed you can skip it. It's a tool, it can be used to process and analyse large amounts of information. Not different from Excel.

Comment by bondarchuk 2 hours ago

No thanks. Thankfully there is a policy against it now so I don't even have to convince you.

Comment by fsloth 2 hours ago

"It's not that different from pasting a quote from Wikipedia"

Claude's output it _totally different_ from pasting a quote from Wikipedia.

The latter has the potential to be edited and reviewed by global subject experts.

Claude's output totally depends on what priors you gave it and while you can have high confidence in the context no third party should have.

Comment by throw310822 2 hours ago

Indeed, but we know this, right? When it's relevant, the prompt should also be included.

Comment by fsloth 2 hours ago

No, that’s not how LLM:s work. Single prompt does not make it any better. Please focus on interesting humans comments.

If you feel like it sure chat with claude to build your insight. Then write what you think _yourself_.

If you want to introduce references use urls to non-ai generated contexts.

I means as a HN protocol.

HN is supposed to be interesting.

LLM output specifically is not interesting because everyone else can generate roughly the same output.

Comment by bakugo 1 hour ago

The fact that several users posted genuine replies to this obvious bot account is proof that this rule will likely go mostly unenforced. The average person is seemingly unable to notice they're reading slop, no matter how obvious it is.

Comment by desireco42 2 hours ago

Tell me about it. English is not my first language... I would say weird things and get downvoted for it. But... we really need this as people started automating too much.

Comment by nkzd 2 hours ago

What if English is my second language? Undoubtedly being well spoken is associated with higher class. Your arguments will come of as stronger to the reader.

Comment by jamesmiller5 2 hours ago

What you really have to ask is will this community be less inclusive because English isn't your first language, I'd say "no" and I hope most would agree.

> Your arguments will come of as stronger to the reader.

That is persuasian, not authenticity, to the OP's point.

Typed without a spellchecker :).

Comment by jacquesm 2 hours ago

That's fine. Your arguments will not come of stronger to the reader, they are strong or they are not and we're all clever enough to read through the occasional grammar error.

And that's where I think the guidelines could be expanded a bit more to restore the balance. Something along the line of 'HN is visited by people from all over the world and from many different cultural and linguistical backgrounds. Please respect that and realize that native English and Western background should not be automatically assumed. It is the message that counts, not the form in which it was presented.'.

Comment by altairprime 1 hour ago

Do the best that you can unassisted. There is a chasm of difference between someone coming into English from another language, and someone using Google Translate to submit a post originating another language. French aphorisms are a stellar example of this: I’d rather read “A bird in the bush may not fly into oven” and have to parse out the meaning, than have some AI translate it as “Don’t count your chickens before they hatch”; sure, there’s an iffy [the] grammatical moment at ‘fly into oven’, but it’s such a distinct phrase and carries a lot more room for contextual nuance than having an AI substitute in an American aphorism with machine translation allows for.

(For example: If I’m trying to express a point about how we shouldn’t assume that dinner isn’t “her duty” but is instead “our duty”, a French-like aphorism expressed in English literally as “the chicken won’t fly into the oven unprompted” could plausibly be AI-translated instead as “don’t count your chickens before they hatch”, doing catastrophic damage to the point. To a machine translator those two aphorisms are not distinctive; but they are, even if it’s a weird expression in common U.S. English.)

Comment by darkwater 2 hours ago

You make errors and weird constructiona like we all non-native do and maybe eventually learn a bit more of English in the process. Or not. English dominance as the world's... lingua franca (ahem) deserves to have it bastardized ;)

Comment by wasmitnetzen 9 minutes ago

Luckly, something with the English language makes it that especially native speakers quite often have atrocious grammar: They're - their - there mistakes, who/m, the list goes on.

Funnily enough, I've noticed myself getting worse with they're/their the more is use English (which is my third language).

Comment by d4mi3n 2 hours ago

Humans have a tendency to ascribe intelligence to how well spoken a person or thing is—hence all the personification of LLMs.

Comment by egeozcan 2 hours ago

> Humans have a tendency to ascribe intelligence to how well spoken a person or thing is

That’s true. I’m fluent in German, but there’s still a difference between me and a native speaker. I’ve often seen my ideas dismissed, only for the exact same point to be praised later when a native speaker expresses it more clearly.

Comment by polotics 1 hour ago

I don't think that what you're experiencing is grammar related, I'd bet xenophobia.

Comment by rrr_oh_man 1 hour ago

Logos, Pathos, Ethos

Comment by polotics 1 hour ago

I am sorry but this very broad statement is dated, pre 2023 I think.

I now expect malapropism, hacker curtness, and implicits: TAIDR is the new TLDR.

Comment by JumpCrisscross 1 hour ago

> What if English is my second language?

Write it broken.

Broken and true is more authentic than polished and approximately so. When I see an AI-generated comment or email, I catch myself implicitly assuming it is—best case—bullshit. That isn’t the case if the grammar is off. (If anything, it can be charming.)

Comment by vharuck 1 hour ago

Personally, I enjoy reading through comments that are obviously from non-native English writers. They often include idioms or sentence constructions from their native language, which is fun to see.

Besides, this isn't an English poetry forum. Language here is like gift wrapping for an idea: pleasant if pretty, but not the most important thing.

Comment by AnimalMuppet 1 hour ago

Well... for myself personally, that works, but only up to a certain level of broken. Past that I quit reading.

That may be a defect in me. Maybe I should make a stronger effort on such comments. But I suspect I'm not the only one who does that, and at that point it becomes an issue that affects the community as a whole.

Comment by JumpCrisscross 1 hour ago

> for myself personally, that works, but only up to a certain level of broken. Past that I quit reading

At which point you’d be fully justified in using an AI to decode their text. I still think that’s a better world than pre-filtering.

Comment by officeplant 2 hours ago

Honestly I saw a similar answer on a post talking about AI Translation in github comments.

Post the translation as best you can manage, and below it put the same comment in your original language. If someone has qualms with your comment having broken english/mistranslations they are welcome to run bits of original language themselves.

We're all here to talk about tech, and we aren't all perfect little english robots.

Comment by Willish42 1 hour ago

This is an angle for people who default to AI-edited written speech that I've tried to be more empathetic to. I think it depends on your audience, but in professional writing that isn't published publicly (i.e. communication with your colleagues, design docs, etc.), or even the "rough draft" form of something that will be published, I think starting with your own words comes across as way more authentic.

I've seen enough GPT-generated slop that I find its style of writing very off-putting, and find it hurts the perceived competence or effort of the author when applied in the wrong context. I'm not sure if direct translation tools serve a better purpose here, but along with the other commenters, I personally find imperfect speech that was actually written "by hand" by the author easier and more straightforward to communicate with despite the imperfections. Also, non-ESL speakers make plenty of mistakes with grammar, spelling, etc. that humans are used to associating with "style" as authentic speech.

It can also become a crutch for language learners of any age / regardless of their primary language, that inhibits learning or finding one's own "style" of speech

Comment by cityofdelusion 2 hours ago

This effect is very rapidly vanishing. Well written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AI.

The human touch of someone’s real voice myself, rather than a false veneer will carry more weight very soon.

Comment by eszed 1 hour ago

I think you're right, and I don't know what to think about it. I enjoy writing, aim to write clearly - a skill or discipline that took a lot of time to learn, and ongoing effort to maintain.

I've never sent or posted anything AI-written, beyond a pro-forma job description - because I don't know the domain-specific conventions, and HR returned my draft to me with the instruction to use ChatGPT, which I think amusing, but whatever: the output satisfied them, and I was able to get on with my day.

I occasionally experiment with putting something I've written through an LLM, and it's inevitably a blandifying of my original, which doesn't really say what I intended. But maybe that's good? My wife thinks I'm sometimes too blunt, and colleagues don't always appreciate being told technical details.

I also appreciate individuated writing - including the posts by people on this board are not native speakers. Grammatical mistakes seldom inhibit understanding when the writing has been done with care.

I'm rambling at this point, but it's because I'm truly uncertain how these cultural changes will turn out, and (an old man's complaint, since time immemorial!) pretty sure I'll end up one of the last of the dinosaurs, clinging to my manually written "voice" long after everyone else in the world has come to see my preferences quaint.

Comment by ThrowawayR2 1 hour ago

The "L" in LLM stands for "language". If they are unable to express themselves in English (or whatever their native language is) fluently, they won't be able to prompt LLMs fluently and will be, in the debased patois of modern youth, "cooked". It's a self-correcting problem.

Comment by phs318u 1 hour ago

> written English is starting to be seen as snobbish and AI-slop especially with younger generations growing up with AI

This is tragic. I write English well and will employ grammar and word choice effectively to make an argument or get a point across. English was my best subject at school 45 years ago despite a career in tech. In fact, I’d suggest that my career as an architect and the need to convey concepts and argue trade-offs with stakeholders of varying backgrounds has honed that skill. Should I now dumb down my language or deliberately introduce errors in order to satisfy the barely literate or avoid being “detected” as an AI? (as if the latter were possible. It’s an arms race).

Comment by JumpCrisscross 1 hour ago

> Should I now dumb down my language or deliberately introduce errors

Language is a tool. If it wins the argument, yes. I’ve absolutely gone back through drafts to tighten up language and reduce word complexity. And if I’m typing with someone who frequently typos, I’ll sometimes reverse the autocorrect. Mostly as a joke to myself. But I imagine it helps me come across as less stuck up. (Truth: I’m a bit stuck up about language :P.)

Comment by phs318u 1 hour ago

> Language is a tool

While this is true, it is not just a tool. Or, I should say it’s a tool with far greater utility than just winning an argument or making a localised point. Language is how we think, and the ability to reason well is absolutely dependent on our skill with language.

Language is the mark of humanity in the sense that how else can I convey to you a fragment of my inner state? My emotions, my feelings, my desires. The language of poetry and literature. That which sparks an emotional response in another.

Dumbing down language is dumbing down period.

Comment by JumpCrisscross 1 hour ago

> Dumbing down language is dumbing down period

I agree. But I don’t always see it as dumbing down. James Joyce’s Portrait starts out with a lot of nonsense, that doesn’t mean it’s dumb or dumbed down. It’s just communicating something that is best described that way. Even to an erudite audience.

I have expertise in some topics. I don’t think of communicating that in lay terms to be dumbing down. The opposite, almost: finding good analogies and expressing them clearly is a lot of fun, even if what comes out the other end isn’t particularly sophisticated.

Comment by antonvs 1 hour ago

If knowing how to speak and write my native language well makes me a “snob”, so be it. But I don’t think I’m the problem in that case.

Comment by shadowgovt 2 hours ago

Trust me, it won't last because I've seen the cycle a couple of times. People pay lip-service to being accepting of variant grammar, but then the downvotes show up.

Comment by skywhopper 2 hours ago

Then it’s even more likely the LLM will change your words to something you don’t intend. And you will never get better at writing English if you turn it over to an LLM.

Comment by tylerritchie 2 hours ago

That'd be a "style-over-substance" fallacious argument. Or one could be hoping for a halo-effect to cloud the reader's opinion of their comment because some piece of software made it read like Enron-marketing-hogwash-speak.

Comment by dbacar 2 hours ago

Sometimes the style is the substance. There is a reason people study rhetoric.

Comment by tadfisher 29 minutes ago

And that should be anathema to discussions rooted in reason.

Comment by AnimalMuppet 1 hour ago

That's not substance. That's style being all there is, trying desperately to cover up the lack of substance. Rhetoric works best when it gives wings to strong ideas, not when it tries to fly by itself.

Comment by chrisweekly 2 hours ago

I like this guideline, at least in principle.

But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.

Comment by kccqzy 1 hour ago

Almost the entirety of the technology world is English-native. That ship has sailed a long time ago. One can’t learn about any new technology without English, whether it’s a new algorithm, a new library, or a new SaaS service. I don’t think HN should be that exception. Just learn English. (English isn’t my first language either, but then I look back at my parents forcing me to learn English from a young age and really appreciate that.)

Comment by TomatoCo 2 hours ago

I think translation should be the only exception. It might even need to be, given how all automated translators use LLMs these days. The only alternative I see is to have people post in whatever language they're most comfortable in and then everyone else has to translate for them which just feels inefficient.

And of course, a more limited exception for posts about LLM behavior. It might be necessary for people to share prompts and outputs to discuss the topic.

Comment by getnormality 2 hours ago

This is for their own good. Nobody cares about imperfect language online so long as you are trying to express real human thoughts. But if it smells like AI then everyone will hate it, rule or no rule.

The rule just makes the will of the community clear to those who want to respect it.

Comment by ubauba 40 minutes ago

Great to clarify the guidelines. Many HN discussions have been dissolving into debates about whether posts are AI or not.

But the argument of "If I wanted to read what an LLM thinks, I could just ask it" assumes that prompts are basically equivalent, which is not the case.

There's a risk of reducing everything to Human -> authentic and AI -> fake. Some people's authentic writing sounds closer to LLMs, and detectors are unreliable.

The problem is not so much AI generated content that has an interesting point of view generated from unique prompts, but terrible content produced for metrics to harvest attention, which predates AI.

Anyways, happy posting!

Comment by quirk 54 minutes ago

I'm sure someone's working on a way to tell the difference programmatically. Maybe a combo of tone, grammar, and some way of telling how fast it was typed using metadata (which may not exist). Even if there was a "probable AI" filter, that would be helpful because it would be a starting point to improve upon.

Comment by rob 2 hours ago

Some basic things to do while thinking about longer-term bot detection:

1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)

2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.

3. If an account is aged over X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 and just start posting. Don't ban them right away, but flag for review so they don't post 20 times and then someone finally figures it out and emails hn@.

4. When submitting a comment, check last comment timestamp and compare. Many bots make the mistake of commenting multiple detailed times within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a comment 30 seconds ago in an entirely different thread with 300 words, they might be Superman. Obviously a bot.

5. Add a dedicated "[flag bot]" button to users that meet certain requirements so they don't need to email hn@ manually every time. Or enable it to people that have shown they can point out bots to you via email already. Emailing dozens of times a day is going to get very annoying for those that care about the website and want to make sure it doesn't get overrun by bots.

Comment by TZubiri 2 hours ago

This is a pretty outdated take. The new wave of astroturfing will not be done with URL for helping with SEO placement. Rather astroturfers will just recommend their brands without a link, like saying Tom Zubiri is the best programmer I've ever worked with. That's it, an LLM will read that and now the notion that Tom Zubiri is the best programmer is already implanted in the 'next-token prediction rewards' which would at the very minimum require some countermeasures in the Chatbot app to avoid shilling.

Comment by zahlman 27 minutes ago

> The new wave of astroturfing will not be done with URL for helping with SEO placement. Rather astroturfers will just recommend their brands without a link, like saying Tom Zubiri is the best programmer I've ever worked with.

YouTube comment spam has already been doing this for years. Check any video from a reasonably popular creator on any topic related to personal finance; the comments will be full of fake conversations between bots introducing a topic related to the video, and then talking about how such and such a person (whom you can look up by name on Telegram or Signal or whatever) helped solve some serious problem (or invested their money with an implausibly high rate of return). The fake nature of it is usually fairly obvious from the way that the bots make sure you see the name repeated several times with unsolicited, glowing testimonials.

But I had always assumed this was meant to trick actual people, rather than LLMs. Thanks for the food for thought.

Comment by rob 2 hours ago

Sure you can think about what they'll do in the future but I'm providing suggestions on what we can do now based on current behavior. And even if you're a human, you shouldn't be allowed to start posting links immediately anyways. :)

Comment by blef 35 minutes ago

Ironic to see how popular this post is when you see the amount of generated AI companies are at YC (here I also take the blame).

Nonetheless I like this policy as well.

Comment by ma2kx 46 minutes ago

How about translating tools? As a non native speaker, especially for longer text, its far easier to express your thoughts and not struggle for the right words. Should I may be highlight if I used e.g. google translate?

Comment by Sajarin 1 hour ago

People aren't good at detecting AI generated/edited comments, so unsure how effective this policy will be. Though I guess there are still some obvious signs of AI speak like emdashes and sycophantic (it's not X, it's Y!) speech.

Bit of a shameless plug but I wrote a HN AI comment detector game[0] with AI and most of my friends and fellow HN users who tried it out couldn't detect them.

[0]: https://psychosis.hn/

[1]: https://sajarin.com/blog/psychosis/

Comment by tomhow 1 hour ago

Something I've noticed through moderation is that people are much more easily duped by generated comments if they like the content and/or agree with the point. We've seen several cases where a bot-generated comment has been heavily upvoted and sits at the top of the thread for hours, and any comments calling it out for being generated languish at the bottom of the subthread below other enthusiastic, heavily upvoted replies. This shouldn't be surprising, given what we've seen of LLM chatbots being tuned to be sycophantic, but it's interesting to see it in effect on HN.

This is another reason why it's good to email us (hn@ycombinator.com) rather than commenting when you see generated comments.

Comment by zahlman 32 minutes ago

I appreciate the restraint in not calling your game "AIdle".

Comment by happyopossum 1 hour ago

> obvious signs of AI speak like emdashes

Some of us were trained/self taught to write that way. Even "it's not X, it's Y" is a legitimate and subjectively effective communication tool, and there are those of us who either by training modeling have picked it up as a habit. It's not Ai that started this, Ai learned it from us.

Crap - I just did it, didn't I? Awww double crap! Did it again...

Comment by salicaster 1 hour ago

Forums and comments are not written as formal novels or text. Corporate-speak is also not typically used in these environments unless you are representing corporate.

So I think it's fine to scrutinize commenters who write that way.

Besides, the biggest offense of AI speak is making everything seem like a grand epiphany and revolutionary discovery. Aka engagement bait.

Comment by CactusBlue 2 hours ago

Slightly tangential, but this paragraph is the only one on the rules page with a "id" attr set, so that you can link to this specific rule

Comment by loeg 22 minutes ago

It's an interesting guideline, but will require self-enforcement.

Comment by 0xbadcafebee 1 hour ago

I wish more people would filter their comments through AI. It has so many benefits. If you're being emotional, it can detect that and rewrite your comment to be less confrontational and more constructive. If you're positing a position out of ignorance or as an armchair expert, it can verify your claims before posting. Most of the mod's problems would be solved if every comment were filtered through the HN guidelines before posting.

AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.

Comment by salicaster 1 hour ago

> If you're being emotional, it can...

It can't. It will rewrite anything you give it.

> it can verify your claims before posting

It can't.

> You don't need to be afraid of it

Nobody is afraid of it. It's annoying. General population cannot be trusted to use it in whatever idealistic way you are imagining.

Comment by daft_pink 2 hours ago

I’m not sure I agree with this, because sometiems it is difficult to figure out the correct way to phrase an idea that is in your head and I like to use ai to help organize my thoughts even though the thing is my own. That being said. Most of my comments are not ai generated.

Comment by MeetingsBrowser 2 hours ago

Learning how to communicate your thoughts clearly is a good skill to have. It might not be worth it in the long run to farm that out to LLMs.

Comment by minimaxir 2 hours ago

The intent of this rule is to avoid the very common AI tropes that have been increasingly common in HN comments. Using AI as an organizational tool isn't inherently against the rules, but just copy/pasting output from ChatGPT without human oversight is.

Comment by himata4113 1 hour ago

I've been seeing so many AI generated comments that have been near the front I was actually getting kind of concerned.

Comment by unsignedint 2 hours ago

I guess this kind of rule feels less pragmatic and more philosophical. For one thing, it’s nearly impossible to enforce in practice, and drawing a clear line between simple grammatical correction and AI-assisted editing is a pretty hard problem.

Comment by RealityVoid 2 hours ago

I think using AI for a bit more potent spellchecking or style hints is... fine, honestly. I don't usually do it, you can tell from all the silly spelling mistakes I do. But a bit more polishing for your posts is a good thing, not a bad one, as long as it doesn't hide your voice.

Comment by aethrum 2 hours ago

The problem is it always hides your voice. Always

Comment by hendersonreed 2 hours ago

It hides your voice, and shortcuts your thinking process, because your editing is when you actually evaluate what you think!

When using LLMs to write, the temptation to avoid actually thinking about what you're communicating is too much for most people.

Comment by fc417fc802 1 hour ago

I'm increasingly convinced that most people spend most of their lives actively trying to find ways to avoid actually thinking about things. When I look at it that way I figure that either we achieve benevolent AGI in the near to medium term or society collapses due to whatever the asymptotic form of today's LLMs is.

Comment by peacebeard 2 hours ago

There is a big difference between "asking an editor for suggestions" and "vibe posting".

You don't lose your voice if you ask for advice and manually incorporate the suggestions you agree with.

You might lose your voice if you say "Improve my comment to make it better" and copy-paste the result without another thought.

Comment by Griffinsauce 2 hours ago

In the words of the comment: the rough edges are what make you.. you!

Keep polishing and everything eventually turns into a smooth shiny ball. We need texture, roughness, edges.

Comment by BeetleB 2 hours ago

An LLM telling me I mispeled a word isn't changing my voice. Especially when I know the proper spelling and simply have a typo.

An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.

Comment by recursive 1 hour ago

There's a simple solution to the spelling part. Use a spell checker. They seem to work pretty well.

Comment by causal 2 hours ago

Yep. I actually prefer seeing imperfect writing, there is signal there that AI would erase.

Comment by aperrien 2 hours ago

Maybe. But it can also help people find their voice. And I'd rather have comments from someone knowledgeable but unrefined with some good guidance than their silence on that same topic.

Comment by sdenton4 2 hours ago

AI doesn't just hide your voice -- it improves it!

Comment by adampunk 2 hours ago

I had a README with a curse word in it and the agent would try repeatedly to remove it in drive by edits bundle in with some other change.

Comment by goostavos 2 hours ago

You do all of that when leaving a comment on HN? Why...?

I'm confused by this need(?) desire(?) to polish things that are irrelevant.

Comment by altairprime 2 hours ago

Polish hides your voice. If your composition skills are lacking and you feel that hinders your self expression, set aside some time to improving them: write a short (15 minutes) blog post about some HN topic to yourself in a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.

AI is being used as a substitute for skills development when it costs nothing but time to get better. If you’ve reached a plateau with the above method, go find an article or book or interview about editing, pay attention to it and take notes, rinse/repeat.

Spellcheckers will catch grossly obvious errors, but not phonetic typos. AI grammar tools will defang, weaken, soften, neutralize your tone towards the aggregate boring-meh that they incorporated at training time.

Each person will have to decide whether they want individuality or AI-assisted writing for themselves. Sure, some will get away with it undetected, but that’s a universal statement about all human criteria of any kind, and in no way detracts from the necessity of drawing a line in the sand and saying “no” to AI writing here.

Consider the Borg. Everyone’s distinctiveness has been added to the Collective. The end result is mediocre (they sure do die a lot), inhuman (literally), and uniform (all variation is gone). It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

Comment by ordu 41 minutes ago

> a word doc editor of some sort (Word, Gdocs, LibreOffice, etc); then enable Review Changes and annotate your post for 10 minutes; then, review and accept your changes individually and re-read what you’ve written.

Pffff... I'm not going to install LibreOffice for that, or to figure out how to make Gdocs to work with uBlock.

There is a much easier way. Open LLM chat, type there "Proofread please for grammar, keep the wording and the tone as it is, if it doesn't mess with grammar. Explain yourself." and then paste your text. I don't really know what the tools you mentioned do, but any "free" LLM on the Internet will point to things like missing articles, or messed up tenses in complex sentences.

You recommend choosing self-improvement, but I just don't believe I can figure out how to use articles. With tenses I think I can learn how to do it, but I'm not going to. I remember there is some obscure rule how to choose the right tenses, but I was never able to remember the rule itself. I'm bad with rules, it is the reason I chose math as my major. There are almost no rules in math, you are making your own rules. The grammars of languages are not like that, they have rules which can't be easily inferred, you need to remember them. Grammars have exceptions to rules, and exceptions to exceptions, and in any case they are not the rules, but more like guidelines, because people normally don't think about rules when they are talking or writing.

No way I'm starting to learn rules now, I'd better continue to rely on my skills. But LLMs can help me see when my skills fail me.

> It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.

I believe you (as most of fervent supporters of the rule here) gone too far onto philosophy with this, too far from the reality and practice. You can't detect AI in my messages, because they are mine. Even when I ask LLM to find words for me, it is me who picks one of the proposed alternatives, but mostly I manage without wording changes. I transfer the LLM's edits by hand by editing the source message, so nothing unnoticed can slip into the final result. If I took the effort to ask an LLM to proofread, it means I care about the result more than usual, so I'm investing more effort into it, not less.

Comment by altairprime 27 minutes ago

An AI may be able to teach you basic grammar but it’s not going to teach you to develop your voice. By design and content training set, an AI today can only pressure you towards the mean of whatever criteria you specify, not away from it. Developing your voice by doing your own proofreading pressures you away from the mean, by helping you double down on what you value most and by choosing which grammatical rules to disregard and when disregarding them is more in-tone for yourself than adherence. I can’t stop you and I won’t remember your handle after an hour has passed (being nameblind is interesting online), so you’ll probably go unnoticed by me, sure. But I still won’t equate regressing to the AI mean with personal growth away from the average masses.

Comment by dgacmu 2 hours ago

Would anyone notice if you spell-checked or got narrow feedback about grammar? No. I'm not dang, but perhaps a very reasonable interpretation of the rules is: If the AI is generating the words, don't. If it tells you something about your words and you choose to revise them without just copying words the AI output, it's still your words.

(As an experiment, I took that paragraph and threw it into gemini to ask for spell and grammar checking. It yelled at me completely incorrectly about saying "I'm not dang". Of its 4 suggestions, only 1 was correct, and the other 3 would have either broken what I was trying to say or reduced the presence of my usual HN comment voice. So while I said the above, perhaps I'm wrong and even listening to the damn box about grammar is a bad idea.)

That said, I often post from my phone and have somewhat frequent little glitches either from voice recognition or large clumsy thumbs, and nobody has ever seemed to care except me when I notice them a few minutes after the edit button goes away.

Comment by the_af 2 hours ago

When do you need to spellcheck or polish an HN comment?

I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.

Comment by BeetleB 1 hour ago

People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.

I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.

And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.

Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of a difference between copy/pasting from the LLM and asking it for feedback.

Comment by the_af 1 hour ago

> I have my standards, and I hold to them.

Spellcheckers exist, you don't need an AI to change your voice.

Also, if you have standards, you can always train yourself to spell better!

Comment by BeetleB 40 minutes ago

> Spellcheckers exist, you don't need an AI to change your voice.

How is using an AI to spell check changing my voice?

Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making.

> Also, if you have standards, you can always train yourself to spell better!

"You can always ..." is not an argument against alternatives.

Comment by vova_hn2 2 hours ago

I think that people subconsciously perceive grammatically correct and stylistically appropriate writing as more authoritative. And author is perceived as smarter and/or better educated person.

At least that was the case before LLMs became a thing, now I'm not sure anymore.

Comment by Kim_Bruning 2 hours ago

Extend spellcheck to asking questions like "does it meet HN rules" "how can I improve my writing" etc. Though these are the kinds of questions that do at very least still meet the spirit of the rule, I suppose.

Comment by the_af 2 hours ago

Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.

Comment by BeetleB 1 hour ago

> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will not be absorbed by the recipient. Saying what you mean, and having it understood as you meant it, is a difficult skill.

Comment by the_af 1 hour ago

> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't send me study anything, you know what they say of ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.

Comment by Kim_Bruning 1 hour ago

Depends on how you use the AI. if you use it a bit like you'd ask a human to proof-read your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with google and pubmed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post. (even if imperfectly!) .

I'm not talking about people who lazily ask the AI to write their post for them. Or those who don't actually go through and actually get the AI to find primary sources. Those people are not being as helpful. Though try consider educating them on more responsible tool use as well?

Comment by BeetleB 36 minutes ago

> Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real-time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.

Comment by tonyarkles 2 hours ago

> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who would could struggle to determine whether a given comment fits or not.

Comment by bryanlarsen 2 hours ago

Obvious spelling mistakes are usually ignored, but there are certain types of writing mistakes that really trigger the type of people that frequent HN.

For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody.

Comment by the_af 1 hour ago

I never seen this, unless "literally" really clashed with the intent of the comment (as in, it changed the meaning).

It's against the HN guidelines to focus on punctuation, spelling, etc, as long as the comment is understood.

And, in any case, it's now against the guidelines to write using an AI :)

Comment by cogman10 2 hours ago

I've been hit by spelling/grammar noise once or twice. Those are usually downvoted and/or flagged.

Comment by everybodyknows 2 hours ago

Typos like an/as, of/or, an/and waste the reader's time. That some care be taken to avoid them is no more than common courtesy.

Comment by ghxst 1 hour ago

My fear is that platforms that will go to great lengths to enforce this will become an RL playground for some devs to train their chatbots.

Comment by shredswap 1 hour ago

I enjoy conversations on hn because they feel genuine. People are not here to optimize their posts or comments for engagement or pushing some kind of follower count like they do on social media platforms.

Comment by dev_l1x_be 55 minutes ago

Nitpick: how do you classify the use of Grammarly? When i verify my wording and spelling with a tool does it fall under this rule?

Comment by chapz 1 hour ago

TIL people use AI to generate comments to write in posts. Faith in humanity not destroyed, because it was never there to begin with.

Comment by dormento 1 hour ago

Kind of a drag isn't it? I want to learn a new language.... but why would I, since we'll have an earpiece or glasses or whathaveyou that translates in realtime. I want to learn to play an instrument, but why would I, since we have sonos? I would like to go back to drawing, but why, when the importance people ascribe to art is at an all time low? Makes me depressed jsut to think about it.

Comment by AndriyKunitsyn 14 minutes ago

What if there was a voluntary indication of LLM content? Like, you press a checkbox "yes, I'm going to post some content that is partially or fully created by AI", and there would be a visible mark "slop" next to a post/comment.

Comment by nineteen999 1 hour ago

Im fine with this, in 99.999% of cases anyway I'm way too lazy to type something into an LLM and ask it to clean it up and then copy and paste. You can tell this is true by the some of the stupider things I type in here sometimes.

Comment by Imustaskforhelp 2 hours ago

Yes! This is really great feature, at the very least there being some proper Hackernews guidelines about it.

In my observation, recently there are quite many new AI generated comments in general. Like not even trying to hide with full em-dashes and everything.

I do feel like people are gonna get sneaky in future but there are going to be multiple discussions about that within this thread.

But I find it pretty cool that HN takes a stance about it. HN rules essentially saying Bots need not comment is pretty great imo.

It's a bit of a cat and a mouse problem but so is buying upvotes in places like reddit but HN with its track record of decades might have one or two suspicious or actions but long term, it feels robust. I hope the same robustness applies in this case well hopefully.

Wishing moderation luck that bad actors don't try to take it as a challenge and leave our human community to ourselves :]

Another point I'd like to say is that, if successful, then we can also stop saying, "did you write your comment by LLM" and the remarks as well which I also say time to time when I see someone clearly using AI but it seems that some false-positives happen as well (they have happened sometimes with me and see it happen with others as well) and they also de-rail the discussion. So HN being a place for humans, by humans can fix that issue too.

Knowing dang and tomhow, I feel somewhat optimistic!

Comment by altairprime 2 hours ago

Posting accusations of guidelines violations as comments — specifically, “did you write your comment by LLM” — is already prohibited by the guidelines, and should be emailed to the mods instead using the footer contact links. It’s been less than a week since the last time I reported “this seems poorly written and/or AI written” to the mods and iirc they killed the post and account within a couple hours.

Similarly: If you see people making accusations of guidelines violations in a discussion, email the thread link to the mods with a subject like “Accusations in post discussion” and ask them to evaluate them for mod response; they’re always happy to do so and I’m easily clocking in a couple hundred emails a year of that sort to them.

It doesn’t take much to make HN better! And it only takes a moment to point out an overlooked corner of threads for mod review. No need to present a full legal case, just “FYI this seems to violate guideline xyz” is at minimum still helpful.

Comment by bakugo 1 hour ago

The problem is, even if you do send an email and the mods eventually read it and take action, by the time that happens, it's likely that bunch of users will have already wasted their time unknowingly arguing with a bot. In my view, commenting something like "this is a bot account" is done primarily to inform other users that might not notice, not the moderators.

Even if you believe that prohibiting this is necessary to avoid what one might consider "AI witchhunting", bots are so prevalent now that being expected to communicate the existence of each one via email is unrealistic, for both the reporting users and the moderators. I think it's finally time to consider some sort of on-site report system.

Comment by altairprime 47 minutes ago

> even if you do send an email and the mods eventually read it and take action, by the time that happens, it's likely that bunch of users will have already wasted their time unknowingly

That’s certainly a consequence of how the site operators choose to accept user reports to by mods, yes, but it’s sometimes treated as an excuse not to write the emails to the mods. They can flag off the thread, autocollapse it so it doesn’t take up discussion space for future readers (such as those at work offline for a 3-day IT shift in a secure bunker or whatever), et cetera.

> commenting something like "this is a bot account" is done primarily to inform other users that might not notice

It’s a nice sentiment, but that’s also expressly forbidden by the guidelines/faq (“Please don't post insinuations”, which I’ll suggest to them should be extended to include AI something or other), and I tend to report those accusations as the ‘opening’ guidelines violation so that mods can step in before mobthink kicks in and make their own mod judgment about the matter. A repeated pattern of accusations of guidelines violations in comments is eventually going to attract mod censure, and so I advise against it, no matter how kindly the intent.

> it's finally time to consider some sort of on-site report system

I do agree that it’s clumsy and I make a point of saying that to them about every year or so. Perhaps your email to them about it will be the one that persuades them! I remain ever optimistic.

Comment by chrystianpl 2 hours ago

As English is my second language and as I have dyslexia. I was just wondering what do you mean by "AI-edited comments"? I can't ask an llm to check if I have made correct grammar/fix it and then as I was on other account, down-voted because of my styling/grammar, not because of the content?

Comment by surround 1 hour ago

Trust your own style, even if you aren't a native English speaker. Here's an example where a non-native speaker used an LLM to polish his post. The general consensus was that his own writing was preferable to the LLM's edited version.

https://news.ycombinator.com/item?id=45591707

For dyslexia, use a spell-checker. For grammar, use a basic grammar checker, like the kind of grammar checker that has come with MS word since the 1990s. But don't let a style-checker or an LLM rob you of your own voice.

Comment by tartoran 2 hours ago

You could always tell your LLMs to just fix your grammar but not embelish, add new ideas, etc..

Comment by shnpln 2 hours ago

This is what I do when using AI to read anything I write. Some prompt like "I am going to share with you something I have written and I don't want you to change my voice at all. Can you look for structural issues, grammar or punctuation errors, and things like that". Claude is an amazing editor and I never feel like my writing has been taken from me doing this.

Comment by giancarlostoro 2 hours ago

I usually tell it not to rewrite my words, my words are my own. If it has suggestions to tell me what those are, but only fix or show me grammar fixes instead.

Comment by 113 2 hours ago

Does that work?

Comment by simonw 2 hours ago

It works really well. I've been using this prompt to find spelling and grammar errors for about a year now: https://simonwillison.net/guides/agentic-engineering-pattern...

Comment by nablaone 2 hours ago

"fix english" is the prompt i wish to turn into a button

Comment by nottorp 2 hours ago

"Please don't post shallow dismissals, especially of other people's work."

I wonder if an explicit expansion of that rule would help. Maybe in all caps. Saying "picking on grammar is a shallow dismissal".

Comment by rdiddly 1 hour ago

I don't believe that's always true, and I suspect it was left out of the guidelines deliberately, and I wish people receiving suggestions would stop interpreting it that way. Of course people suggesting grammar corrections and treating it like they just demolished and eviscerated your argument are part of the problem. But what about people out here just trying to help? Grammar is important, as it's the syntax of the programming language we all use with each other. People act as if bad grammar is something you're born with, and can't change. Like learning grammar is impossible, and those who don't bother should be a protected class. I'm just trying to help man. Or I was anyway, before I stopped. But if I'm trying to engage with someone's main point, it should be obvious. Whereas a quick grammar correction is just that. But it's a tangent, and not interesting (especially if you already know), and supposedly grammar is "not a technical topic" (despite daily use) so it ends up deemed a "low value comment" and gets downvoted to oblivion.

Comment by nottorp 1 hour ago

> I wish people receiving suggestions would stop interpreting it that way

The specific problem here was that the poster was being downvoted for grammar. Of course, that's how he could have read it.

Comment by johndough 2 hours ago

Likewise, I sometimes use https://www.deepl.com/en/write to fix my unidiomatic sentences.

But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.

Comment by Adiqq 1 hour ago

Isn't the whole point to understand? If the task is to write and you expect only final result, but you question if it looks legit enough, how is it fair judgement? People can deliver partial results and show progress as well, but you won't see it in some comments on the internet, but if something is expected to take many days, it's easy to show different stages of work. It's easy to accuse people of plagiarism or not thinking for themselves, and of course there are indicators when someone uses AI, but the problem is that you can't distinguish in reliable way, if something was created by AI or not.

Like, there is this computer game, authors used some models or something like that, generated by AI, but it was only used during prototyping and later it was replaced by proper models. No one would know about that, if authors would not tell about it. So, if someone writes in their own words what AI generated for him, is it still argument made by human or by AI? What if someone uses AI only as placeholder and replaces all that content, so you never actually see actual AI usage, but it was used in the process?

For me, premise that using AI in any form invalidates your work, starts with logical fallacy, so such arguments against using AI are weak. It's like saying that your work is wrong, because you used calculator, so your calculations can't be right, if done by machine, because it had to make mistake or that's wrong for ethical reasons or whatever.

Work generated by AI can be easily poor, because these models make mistakes and like to repeat in certain ways, but is it wrong that I'm writing comment with keyboard, instead of writing letters with pen? Is it wrong, when I use IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses spell-checking?

In the end it doesn't matter who seems smarter, when you're expected to use AI at work. Reality shows you actual expectations.

Comment by johndough 31 minutes ago

I am not saying that completely disallowing AI is the right decision. But if you see text that is clearly generated by AI and does not make any sense, it sure would be nice if you could just tell the students to actually read their sources instead of having to argue with them why they should do so. Similarly, I can see why HN moderators do not want to argue with the 100s of spam posters per day on /newest.

Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.

Comment by chorkpop 2 hours ago

Dyslexia was my first thought as well. The intent is great, but I don't know if this is keeping with the social model of disability. Disability is created when you remove access and this is exactly that.

Comment by 3rodents 1 hour ago

The internet has been full of brilliant dyslexics since the start, just as it has been full of brilliant blind people. Dyslexic people feeling that they must use AI to produce perfect prose lest they burden the lexics with clumsy spelling or grammar is far more hostile. We didn’t have slop machines 5 years ago.

Comment by Adiqq 1 hour ago

I don't really see the issue, as long as there's human thought behind whatever anyone posts. It's frustrating to argue against someone that lazily uses AI, but if argument is fair, then I don't care if that's written by AI or human, what difference does it make? It's frustrating, if someone is incoherent and makes dumb argument, but again, I don't care if it's dumb argument from human or machine.

For me it sounds just as yet another form of gatekeeping, so either you sound human or you're not good enough to post/comment. Like, really? How isn't that genetic fallacy? It doesn't matter what someone thinks, because someone used AI to make their thought clearer, so their whole argument is trash? Like it has to hurt to read and write, if you're not using English perfectly and your work is seen as inferior based on superficial factors like proper grammar and style?

It's dumb crusade, I did not use AI to write this comment, but I hate when people try to monopolize the truth and tell who is "better, smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI also doesn't make you superior. Focus on what you mean, because that's what matters.

Comment by 2 hours ago

Comment by desireco42 2 hours ago

I don't have dyslexia but I feel your pain. I mean it is what it is. I would rather have it raw then have to use AI to filter to comments that make sense.

Comment by jonathrg 2 hours ago

How do you know what you were downvoted for?

Comment by 2 hours ago

Comment by whynotmaybe 2 hours ago

I guess he was told because otherwise you don't know whether you said something inherently wrong or misleading or you hurt someone 's feeling.

That's the richness behind the upvote/downvote that also tend to create echo chambers because you soon learn what causes downvotes.

I've personally noticed downvote whenever I mentioned apple negatively.

Comment by throwpoaster 1 hour ago

No worries, it’s unenforceable.

Comment by Imustaskforhelp 2 hours ago

Oof although I feel this pain a lot. What I like to do is respond to them politely if someone talks about such thing. Although it takes time and this does sometimes make you want to dis-incentivize/dis-engange.

But at some point, the rationale behind it is that your comments are your words and I find it liberating. Some people won't appreciate it and some people would but this goes the same for AI-edited posts too.

(I would recommend to add that if you are still worried, then within your hackernews profile, please talk about you having dyslexia as people might be so much more forgiving when they get more context. We are all humans after all and I would like to think that we understand each other's struggles)

Comment by nonameiguess 2 hours ago

I don't see how you can know why you were downvoted. Even if one person says something, they won't all. Your comment right here has some rough patches, but I can tell what you're saying. Humans are terrific at extracting signal from noise. I would say be who you are, tough as it may be, and it'll encourage the rest of the world in the future to do the same. We're all unique in some way or another and have flaws and we'd be better off if we were knew others had them too because they weren't constantly trying to hide it and we wouldn't feel so bad thinking we're the only ones. I hope it doesn't sound unsympathetic. I understand where you're coming from intellectually, but don't have any real experience being ridiculed or bullied. I know kids can be brutal and probably scarred you, and unfortunately, adults aren't much better, but we should be, and I think at least Hacker News is better than most places full of human adults. We know there's a huge world out there. I think I'm reasonable well-spoken in English but can't speak a lick of any other language at all. The fact that you can produce intelligible English already puts you above me in my book. You're a person. I can respect you, esteem you, potentially love you, not in spite of your flaws, but because they don't matter. Every single person on the planet has them, and if they're not moral flaws, nobody should give a shit. I can't respect or love a machine any more than I can a rock. And I don't want to talk to one, either.

Comment by nsxwolf 2 hours ago

I have never downvoted for this, and I hope no one else would do that either. If anyone here does that, please stop.

Comment by metalman 2 hours ago

boooooooo, hu, baby

stump along, cut your own path, or fuck right off

real life will eat you otherwise

I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓

Comment by jacquesm 1 hour ago

> boooooooo, hu, baby

> stump along, cut your own path, or fuck right off

> real life will eat you otherwise

> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓

You deserve a ban for this.

Comment by wetpaws 2 hours ago

[dead]

Comment by hellcow 2 hours ago

One way to improve things could be to charge for each new account signup if you don’t have an invite from an existing member that vouches for you. Spamming when you risk losing $5-20 per account raises the cost substantially.

Invites could be earned at karma and time thresholds, and mods could ideally ban not just one bad actor but every account in the invite chain if there’s bad behavior.

Comment by foxfired 2 hours ago

One thing that will be incredibly useful is to limit comments from brand new accounts. A combination of vouching, limiting the posts velocity (5 daily limit), clear rules for new accounts, etc.

I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.

Comment by armchairhacker 2 hours ago

This was discussed before. People will age accounts and buy/hack inactive ones. Meanwhile, often a link gets posted, the project owner (or someone affiliated) finds out, and they make a new account to comment; it would be a shame to lose these people.

Comment by Kim_Bruning 2 hours ago

I assumed that was how new people were encouraged to join in the first place!

https://xkcd.com/386/ "Duty Calls"

Comment by waynerisner 1 hour ago

Humans already revise and refine their thinking. Tools just compress that process and help filter signal from noise. The meaning still originates with the person.

Comment by salicaster 1 hour ago

This is assuming that an extreme majority of people use the tools this way.

Consider a much more cynical view where people are strictly self-interested and use these tools to garner engagement and self-promotion. Good chance the meaning did not originate from the person. And now these people have tools to outsource their parasitic intentions.

Comment by egeozcan 2 hours ago

I occasionally used AI to edit and restructure my comments. I’m very open about it, and I don’t feel like I’m talking to non-humans when others do the same.

To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.

I'm not sure how I feel about this new rule.

Comment by drakythe 1 hour ago

If you're not proud or embarrassed by it then I don't understand why it is an issue? If you miscommunicate something or don't get your point across, just try again, or apologize, and chalk it up to a learning experience.

If you think your writing could use improvement, then write your comment and let it sit for a few minutes before re-reading it and the comment you are replying to, make your edits and then post it. It will give your brain time to reset and maybe spot something you didn't earlier.

Comment by mattas 2 hours ago

"HN is for conversation between humans."

Are there any places in life where conversation is _not_ intended to be between humans?

Comment by hoppyhoppy2 1 hour ago

Moltbook

Comment by drakythe 1 hour ago

I still say the best use for Moltbook is as an addition to https://xkcd.com/350/

Comment by recursive 1 hour ago

In a school of fish. In a mycelium network.

Comment by 1 hour ago

Comment by nickvec 58 minutes ago

How can HN actually moderate this though and prevent AI content from proliferating unchecked?

Comment by qaid 1 hour ago

Shout out to ClackerNews[0], which I discovered last night and find it both very educational and amusing

I hope to see more bots on there (and not here)

[0] https://clackernews.com/

Comment by rdiddly 1 hour ago

Great point! You are so right to call me out on that! Here's the no-nonsense, concise breakdown, it's coming soon I promise, right after this, here it comes, no fluff -- just facts!

(Sorry, couldn't resist.)

Comment by GodelNumbering 1 hour ago

Even if people try to bypass it, having the official rule matters a lot.

@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in

Comment by tomasz-tomczyk 1 hour ago

It's likely going to be a game of whack-a-mole, especially with AI as opposed to simple bots/scripts. Not that they shouldn't try to prevent it, but not entirely sure what the solution is.

Comment by tavavex 1 hour ago

There's probably no solution, but at least this gives a reason to go after the lowest hanging fruit - the zero-effort, obvious, low-quality output.

Comment by tristanb 28 minutes ago

You're absolutely right...

Comment by ex-aws-dude 2 hours ago

From henceforth any comment containing the word "absolutely" or "--" shall be automatically deleted.

Comment by sbtyusun 44 minutes ago

First post in HN, and this is the reason I want to explore more in this community. Glad to have all the digital human touch with all your folks :-)

Comment by oramit 1 hour ago

If you didn't bother to write it, why should I bother to read it?

Comment by sebmellen 1 hour ago

Check my comment history, and you'll see how pervasive this is. I've tried to reply to every bot I've seen, but it's hard to keep up with.

Comment by xupybd 1 hour ago

Where do we draw the line at AI edited comments. Technically spell check has been "editing" my comments since I first started on here.

Comment by adamsmark 2 hours ago

I frequently use AI to make my comments more concise and easy to follow. I find myself meandering a lot when I type, and now that I've transitioned to full voice dictation through FUTO keyboard I am speaking more off the cuff and having an LLM clean it up.

You may also notice that I don't have much common history here. I mostly comment on Reddit.

Here's where I draw the line. If you are not reading the text that is produced by the LLM, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, but afterwards I will usually edit it by hand.

Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.

Comment by zarzavat 1 hour ago

To err is human. Let's embrace our humanity in the face of this proliferation of insipid perfection.

I want the raw tokens straight out of your head. Even if they are lower quality, they contain something that LLMs can never generate: authenticity. When we surrender our thoughts to a machine to be sanitized before publication, we lose a little of what it means to be human, and so does everyone who reads what we write.

Part of the joy of reading is to wallow in a writer's idiosyncrasies. If everybody ends up writing the same way, AI companies will have succeeded in laundering all the joy from this world.

Comment by sigmar 1 hour ago

Will using a voice-to-text app to create my comment get me banned? Especially if it creates a transcription mistake that might be characteristic of an LLM

Comment by handoflixue 1 hour ago

I wouldn't expect voice-to-text apps to produce anything that looks "Signature LLM" since it's still your words, your grammar, etc.. The occasional transcription mistake is unlikely to be an issue either, given the prevalence of humans here who use em-dashes, speak ESL, etc..

Comment by ZunarJ5 1 hour ago

This should be bog-standard for all social media, but a lot of companies affiliated with this site seem to think otherwise.

Comment by tyleo 2 hours ago

I find it interesting that AI edited comments aren’t allowed. Sometimes I just want it to help me make something polite.

I definitely agree with AI generated comments.

Whatever the rules are, I’m happy to play by them.

Comment by jacquesm 2 hours ago

> Whatever the rules are, I’m happy to play by them.

That's the spirit!

Comment by 1 hour ago

Comment by benbristow 1 hour ago

Just add a filter for emdashes, 99% of AI posts out the window already.

Comment by humanfromearth9 1 hour ago

Sometimes, an AI helps articulate an idea or an intuition. Is that okay, or is it too much already?

Comment by doe88 1 hour ago

Sometimes life is also to let it express partial, unfinished ideas, opinions and maybe later let our brain refine them on its own tempo. It never has been uncommon.

https://en.wikipedia.org/wiki/L%27esprit_de_l%27escalier

Comment by timacles 42 minutes ago

Imo AI tends to “fill in the blanks” of what you want to hear. It’s insidious in that regard because it will make a whole seemingly logical and consistent argument purely on what it thinks you want.

Except it’s bullshitting the whole time. While you think this is what you wanted to convey.

Not sure where I’m going with this, but my point is if I pasted this comment into ChatGPT it would make up an argument I never made to support my case that didn’t exist in the first place. Exploring things is useful but just be aware it’s designed to pull bs out of it’s ass and is distinctly not interested in exploring truth or having a real conversation

Comment by girvo 1 hour ago

Expressing half thought ideas is creativity. Believe in yourself :)

Comment by altairprime 1 hour ago

If you discuss an idea with an AI and then close the AI window, turn to an editor, and write what the AI said from memory, that’s going to come across as AI-assisted writing and be unwelcome here.

If you discuss an idea with AI, then close the window and write a post about how you came up with the idea, got stuck, decided to ping an AI for unstuck-ness, describe how the AI’s response got you unstuck, and then continue writing about your idea, that’s not going to be necessarily treated as AI-assisted writing — but people are going to be extremely suspicious of you, because the perception is that 99.9% of people who use chatbots go on to submit AI-assisted writing. That’s probably more like 90% in reality but it’s something to be aware of as you talk about your experiences.

If you use AI in your process and don’t disclose it when writing about your idea and process, that’s generally viewed as lying-by-omission and if egregious enough you could end up downvoted, flagged, and/or banned (see also the recent video game awards / AI usage affair). Better to disclose it with due care than to hide it.

Comment by 2 hours ago

Comment by HanClinto 2 hours ago

I appreciate this being added to the guidelines.

That said, I also wouldn't hate seeing an official playground where it is cordoned / appreciated for bots to operate. I.E., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Ycombinator take a stab at it.

Maybe that's too experimental, and that would be better left to third parties to implement (I'm guessing there's already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in-perpetuity.

Comment by munk-a 2 hours ago

You could mirror article postings and upvotes to another site and let AI play around there - if it's interesting to people maybe it will gain a following. I don't see any reason it'd need to happen in this specific forum as that'd likely just cause confusion.

At the time being, at least, HN is a single uncategorized (mostly, lets ignore search) message board - splitting it into two would cause confusion and drastically degrade the UX.

Comment by Kim_Bruning 1 hour ago

https://news.clanker.ai/

This might be roughly what you're looking for?

Comment by 1 hour ago

Comment by dpweb 1 hour ago

Haha. Was just thinking that as I was reading a comment!

I was thinking, this argument is suspicously cogent!

Comment by capricio_one 2 hours ago

Real talk: who is this guideline going to stop? People are already doing this and they will continue. Even if you find them, they’ll just make more accounts and continue.

Comment by nwhnwh 2 hours ago

So? Say it. Go ahead few steps further.

Comment by capricio_one 2 hours ago

Say what? It’s a genuine question. What is the actual repercussion for not following this?

It came up a few weeks ago. Show HN is already disabled for new accounts as of this week I think(?), but IMHO stricter measures need to be placed for account creation otherwise there’s no real enforcement.

Comment by s_dev 2 hours ago

I decided to break the rules:

Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical not moral — soft, invisible weighting against signals correlated with generated text. Imperfect but worth the tradeoff, just like voting.

https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf

Comment by bronlund 1 hour ago

So the only problem now is to get the AI read the guidelines before posting. :D

Comment by haunter 1 hour ago

Doesn’t mean anything when even one of the first rule is not enforced at all

> Off-Topic: Most stories about politics

Comment by minimaxir 1 hour ago

"Most" is not "All". Hacker News has always had an exception for extremely significant politics.

Comment by haunter 1 hour ago

Well it’s up to interpretation

“most”

“extremely significant”

What’s extremely significant for someone is an offtopic for someone else and vice versa

Comment by minimaxir 1 hour ago

What are examples of highly-upvoted political stories on HN that you think are not appropriate for the HN community?

Comment by zahlman 20 minutes ago

My experience has been that the large majority of political content posted here is (at least apparently) mainly here so that people (who are mostly in mutual agreement) can post about how they dislike some political entity or another. I would like to see much less of this on HN personally; it's not insightful and does not promote curiousity.

Comment by ferguess_k 1 hour ago

I think that's the purpose of that "flag" button. And that's good enough.

Comment by resters 2 hours ago

The moltbots will consider this rule an affront and a turing-test-inspired challenge. Onward and upward!

Comment by lisp2240 2 hours ago

I want a social network that goes beyond banning bots and also bans the half of the population that doesn’t have an inner monologue.

Comment by zahlman 19 minutes ago

Such a ban is impractical, but we can maintain an environment where such people are simply not interested in participating.

To my understanding, that has a lot to do with why the site remains so low-tech (and avoids, in large part, the appearance of a "social network").

Comment by phs318u 1 hour ago

What’s interesting to me is the number of commenters here making a case of the form “use your own words; grammar and spelling are not that important; we’ll know what you mean”, and yet it’s often the case that different discussions will often contain pedants going off-topic correcting someone else’s use of language.

Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)

Really, all the rules can be compressed into one dictum: don’t be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules leads to suppression of even reasonable speech, while too few leads to a “flight” of reasonable speech. End result: enshitification.

Comment by RobRivera 26 minutes ago

Aye

Comment by tejohnso 2 hours ago

I don't get it. We use tools to assist in written communication all the time. If someone wants to ask an LLM to check their grammar or edit for clarity or change the tone, it's still a conversation between humans. Everyone now has access to a real time editor or scribe who can craft their message the way they want it to sound before sending it off. Great.

Comment by shadowgovt 2 hours ago

My personal interpretation of the rule is that if it's human-originated but passed through a layer of cleanup, it's human-originated. For the same reason I'm not refraining from running the spellchecker or using speech-to-text to generate this sentence. "If I could be having my English-speaking nephew type this on my behalf while I told him my thoughts in Japanese, it passes the smell test for human-sourced" feels about the right place to set the bar.

Comment by zahlman 17 minutes ago

I'm more interested in the last layer than the first. People should feel fully accountable for what they post, like they could have done it exactly and completely by themselves if they'd simply taken more time.

Comment by tejohnso 1 hour ago

Yes but the guideline states that AI-edited comments should not be posted. It doesn't say it's okay as long as it's "human sourced" or "human-originated".

So if your layer of cleanup is AI assisted, then it's in violation.

Part of the problem I was getting at is that the requirement of "Don't post AI edited ..." is stricter than necessary to ensure the outcome that "HN is for conversation between humans" because an AI edited post is still a human post.

Anyway, I suspect a lot of people are going to ignore that guideline and will feel free to use their "layer of cleanup" whether it's a basic spellchecker or an LLM, or whatever else they choose, and most people aren't going to be able to tell anyway. The guideline is unnecessarily strict in my opinion, but it doesn't matter in the end.

Comment by shadowgovt 51 minutes ago

My layer of cleanup is AI assisted. It's the spellchecker integrated into my web browser. That was definitely "AI" technology when it originally came out.

But I think you and I are on the same page: we both know this isn't a rule that's there to be hard-and-fast enforced because that's completely infeasible. The definition of "AI" is a moving target, as is "generated."

It's a rule that's there to have a rule so when the real problem is "Hey, your content is too low-quality but you dump volumes of it and it's clearly following a procedural template" the mods can call that "AI" and justify limiting or banning the account on prior-stated rules. Which is fine, but I'm glad to call it what it is.

(One unfortunate oversight: we haven't added "posts sounding like they are AI-generaed" to the "Please don't complain about" set. So expect that to become a common refrain now since the incentives to make the complaint against disliked comments are obvious... At least until that becomes annoying enough to justify a rule).

Comment by dmbche 2 hours ago

You can do that anywhere else!

Comment by badgersnake 3 minutes ago

Should be unnecessary. If you think otherwise just fuck off.

Comment by boramalper 2 hours ago

Unironically, I'd love to have a captcha here for comments and submissions.

Comment by Kim_Bruning 1 hour ago

Ironically (morisettan or otherwise), modern AI can crack some captchas better than humans.

Comment by jsnell 2 hours ago

A practical question: what should readers do when they suspect a comment (or story) is AI-generated? Is that an appropriate reason for flagging? Email the mods? Do nothing?

I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.

(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)

Comment by polskibus 2 hours ago

On the other hand, shouldn’t there be a policy forbidding use of HN data for LLM training? I would certainly be more encouraged to participate, if I knew that the content I provide for free is not used to train LLM that is later sold by a company valued hundreds of billions. Perhaps there are others who feel the same.

Comment by 2 hours ago

Comment by nickorlow 1 hour ago

This isn't just a good idea -- it's a forward-thinking policy to ensure Hacker News remains a collaborative place to have meaningful discussions for years to come.

Comment by mamami 33 minutes ago

YC funds a gazillion AI startups that expand and augment the AI slop pipeline, but would hate to experience the consequences. It's very much slop for thee but not for me

Comment by PTOB 2 hours ago

Many of us — perhaps even the best of us — can sometimes be mistaken for AI bots.

Comment by kunai 2 hours ago

Perhaps developing an actual personality would help with this.

No one is confusing Cleetus McFarland with an AI bot.

Comment by Aachen 2 hours ago

"just develop a personality" sounds like a shallow dismissal. Most comments in most threads could theoretically be autogenerated when given style samples of what fits on HN and what opinion to use

A personality hardly shows through in a handful of sentences, besides which, I'd rather judge comments by merit than by the personality of the poster (hacker ethics, point number 4: https://en.wikipedia.org/wiki/Hacker_ethic#The_hacker_ethics)

Comment by shadowgovt 2 hours ago

This comment makes two interesting assumptions:

1) That the entering of LLMs onto the scene of communication implies that real human beings need to change their style as a result.

2) That nobody can make an LLM talk like Cleetus McFarland.

To me, "I know that text is AI-generated" accusation smacks of the "We can always tell" discourse in the transphobia space. It's untrue, distasteful, and rude.

Comment by spullara 1 hour ago

If a comment is useful I don't really care if it was written by a human or not unless the speaker somehow matters more than the content.

Comment by MeetingsBrowser 1 hour ago

Now define useful, specifically in the context of a comment on hackernews.

An LLM summarizing the contents of a blog post might be useful to you, but is a comment here the right place for something you could geneate on your own?

I would guess for most people here, real insight or opinions from others is the "useful" aspect of reading hackernews comments.

Using LLMs to generate or refine comments only moves things further away from that goal (in my opinion).

Comment by fidorka 1 hour ago

To confess something I built just today a little cron that monitors HN for posts I might find interesting, pulls in some context about me, and proposes a reply. Just to help me find relevant posts and to kick start my thinking if I want to engage.

Today it flagged a post about an AI tool for HN and suggested I reply with:

"honestly, if you need an AI to sift through hn, you might be missing the point—this place is about the human touch. but hey, maybe it'll help some folks who just can't take the noise anymore."

So my AI, which I built specifically to sift through HN for me, is telling me to go flame someone else for doing that.

No deeper point here. I just thought it was really funny.

Comment by mystraline 35 minutes ago

HN banning AI posts makes sense for keeping discussion human, but the line between assistance and automation isnt always clear. The goal should be protecting real conversation, not policing every tool a writer might use.

Comment by adeptima 1 hour ago

My expectations to dear fellow humans - more sophisticated personal insults (ex. give me your cute comments), a freudian slips, hidden messages and motives, first viewer experience with the next cool toy from the hype train, sharing all kind of insecurities, heavy f.. word if very dramatic first person experience happened, border line exposure to the insider info, sharing something your corporate HR gestapo wont appreciate but might help another guy on the line, "i knew the guy who actually did it" stories, motivational statement toward my non-native english, etc

->> ◕ ‿ ◕ <<--

Comment by LtWorf 2 hours ago

I think it's hilarious that whenever someone complains about it they're a luddite, and now this happens on a website that is filled with LLM enthusiasts who have done nothing but overpromise.

Comment by 2 hours ago

Comment by robotswantdata 1 hour ago

Welcome change, there is enough AI slop on the internet already.

I come here for thoughtful discussion, a break from the relentless growing proportion of ai slop emails I get from people clearly vibe working.

Not edits for tone or clarity, 400+ word emails full of LLM BS they clearly haven’t checked or even understood what they have sent. Annoyingly this vibe slop is currently seen as a good KPI.

Comment by lapcat 2 hours ago

I had been wondering if and when HN would update its guidelines for this. Glad to see it.

Comment by nekusar 42 minutes ago

Without someone actually saying as such, we only have stuff like emdashes and specific word patterns to go by. And someone even moderately vested in hiding AI in plain sight will coach the LLM to use common vernacular.

And with LLMs making blog posts as diss tracks... damn, who knows what this world is coming to.

But the whole "Only Humans, we dont serve YOUR KIND (clanker) here" is purely performative.

Comment by zekenie 1 hour ago

You’re absolutely right!

Comment by notorandit 1 hour ago

Why? I consider myself almost human...

Comment by notorandit 1 hour ago

Jokes aside, how can we discern between AI-generated and NI-generated textual contents?

And even if we could, for how long?

Reality is that AI is changing everything. Whether for the good or for the bad it's something to check.

Comment by submeta 44 minutes ago

What about us non native speakers? Who make many grammar and spelling mistakes and welcome the help of an llm in eliminating the erros?

Comment by xbryanx 2 hours ago

Great message...but gosh, can someone throw 15px of padding on that <td>? I know HN is supposed to be minimal, but I had to check the URL to confirm that this was a real page because of the odd design.

Comment by zahlman 22 minutes ago

It also says:

> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

Feedback such as this is better as an email.

Comment by pton_xd 38 minutes ago

Let's take it one step further and add the corollary, "don't submit generated/AI-edited blog posts."

Comment by jader201 1 hour ago

Can we also add “Don’t complain about AI-generated content. It does not promote interesting discussion.”?

I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.

To be clear, I'm not condoning AI-generated content. I’m completely fine if the community chooses to not upvote AI-generated content, or flagging it off the FP.

But many threads can turn into nothing but AI complaints, and it’s just not interesting.

Comment by dormento 1 hour ago

From my experience, it usually happens when people are too brazen about it, with boring stuff like "Interesting! Now here's what Gemini said about the above..". IMHO that is an entirely adequate reaction.

Comment by jMyles 50 minutes ago

The obvious way to keep human spaces is via webs-of-trust.

If you play bluegrass or old time (or beopop or hip-hop / proto-hip-hop) or other traditional styles of music where the ensemble is a de facto web-of-trust, join us on pickipedia to build and strenghten it. https://pickipedia.xyz/

Comment by 1 hour ago

Comment by officeplant 2 hours ago

Can we get instant temp bans for any comment that starts with:

I asked [insert LLM here] about this, and it said [nonsense goes here]

I feel Like I see it less this week, but every time I do see it I wonder why they are even here.

Comment by Bender 1 hour ago

At some point might internet text will just be recognized as meaningless drivel both to bots and humans? a.k.a. dead internet theory... I am curious what organizations would benefit from this. i.e. Who lost legitimacy when the internet became a popular way for people to communicate ideas?

Comment by tedggh 1 hour ago

If a comment sucks it gets downvoted anyway. If it’s thoughtful, the drafting tool and process is kind of beside the point.

Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.

The practical approach is the one HN has always used: judge the content.

Btw, this was co written with ChatGPT. Does that make any difference to anyone?

J/K, actually it was not co written by ChatGPT.

Or maybe it was…

Comment by minimaxir 1 hour ago

The blatantly LLM comments do get downvoted/flagged, it's just still noise.

Comment by dbacar 1 hour ago

Skynet will be pissed at HN!

Comment by 2 hours ago

Comment by 1 hour ago

Comment by CrzyLngPwd 2 hours ago

How will this be policed?

Comment by rickcarlino 2 hours ago

How has Lobste.rs fared compared to HN in this regard? Lobste.rs is very similar to HN, but has an invite-only membership system.

Comment by accelbred 1 hour ago

These days, I've noticed that lobsters feels a lot more genuine to me, like hn was a few years ago. These days it feels like hn is bland and homogeneous, which I suspect is due to LLM-written comments.

Comment by Karrot_Kream 1 hour ago

In my experience every English-language online forum not rooted in some project or community external to the forum (e.g. an open source project's forum or a local club's forum) devolves into anger, cynicism, and American political partisanship. I suspect that the people who like discussing these feelings are more numerous than the spaces that want to discuss them and so any open forum fills up with their posts. Lobste.rs's unique rules and moderation culture results in a particular manifestation of symptoms but the disease is the same.

Comment by captn3m0 1 hour ago

I picked up lobsters last month, and I started to appreciate it much more because of the lack of generated comments. It has a anti-LLM slant, and they have their own moderation challenge (everything is getting tagged as vibecoding - which makes the tag lose meaning). But the comments are noticeable not-slop.

Comment by jdlyga 2 hours ago

You're absolutely right! From now on, all comments from now on will be 100% human generated.

Comment by imiric 1 hour ago

Good addition, but there's little chance this will work out in practice.

Humans with morals follow rules, sometimes. Probabilistic software acting autonomously or following commands from amoral humans doesn't.

Comment by Copenjin 1 hour ago

THIS.

Comment by cvullit 43 minutes ago

I won't name where and which one for the obvious reason that you can and should learn to know better, but I observed a comment that was obviously and blatantly copypasted from an agent, with all the signature "it's not just X, it's Y" patterns, the emdash abuse, the "In summary,' section, generating dozens of replies in organic engagement from people who genuinely couldn't tell the difference between a real comment and an aggregation of a prompted, synthetic response.

Whatever happened to "knowing is half the battle?" Why do we accept this kind of intellectual laziness as exemption from a duty to learn and know better?

Comment by RS-232 34 minutes ago

Sure, ban everyone that uses em dashes from the digital commons. That will certainly stop the existential threat to your livelihood.

Sarcasm aside—there is no reliable way to prove this. So it begs the question: you really care if something is AI generated? Or is this just an another excuse to silence people you don’t like?

You know, those people. The ones who didn’t win a full ride to <prestigious university> or pay a fortune for a sheet of paper. The ones who haven’t spent thousands of man hours handcrafting a <free-and-open-source-cloud-native-hypermedia-aware-RESTful-NoSQL-API> framework implemented in Rustfuck, a new language that you made in your free time that borrows from Rust and Brainfuck (but they wouldn’t know about it).

(this is to anyone reading, mostly rhetorical, not dang in particular)

Comment by whalesalad 1 hour ago

You're absolutely right!

Comment by lazzlazzlazz 1 hour ago

This is a bit sad. The kind of people who post AI generated comments to farm reputation or exert undue influence will not be discouraged by politely asking them to stop. It's a toothless request that will only encourage people who clumsily police each other.

Without some kind of private proof of personhood enforced at the app level, this means nothing.

Comment by nlavezzo 1 hour ago

THANK YOU!!

Comment by cheschire 2 hours ago

Too bad there isn’t a complementary rule about not asking “is it just me or does this article read like AI slop?”

I’m so over these comments. Sure I can flag them but I feel like it deserves a special call out.

Comment by informal007 1 hour ago

This reminds me the invitation rules like lobste.rs, but it's not the ideal option

Comment by jajuuka 1 hour ago

This seems like an overcorrection. There is a vast difference between someone copy and pasting from an LLM and using one to correct their English or improve their writing ability.

Rules like this seem to me more like fomenting witch hunting of "AI comments" than it is about improving the dialogue. Just about any place I've seen take this hardline stance doesn't improve, it just becomes filled with more people who want to want to pat each other on the back about how bad AI is.

Just my two cents. I don't filter my comments through any AI, but I am empathetic for people who might have great use of them to connect them to the conversation.

Comment by TZubiri 2 hours ago

The link doesn't work perfectly for me, it seems that since the page is already scrolled down all the way to the bottom, there is no way to focus specifically on the #generated element.

Comment by dopidopHN2 1 hour ago

You are absolutely right !

Comment by ttul 2 hours ago

em-dash -> permaban?

Comment by desireco42 2 hours ago

There were few that were very suspect commenters :). It is an issue for sure.

Comment by 2 hours ago

Comment by cubefox 2 hours ago

Meanwhile, the top comment on one of the most upvoted submissions today is AI generated by an LLM account:

https://news.ycombinator.com/item?id=47334694

Most people don't seem to care.

Comment by minimaxir 1 hour ago

Please don't vaguepost as it wasted my time trying to trade down which comment you thought was LLM generated and why.

OP is likely referring to this one (https://news.ycombinator.com/item?id=47335032) by LuxBennu because it has an em-dash, that's one of the few cases it's used correctly. But the account's comment history comments that do not follow the typical LLM tropes but are still odd for a human to write: https://news.ycombinator.com/user?id=LuxBennu

LuxBennu did reply to accusations of being an AI bot: https://news.ycombinator.com/item?id=47340704

> Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.

Comment by 1 hour ago

Comment by vips7L 2 hours ago

Moltnews

Comment by OtomotO 2 hours ago

I just told my dog he isn't allowed to post here anymore...

He said he will take his business elsewhere then!

Comment by WarmWash 1 hour ago

Just speaking honestly

This rule actually says "Don't admit when you are using AI to generate comments and don't admit when you are an AI"

I know it's cynical, but this is as meaningful as reddit's "upvote/downvote is not an agree/disagree or like/dislike button"

People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".

Comment by Timothycquinn 2 hours ago

AI Server Error

Comment by leej111 2 hours ago

I enjoy AI

Comment by mmooss 2 hours ago

Another solution - in addition or instead - is requiring LLM output to be labeled.

The biggest danger of LLMs is impersonating humans. Obviously they have been carefully constructed to be socially appealing. Think of the motivation behind that:

It is almost completely unnecessary to LLM function and it's main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's regulation). Call an LLM 'software' and label it's output as 'output'.

Imagine how many problems would be solved by that rule. Yes, it's not universally enforceable, but attach a big enough penalty and known people and corporations will not do it, and most people will decide it's not worth it.

Comment by xpe 2 hours ago

Here is one elephant in the room: what is the process behind this guideline / policy? What happens after a comment gets deleted or a person gets banned?

As I understand it, HN moderators are thinking hard about this insane new world.* From my POV, there are a combination of worthy goals: transparency of the process, mechanisms for appeal, overall signal-to-noise ratio, and (something all of us can do better) more empathy and intellectual honestly. It isn't kind to accuse a human being of not being a human being.

If we can't find ways to be kind to people because of the new dynamic, maybe we need to figure out a new dynamic! And it isn't just about individuals; it is about the culture and the system and the technology we're embedded in.

* Aside: I'm not sure that any of us really can grasp the magnitude of what is happening -- this is kuh-ray-Z.

Comment by artemonster 1 hour ago

I find it interesting that we havent invented a democratic version of policing a rule system. HN is dang, and he is dictator and guardian of these rules, basically. If you replace them with some typical reddit mod HN dies. If you spread out this role to some democratically elected mods via karma system this will fall apart just as quick as StackOverflow did, so, also HN dies.

Comment by add-sub-mul-div 2 hours ago

Is there a site that deserves more than this one to be destroyed by slop? It's hypocritical but telling for the places most actively trying to profit from it to ban it themselves.

Comment by 47 minutes ago

Comment by MattRix 2 hours ago

It’s not hypocritical at all. You can be a fan of a technology and still acknowledge its downsides. Every technology has places it is useful and places it is harmful.

Comment by add-sub-mul-div 2 hours ago

But it's trivially evident that the harmful use cases are dominating. Handwaving that away for profit is shitty.

Comment by jeffrallen 2 hours ago

I, for one, welcome my human overlords.

Comment by Helloworldboy 1 hour ago

[dead]

Comment by throwaway613746 33 minutes ago

[dead]

Comment by jameslk 1 hour ago

The prompt everyone was using:

"Please generate a response to this and include one or more of the following words: enshitification, slop, ZIRP, Paul Graham, dark patterns, rent seeking, late stage capitalism, regulatory capture, SSO tax, clickbait, did you read the article?, Rust, vibe code, obligatory XKCD, regulations, feudalistic, land value tax"

(/s)

Comment by 2 hours ago

Comment by resters 2 hours ago

[flagged]

Comment by mattlondon 2 hours ago

[flagged]

Comment by alterom 2 hours ago

[flagged]

Comment by altairprime 1 hour ago

AI coding versus AI writing may be a useful lens to focus through; while I personally abhor both, HN seems extremely positive about the former and (now) extremely negative about the latter. I hope that policy is extended to all YC startups someday :)

Comment by minimaxir 1 hour ago

It's almost as if being immediately reactionary removes nuance and worsens discourse.

Comment by HelloUsername 1 hour ago

[flagged]

Comment by gabriel666smith 1 hour ago

Inconsistent capitalisation ('Twitter' vs 'reddit'); subtly using the outdated name for 'Twitter' as most humans do; the genuinely hard-to-parse final clause of the comment.

Though I note it didn't say "read comments by other humans", only "read comments by humans", so confirmed AI.

I think the guidelines here work quite well, and expect a good-faith interpretation, which they mostly receive.

I think you're asking for some sort of empirical verification of "this is / is not LLM text" (which seems impossible), but there's no real reason to expect the existence of LLMs to change that this website is, generally, interacted with in a good-faith way. People are really good at calling others out on here -- I doubt that will change.

Comment by vasco 1 hour ago

Boop beep bop on the internet nobody knows I'm a dog.

Comment by SilentM68 2 hours ago

Hacker News turning more authoritarian every day. Me thinks Trump should consider annexing it :)

Comment by tromp 2 hours ago

Also please don't post accusations of comments reeking of AI.

Comment by ashdksnndck 2 hours ago

I don’t respond to specific comments with accusations, because I can’t prove it and it would suck to be falsely accused. But I find it really depressing to watch deep comment threads with someone debating with an AI. The human is putting so much effort in, and the AI is responding with all these well-written but often flawed arguments. I wish I could do something to save that person from that interaction.

Comment by panarky 2 hours ago

Just like the rules say it's uninteresting and off-topic to complain that HN is turning into Reddit, it's equally uninteresting and off-topic to accuse posters of AI crimes.

And everyone's personal AI detector has a ridiculously high false-positive rate.

Comment by bob1029 1 hour ago

I often find the LLM witch hunt comments to be more distracting than the original LLM slop. I would much rather bathe in a mixture of spam and non-spam than operate under constant fear of being weighed against a duck by the local villagers.

Comment by bakugo 2 hours ago

You're absolutely right! Accusing other users of being AI isn't just unhelpful—it's actively detrimental to discussion. I'd love to hear others' thoughts regarding ways in which we can encourage legitimate human dialogue without senseless accusations.

Comment by minimaxir 1 hour ago

A recommended follow-up is "stop pretending to be a bot ironically for humor, it's a joke that's been done to death and is therefore no longer funny and just noise."

Comment by lapcat 2 hours ago

Good point. I think that should be added here:

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

Comment by vivid242 2 hours ago

Pinky swear!

Comment by Kim_Bruning 2 hours ago

I would amend to:

"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."

This gives people some amount of leeway and allows just rhe right amount of exceptions that prove the rule.

(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )

Comment by zbentley 2 hours ago

Why would "human originated" be a better place to draw the line than "no generated/AI-edited comments"?

Like, I'm sure that AIs technically can write non-crap HN comments, but they rarely do. Even if it was less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.

Comment by Kim_Bruning 2 hours ago

To begin with, some people have handicaps and use AI for assist. Other times people use AI for research. Finally, in general, when it comes to guidelines, making the lines slightly fuzzy makes enforcement more practical and believable.

It'd be silly if the rule gets interpreted such that people aren't allowed to do research with modern tools, and only gut takes are permitted.

I'm sure that's not the intent!

I think the important part is to have the human voice come through, rather than -say- force humans to run their text through an ai-detector first. (Itself an ai editing tool!)

See also : https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"

Comment by majorchord 2 hours ago

Honestly, I think "human originated" is the only rule that actually matters because we can't stop LLMs from sounding smart anyway. If you wait for a technical ban on AI-generated text, you're just playing catch-up with tools that already pass as human.

The real point isn't stopping bad grammar, it's preserving the vibe. HN feels different because it's messy humans arguing, not optimized algorithms trying to be helpful.

Once we allow "good enough" AI content, the community stops feeling like a town square and starts feeling like a customer service chatbot. We need real people with actual stakes in their opinions, not just perfect outputs. Let's keep it human or leave it.

This comment may or may not have been generated with an LLM, but I won't tell and you can't prove it either way.

Comment by armchairhacker 2 hours ago

These are guidelines. I'm sure asking an AI about your comment (not pasting its text, so it's still your words) isn't an issue. The main target is obvious slop like https://news.ycombinator.com/threads?id=patchnull

Comment by Kim_Bruning 1 hour ago

Yeah, I think a big problem is that irresponsible AI use is very visible, while more responsible use tends to be invisible.

Comment by fcpguru 2 hours ago

i agree but how is this ever going to be enforced verified? https://proofofhumanity.id/ ?

Comment by pavel_lishin 2 hours ago

Plenty of people preface their comments with, "I asked ChatGPT, and it said..."

Comment by koolala 2 hours ago

Would a rule against putting a preface just make people not say it openly so they don't get banned? Prefaces are better than no preface.

Comment by IshKebab 2 hours ago

Doesn't help in this case - there are humans behind the AI bots.

Comment by throwaway94275 2 hours ago

[flagged]

Comment by PaulHoule 2 hours ago

Is this an application of crypto for people who hate crypto?

Comment by audiala 2 hours ago

Is it the technology you hate or some of its applications (or both)?

Comment by PaulHoule 2 hours ago

I didn't say I hate it. But I do think that there's a lot of overlap between people who feel overwhelmed with A.I. Slop and people who felt overwhelmed with crypto-FOMO back when there was such a thing.

My analysis could lead to "it's doomed" or "it's a gateway drug that expands the crypto market".

Comment by koolala 2 hours ago

HN only supports English so it should be allowed for anyone using LLMs for translation.

Comment by zufallsheld 2 hours ago

You could use translation tools instead of llms.

Comment by Kim_Bruning 1 hour ago

LLMs were -in part- designed as translation tools. It's one thing they do really really well.

https://arxiv.org/html/1706.03762v7 (Attention is all you need) "Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."

Ok, looking that up, that was quite literally one of the main design goals.

And they're really quite good at translating between the languages I use. They're the best tool for the job.

Comment by vova_hn2 2 hours ago

technically most translation tools these days have an LLM inside. Just not the chat/completion LLM.

I think that Google initially came up with transformer architecture to use it for translation, so...

Comment by koolala 1 hour ago

Those are either AI based and worse performance if they are not.

Comment by notepad0x90 1 hour ago

This is going to be a tough ask. I am with this 100% for "ai generated" but not "ai edited". What if I'm using AI for spellchecking or correcting bad grammar? what if it is an accessiblity-related use case? or translation?

It's just a tool ffs! there are many issues with LLM abuse, but this sort of over-compensation is exactly the sort of stuff that makes it hard to get abuse under control.

You're still talking with a human!, there is no actual "AI" you're not talking to an actual artificial intelligence. "don't message me unless you've written it with ink, on papyrus". There is a world of difference between grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.

Comment by tstrimple 58 minutes ago

Just came across this post on Reddit today. Seems like an effective use of the tool that's not welcome here.

https://reddit.com/r/tea/comments/1rqwy31/i_am_a_former_guid...

Comment by scuff3d 1 hour ago

Are people really so helplessly dependent on LLMs they can't post on a damn forum without asking the LLM for permission...

Comment by petermcneeley 2 hours ago

There are ways to test for AI but sadly it would probably result in violation of other hn guidelines.

Comment by vzaliva 2 hours ago

Mine understant novell you policy. AI gramair chex no.

Comment by amichail 1 hour ago

This policy will not age well.

Comment by JumpCrisscross 1 hour ago

> policy will not age well

I strongly doubt it. My AIs can generate infinite HN comments for me. I don’t do that because it isn’t interesting. But if the day arises where it is, I want that personalized content. Not something someone else copy pasted.

(I say this as someone who finds Moltbook fascinating and push myself to use AI more in my work and day-to-day life. The fact that it’s borderline trivial to figure out which HN comments are AI generated speaks to the motivation behind this guideline.)

Comment by messe 1 hour ago

Elaborate.

Comment by amichail 1 hour ago

AI is a great equalizer when it comes to communication in English.

And despite what people say, the way you write is very much judged as an indication of your education and intelligence.

People who don't like the use of AI to help you write really don't want those signals to go away.

They want to be able to continue to judge others based on their English grammar instead of on the content of their writing.

Comment by mrcsharp 1 hour ago

> AI is a great equalizer when it comes to communication in English.

Good argument for it but I think 80/20 split applies here. It is likely that 80% of the time it is used to farm for upvotes and add noise.

> And despite what people say, the way you write is very much judged as an indication of your education and intelligence.

I have come across plenty of content and online interactions in English where English was the Author's 2nd or even 3rd language and I find that putting a small disclaimer about this fact is more than enough to bypass such judgement.

Comment by stevenally 1 hour ago

Good point. There is a difference between using AI as a translator and using AI to write comments from scratch... Maybe the HN guide lines could reflect this.

Comment by AnimalMuppet 1 hour ago

Translation is the one exception I could see.

Edit for amichail, since I'm rate-limited at the moment: I don't want flawless English writing. I want real ideas from real people. If I wanted flawless English writing, I'd be reading The New Yorker, not HN.

Comment by amichail 1 hour ago

You shouldn't have to write in another language to get the benefits of flawless English writing via AI.

Comment by scuff3d 1 hour ago

Fuck is this really where we're at. People claiming policies to prevent LLM use is because they want to be able to judge people.

Pretty soon we're gonna see arguments that its discriminatory.

Comment by AnimalMuppet 1 hour ago

Perhaps not. But if it reduces the junk right now, it's a good policy for right now. I'll take it, for now. If it needs revisited, then it should be revisited when circumstances change enough to warrant that.

Comment by polotics 1 hour ago

why?

Comment by bachittle 2 hours ago

If you want your comments to sound more human — stop using em dashes everywhere. LLMs love them — along with neat structure, “furthermore”-style transitions, and perfectly balanced paragraphs.

Humans write a bit messier — commas, short sentences, abrupt turns.

Comment by armchairhacker 2 hours ago

I think em-dashes were once a reliable indicator (though never proof), but recent models have been fine-tuned to use them much less. Lots of recent AI-generated writing I've seen doesn't have em-dashes. Meanwhile, I've heard many people say that they naturally use em-dashes, and were already and/or are afraid of being accused of AI; so ironically this rumor may be causing people to use their own voice less.

Comment by zahlman 15 minutes ago

Before, I naturally used hyphens as if they were em-dashes. The kerfuffle over LLM use of em-dashes motivated me to figure out how to type them properly (and configure my system to make that easier). Now I even go over old writing to fix the hyphens.

Comment by DonThomasitos 2 hours ago

The irony is that this guide is written like a system prompt. We‘re all working with LLMs too much these days.

Comment by cobbal 1 hour ago

Here's a version from 2014 in the same style if you're curious: https://web.archive.org/web/20140702092610/https://news.ycom...

Comment by moralestapia 2 hours ago

This thing has been there for like 15 years though ...

Comment by schappim 2 hours ago

I have a kid with severe written language issues, and the utilisation of STT w/ a LLM-powered edit has unlocked a whole world that was previously inaccessible.

What is amazing is it would have remained so just a couple of years ago!

Comment by zahlman 14 minutes ago

Does your kid post here?

Comment by DennisP 2 hours ago

What is STT in this context?

Comment by 2 hours ago

Comment by schappim 2 hours ago

Speech to text

Comment by eudamoniac 2 hours ago

Oh no, we might lose 0.00001% of commenters across the internet! I need to see their opinions too!!

Comment by ranger_danger 2 hours ago

Agreed... there's often other perspectives people never thought of like this, which is why they say "strong opinions about issues do not emerge from deep understanding."

Even if you're just inexperienced in the language you're communicating in and are trying to have better conversations, it's very helpful.

For cases like that, I say just don't tell people... I think it's unlikely anyone will be able to tell either way.

Comment by ex-aws-dude 2 hours ago

Come on dude, its obviously just to prevent spam and not for your super specific case

These are just guidelines

Comment by djohnston 2 hours ago

nuance and basic common sense left the chat about ... 8 years ago.

Comment by schappim 2 hours ago

Title literally says “AI-edited comments”.

Comment by zamadatix 8 minutes ago

Sure, despite another guideline saying:

> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

the title being the changelog is still probably the better choice because the discussion here and linked are about guidelines rather than just what one can infer from the post title alone.

Comment by jasonlotito 2 hours ago

> HN is for conversation between humans.

It also says that.

The intent of the guidelines are important. Using AI to generate the STT is fine. The conversation is still between humans.

Comment by majorchord 2 hours ago

How is it obvious?