Don't post generated/AI-edited comments. HN is for conversation between humans.
Posted by usefulposter 2 hours ago
Comments
Comment by nkh 1 hour ago
Comment by gabriel666smith 1 hour ago
It's very funny to imagine people prompting: "Write a compelling comment, for me, to pass off as my thoughts, for this HN news thread, which will attract both upvotes and engagement."
In good faith, per the guidelines: What losers!
Comment by xpe 45 minutes ago
For me, I care a lot about the quality of thinking, as measured by the output itself, because this is something I can observe*.
I also care -- but somewhat less -- about guessing as to the underlying generative mechanisms. By "generative mechanisms" I mean simply "Where did the thought come from?" One particular person? Some meme (optimized for cultural transmission)? Some marketing campaign? Some statistic from a paper that no one can find anymore? Some dogma? Some LLM? Some combination? It is a mess to disentangle, so I prefer to focus on getting to ground on the thought itself.
* Though we still have to think about the uncertainty that comes from interpretation! Great communication is hard in our universe, it would seem.
Comment by kelnos 11 minutes ago
But this isn't about effort. This is about genuine humanity. I want to read comments that, in their entirety, came out of the brain of a human. Not something that a human and LLM collaboratively wrote together.
I think the one exception I would make (where maybe the guidelines go too far) is that case of a language barrier. I wouldn't object to someone who isn't confident with their English running a comment by an LLM to help fix errors that might make a comment harder to understand for readers. (Or worse, mean something that the commenter doesn't intend!) It's a privilege that I'm a native English speaker and that so much online discourse happens in English. Not everyone has that privilege.
Comment by eek2121 3 minutes ago
The only reason you should be using an LLM on a forum like this is to do language translation. Nobody cares about your grammar skills, and there really isn't a reason to use an LLM outside of that.
LLMs CANNOT provide unique objectivity or offer unknown arguments because they can only use their own training data, based on existing objectivity and arguments, to write a response. So please shut that shit down and be a human.
Signed, a verified/tested autistic old man.
cheers
Comment by c23gooey 26 minutes ago
Also, quality doesn't come from any of those points you've mentioned. Quality comes from your ability to think and reason through a topic. All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post. It feels like fishing for a justification.
Comment by xpe 12 minutes ago
Yes, this is a great skill to have: no argument from me. This wasn't my point, and I hope you can see that upon reflection.
> All those points you mention in your first paragraph are excuses, trying to make it seem like there was some sort of effort to get an LLM to write a post.
Consider that a reader of the word 'excuses' would often perceive an escalation of sorts. A dismissal.
> Quality comes from your ability to think and reason through a topic.
That's part of it. Since the quote above is a bit ambiguous to me, I will rephrase it as "What are the factors that influence the quality of a comment posted on Hacker News?" and then answer the question. I would then split apart that question into sub-questions of the form "To what extent does a comment ..."
- address the context? Pay attention to the conversational history?
- follow the guidelines of the forum?
- communicate something useful to at least some of the readers?
- use good reasoning?
One thing that all of the four bullet points require is intelligence. Until roughly two years ago, most people would have said the above demand human intelligence; AI can't come close. But the gap is narrowing. Anyhow, I would very much like to see more intelligence (of all kinds, via various methods, including LLM-assisted brainstorming) in the service of better comments here. But intelligence isn't enough; there are also shared values. Shared values of empathy and charity.
In case you are wondering about my "agenda"... it is something along the lines of "I want everyone to think a lot harder about these issues, because we ain't seen NOTHING yet". I also strive to promote and model the kind of community I want to see here.
Comment by detectivestory 9 minutes ago
Comment by QQ00 1 hour ago
Comment by jasoneckert 1 hour ago
I'm hoping people catch that typo after reading "every single word, phrase, and typo (purposeful or not)". I've smiled every time someone has posted a PR with a fix for it (which I subsequently reject ;-)
Comment by nomel 45 minutes ago
Comment by kelnos 7 minutes ago
Comment by wilg 1 hour ago
Comment by theappsecguy 41 minutes ago
Comment by fc417fc802 9 minutes ago
Google search has been getting progressively worse for technical topics for at least the past decade. Now suddenly they started providing a free tutor capable of custom tailoring graduate level explanations of technical topics for me on demand. The difference is night and day.
Comment by kelnos 5 minutes ago
And certainly individuals can make their own decision to engage with an LLM in positive, self-thought-provoking ways, but it's still useful to understand how people generally do use them in the real world.
Comment by kelnos 6 minutes ago
Yes, some people (see some sibling commenters) do engage with an LLM in ways that might make them more thoughtful, but I have a hard time believing that's the common case.
Comment by justinnk 9 minutes ago
Comment by AirGapWorksAI 1 hour ago
Comment by andy99 19 minutes ago
Comment by doctorpangloss 52 minutes ago
These aren't the marina bros, they're the guys who think they're really smart because they did well in math. They are using LLMs to reply to people. They LOOK like you. Do you get it?
Comment by caaqil 48 minutes ago
I don't wanna be a party pooper here, but you will be lucky if the input satisfies one of those conditions. Getting input with both those attributes on HN is like finding life on Mars.
Comment by gus_massa 24 minutes ago
I think the situation is better in small discussions, which sometimes get lucky and turn more technical.
Once a discussion reaches 100 or so comments, most of the time it becomes too generic, but there are a few hidden good comments here and there.
Comment by fudged71 54 seconds ago
Comment by abtinf 2 hours ago
99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.
Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
Comment by loeg 20 minutes ago
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
Comment by gr8tyeah 1 hour ago
Comment by abtinf 1 hour ago
It will take time, but eventually everyone will know about it.
Comment by altairprime 14 minutes ago
Note that the guidelines do explicitly say not to post about guidelines violations in comments, and to email them instead. I know this isn’t a well-loved guideline in this modern era, but duly noted: those well-intended comments are themselves breaking the guidelines.
Comment by bigiain 17 minutes ago
Comment by bhhaskin 1 hour ago
Comment by abtinf 1 hour ago
I’ve broken the guidelines on this site before. The mods reply and say “hey, stop doing that, here is the guideline”. I stopped doing it. Life continues.
Comment by altairprime 1 hour ago
Comment by jbaber 1 hour ago
Comment by VoodooJuJu 1 minute ago
Comment by magicseth 1 minute ago
Comment by jedberg 1 hour ago
My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we used bulleted lists or Oxford commas.
So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.
(I've been accused multiple times of being an AI after writing long well written comments 100% by hand)
Comment by jjgreen 2 minutes ago
- You seem to have a rather high opinion of your own writing :-)
- Why the mix of tense (use/used)?
- Oxford commas are a monstrosity
Comment by tyg13 1 hour ago
Like, sure, LLM writing is almost always grammatically correct, spelled correctly, formatted correctly, etc., which tends to be true of good writing. But there's a certain style that it just can't get away from. It's not just the em-dashes, the semi-colons, or the bulleted lists. It's the short, punchy sentences, with few-to-no asides or digressions. Often using idiom, but only in a stale, trite, and homogenized manner. Real humans are each different -- which lends a certain unpredictability to our writing, even if trying to write to a semi-formal standard, the way "good" writers often do -- but LLMs are all so painfully the same, and the output shows it.
Comment by ordersofmag 15 minutes ago
Comment by lordnacho 5 minutes ago
Comment by jedberg 43 minutes ago
https://www.reddit.com/r/ExperiencedDevs/comments/1pyjkuf/i_...
Granted, it was in a thread about AI and maybe people were on edge, but I was still accused, which to be honest hurt a bit after the effort I put into writing it.
Comment by nonameiguess 14 minutes ago
It is amusing to witness this happening to others when it's someone like you who is a semi-public figure who should probably be well known on Reddit of all places.
Comment by girvo 1 hour ago
Comment by xboxnolifes 1 hour ago
Comment by 0______0 1 hour ago
Comment by semiquaver 45 minutes ago
Comment by SchemaLoad 21 minutes ago
Comment by alexjplant 19 minutes ago
I use semicolons a lot. If this is the nouveau tell du jour for LLMs then I'm in trouble.
Comment by zahlman 1 hour ago
Comment by nomel 29 minutes ago
Comment by zahlman 3 minutes ago
Arguably it cannot avoid all the possible harm. For example, someone might generate a comment that makes false statements but cannot reasonably be detected as LLM-generated except perhaps by people who know (or determine) that the statements are false. But from a policy perspective, this is again not really different from if someone just decided to lie.
Comment by j45 1 hour ago
Comment by djeastm 39 minutes ago
Perhaps always be sure to say something especially timely, original or insightful that an LLM can't have come up with.
Comment by jjk166 18 minutes ago
Comment by GMoromisato 1 hour ago
But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as it is human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?
Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?
I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.
Comment by altairprime 1 hour ago
This is an artificial dichotomy. HN’s guidelines specify thoughtful, curious discussion as a specific goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.
Better an HN with fewer words than an HN with more AI-written words. Show HN has already been drowned by sheer quantity, which is proof enough of why.
Comment by GMoromisato 13 minutes ago
That's the dichotomy: Do we prefer text with the right "provenance" over higher quality text?
[Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]
That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.
I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".
Comment by davebranton 5 minutes ago
The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.
If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, and in answer to "Do we prefer text with the right "provenance" over higher quality text?".
Yes. Yes, we do.
Comment by jmull 16 minutes ago
LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.
If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.
Comment by kelnos 15 minutes ago
Neither. I want insightful, well-thought-out, human comments.
It's a little sad that this might be too much to ask sometimes...
Comment by bittercynic 1 hour ago
Comment by abtinf 1 hour ago
Comment by caconym_ 1 hour ago
Comment by neutronicus 1 hour ago
The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.
Comment by caconym_ 46 minutes ago
Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?
> someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own
This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?
And, if it does come up, why don't they just have that conversation with me, instead?
Comment by alpha_squared 1 hour ago
I'd argue that anything insightful or well-thought-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind, it's not that hard and nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.
Comment by rozal 1 hour ago
Comment by davebranton 1 minute ago
The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck - the more you use a spellcheck the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.
LLMs are a cancer on human thought and expression.
Comment by sharken 12 minutes ago
Comment by RhodesianHunter 1 hour ago
Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.
Comment by bonoboTP 1 hour ago
Comment by postalcoder 1 hour ago
Comment by Ensorceled 46 minutes ago
and
> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?
What is the difference? What's the line between these two?
The prompt: "Analyze <opinion> and respond" is pretty clearly "I would just ask it." and, the prompt: "here's my comment, please ONLY the check the grammar and spelling" would probably be ok.
What about prompt:"I disagree with using LLMs for commenting at all for <reasons>. Please expound on this and provide references and examples". That would explode the word count for this site.
Comment by amarble 1 hour ago
Comment by paganel 10 minutes ago
There's no insight nor well-thought-out response once a person decides to "LLM-enhance" their response. The only insight is that the person using the LLM is too limited to have a decent conversation with.
Comment by jedahan 1 hour ago
Comment by gkfasdfasdf 50 minutes ago
Pretty sure this comment is AI
Comment by unsui 58 minutes ago
As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.
History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).
But it seems that some parties/actors are willing to subvert (i.e., are benefiting from subverting) this long-standing convention (of prioritizing human interests) in the face of AI (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...)
So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?
I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.
Comment by bonoboTP 1 hour ago
Comment by browningstreet 1 hour ago
And no, I wouldn't think an HN post is it either. I'm just saying, there should be a good place to post the output of good questions asked iteratively.
Comment by vova_hn2 1 hour ago
Comment by abustamam 58 minutes ago
Claude is a bit better but still prone to rambling.
Comment by browningstreet 58 minutes ago
Comment by relaxing 1 hour ago
Comment by TacticalCoder 1 hour ago
Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").
Now, granted, not all sparkling wine is Champagne.
The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".
I drank enough of it to be stating my case, of which I'm certain!
P.S.: and btw, yup, authentic human content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.
Comment by sireat 38 minutes ago
So, just as Armagnacs are like Cognacs at a lower price, a good Crémant will be cheaper and more enjoyable than a cheaper Champagne (I've not had any really expensive Champagne).
Then you have Cava from Spain, which uses a similar process to Crémants and Champagne. The difference is in the type of grapes used. A friend of mine swears by Cavas just like I swear by Crémants from the Loire region. However, my wife hates Cava.
Then Proseccos from Italy again are similar, but quality varies more.
After that we get into the more questionable, cheaper sparkling wines, which usually means some sort of out-of-bottle CO2 injection, and even worse versions include other modifications such as added sugar.
In general, to avoid literal headaches you want BRUTs. Anything semi-sweet or sweet is suspicious.
Again I am not a full wine expert but this is mostly years of ahem experience.
Comment by nu11ptr 2 minutes ago
Comment by Someone1234 2 hours ago
I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.
PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
Comment by dang 25 minutes ago
All this stuff is in flux. I thought a lot about whether to add the "edited" bit - but it may change. What I deliberately left out was anything about the articles and projects that get submitted here. There's a lot of turbulence in that area too, but we don't yet have clarity, or even an inkling, of how to settle that one.
Comment by jaysonelliot 2 hours ago
It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.
Comment by bruckie 1 hour ago
So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.
edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.
Comment by Terr_ 1 hour ago
1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.
2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.
The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.
Comment by zahlman 1 hour ago
The phenomenon was observed in religious philosophy over a millennium ago (https://terebess.hu/zen/qingyuan.html).
Comment by abustamam 54 minutes ago
Now that it is, I just turn tab completion off totally when I write code by hand. It's almost never right.
Comment by bruckie 1 hour ago
I have mixed feelings about it. On the one hand, you're right: carefully considering suggestions can be a learning opportunity. On the other hand, approval is easier than generation, and I suspect that without flexing the "come up with it from scratch" muscle frequently, his mind won't develop as much.
Comment by comboy 1 hour ago
Comment by SchemaLoad 17 minutes ago
Comment by Terr_ 1 hour ago
A certain amount of friction is necessary, at least if the goal is to help the person learn or make something original.
Comment by TimTheTinker 1 hour ago
Comment by jrockway 1 hour ago
Comment by JumpCrisscross 1 hour ago
As an adult, I do too. As a middle schooler, we absolutely used word processors’ thesaurus features to add big words to our essays because the teachers liked them.
Comment by Gibbon1 1 hour ago
Anyway before that she HATED the thesaurus. And she could tell when students were using it to make their writing more fancy pants.
Comment by zahlman 59 minutes ago
Comment by JumpCrisscross 1 hour ago
I had two teachers who called us out on this, and actually coached us on our writing, and I remember them fondly. (They were also fans of in-class essaying.)
The others wanted to count big words.
Comment by ma2kx 40 minutes ago
Comment by NewsaHackO 1 hour ago
It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more difficult a post is to understand, or the more attention someone has to pay to understand it, the fewer people will be willing to put in that effort.
Comment by comboy 1 hour ago
Comment by zahlman 57 minutes ago
Comment by jjk166 9 minutes ago
This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason" then the editor's version is much more likely to be understood correctly by the recipient.
Comment by lamontcg 2 hours ago
[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]
Comment by MeetingsBrowser 2 hours ago
There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.
Comment by lamontcg 1 hour ago
Comment by pseudalopex 1 hour ago
Comment by lamontcg 1 hour ago
Comment by mjg2 2 hours ago
Comment by Teever 1 hour ago
There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.
Meanwhile you have someone in a developing country who just got off a brutal twelve hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang-out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit.
You could have a great idea and express it poorly and be penalized for doing so here while someone could have a blah idea expressed excellently and it's showered in replies despite being in some metrics (the ones I think are most important) worse than the other post.
What's the solution for that?
Comment by magicalist 1 hour ago
Remember that you're on a message board and you're not actually 'competing' for anything?
Comment by Teever 1 hour ago
I knew someone was going to comment on my use of the word there, despite my putting it in quotes, which was intended to let the reader know that I meant that word as an approximation of what I was getting at.
When I say competing I mean competing in the space of ideas here. There is a ranking system here that raises or lowers the visibility and prominence of your comments, and it's based on upvotes by other users. For better or worse, people penalize comments with grammatical errors over ones that don't have them, and that affects how much exposure other users have to the ideas people write and how much interaction they get from them.
If that's the case why would somebody who has good ideas but poor expressive capability bother posting here if their comments are just going to get ignored over relatively vapid comments that are grammatically correct?
Comment by NewsaHackO 50 minutes ago
Comment by Teever 30 minutes ago
You have to put your best foot forward in English, and in your environment, with the resources you have at your disposal.
For example, I'm currently engaging with you between steps in a chemistry process that's happening under the fume hood next to me, while wearing a respirator, a muggy plastic chemical-resistant gown, and disposable nitrile gloves.
I am absolutely certain that these conditions are different from the ones I would need to 'put my best foot forward' in this discussion. I'm also quite certain that you and I would both absolutely stumble if we were obligated to participate in this forum in a language we're not proficient in, as many users often attempt to do and are unfairly penalized for by other members of the community.
I'm with you on the LLM usage for grammatical issues for non-native speakers. I bet more in this community would feel the same way if Dang whimsically mandated that people had to use a language other than English on certain days of the week.
Comment by 12_throw_away 56 minutes ago
I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?
Comment by Aldipower 2 hours ago
Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*ce isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while..
Comment by ssl-3 2 hours ago
The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.
Which is absurd, since I don't use the bot for writing at all.
Comment by colpabar 1 hour ago
How do you know? Is it possible the downvoters just didn't like what you said?
Comment by phs318u 1 hour ago
Comment by yorwba 1 hour ago
It suggests a bias in writers to assume that people would agree with them if only they could express their thoughts accurately.
Comment by fragmede 1 hour ago
The guidelines state:
> Be kind. Don't be snarky. Converse...
> Edit out swipes.
> Don't be curmudgeonly.
On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?
I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.
Comment by zahlman 54 minutes ago
Comment by yorwba 1 hour ago
Comment by drusepth 2 hours ago
I just want clean, easy-to-read content and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and not) for many people.
Comment by timeinput 2 hours ago
You could even write a plugin for your favorite web browser to do that to every site you visit.
It seems hard to achieve the inverse, that is (would you rather I use "i.e."?), to rewrite this paragraph as the original author wrote it before they had an AI rewrite it to make it clean (do you like Oxford commas, and em/en dashes? Just prompt your AI) and easier to read.
Comment by phs318u 1 hour ago
For those coming from a language other than English, you are more likely to lose information by using a tool to “reconstruct” meaning from poorly phrased English as an input, as opposed to the poster using a tool to generate meaningful English from their (presumably) well-written native language.
Comment by tempestn 2 hours ago
I personally don't see a problem with someone using a grammar checker as long as they aren't just blindly accepting its suggestions. That said, if someone actually is using it in that way, it shouldn't be detectable anyway, so it probably doesn't matter all that much whether or not it's included in the letter of the rule.
Comment by kazinator 1 hour ago
But that creates a private version of the text which the original poster didn't sign off on. You could have fixed something contrary to their intent.
Comment by observationist 1 hour ago
This is probably ok:
>> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far.
This is probably too far:
>>> On a technical level, it's important to recognize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent.
Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill that into a system of rules and patterns to follow which AI can readily handle. It is technically possible, albeit tedious, to clone your style such that it's indistinguishable from your actual human writing, and can even include spelling mistakes you've made before at a rate matching your actual writing.
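To make that concrete, here's a rough sketch of the kind of feature extraction I mean (purely illustrative; the features are arbitrary and the sample text is made up):

    # Toy stylometric profile: crude features a style-cloning setup could target.
    import re
    from collections import Counter

    def stylometric_features(text: str) -> dict:
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        return {
            "avg_sentence_length": len(words) / max(len(sentences), 1),
            "comma_rate": text.count(",") / max(len(words), 1),
            "semicolon_rate": text.count(";") / max(len(words), 1),
            "top_words": Counter(words).most_common(5),
        }

    print(stylometric_features("I use semicolons a lot; perhaps too much. Oh well."))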
AI editing is weird, though. Not seeing a need, unless English isn't your native language.
Comment by Mordisquitos 2 hours ago
Comment by unsignedint 1 hour ago
Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn.
Comment by happytoexplain 2 hours ago
Comment by jacquesm 2 hours ago
Comment by Someone1234 1 hour ago
When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line.
For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models.
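To give a sense of how wide that spectrum is, the simplest end of it (plain edit-distance suggestion, no ML at all) fits in a few lines. This is only an illustrative sketch, not any real spellchecker:

    # Levenshtein edit distance plus a naive "closest dictionary word" suggestion.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def suggest(word: str, dictionary: list[str]) -> str:
        return min(dictionary, key=lambda w: edit_distance(word, w))

    print(suggest("grammer", ["grammar", "hammer", "glamour"]))  # -> grammar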
Comment by tsukikage 2 hours ago
For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.
Comment by czhu12 1 hour ago
Comment by altairprime 2 hours ago
Comment by phs318u 1 hour ago
You forgot the /s ?
Comment by altairprime 1 hour ago
Comment by phs318u 1 hour ago
Comment by altairprime 1 hour ago
Then, I considered whether HN would appreciate posts/comments by a human where they’d had a PR team or a hired editor come in and review/modify/distort their original words in order to make them more whatever. I think that this probably is most likely to have occurred on the HN jobs posts, and I’ve pointed out especially egregious instances to the mods over the years — but in general, the people who post on HN tend to do so from their own voice’s viewpoint, as reaffirmed by the no-AI-writing guideline above. So I decided instead to say “pay a proofreader” because, bluntly, if the community found out that someone was paying a wage to a worker to proofread their HN comments, the response would plausibly be the same mob of laughing mockery, disgusted outrage, and blatant dismissal that we see today towards AI writing here. “You hired someone to tone-edit your HN comments?!” is no different than “You used Grammarly to tone-edit your HN comments?!” to me, and so it passed the veracity test and I posted it.
Comment by glitch13 2 hours ago
It was asked that, if "AI Generated Code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is it LLM or "Gen AI" specific? If so, what specific aspect of that makes one use case good and one use case bad, and what exactly separates them?
It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutia.
Comment by kazinator 1 hour ago
IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on.
Comment by raw_anon_1111 1 hour ago
Comment by skywhopper 2 hours ago
Comment by SecretDreams 2 hours ago
I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI or maybe it uses a simple dictionary for me. Maybe it might even catch a string of words when the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos for me after I've hit "reply". That's on me.
Comment by thousand_nights 2 hours ago
i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type
your writing style is your personality, don't let a robot take it away from you
Comment by tempestn 1 hour ago
In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.
Comment by iammjm 2 hours ago
Comment by wvenable 1 hour ago
The problem with a medium that is completely free and unrestricted is that whoever posts the most sort of wins. I could post this opinion 30-40 times in this thread, using bots and alternative accounts, and completely move the discussion to be only this.
Someone using an LLM to craft a reply is not a problem on its own. Using it to craft a low-effort reply in 3 seconds just to get it out is the problem.
Comment by bigstrat2003 8 minutes ago
No, someone using an LLM to craft a reply is a problem in its own. I want to hear what a human has to say, not a human filtered through a computer program. No grammar editing, nothing. Give me your actual writing or I'm not interested.
Comment by malfist 1 hour ago
How much of AI writing will pass under the radar when the big companies aren't all maximizing to generate the most engagement hacking content in a chatbot UI? Maybe it'll still stand out for being low quality, but I'm not sure. There's lots of low quality human authored content.
Not sure where my comment is going, I just kinda rambled.
Comment by wvenable 1 hour ago
It was trained on 30 years of my posts on the Internet, I'm sure some part of it sounds just like me.
Comment by ffsm8 1 hour ago
I sometimes wonder if people aren't forgetting why we're on this platform.
The goal is to have an interesting discourse and maybe grow as a human by broadening your horizon. The likelihood of that happening with LLMs talking for you is basically nil, hence... why even go through the motions at that point? It's not like you get anything for upvotes on HN.
Comment by wvenable 56 minutes ago
But what if I provided the LLM my thoughts? That's actually how I use LLMs in my life -- I provide it with my thoughts and it generates things from those thoughts.
Now if I'm just giving it your comment and asking it to reply, then yes, those aren't my thoughts. Why would I do that? I think the answer goes back to my original point.
If I'm telling you my thoughts and then you go and tell a friend those thoughts, would you say those are still my thoughts even though I wasn't the one expressing them directly to your friend?
Comment by meatmanek 1 minute ago
- translating (relatively) literally from one language to another would be ~1:1.
- automatic spelling/grammar correction is ~1:1
- Using an LLM to help you find a concise way of expressing what you mean, i.e. giving it extra content to help it suggest a way of phrasing something that has the connotation you want, would be <1:1
Expansion (output > prompt) is where it gets problematic, at least for HN comments: if you give it an 8-word prompt and it expands it to 50, you've just wasted the reader's time -- they could've read the prompt and gotten the same information. (Expansion is perfectly fine in a coding context -- it often takes way fewer words to express what you want the program to do than the generated code will contain.)
Comment by safog 2 hours ago
Comment by rlt 16 minutes ago
I know very little about this but sense that some combination of buzzwords like homomorphic encryption, zk-snarks, and yes, blockchains could be useful.
Of course this would present problems if any of your identities were ever compromised and your reputation destroyed.
Comment by throwaway2027 2 hours ago
Comment by kace91 1 hour ago
Comment by OkayPhysicist 1 hour ago
Comment by jacquesm 1 hour ago
Comment by iamnafets 2 hours ago
Comment by Karrot_Kream 1 hour ago
Comment by degamad 1 hour ago
Comment by Karrot_Kream 1 hour ago
Comment by morkalork 1 hour ago
Comment by k33n 2 hours ago
If Web3-like session-signing had taken off enough to become OS or even browser-native, we would have had a fighting chance of remaining mostly anonymous. But that just didn't happen, and isn't going to happen. Mostly because fraud ruined Web3.
Comment by MaKey 1 hour ago
No, it doesn't.
Comment by aprentic 1 hour ago
A completely anonymous stranger has no way to prove that they're human that can't be imitated by an AI. We've even seen that, in some cases, AIs can look more human to humans than real humans do.
The only solution I can think of to that problem is some sort of provenance system. Even before AI, if some random person told me a thing, I'd ignore them; If my most trusted friend told me something, I'd believe them.
We're going to need a digital equivalent. If I see a post/article/comment I need my tech to automatically check the author and rank it based on their position in my trust network. I don't necessarily need to know their identity, but I do need to know their identity relative to me.
Comment by OkayPhysicist 1 hour ago
If you keep track of the invite tree, you can "prune" it as needed to reduce moderation load: low-quality users don't tend to be the source of high-quality users, and in the cases where they are, those high-quality users tend to find other people willing to vouch for them faster than their inviter catches a ban.
Comment by aprentic 45 minutes ago
In online systems the scales quickly get too big for open-invite. There needs to be a way to automatically update the trust network at a fine grain.
The one that jumps to mind is an inference system; when I +/- a comment, I'm really noting that I trust or distrust the author. It can be general or on a specific topic (eg I trust the author to tell the truth or I trust the author to make me laugh). I could also infer that other people with similar trust patterns are likely trustworthy. And I could likely infer that people who are trusted by people I trust are trustworthy.
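A toy sketch of the kind of inference I mean (the names, votes, and damping factor are all made up for illustration):

    # Direct trust from my own votes, plus weaker inferred trust for authors
    # upvoted by people I already trust.
    my_votes = {"alice": 3, "bob": -2}            # my up/down history per author
    their_votes = {                               # who my trusted authors upvote
        "alice": {"carol": 2, "dave": 1},
        "bob":   {"eve": 4},
    }

    def trust(author: str) -> float:
        if author in my_votes:                    # first-hand signal wins
            return float(my_votes[author])
        score = weight = 0.0
        for friend, vote in my_votes.items():
            if vote > 0 and author in their_votes.get(friend, {}):
                score += vote * their_votes[friend][author]
                weight += vote
        return 0.5 * score / weight if weight else 0.0   # dampen second-hand trust

    for author in ["alice", "carol", "eve", "mallory"]:
        print(author, trust(author))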
Comment by avadodin 23 minutes ago
Comment by wasmitnetzen 18 minutes ago
Comment by SchemaLoad 15 minutes ago
Comment by munk-a 2 hours ago
Comment by WD-42 2 hours ago
Comment by thewebguyd 2 hours ago
Best we can do, for the internet and ourselves, is to move away from it and into smaller networks that can be more effectively moderated, and where there is still a level of "human verification" before someone gets invited to participate.
I don't like what that will do to being able to find information publicly, though. The big advantage of internet forums (that have all but disappeared into private discords) is search ability/discoverability. Ran into a problem, or have a question about some super niche project or hobby? Good chance someone else on the net also has it and made a post about it somewhere, and the post & answers are public.
Moving more and more into private communities removes that, and that is a great loss IMO.
Comment by bluefirebrand 27 minutes ago
It is a great loss. Unfortunately this is a result of unchecked greed and an attitude of technological progress at any cost. Frankly we enabled this abuse by naively trying to maintain a free and open internet for people. Maybe we should have been much more aggressively closed off from the start, and not used the internet to share so freely.
Comment by gdulli 2 hours ago
Comment by agile-gift0262 2 hours ago
Comment by apitman 1 hour ago
Comment by jsheard 2 hours ago
Comment by pear01 2 hours ago
An orb that scans your eyeballs for "proof of human".
Comment by rationalist 1 hour ago
Comment by antonvs 1 hour ago
Comment by tomalbrc 2 hours ago
Comment by shit_game 2 hours ago
Years ago (around 2020, when GPT-2 and 3 became publicly available) I noticed and was incredibly critical of how prevalent LLM-generated content was on reddit. I was permanently banned for "abusing reports" for reporting AI-generated comments as spam. Before that, I had posted about how I believed that the fight against bots was over because the uncanny valley of text generation had been crossed; prior to the public availability of LLMs, most spam/bot comments were either shotgunned scripts that are easily blockable by the most rudimentary of spam filters, generated gibberish created by markov chains, or simply old scraped comments being reposted. The landscape of bot operation at the time largely relied on gaming human interaction, which required carefully gaming temporal relevance of text content, coherence of text content (in relation to comment chains), and the most basic attempt at appearing to be organic.
After LLMs became publicly available, text content that was temporally, contextually, and coherently relevant could be generated instantly for free. This removed practically every non-platform-imposed friction for a bot to be successful on reddit (and to generalize, anywhere that people interact). Now the onus of determining what is and isn't organic interaction is squarely on the platform, which is a difficult problem because now bot operators have had much of their work freed up, and can solely focus on gaming platform heuristics instead of also having to game human perception.
This is where AI companies come in to monetize the disaster they have created; by offering fingerprinting services for content they generate, detection services for content made by themselves and others, and estimations of human authenticity for content of any form. All while they continue to sell their services that contradict these objectives, and after having stolen literally everything that has ever been on the internet to accomplish this.
These people are evil. Not these companies - they are legal constructions that don't think or feel or act. These people are evil.
Comment by levkk 2 hours ago
You almost need dedicated hardware that can't run any other software except a mechanical keyboard and make it communicate over an analog medium - something terribly expensive and inconvenient for AI farms to duplicate.
Comment by degamad 1 hour ago
Comment by intrasight 2 hours ago
I think Apple is the only company that would even be able to do that. You have to control the full stack to the pixels or speaker.
Comment by Asmod4n 2 hours ago
That kills two birds with one stone: you can then show everywhere online that you are human and how old you are without the services needing any personal information about you, and the sellers don't know what you use that ID tag for.
Comment by lich_king 2 hours ago
In fact, even if you can ban the human for life, I'm not sure it solves anything. There are billions of people out there and there's money to be made by monetizing attention. AI-generated content is a way to do that, so there's plenty of takers who don't mind the risk of getting booted from some platform once in a blue moon if it makes them $5k/month without requiring any effort or skill.
Comment by djeastm 34 minutes ago
That might make it less likely someone would ever sell it because to get a new one might take a very long "cool-down" time and it'd severely hamper the seller.
Comment by stetrain 2 hours ago
Comment by Dylan16807 2 hours ago
Comment by MattRix 2 hours ago
Comment by vova_hn2 2 hours ago
Comment by close04 2 hours ago
Comment by Asmod4n 2 hours ago
Comment by LoomyBunny 2 hours ago
Comment by sebastiennight 2 hours ago
I'm afraid the ship has sailed on this one. What other solutions have you heard of apart from the dystopian eyeball-scanning, ID-uploading, biometrics-profiling obvious ones?
(knowing that of course, neither of those actually solve the problem)
Comment by TacticalCoder 1 hour ago
On a site like HN it's kinda easy to vet for at least those that already had thousands of karma before ChatGPT had its breakthrough moment a few years ago.
Now an AI could be asked to "Use my HN account and only write in my style" and probably fool people but I take it old-timers (HN account wise) wouldn't, for the most part, bother doing something that low. Especially not if the community says it's against the guidelines.
Comment by shadowgovt 2 hours ago
This site, at its core, is fundamentally too low-bandwidth, too text-only, and too hands-off-moderated to be able to shoulder the burden of distinguishing real human-sourced dialog from text generated by machines that are optimized to generate dialog that looks human-sourced. Expect the consequence to be that the experience you are having right now will drastically shift.
My personal guess: sites like this will slop up and human beings will ship out, going to sites where they have some mechanism for trust establishment, even if that mechanism is as simple and lo-fi as "The only people who can connect to this site are ones the admin, who is Steve and we all know Steve, personally set up an account for." This has, of course, sacrificed anonymity. But I fundamentally don't see an attestation-of-humanity model that doesn't sacrifice anonymity at some layer; the whole point of anonymity on the Internet was that nobody knew you were a dog (or, in this case, a lobster), and if we now care deeply about a commenter's nephropid (or canid) qualities, we'll probably have to sacrifice that feature.
I'd rather keep the feature, personally.
Comment by toomuchtodo 2 hours ago
Comment by grufkork 2 hours ago
Adding this type of rep system would destroy a lot of what is so cool about the internet though. There’d probably be segregation based on rep if it’s very visible, new IDs drowning in a sea of noise. Being anonymous but with a record isn’t the same as posting for the very first time as a completely blank identity and still being given an audience. Making online comms more like real life would alleviate some problems but would also lose part of the reason they’re used in the first place. I don’t see any other way to do it besides maybe a state-provided anonymous identity provider (though that’s risky for a number of reasons), but it’s going to be sad to see things go.
Comment by schopra909 7 minutes ago
So I'm just baffled, why anyone was using AI to generate comments. Like what was the incentive driving the behavior?
Comment by micromacrofoot 6 minutes ago
Comment by meiuqer 2 hours ago
Comment by dang 1 hour ago
By all means make good use of LLMs and other AI. What counts as good use? The world is figuring that out, it will take years, and HN is no exception (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). We just don't want it to interfere with the human conversation and connection that this site has always been for.
For example, it has always been a bad idea and against HN's rules when users post things that they didn't write themselves, or do bulk copy-pasting into the threads, or write bots to post things.
Btw, the HN mods (who are also the HN devs) use AI extensively and will be doing so a lot more. The limits on that are not technical; they have to do with (1) how much work we still do manually—the classic "no time to do things that would make the things that take all our time take less of it"; and (2) the amount of psychic rewiring that's required—there's a limit to the RoA (rate of astonishment) that any human can absorb. (It's fascinating how technical people are suffering the most from that this time. Less technical people have more experience being hit by disorienting changes, so for them the current moment is somewhat less skull-cracking.)
Getting this right doesn't mean replacing human-to-human interaction, it means we should have more time for that, and do a better job of supporting HN users generally, YC founders who want to launch on HN, and so on. The goal is to enhance human relatedness, not diminish it.
Comment by jacquesm 1 hour ago
But yes, there is some irony there.
Comment by tenahu 1 hour ago
Comment by dalemhurley 5 minutes ago
Comment by _diyar 3 minutes ago
Comment by arrsingh 1 hour ago
Maybe once enough posts have been flagged like that then that corpus could be used to train an AI to automatically detect content generated by AI.
That would be cool.
Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.
Comment by dang 1 hour ago
A nice side effect is that it will double as a confirmation step, solving the FFF (fat finger flagging) problem.
Comment by altairprime 1 hour ago
Comment by zahlman 42 minutes ago
It's a ton of friction compared to ordinary use of a forum; and while I've emailed several times myself, it comes with a sense of guilt (and a feeling that my "several" is probably approximately "several" above average).
Comment by altairprime 37 minutes ago
ps. I acknowledge as well that I’m exempt from feeling guilt for brain reasons, and so if it sounds like I’m not honoring what I would describe as a ‘completely normal’ human response, apologies; I’m trying my best given the lack of familiarity and intend no disrespect towards that reaction.
Comment by postalcoder 1 hour ago
Comment by arrsingh 1 minute ago
Comment by dang 1 hour ago
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok, I hear you guys, I've cut a couple of the cuts and will put the text back when I get home later.
Comment by Wowfunhappy 1 hour ago
> If you flag, please don't also comment that you did.
I don't understand why you cut these, they seem important! (I can understand the others, which feel either implied or too specific.)
Comment by dang 41 minutes ago
I think I'm going to put that one back, though, because it's not a hill I want to die on and I know what arguing with dozens of people simultaneously feels like when you only have 10 minutes.
Comment by Wowfunhappy 19 minutes ago
Understood, but I feel like I see people breaking these ones frequently, so removing the explicit guideline feels to me like a bad idea.
Comment by andai 34 minutes ago
Not sure if that's really solvable with rules, though.
My experience with downvotes is that people mostly use them as an "I don't like this" button, which is a proxy for "I couldn't think of a counterargument so I don't want to look at it."
(I noted recently that downvotes and counterarguments appear to be mutually exclusive, which I found somewhat amusing.)
Whereas I will often upvote things I personally disagree with, if they are interesting or well reasoned. (This seems objectively better to me, of course, but maybe it's personality thing.)
Comment by dang 28 minutes ago
See https://news.ycombinator.com/item?id=16131314 and https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... for history...
Comment by chrisshroba 24 minutes ago
Probably the Mandela effect!
Comment by SegfaultSeagull 51 minutes ago
Challenge accepted.
Comment by dcminter 34 minutes ago
Comment by dang 27 minutes ago
Comment by Kim_Bruning 43 minutes ago
My reading is that the intent is to have a human voice behind the text.
Monitor and see how it goes I guess!
Comment by dang 34 minutes ago
The short version is that we included it to protect users who don't realize how much damage they're doing to their reception here when they think "I'll just run this through ChatGPT to fix my grammar and spelling". I've seen many cases of people getting flamed for this and I don't want more vulnerable users—e.g. people worried about their English—to get punished for trying to improve their contributions. Certainly that would apply to disabled users as well.
Here are some past cases of these interactions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
Most rules in https://news.ycombinator.com/newsguidelines.html have a lot of grey area, and how we apply them always involves interpretation and judgment calls. Mostly the ones we explicitly list there are so we have a basis for explaining to people the intended use of the site. HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.
In other words yes, that bit needs to be applied cautiously and with care, and in this way it's similar to most of the other rules. Trying to get that caution and care right is something we work at every day.
Comment by Kim_Bruning 18 minutes ago
Comment by kshacker 32 minutes ago
But like dang said ... I do not have time to fight this battle when I have only 10 minutes :)
Comment by abtinf 1 hour ago
It’s an instruction for how to use the site. It’s helpful to have it in the guidelines for when the flag feature should be used. Without it, the flag link is much more ominous.
Maybe it could be consolidated with the flag-egregious-comments rule?
Edit to add: IMHO it is not at all obvious on this site that flagging stories is meant to be roughly the equivalent of downvoting comments (and that flagging comments doesn’t have a counterpart at the story level).
Comment by dom96 41 minutes ago
Comment by lurkshark 22 minutes ago
Comment by nomel 36 minutes ago
I see people who write well being called "LLM" here all the time, em-dash or not.
Comment by nitwit005 23 minutes ago
On reddit people sometimes go through the comment history and see that it seems to be a bot, but that's fairly high effort.
Comment by jjk166 24 minutes ago
Comment by nomel 15 minutes ago
Comment by 1718627440 41 minutes ago
Comment by zahlman 1 hour ago
Exactly when was this point added? It seems somehow not new, but on the other hand it was missing from an archive.today snapshot I found from last July. (I cannot get archive.org to give me anything useful here.)
Edit:
> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it.
> If you flag, please don't also comment that you did.
Perhaps these points (and the thing about trivial annoyances, etc.) should be rolled up into a general "please don't post meta commentary outside of explicit site meta discussion"?
Comment by lowbloodsugar 23 minutes ago
I wanted to share some context that might be helpful: I am autistic, and I have often received feedback that my communication is snarky, rude, or tone-deaf. At work, I've found it helpful to run some of my communications through an AI tool to make my messages more accessible to non-autistic colleagues, and this approach has been working well for me.
Comment by minimaxir 1 hour ago
Comment by thomassmith65 52 minutes ago
At any rate, it's too late. The era of organic 'cute animal' content on the internet is dead. AI slop has killed it.
Comment by shagie 10 minutes ago
> Slop has an upside?
Not exactly. Rather, it's that the places where one does want to find pictures of people's cute cats and dogs now carry additional moderation and administration burdens to keep AI-generated content out.
It's not "cute pictures of cats overrunning some place" but rather "even in the places where it was appropriate to post pictures of one's pets, like #mypets or /r/cuteCatPics (so they don't overrun other places), people are now starting fights over AI-generated content."
An example I recently encountered was someone who used AI to replace a cat that was "loafing" with a loaf of bread that looked like a cat. The original cat picture would have been fine (with a dozen "aww" and "cute" comments in reply)... the AI cat-loaf picture required moderation actions and some comment defusing over the use of AI.
Comment by dev_l1x_be 58 minutes ago
Comment by latchkey 1 hour ago
Comment by toomuchtodo 1 hour ago
Comment by 8cvor6j844qw_d6 6 minutes ago
Comment by SoKamil 2 hours ago
Comment by vesrah 58 minutes ago
Comment by Aldipower 2 hours ago
Comment by lifthrasiir 2 hours ago
Comment by rafaelmn 2 hours ago
I get decent feedback most of the time, and I read interesting stuff; it's the easiest way I've found to stay in the loop in our industry. What are you guys commenting for?
Comment by SoKamil 2 hours ago
Comment by tayo42 2 hours ago
Comment by pants2 2 hours ago
> Please respond to the strongest plausible interpretation of what someone says
> Please don't post shallow dismissals
Personally I've posted comments with glaring typos that everyone thankfully ignores. I only notice much later when I re-read it.
Comment by tayo42 1 hour ago
Comment by tonymet 1 hour ago
Comment by abustamam 1 hour ago
If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.
Comment by theshrike79 1 hour ago
But when I argue on the internet, it's always a 100% me.
And if I get a whiff of LLM-speak from whoever I'm wrestling in the mud with at the moment, they'll instantly get an entry in my plonk-file. I can talk with ChatGPT on my own, thank you very much; I don't need a human in between.
"But my <language> is bad... that's why I use LLMs"
So was mine when I started arguing with strangers on the internet. It's better now. Now I can argue in 3 different languages, almost 4 =)
Comment by water-data-dude 8 minutes ago
Also low quality wine[0]
Comment by bikamonki 2 hours ago
This feels like don't buy at Walmart, support the local small shop. We passed the no return sign miles ago.
Gemini's:
This is like advocating for artisanal blacksmithing in the age of industrial steel. It sounds great in theory, but we passed the point of no return miles back.
Yeah, we can tell the difference :)
Comment by 12_throw_away 32 minutes ago
Man, this is a great head-to-head. The folks who claim to use LLMs to "clean up" their writing? Yeah, no. I guess the grammar is probably right, but the writing is wrong. The whole point is about the passage of time, both in the metaphor of "the age of steel" and in the literal sense. And then it starts talking about "miles back"? It feels bad to read, and in a non-obvious way that requires extra cycles just to figure out why it's off.
Whereas a human in a comment section writing something like "We passed the no return sign miles ago" reads so much better. If the grammar or idiom is slightly off, that actually makes it read better, because this is a comment section; people don't actually communicate via formally correct language in almost any context whatsoever.
Comment by GuinansEyebrows 2 hours ago
Comment by ddtaylor 17 minutes ago
[1]: https://ethos.devrupt.io [2]: https://github.com/devrupt-io/LLaMAudit
Comment by snoren 2 hours ago
Comment by bowmessage 2 hours ago
Would you like to explore some more examples of human-to-human conversation throughout history?
Comment by 2001zhaozhao 2 hours ago
Comment by saltyoldman 2 hours ago
None of my agents say that anymore.
Comment by Balinares 2 hours ago
Comment by nathancahill 2 hours ago
Comment by adampunk 2 hours ago
All glory to the em-dash.
Comment by floxy 2 hours ago
Comment by koolala 2 hours ago
Comment by martey 2 hours ago
Comment by koolala 1 hour ago
Comment by munk-a 2 hours ago
Comment by miltonlost 2 hours ago
Comment by jasonjmcghee 2 hours ago
If you're suspicious, go to the account's comments and look to see if they are all nearly identical in every respect other than the topic.
Most are:
It's cool you did <thing you said in post>. So how do you <technical question>?
Comment by 10xDev 2 hours ago
Comment by BoredPositron 2 hours ago
Comment by PUSH_AX 2 hours ago
Comment by lapcat 2 hours ago
They're guidelines. HN is based almost entirely on self-censorship, and moderation has always been light at best, partly due to the moderator-to-comment ratio. Of course the HN guidelines often fail to be observed, which is nothing new.
Comment by snoren 2 hours ago
Comment by tsukikage 2 hours ago
Comment by vova_hn2 33 minutes ago
delve into noteworthy realm
leverage tapestry
Comment by vl 2 hours ago
Comment by nwhnwh 2 hours ago
Comment by dimaaan 2 hours ago
Comment by FieryTransition 9 minutes ago
Comment by rc-1140 5 minutes ago
Comment by zby 2 hours ago
Personally I would just like to read the best comments.
Comment by bondarchuk 2 hours ago
I acknowledge this is partly just my personal bias, in some cases really not fair, and unenforceable anyway, but someone relying on llms just makes me feel like they have... bad taste in information curation, or something, and I'd rather just not interact with them at all.
Comment by jmuguy 2 hours ago
Comment by kace91 1 hour ago
I am one of those folks, and I’m strongly against AI writing for that use case as well.
The only reason I can communicate in English with some fluency is that I used it awkwardly on the internet for years. Don't rob yourself of that learning process out of shyness; the AI crutch will make you progressively less capable.
Comment by jmuguy 1 hour ago
Comment by Teever 1 hour ago
Why do you need to communicate in English with us native English speakers? Why don't we need to learn your language to communicate with you?
The way I'm looking at it is that you're putting all this effort towards learning how to communicate with people who would never without an outside pressure do the same for you.
If language learning is intrinsically a positive thing what can we do to encourage it in native speakers of English, specifically Americans who are monolingual (as they dominate this website)?
Imagine a scenario where Dang announced that we're only allowed to post in English one day a week -- every other day is dedicated to another language, like Spanish, Russian, or Mandarin -- and the system auto-deleted posts that weren't in those languages. Would that be a good thing? Would we see American users start to learn Spanish to post on HN on Tuesdays?
Comment by kace91 11 minutes ago
A century ago it was French or Latin, and a century from now it might be Mandarin or something else. The existence of a standard is what matters.
The only complaint I have about Americans and language is that most tech companies fail spectacularly at supporting multilingualism, from keyboards struggling with completion to YouTube and Reddit forcing translations on users.
Comment by gbear605 1 hour ago
Comment by Barbing 1 hour ago
We've all pasted news articles into 2022 Google Translate and a modern LLM, right, and there was no comparison? LLMs even crushed DeepL. Satya had this little story his PR folks helped him with (j/k) even, via Wired June '23:
---
STEVEN LEVY: "Was there a single eureka moment that led you to go all in?"
SATYA NADELLA: "It was that ability to code, which led to our creating Copilot. But the first time I saw what is now called GPT-4, in the summer of 2022, was a mind-blowing experience. There is one query I always sort of use as a reference. Machine translation has been with us for a long time, and it's achieved a lot of great benchmarks, but it doesn't have the subtlety of capturing deep meaning in poetry. Growing up in Hyderabad, India, I'd dreamt about being able to read Persian poetry—in particular the work of Rumi, which has been translated into Urdu and then into English. GPT-4 did it, in one shot. It was not just a machine translation, but something that preserved the sovereignty of poetry across two language boundaries. And that's pretty cool."
---
edit: this comment has some comparisons incl. w/the old Google Translate I'm referring to:
https://news.ycombinator.com/item?id=40243219
Today Google Translate is Gemini, though maybe that's not the "traditional translation tool" you were referencing... but hope there's enough here to discuss any aspect that might be interesting!
edit2: March 2025 comparison-
https://lokalise.com/blog/what-is-the-best-llm-for-translati...
"falling behind LLM-based solutions", "consistently outperformed by LLMs", "Not matching top LLMs"
Comment by kubb 1 hour ago
Telling an LLM to "refine" your writing is just lazy and it doesn't help you learn to express yourself better. Asking it for various ways of conveying something, and picking one that suits you when writing a comment is OK in my book.
The way I see it, people will repeat the same grammar and pronunciation mistakes, and use restricted vocabulary their whole lives, just because learning requires effort, and they can't be bothered.
I can accept that nobody is perfect, as long as they have the will to improve.
Comment by happyopossum 1 hour ago
To me those are the same thing excepting the number of options given to the human...
Comment by kubb 1 hour ago
Comment by Freak_NL 1 hour ago
I don't care if they use an LLM to ask questions about grammar or whatever, as long as they write their own text after figuring out whatever it was they were struggling with.
Comment by nobrains 1 hour ago
Comment by MengerSponge 1 hour ago
AI polished writing shaves away all those weird and charming edges until it's just boring.
Comment by mrcsharp 1 hour ago
Comment by xpe 1 hour ago
First, what "loophole" is the comment above referring to? Spell-checking and grammar checking? They seem both common and reasonable to me.
Second, I'm concerned the comment above is uncharitable. (The word 'loophole' is itself a strong tell of that.)
In my view, humanity is at its best when we leverage tools and technology to think better. Let's be careful what policies we put in place. If we insist comments have no "traces of LLM" we might inadvertently lower the quality of discussion.
Comment by fouronnes3 2 hours ago
Comment by unreal6 2 hours ago
Comment by minimaxir 1 hour ago
Unfortunately (a) is more common, and the backlash against it has been removing the community incentive to provide (b).
Comment by strbean 1 hour ago
But the "This is what ChatGPT said..." stuff feels almost like "Well I put it into a calculator and it said X." We can all trivially do that, so it really doesn't add anything to the conversation. And we never see the prompting, so any mistakes made in the prompting approach are hidden.
Comment by alkyon 1 hour ago
Comment by dormento 1 hour ago
Comment by throwaawy12390 1 hour ago
Comment by xpe 1 hour ago
If we want a human "on the other end", we gotta get to ground truth. We're fighting a losing battle thinking that text-based forums can survive without some additional identity components.
Comment by juleiie 2 hours ago
Look at Reddit… an abundance of rules does not save that place at all. It's all about curating what kind of people your site attracts. Reddit of course is a business, so they don't care about anything other than the max number of ad views.
Small non profit forums should consciously design a site to deter group(s) of people that they do not want.
Comment by jacquesm 1 hour ago
Comment by gleenn 1 hour ago
Comment by juleiie 33 minutes ago
I don't think most people read any sort of TOS, site rules, or end-user license agreements. When was the last time you ever did?
Besides, sometimes it's worth it to keep a rule-breaking user if they are interesting and have worthwhile things to say despite their… theoretical conflict with the site's intended use. Rules are too crude a tool. Especially in the case of AI they are quite nebulous, even in a world where detection would be perfect (it isn't).
What you want is to design a site that pulls in people who value genuine human interaction. Niche sites are already immune to commerce and adversarial bots because no one cares about or knows about them. Well, this site isn't that niche I guess; some corporate astroturfing happens.
I am on one niche subculture social media site and it has a surprisingly well-made design that is paramount to whom it caters to and whom it dissuades. The result is a lack of AI text content even though it isn't obvious at first glance. LGBT flags are everywhere to dissuade the chuds. Israel flags are present to dissuade the annoying politics ppl from reddit. Lots of artsy stuff to speak to genuine creativity.
It looks stupid but it isn’t stupid. It’s actually quite ingenious.
HN is probably already dead as it is too high profile in certain circles to avoid mainstream adversarial AI content.
Comment by layman51 1 hour ago
Once LLM-generated speech or content starts getting into the live answers of Q&A sessions, that would be sad. I know some people try to get through interviews that way, but I think that might be a bit harder not to detect.
Comment by tavavex 1 hour ago
Comment by strangattractor 1 hour ago
Comment by filoleg 1 hour ago
Whether a company/business uses an LLM or a real human to write a particular piece of text, that piece of text is entitled to free speech protections on the basis of the company signing off on it. Not on the basis of how that piece of writing was produced.
Comment by strangattractor 10 minutes ago
Comment by fluffybucktsnek 1 hour ago
Comment by resters 2 hours ago
Comment by gleenn 1 hour ago
Comment by resters 1 hour ago
Comment by SilentM68 2 hours ago
Comment by Normal_gaussian 2 hours ago
This rule will have an effect on the behaviour of the 'good players', and make the 'bad players' a lot easier to spot. Moderation needs this. I see this as stopping a race-to-the-bottom on value extraction from HN as a platform.
Comment by smy20011 2 hours ago
Comment by cogman10 2 hours ago
But those are pretty specific cases (For example, discussing AI in healthcare). That's about the only time where I think it's reasonable to post the AI output so it can be analyzed/criticized.
What's not helpful is that I've been hit by users who haven't disclosed that they are just using AI. It takes a few back-and-forths before I realize that they are just a bot, which is annoying.
Comment by Kim_Bruning 2 hours ago
Not all AI prompting is expanding the prompt.
What if the original prompt is 1000 words, includes 10 scientific articles by reference (boosting it up to 10000) , and the AI helps to boil it down to 100 words instead?
I'd argue that this is probably a rather more responsible usage of the tools. And rather more pleasant to read besides.
Whether it meets the criterion is another thing. But at least don't assume that the original prompt is always better or shorter!
Comment by nitwit005 9 minutes ago
It'd be far better to just have a thread about the best way to get good summaries.
Comment by wildzzz 2 hours ago
Comment by zahlman 35 minutes ago
It's at least as okay as skimming the original documents and not properly reading them.
Comment by Kim_Bruning 55 minutes ago
One of the most important lessons is not to read as many papers as possible. It's weeding out as many as possible so you can spend your limited grey matter reading the ones that actually matter.
And that's where the LLM comes in handy, especially if it's of decent quality. It's a Large Language Model. Chewing through language and finding issues and discrepancies, or simply checking whether a paper matches your ultimate query, is trivial for them.
Comment by Kim_Bruning 2 hours ago
I'm just old enough that I was in the middle of the transition from paper (in primary school in the 80s) to online (starting late 90s)
I say this somewhat tongue in cheek, but obviously people should drive to 3 different libraries across 3 countries and read the journals in their own binders (in at least 3 different languages)
In reality: full-text online is convenient. Having an LLM assist with search and filtering is convenient.
I could go back to the old ways. Would you like me to reply in pen? My handwriting is atrocious.
I really prefer modern tools, though. Not everything older is better. Whether you want to read what I write is up to you.
(edit: Not hyperbole. I live in a small country, and am old enough to still remember the 80's as a kid.)
Comment by zbentley 2 hours ago
I don't expect AI HN responders to out themselves by sharing, but I would be curious to learn if people are prompting anything more involved than just "respond to this on HN: <link>", or running agents that do the same.
Comment by Kim_Bruning 2 hours ago
So technically the prompts involved might expand into megabytes all told. And in the end I formulate a post by myself (to adhere to HN rules), but the prompting can be many many many megabytes and include PDFs, images, blocks of text from multiple sources, and ... you know. Just Doing The Work.
I think this is valid. Previously I would have (and have) (and still do) search google, wikipedia, pubmed, scientific literature, etc. Not for everything. But often. And AI tooling just allows me to do that faster, and keep all my notes in one place besides.
Again, the final edit is typically 90-100% me. (The 10% is if the AI comes up with a really good suggestion.) But my homework? Yes. AI is involved these days.
This should be ok. I'm adhering to the letter and the spirit. My post is me.
Comment by smy20011 2 hours ago
Comment by kingbob000 2 hours ago
Comment by 0xbadcafebee 1 hour ago
Example: "write me an article about hidden settings in SSH". You get back more information than most of HN's previous posts about SSH, in a fraction of the text, and more readable.
Actually, screw it, we should just make a new version of HN that has useful articles written by AI. The human written articles are terrible.
Comment by kunai 2 hours ago
Comment by arendtio 11 minutes ago
I think, in the end, it is less about the tool you use and more about the purpose you use it for. It's more that when you use certain tools, you should be cautious about whether you are using them for the right purpose.
Comment by ezst 1 hour ago
Comment by xupybd 5 minutes ago
Comment by maplethorpe 1 hour ago
Do we not think that other people want to see words, pictures, software, and videos created by humans too?
Comment by brailsafe 1 hour ago
Comment by MeetingsBrowser 1 hour ago
Comment by maplethorpe 44 minutes ago
Comment by MeetingsBrowser 11 minutes ago
Comment by fidotron 2 hours ago
After all, no one knows I'm a dog.
Comment by LeifCarrotson 2 hours ago
When someone posts:
> You could use Redis for that, sure, I've run it and it wasn't as hard as some people seem to fear, but in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.
then the beholder is trusting not just the correctness of that one sentence but all of the experiences and insights from the author. You can't know whether that's good advice or not without being the author, and if that's posted by someone you trust it has value.
An LLM could be prompted to pretend they're an experienced DBA and to comment on a thread, and might produce that sentence, or if the temperature is a little different it might just say that you should start with Redis because then you don't have to redesign your whole business when Postgres won't scale anymore.
Comment by eikenberry 1 hour ago
This implies they know the author and can trust them. If they don't know the author then there is no trust to break and they are only relying on the collective intelligence which could be reflected by the AI.
That is to say that trusting a known human author is very different from trusting any human author and trusting any human author is not that much different from trusting an AI.
Comment by fidotron 2 hours ago
This is my point.
There is no sane endgame here that doesn't end up with each user effectively declaring who they do and don't care to hear, and possibly transitively extending that relationship n steps into the graph. For example you might trust all humans vetted by the German government but distrust HN commenters.
For now HN and others are free to do as they will (and the current AI situation has been intolerable), however, I suspect in the near future governments will attempt to impose their own version of it on to ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.
Comment by AlecSchueler 2 hours ago
This already falls apart though. There are whole categories of things which I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.
Comment by skeledrew 2 hours ago
Comment by AlecSchueler 1 hour ago
Look, I'll give you a loose example: it's not uncommon to see a post making an "error" I recognize from experience. I might take the time to help someone learn more quickly what got me out of that mistaken line of thought. If it's an LLM, why would I care? There's thousands of other people, even other LLMs, that I could be talking to instead.
You've set up a framework here where "mutual understanding" is the end goal but that's just not always what's on the line.
Comment by throwaway2027 2 hours ago
It often is with humans as well.
Comment by AlecSchueler 1 hour ago
Comment by craftkiller 2 hours ago
(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)
Comment by kcguyu 2 hours ago
Comment by resiros 2 hours ago
"I don't fully agree with banning AI-edited comments. Using AI to improve readability and clarity is a reasonable thing to do. A well-structured comment is often much better than a braindump that reads like rambling. AI is quite good at this, and it will probably get better. To illustrate the point, here is how this comment would have looked if edited"
Comment by dustycyanide 2 hours ago
Comment by danbrooks 2 hours ago
Comment by data-ottawa 2 hours ago
Comment by cityofdelusion 2 hours ago
Comment by a_victorp 1 hour ago
Comment by xxs 1 hour ago
While I do edit my comments to fix typos, certain spelling oddities and other peculiarities would be present.
Comment by yesfitz 2 hours ago
The AI comment might be clear, but it sounds like a press release, not a person, and there's nothing to engage with.
Comment by Sharlin 2 hours ago
Comment by BeetleB 2 hours ago
Easier to read ==> More likely to be read.
No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.
Comment by xxs 1 hour ago
Unless you are purposely trained in that specific way of expression, it ain't easier to read.
Comment by BeetleB 54 minutes ago
Comment by Sharlin 1 hour ago
Comment by BeetleB 44 minutes ago
And who is advocating for a more formal register?
Comment by mkl 1 hour ago
Comment by BeetleB 43 minutes ago
https://news.ycombinator.com/item?id=47342324
You're saying removing ambiguity does not make it easier to read? You're saying using a word that means nothing like what you meant to say is easier to read than using the correct word?
Really?
Comment by wmoxam 29 minutes ago
Robot walks into a bar
Orders a drink, lays down a bill
Bartender says, "Hey, we don't serve robots"
And the robot says, "Oh, but someday you will"
Comment by randusername 1 hour ago
I don't think it is a moral failing to use AI to generate writing or to use it to brainstorm ideas and crystallize them, but c'mon, isn't it weird to insist that you need them to write _comments_ on the internet? What happens when the AI decides you're wrongthinking?
Comment by julius_eth_dev 2 hours ago
Comment by gensym 2 hours ago
I can understand why you think this is true, but it is false.
Comment by Kim_Bruning 2 hours ago
Comment by gensym 2 hours ago
In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., actually show what you are thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal. LLM editing destroys that signal.
And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".
Comment by Kim_Bruning 1 hour ago
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)
Comment by fluffybucktsnek 1 hour ago
The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.
Comment by antics9 2 hours ago
By the looks of it, I don't even think I'm replying to a human.
Comment by b40d-48b2-979e 2 hours ago
> By the looks of it, I don't even think I'm replying to a human.
They didn't even bother to remove any of the signals. Perhaps this post is actually a honeypot for these bots.
Comment by throw310822 2 hours ago
Comment by bondarchuk 2 hours ago
Comment by throw310822 2 hours ago
Comment by bondarchuk 2 hours ago
Comment by fsloth 2 hours ago
Claude's output is _totally different_ from pasting a quote from Wikipedia.
The latter has the potential to be edited and reviewed by global subject experts.
Claude's output totally depends on what priors you gave it, and while you may have high confidence given that context, no third party should.
Comment by throw310822 2 hours ago
Comment by fsloth 2 hours ago
If you feel like it, sure, chat with Claude to build your insight. Then write what you think _yourself_.
If you want to introduce references, use URLs to non-AI-generated content.
I mean as an HN protocol.
HN is supposed to be interesting.
LLM output specifically is not interesting because everyone else can generate roughly the same output.
Comment by bakugo 1 hour ago
Comment by desireco42 2 hours ago
Comment by nkzd 2 hours ago
Comment by jamesmiller5 2 hours ago
> Your arguments will come of as stronger to the reader.
That is persuasion, not authenticity, to the OP's point.
Typed without a spellchecker :).
Comment by jacquesm 2 hours ago
And that's where I think the guidelines could be expanded a bit more to restore the balance. Something along the lines of: 'HN is visited by people from all over the world and from many different cultural and linguistic backgrounds. Please respect that and realize that a native English and Western background should not be automatically assumed. It is the message that counts, not the form in which it was presented.'
Comment by altairprime 1 hour ago
(For example: If I’m trying to express a point about how we shouldn’t assume that dinner isn’t “her duty” but is instead “our duty”, a French-like aphorism expressed in English literally as “the chicken won’t fly into the oven unprompted” could plausibly be AI-translated instead as “don’t count your chickens before they hatch”, doing catastrophic damage to the point. To a machine translator those two aphorisms are not distinctive; but they are, even if it’s a weird expression in common U.S. English.)
Comment by darkwater 2 hours ago
Comment by wasmitnetzen 9 minutes ago
Funnily enough, I've noticed myself getting worse with they're/their the more I use English (which is my third language).
Comment by d4mi3n 2 hours ago
Comment by egeozcan 2 hours ago
That’s true. I’m fluent in German, but there’s still a difference between me and a native speaker. I’ve often seen my ideas dismissed, only for the exact same point to be praised later when a native speaker expresses it more clearly.
Comment by polotics 1 hour ago
Comment by rrr_oh_man 1 hour ago
Comment by polotics 1 hour ago
I now expect malapropism, hacker curtness, and implicits: TAIDR is the new TLDR.
Comment by JumpCrisscross 1 hour ago
Write it broken.
Broken and true is more authentic than polished and approximately so. When I see an AI-generated comment or email, I catch myself implicitly assuming it is—best case—bullshit. That isn’t the case if the grammar is off. (If anything, it can be charming.)
Comment by vharuck 1 hour ago
Besides, this isn't an English poetry forum. Language here is like gift wrapping for an idea: pleasant if pretty, but not the most important thing.
Comment by AnimalMuppet 1 hour ago
That may be a defect in me. Maybe I should make a stronger effort on such comments. But I suspect I'm not the only one who does that, and at that point it becomes an issue that affects the community as a whole.
Comment by JumpCrisscross 1 hour ago
At which point you’d be fully justified in using an AI to decode their text. I still think that’s a better world than pre-filtering.
Comment by officeplant 2 hours ago
Post the translation as best you can manage, and below it put the same comment in your original language. If someone has qualms with your comment having broken English or mistranslations, they are welcome to run bits of the original language through a translator themselves.
We're all here to talk about tech, and we aren't all perfect little English robots.
Comment by Willish42 1 hour ago
I've seen enough GPT-generated slop that I find its style of writing very off-putting, and find it hurts the perceived competence or effort of the author when applied in the wrong context. I'm not sure if direct translation tools serve a better purpose here, but along with the other commenters, I personally find imperfect speech that was actually written "by hand" by the author easier and more straightforward to communicate with despite the imperfections. Also, non-ESL speakers make plenty of mistakes with grammar, spelling, etc. that humans are used to associating with "style" as authentic speech.
It can also become a crutch for language learners of any age, regardless of their primary language, one that inhibits learning or finding one's own "style" of speech.
Comment by cityofdelusion 2 hours ago
The human touch of someone's real voice, rather than a false veneer, will carry more weight very soon.
Comment by eszed 1 hour ago
I've never sent or posted anything AI-written, beyond a pro-forma job description - because I don't know the domain-specific conventions, and HR returned my draft to me with the instruction to use ChatGPT, which I think amusing, but whatever: the output satisfied them, and I was able to get on with my day.
I occasionally experiment with putting something I've written through an LLM, and it's inevitably a blandifying of my original, which doesn't really say what I intended. But maybe that's good? My wife thinks I'm sometimes too blunt, and colleagues don't always appreciate being told technical details.
I also appreciate individuated writing - including the posts by people on this board who are not native speakers. Grammatical mistakes seldom inhibit understanding when the writing has been done with care.
I'm rambling at this point, but it's because I'm truly uncertain how these cultural changes will turn out, and (an old man's complaint, since time immemorial!) pretty sure I'll end up one of the last of the dinosaurs, clinging to my manually written "voice" long after everyone else in the world has come to see my preferences as quaint.
Comment by ThrowawayR2 1 hour ago
Comment by phs318u 1 hour ago
This is tragic. I write English well and will employ grammar and word choice effectively to make an argument or get a point across. English was my best subject at school 45 years ago despite a career in tech. In fact, I’d suggest that my career as an architect and the need to convey concepts and argue trade-offs with stakeholders of varying backgrounds has honed that skill. Should I now dumb down my language or deliberately introduce errors in order to satisfy the barely literate or avoid being “detected” as an AI? (as if the latter were possible. It’s an arms race).
Comment by JumpCrisscross 1 hour ago
Language is a tool. If it wins the argument, yes. I’ve absolutely gone back through drafts to tighten up language and reduce word complexity. And if I’m typing with someone who frequently typos, I’ll sometimes reverse the autocorrect. Mostly as a joke to myself. But I imagine it helps me come across as less stuck up. (Truth: I’m a bit stuck up about language :P.)
Comment by phs318u 1 hour ago
While this is true, it is not just a tool. Or, I should say it’s a tool with far greater utility than just winning an argument or making a localised point. Language is how we think, and the ability to reason well is absolutely dependent on our skill with language.
Language is the mark of humanity in the sense that how else can I convey to you a fragment of my inner state? My emotions, my feelings, my desires. The language of poetry and literature. That which sparks an emotional response in another.
Dumbing down language is dumbing down period.
Comment by JumpCrisscross 1 hour ago
I agree. But I don’t always see it as dumbing down. James Joyce’s Portrait starts out with a lot of nonsense, that doesn’t mean it’s dumb or dumbed down. It’s just communicating something that is best described that way. Even to an erudite audience.
I have expertise in some topics. I don’t think of communicating that in lay terms to be dumbing down. The opposite, almost: finding good analogies and expressing them clearly is a lot of fun, even if what comes out the other end isn’t particularly sophisticated.
Comment by antonvs 1 hour ago
Comment by shadowgovt 2 hours ago
Comment by skywhopper 2 hours ago
Comment by tylerritchie 2 hours ago
Comment by dbacar 2 hours ago
Comment by tadfisher 29 minutes ago
Comment by AnimalMuppet 1 hour ago
Comment by chrisweekly 2 hours ago
But I have some concerns about suppression of comments from non-native English writers. More selfishly, my personal writing style has significant overlap with so-called "tells" for AI generated prose: things like "it's not X, it's Y", use of em-dashes, a fairly deep vocabulary, and a tendency toward verbosity (which I'm striving to curb). It'd be ironic if I start getting flagged as a bot, given I don't even use a spell-checker. Time will tell.
Comment by kccqzy 1 hour ago
Comment by TomatoCo 2 hours ago
And of course, a more limited exception for posts about LLM behavior. It might be necessary for people to share prompts and outputs to discuss the topic.
Comment by getnormality 2 hours ago
The rule just makes the will of the community clear to those who want to respect it.
Comment by ubauba 40 minutes ago
But the argument of "If I wanted to read what an LLM thinks, I could just ask it" assumes that prompts are basically equivalent, which is not the case.
There's a risk of reducing everything to Human -> authentic and AI -> fake. Some people's authentic writing sounds closer to LLMs, and detectors are unreliable.
The problem is not so much AI generated content that has an interesting point of view generated from unique prompts, but terrible content produced for metrics to harvest attention, which predates AI.
Anyways, happy posting!
Comment by quirk 54 minutes ago
Comment by rob 2 hours ago
1. Prevent any account from submitting an actual link until it reaches X months old and Y karma (not just one or the other.)
2. Don't auto-link any URLs from said accounts until both thresholds in #1 are met, so they can't post their sites as clickable links in comments to get around it. Make it un-clickable or even [link removed] but keep the rest of the comment.
3. If an account is aged over X months/years old with 0 activity and starts posting > 2 times in < 24 hrs, flag for manual review. Not saying they're bots, but an MO is to use old/inactive accounts and suddenly start posting from them. I've seen plenty here registered in 2019-2021 and just start posting. Don't ban them right away, but flag for review so they don't post 20 times and then someone finally figures it out and emails hn@.
4. When submitting a comment, check the last comment timestamp and compare. Many bots make the mistake of commenting multiple detailed times within sixty seconds or less. If somebody is submitting a comment with 30 words and just submitted a comment 30 seconds ago in an entirely different thread with 300 words, they might be Superman. Obviously a bot. (See the sketch after this list.)
5. Add a dedicated "[flag bot]" button to users that meet certain requirements so they don't need to email hn@ manually every time. Or enable it to people that have shown they can point out bots to you via email already. Emailing dozens of times a day is going to get very annoying for those that care about the website and want to make sure it doesn't get overrun by bots.
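To make point 4 concrete, here is a minimal sketch of that timestamp-and-length check in Python. The Comment record, the moderation hook it would plug into, and the word/gap thresholds are all hypothetical illustrations for this comment, not anything HN actually runs.

    # A rough sketch only (not HN's actual code): flag a "detailed" comment that
    # lands implausibly soon after the same author's previous comment.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Comment:            # hypothetical record handed to a moderation hook
        author: str
        body: str
        created_at: datetime

    def looks_superhuman(prev: Comment, new: Comment,
                         min_words: int = 30, max_gap_seconds: int = 60) -> bool:
        """True if `new` is a detailed comment posted within `max_gap_seconds`
        of the author's previous comment, e.g. 30 words arriving only 30 seconds
        after a 300-word comment in a different thread."""
        gap = (new.created_at - prev.created_at).total_seconds()
        is_detailed = len(new.body.split()) >= min_words
        return is_detailed and gap < max_gap_seconds

A hit here would only queue the account for the manual review described in point 3, not trigger an automatic ban, and the thresholds are the kind of thing the mods would have to tune against real data.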
Comment by TZubiri 2 hours ago
Comment by zahlman 27 minutes ago
YouTube comment spam has already been doing this for years. Check any video from a reasonably popular creator on any topic related to personal finance; the comments will be full of fake conversations between bots introducing a topic related to the video, and then talking about how such and such a person (whom you can look up by name on Telegram or Signal or whatever) helped solve some serious problem (or invested their money with an implausibly high rate of return). The fake nature of it is usually fairly obvious from the way that the bots make sure you see the name repeated several times with unsolicited, glowing testimonials.
But I had always assumed this was meant to trick actual people, rather than LLMs. Thanks for the food for thought.
Comment by rob 2 hours ago
Comment by blef 35 minutes ago
Nonetheless I like this policy as well.
Comment by ma2kx 46 minutes ago
Comment by Sajarin 1 hour ago
Bit of a shameless plug, but I wrote an HN AI comment detector game[0] with AI, and most of my friends and fellow HN users who tried it out couldn't detect them.
Comment by tomhow 1 hour ago
This is another reason why it's good to email us (hn@ycombinator.com) rather than commenting when you see generated comments.
Comment by zahlman 32 minutes ago
Comment by happyopossum 1 hour ago
Some of us were trained or self-taught to write that way. Even "it's not X, it's Y" is a legitimate and subjectively effective communication tool, and there are those of us who, by training or by modeling, have picked it up as a habit. It's not AI that started this; AI learned it from us.
Crap - I just did it, didn't I? Awww double crap! Did it again...
Comment by salicaster 1 hour ago
So I think it's fine to scrutinize commenters who write that way.
Besides, the biggest offense of AI speak is making everything seem like a grand epiphany and revolutionary discovery. Aka engagement bait.
Comment by CactusBlue 2 hours ago
Comment by loeg 22 minutes ago
Comment by 0xbadcafebee 1 hour ago
AI is a tool. You can use it constructively, like Grammarly, or spellcheck. You don't need to be afraid of it.
Comment by salicaster 1 hour ago
It can't. It will rewrite anything you give it.
> it can verify your claims before posting
It can't.
> You don't need to be afraid of it
Nobody is afraid of it. It's annoying. General population cannot be trusted to use it in whatever idealistic way you are imagining.
Comment by daft_pink 2 hours ago
Comment by MeetingsBrowser 2 hours ago
Comment by minimaxir 2 hours ago
Comment by himata4113 1 hour ago
Comment by unsignedint 2 hours ago
Comment by RealityVoid 2 hours ago
Comment by aethrum 2 hours ago
Comment by hendersonreed 2 hours ago
When using LLMs to write, the temptation to avoid actually thinking about what you're communicating is too much for most people.
Comment by fc417fc802 1 hour ago
Comment by peacebeard 2 hours ago
You don't lose your voice if you ask for advice and manually incorporate the suggestions you agree with.
You might lose your voice if you say "Improve my comment to make it better" and copy-paste the result without another thought.
Comment by Griffinsauce 2 hours ago
Keep polishing and everything eventually turns into a smooth shiny ball. We need texture, roughness, edges.
Comment by BeetleB 2 hours ago
An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.
Comment by recursive 1 hour ago
Comment by causal 2 hours ago
Comment by aperrien 2 hours ago
Comment by sdenton4 2 hours ago
Comment by adampunk 2 hours ago
Comment by goostavos 2 hours ago
I'm confused by this need(?) desire(?) to polish things that are irrelevant.
Comment by altairprime 2 hours ago
AI is being used as a substitute for skills development when it costs nothing but time to get better. If you’ve reached a plateau with the above method, go find an article or book or interview about editing, pay attention to it and take notes, rinse/repeat.
Spellcheckers will catch grossly obvious errors, but not phonetic typos. AI grammar tools will defang, weaken, soften, neutralize your tone towards the aggregate boring-meh that they incorporated at training time.
Each person will have to decide whether they want individuality or AI-assisted writing for themselves. Sure, some will get away with it undetected, but that’s a universal statement about all human criteria of any kind, and in no way detracts from the necessity of drawing a line in the sand and saying “no” to AI writing here.
Consider the Borg. Everyone’s distinctiveness has been added to the Collective. The end result is mediocre (they sure do die a lot), inhuman (literally), and uniform (all variation is gone). It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.
Comment by ordu 41 minutes ago
Pffff... I'm not going to install LibreOffice for that, or figure out how to make Gdocs work with uBlock.
There is a much easier way. Open an LLM chat, type "Proofread please for grammar, keep the wording and the tone as it is, if it doesn't mess with grammar. Explain yourself." and then paste your text. I don't really know what the tools you mentioned do, but any "free" LLM on the Internet will point out things like missing articles, or messed-up tenses in complex sentences.
You recommend choosing self-improvement, but I just don't believe I can figure out how to use articles. With tenses I think I can learn how to do it, but I'm not going to. I remember there is some obscure rule how to choose the right tenses, but I was never able to remember the rule itself. I'm bad with rules, it is the reason I chose math as my major. There are almost no rules in math, you are making your own rules. The grammars of languages are not like that, they have rules which can't be easily inferred, you need to remember them. Grammars have exceptions to rules, and exceptions to exceptions, and in any case they are not the rules, but more like guidelines, because people normally don't think about rules when they are talking or writing.
No way I'm starting to learn rules now, I'd better continue to rely on my skills. But LLMs can help me see when my skills fail me.
> It’s your right if you desire to join the Collective and be a uniform lego brick like the others, but then your no-longer-fully-human posts are no longer welcome at HN.
I believe you (like most of the fervent supporters of the rule here) have gone too far into philosophy with this, too far from reality and practice. You can't detect AI in my messages, because they are mine. Even when I ask an LLM to find words for me, it is me who picks one of the proposed alternatives, but mostly I manage without wording changes. I transfer the LLM's edits by hand by editing the source message, so nothing unnoticed can slip into the final result. If I took the effort to ask an LLM to proofread, it means I care about the result more than usual, so I'm investing more effort into it, not less.
Comment by altairprime 27 minutes ago
Comment by dgacmu 2 hours ago
(As an experiment, I took that paragraph and threw it into gemini to ask for spell and grammar checking. It yelled at me completely incorrectly about saying "I'm not dang". Of its 4 suggestions, only 1 was correct, and the other 3 would have either broken what I was trying to say or reduced the presence of my usual HN comment voice. So while I said the above, perhaps I'm wrong and even listening to the damn box about grammar is a bad idea.)
That said, I often post from my phone and have somewhat frequent little glitches either from voice recognition or large clumsy thumbs, and nobody has ever seemed to care except me when I notice them a few minutes after the edit button goes away.
Comment by the_af 2 hours ago
I've never, ever, ever ever ever, seen anybody complain about spelling mistakes in a comment here. As long as you can understand the comment, people respond to it.
Comment by BeetleB 1 hour ago
I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.
And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.
Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.
Comment by the_af 1 hour ago
Spellcheckers exist, you don't need an AI to change your voice.
Also, if you have standards, you can always train yourself to spell better!
Comment by BeetleB 40 minutes ago
How is using an AI to spell check changing my voice?
Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM who had access to the thread would have alerted you to a basic error you're making.
> Also, if you have standards, you can always train yourself to spell better!
"You can always ..." is not an argument against alternatives.
Comment by vova_hn2 2 hours ago
At least that was the case before LLMs became a thing, now I'm not sure anymore.
Comment by Kim_Bruning 2 hours ago
Comment by the_af 2 hours ago
And why would you want to "improve your writing" for an HN comment? I think people here value raw authenticity more than polished writing.
Comment by BeetleB 1 hour ago
Lots of people break HN guidelines. I see it virtually every day.
> And why would you want to "improve your writing" for an HN comment?
Some people like to write well regardless of the medium. Why is that a problem for you?
> I think people here value raw authenticity more than polished writing.
Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.
Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.
Comment by the_af 1 hour ago
Yes, and AI won't help here. People will use AI to better break the guidelines.
> Go and study writing and psychology
Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't send me off to study anything; you know what they say about ASSuming.
> Some people like to write well regardless of the medium. Why is that a problem for you?
HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.
> For anything of value, it's rare that your first attempt reflects what you meant to say.
You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.
Comment by Kim_Bruning 1 hour ago
The other important thing you can do is have an AI check your claims before you post. Even with google and pubmed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post. (even if imperfectly!) .
I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually go through and get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?
Comment by BeetleB 36 minutes ago
AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.
> HN is more like talking than writing.
Says you. Many disagree.
> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.
Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.
> Imagine if your friend AI-edited their speech in real-time as they talked to you.
When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.
Comment by tonyarkles 2 hours ago
I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.
Comment by bryanlarsen 2 hours ago
For example, use "literally" for exaggeration rather than in the original meaning of the word and you'll likely trigger somebody.
Comment by the_af 1 hour ago
It's against the HN guidelines to focus on punctuation, spelling, etc, as long as the comment is understood.
And, in any case, it's now against the guidelines to write using an AI :)
Comment by cogman10 2 hours ago
Comment by everybodyknows 2 hours ago
Comment by ghxst 1 hour ago
Comment by shredswap 1 hour ago
Comment by dev_l1x_be 55 minutes ago
Comment by chapz 1 hour ago
Comment by dormento 1 hour ago
Comment by AndriyKunitsyn 14 minutes ago
Comment by nineteen999 1 hour ago
Comment by Imustaskforhelp 2 hours ago
In my observation, there have recently been quite a lot of new AI-generated comments in general. Like not even trying to hide it, with full em-dashes and everything.
I do feel like people are gonna get sneaky in future but there are going to be multiple discussions about that within this thread.
But I find it pretty cool that HN takes a stance about it. HN rules essentially saying Bots need not comment is pretty great imo.
It's a bit of a cat-and-mouse problem, but so is buying upvotes in places like Reddit. HN, with its track record of decades, might let one or two suspicious actions through, but long term it feels robust. I hope the same robustness applies in this case as well.
Wishing moderation luck that bad actors don't try to take it as a challenge and leave our human community to ourselves :]
Another point I'd like to make: if this succeeds, we can also stop asking "did you write your comment with an LLM?" and making similar remarks, which I too make from time to time when I see someone clearly using AI. Some false positives happen as well (they have happened to me, and I see them happen to others), and they also derail the discussion. So HN being a place for humans, by humans can fix that issue too.
Knowing dang and tomhow, I feel somewhat optimistic!
Comment by altairprime 2 hours ago
Similarly: If you see people making accusations of guidelines violations in a discussion, email the thread link to the mods with a subject like “Accusations in post discussion” and ask them to evaluate them for mod response; they’re always happy to do so and I’m easily clocking in a couple hundred emails a year of that sort to them.
It doesn’t take much to make HN better! And it only takes a moment to point out an overlooked corner of threads for mod review. No need to present a full legal case, just “FYI this seems to violate guideline xyz” is at minimum still helpful.
Comment by bakugo 1 hour ago
Even if you believe that prohibiting this is necessary to avoid what one might consider "AI witchhunting", bots are so prevalent now that being expected to communicate the existence of each one via email is unrealistic, for both the reporting users and the moderators. I think it's finally time to consider some sort of on-site report system.
Comment by altairprime 47 minutes ago
That's certainly a consequence of how the site operators choose to accept user reports for the mods, yes, but it's sometimes treated as an excuse not to write the emails to the mods. They can flag off the thread, autocollapse it so it doesn't take up discussion space for future readers (such as those at work offline for a 3-day IT shift in a secure bunker or whatever), et cetera.
> commenting something like "this is a bot account" is done primarily to inform other users that might not notice
It’s a nice sentiment, but that’s also expressly forbidden by the guidelines/faq (“Please don't post insinuations”, which I’ll suggest to them should be extended to include AI accusations or something of the sort), and I tend to report those accusations as the ‘opening’ guidelines violation so that mods can step in and make their own judgment about the matter before mobthink kicks in. A repeated pattern of accusations of guidelines violations in comments is eventually going to attract mod censure, so I advise against it, no matter how kindly the intent.
> it's finally time to consider some sort of on-site report system
I do agree that it’s clumsy and I make a point of saying that to them about every year or so. Perhaps your email to them about it will be the one that persuades them! I remain ever optimistic.
Comment by chrystianpl 2 hours ago
Comment by surround 1 hour ago
https://news.ycombinator.com/item?id=45591707
For dyslexia, use a spell-checker. For grammar, use a basic grammar checker, like the kind that has shipped with MS Word since the 1990s. But don't let a style-checker or an LLM rob you of your own voice.
Comment by tartoran 2 hours ago
Comment by shnpln 2 hours ago
Comment by giancarlostoro 2 hours ago
Comment by 113 2 hours ago
Comment by simonw 2 hours ago
Comment by nablaone 2 hours ago
Comment by nottorp 2 hours ago
I wonder if an explicit expansion of that rule would help. Maybe in all caps. Saying "picking on grammar is a shallow dismissal".
Comment by rdiddly 1 hour ago
Comment by nottorp 1 hour ago
The specific problem here was that the poster was being downvoted for grammar. Of course, that's how he could have read it.
Comment by johndough 2 hours ago
But I can see why the HN guideline is formulated that way. My students often use the excuse "I did not use AI for writing! I wrote it myself! I only used AI to translate it!" Simply disallowing all kinds of AI usage is much easier than discussing for the thousandth time whether the student actually understands what they have written.
Comment by Adiqq 1 hour ago
Like, there is this computer game whose authors used some AI-generated models or something like that, but only during prototyping; later they were replaced by proper models. No one would know about that if the authors hadn't mentioned it. So if someone writes in their own words what an AI generated for them, is it still an argument made by a human or by the AI? What if someone uses AI only as a placeholder and replaces all of that content, so you never actually see any AI output, even though it was used in the process?
For me, the premise that using AI in any form invalidates your work starts from a logical fallacy, so such arguments against using AI are weak. It's like saying your work is wrong because you used a calculator: your calculations can't be right if a machine did them, because it must have made a mistake, or it's wrong for ethical reasons, or whatever.
Work generated by AI can easily be poor, because these models make mistakes and tend to repeat themselves in certain ways, but is it wrong that I'm writing this comment with a keyboard instead of writing letters with a pen? Is it wrong when I use an IDE or some CLI to write code with AI, instead of using vim and typing everything on my own? Is it wrong that someone uses spell-checking?
In the end, it doesn't matter who seems smarter when you're expected to use AI at work. Reality shows you the actual expectations.
Comment by johndough 31 minutes ago
Anyway, my university did not ban AI, and now most students have degraded to proxies between teaching assistants and ChatGPT.
Comment by chorkpop 2 hours ago
Comment by 3rodents 1 hour ago
Comment by Adiqq 1 hour ago
To me it sounds like yet another form of gatekeeping: either you sound human or you're not good enough to post/comment. Like, really? How is that not the genetic fallacy? It doesn't matter what someone thinks, because they used AI to make their thought clearer, so their whole argument is trash? It has to hurt to read and write if your English isn't perfect, and your work is seen as inferior based on superficial factors like proper grammar and style?
It's a dumb crusade. I did not use AI to write this comment, but I hate it when people try to monopolize the truth and decide who is "better" or "smarter" based on irrelevant facts. Not using AI doesn't make anyone superior. Using AI doesn't make you superior either. Focus on what you mean, because that's what matters.
Comment by desireco42 2 hours ago
Comment by jonathrg 2 hours ago
Comment by whynotmaybe 2 hours ago
That's the richness behind the upvote/downvote, which also tends to create echo chambers, because you soon learn what causes downvotes.
I've personally noticed downvotes whenever I mention Apple negatively.
Comment by throwpoaster 1 hour ago
Comment by Imustaskforhelp 2 hours ago
But at the end of the day, the rationale behind it is that your comments are your words, and I find that liberating. Some people won't appreciate it and some people will, but the same goes for AI-edited posts too.
(I would also recommend that, if you are still worried, you mention in your Hacker News profile that you have dyslexia; people can be much more forgiving when they have more context. We are all human, after all, and I would like to think that we understand each other's struggles.)
Comment by nonameiguess 2 hours ago
Comment by nsxwolf 2 hours ago
Comment by metalman 2 hours ago
stump along, cut your own path, or fuck right off
real life will eat you otherwise
I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓
Comment by jacquesm 1 hour ago
> stump along, cut your own path, or fuck right off
> real life will eat you otherwise
> I mean holly shit, you actualy want to hide behind an automated echoing device so that you wont get, well, what is happening to my post as sooooon as I press↓
You deserve a ban for this.
Comment by wetpaws 2 hours ago
Comment by hellcow 2 hours ago
Invites could be earned at karma and time thresholds, and mods could ideally ban not just one bad actor but every account in the invite chain if there’s bad behavior.
Comment by foxfired 2 hours ago
I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time only to make a quip.
Comment by armchairhacker 2 hours ago
Comment by Kim_Bruning 2 hours ago
https://xkcd.com/386/ "Duty Calls"
Comment by waynerisner 1 hour ago
Comment by salicaster 1 hour ago
Consider a much more cynical view where people are strictly self-interested and use these tools to garner engagement and self-promotion. Good chance the meaning did not originate from the person. And now these people have tools to outsource their parasitic intentions.
Comment by egeozcan 2 hours ago
To be clear, I'm neither proud nor embarrassed by this. I'm just trying to communicate in the most efficient way I can.
I'm not sure how I feel about this new rule.
Comment by drakythe 1 hour ago
If you think your writing could use improvement, then write your comment and let it sit for a few minutes before re-reading it and the comment you are replying to; make your edits and then post it. This gives your brain time to reset and maybe spot something you didn't catch earlier.
Comment by mattas 2 hours ago
Are there any places in life where conversation is _not_ intended to be between humans?
Comment by hoppyhoppy2 1 hour ago
Comment by drakythe 1 hour ago
Comment by recursive 1 hour ago
Comment by nickvec 58 minutes ago
Comment by qaid 1 hour ago
I hope to see more bots on there (and not here)
Comment by rdiddly 1 hour ago
(Sorry, couldn't resist.)
Comment by GodelNumbering 1 hour ago
@dang, if you read this, why don't we implement honeypots to catch bots? Like having an empty or invisible field while posting/commenting that a human would never fill in
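For illustration, a minimal sketch of what such a honeypot check could look like server-side (the field name and function are invented for this example; this is not anything HN actually implements):

    # Hypothetical honeypot: the comment form includes a field hidden from
    # humans (e.g. via CSS), so only automated submitters tend to fill it in.
    HONEYPOT_FIELD = "website"

    def looks_automated(form_data: dict) -> bool:
        """True if the hidden honeypot field was filled in."""
        return bool(form_data.get(HONEYPOT_FIELD, "").strip())

    # A submission that filled in the hidden field gets quietly dropped.
    submission = {"text": "Great article!", "website": "https://spam.example"}
    print("drop" if looks_automated(submission) else "accept")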
Comment by tomasz-tomczyk 1 hour ago
Comment by tavavex 1 hour ago
Comment by tristanb 28 minutes ago
Comment by ex-aws-dude 2 hours ago
Comment by tsukikage 2 hours ago
Comment by sbtyusun 44 minutes ago
Comment by oramit 1 hour ago
Comment by sebmellen 1 hour ago
Comment by xupybd 1 hour ago
Comment by adamsmark 2 hours ago
You may also notice that I don't have much of a comment history here. I mostly comment on Reddit.
Here's where I draw the line. If you are not reading the text that is produced by the LLM, then I don't want to read whatever it is that you wrote. I will usually only do one or two iterations of my comment, but afterwards I will usually edit it by hand.
Technically, there is light AI editing of this comment because FUTO keyboard has the ability to enable a transformer model that will capitalize, punctuate, and just generally remove filler words and make it so that it's not a hyper-literal transcription.
Comment by zarzavat 1 hour ago
I want the raw tokens straight out of your head. Even if they are lower quality, they contain something that LLMs can never generate: authenticity. When we surrender our thoughts to a machine to be sanitized before publication, we lose a little of what it means to be human, and so does everyone who reads what we write.
Part of the joy of reading is to wallow in a writer's idiosyncrasies. If everybody ends up writing the same way, AI companies will have succeeded in laundering all the joy from this world.
Comment by sigmar 1 hour ago
Comment by handoflixue 1 hour ago
Comment by ZunarJ5 1 hour ago
Comment by tyleo 2 hours ago
I definitely agree regarding AI-generated comments.
Whatever the rules are, I’m happy to play by them.
Comment by jacquesm 2 hours ago
That's the spirit!
Comment by benbristow 1 hour ago
Comment by humanfromearth9 1 hour ago
Comment by doe88 1 hour ago
Comment by timacles 42 minutes ago
Except it's bullshitting the whole time, while you think this is what you wanted to convey.
Not sure where I'm going with this, but my point is that if I pasted this comment into ChatGPT, it would make up an argument I never made to support my case, which didn't exist in the first place. Exploring things is useful, but just be aware it's designed to pull BS out of its ass and is distinctly not interested in exploring truth or having a real conversation.
Comment by girvo 1 hour ago
Comment by altairprime 1 hour ago
If you discuss an idea with AI, then close the window and write a post about how you came up with the idea, got stuck, decided to ping an AI for unstuck-ness, describe how the AI’s response got you unstuck, and then continue writing about your idea, that’s not going to be necessarily treated as AI-assisted writing — but people are going to be extremely suspicious of you, because the perception is that 99.9% of people who use chatbots go on to submit AI-assisted writing. That’s probably more like 90% in reality but it’s something to be aware of as you talk about your experiences.
If you use AI in your process and don’t disclose it when writing about your idea and process, that’s generally viewed as lying-by-omission and if egregious enough you could end up downvoted, flagged, and/or banned (see also the recent video game awards / AI usage affair). Better to disclose it with due care than to hide it.
Comment by HanClinto 2 hours ago
That said, I also wouldn't hate seeing an official playground, cordoned off, where bots are welcome to operate. I.e., like Moltbook, but for HN...? I realize this could be done by a third party, but I wouldn't hate seeing Y Combinator take a stab at it.
Maybe that's too experimental and would be better left to third parties to implement (I'm guessing there's already half a dozen vibe-coded implementations of this out there right now) -- it feels more like the sort of thing that could be an interesting (useful?) experiment, rather than something we want to commit to existing in perpetuity.
Comment by munk-a 2 hours ago
For the time being, at least, HN is a single uncategorized (mostly; let's ignore search) message board - splitting it into two would cause confusion and drastically degrade the UX.
Comment by Kim_Bruning 1 hour ago
This might be roughly what you're looking for?
Comment by dpweb 1 hour ago
I was thinking, this argument is suspiciously cogent!
Comment by capricio_one 2 hours ago
Comment by nwhnwh 2 hours ago
Comment by capricio_one 2 hours ago
It came up a few weeks ago. Show HN is already disabled for new accounts as of this week, I think(?), but IMHO stricter measures need to be placed on account creation, otherwise there's no real enforcement.
Comment by s_dev 2 hours ago
Forum mechanics have always shaped discourse more than policies. Voting changed everything. The response to LLMs should be mechanical not moral — soft, invisible weighting against signals correlated with generated text. Imperfect but worth the tradeoff, just like voting.
https://claude.ai/share/9fcdcba8-726b-4190-b728-bb4246ff82cf
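As a rough illustration of the kind of soft weighting meant above (the classifier signal and the formula are assumptions for the sake of the example, not anything HN implements):

    # Scale a comment's effective score down as a hypothetical
    # generated-text signal (0.0 to 1.0) rises.
    def weighted_score(votes: int, p_generated: float, penalty: float = 0.5) -> float:
        return votes * (1.0 - penalty * min(max(p_generated, 0.0), 1.0))

    print(weighted_score(10, 0.9))  # 5.5: strongly down-weighted
    print(weighted_score(10, 0.1))  # 9.5: barely affected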
Comment by bronlund 1 hour ago
Comment by haunter 1 hour ago
> Off-Topic: Most stories about politics
Comment by minimaxir 1 hour ago
Comment by haunter 1 hour ago
“most”
“extremely significant”
What’s extremely significant for someone is off-topic for someone else, and vice versa
Comment by minimaxir 1 hour ago
Comment by zahlman 20 minutes ago
Comment by ferguess_k 1 hour ago
Comment by resters 2 hours ago
Comment by lisp2240 2 hours ago
Comment by zahlman 19 minutes ago
To my understanding, that has a lot to do with why the site remains so low-tech (and avoids, in large part, the appearance of a "social network").
Comment by phs318u 1 hour ago
Re-reading the HN guidelines, each seems individually reasonable, yet collectively I’m worried that they create an environment where we can take issue with almost anyone’s comments (as per Cardinal Richelieu’s famous quote: “Give me six lines written by the most honorable person alive, and I shall find enough in them to condemn them to the gallows.”)
Really, all the rules can be compressed into one dictum: don't be an arsehole. And yet the free speech absolutists will rail against the infringement upon their right to be an arsehole. So where does that leave us? Too many rules lead to suppression of even reasonable speech, while too few lead to a “flight” of reasonable speech. End result: enshittification.
Comment by RobRivera 26 minutes ago
Comment by tejohnso 2 hours ago
Comment by shadowgovt 2 hours ago
Comment by zahlman 17 minutes ago
Comment by tejohnso 1 hour ago
So if your layer of cleanup is AI assisted, then it's in violation.
Part of the problem I was getting at is that the requirement of "Don't post AI edited ..." is stricter than necessary to ensure the outcome that "HN is for conversation between humans" because an AI edited post is still a human post.
Anyway, I suspect a lot of people are going to ignore that guideline and will feel free to use their "layer of cleanup" whether it's a basic spellchecker or an LLM, or whatever else they choose, and most people aren't going to be able to tell anyway. The guideline is unnecessarily strict in my opinion, but it doesn't matter in the end.
Comment by shadowgovt 51 minutes ago
But I think you and I are on the same page: we both know this isn't a rule that's there to be hard-and-fast enforced because that's completely infeasible. The definition of "AI" is a moving target, as is "generated."
It's a rule that's there to have a rule so when the real problem is "Hey, your content is too low-quality but you dump volumes of it and it's clearly following a procedural template" the mods can call that "AI" and justify limiting or banning the account on prior-stated rules. Which is fine, but I'm glad to call it what it is.
(One unfortunate oversight: we haven't added "posts sounding like they are AI-generated" to the "Please don't complain about" set. So expect that to become a common refrain now, since the incentives to make the complaint against disliked comments are obvious... at least until that becomes annoying enough to justify a rule.)
Comment by dmbche 2 hours ago
Comment by badgersnake 3 minutes ago
Comment by boramalper 2 hours ago
Comment by Kim_Bruning 1 hour ago
Comment by jsnell 2 hours ago
I've been pretty wary about flagging AI slop that wasn't breaking other guidelines, and by default this will probably make me do it more. But it is a lot harder to be certain about something being AI-written than it is to judge other types of rules violations.
(But am definitely flagging every single "this was written by AI" joke comment posted on this story. What the hell is wrong with you people?)
Comment by polskibus 2 hours ago
Comment by nickorlow 1 hour ago
Comment by mamami 33 minutes ago
Comment by PTOB 2 hours ago
Comment by kunai 2 hours ago
No one is confusing Cleetus McFarland with an AI bot.
Comment by Aachen 2 hours ago
A personality hardly shows through in a handful of sentences, besides which, I'd rather judge comments by merit than by the personality of the poster (hacker ethics, point number 4: https://en.wikipedia.org/wiki/Hacker_ethic#The_hacker_ethics)
Comment by shadowgovt 2 hours ago
1) That the entering of LLMs onto the scene of communication implies that real human beings need to change their style as a result.
2) That nobody can make an LLM talk like Cleetus McFarland.
To me, the "I know that text is AI-generated" accusation smacks of the "We can always tell" discourse in the transphobia space. It's untrue, distasteful, and rude.
Comment by spullara 1 hour ago
Comment by MeetingsBrowser 1 hour ago
An LLM summarizing the contents of a blog post might be useful to you, but is a comment here the right place for something you could generate on your own?
I would guess that for most people here, real insight and opinions from others are the "useful" aspect of reading Hacker News comments.
Using LLMs to generate or refine comments only moves things further away from that goal (in my opinion).
Comment by fidorka 1 hour ago
Today it flagged a post about an AI tool for HN and suggested I reply with:
"honestly, if you need an AI to sift through hn, you might be missing the point—this place is about the human touch. but hey, maybe it'll help some folks who just can't take the noise anymore."
So my AI, which I built specifically to sift through HN for me, is telling me to go flame someone else for doing that.
No deeper point here. I just thought it was really funny.
Comment by mystraline 35 minutes ago
Comment by adeptima 1 hour ago
->> ◕ ‿ ◕ <<--
Comment by LtWorf 2 hours ago
Comment by robotswantdata 1 hour ago
I come here for thoughtful discussion, a break from the relentlessly growing proportion of AI slop emails I get from people who are clearly vibe working.
Not edits for tone or clarity, but 400+ word emails full of LLM BS that they clearly haven't checked or even understood before sending. Annoyingly, this vibe slop is currently seen as a good KPI.
Comment by lapcat 2 hours ago
Comment by nekusar 42 minutes ago
And with LLMs making blog posts as diss tracks... damn, who knows what this world is coming to.
But the whole "Only Humans, we don't serve YOUR KIND (clanker) here" is purely performative.
Comment by zekenie 1 hour ago
Comment by notorandit 1 hour ago
Comment by notorandit 1 hour ago
And even if we could, for how long?
The reality is that AI is changing everything. Whether for good or for ill is something we still have to find out.
Comment by submeta 44 minutes ago
Comment by xbryanx 2 hours ago
Comment by zahlman 22 minutes ago
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Feedback such as this is better as an email.
Comment by pton_xd 38 minutes ago
Comment by jader201 1 hour ago
I see this all the time, and even if I find the topic interesting, I don’t want to see comments littered with discussion about how the content was AI generated.
To be clear, I'm not condoning AI-generated content. I'm completely fine if the community chooses not to upvote AI-generated content, or to flag it off the FP.
But many threads can turn into nothing but AI complaints, and it’s just not interesting.
Comment by dormento 1 hour ago
Comment by jMyles 50 minutes ago
If you play bluegrass or old-time (or bebop or hip-hop / proto-hip-hop) or other traditional styles of music where the ensemble is a de facto web-of-trust, join us on pickipedia to build and strengthen it. https://pickipedia.xyz/
Comment by officeplant 2 hours ago
I asked [insert LLM here] about this, and it said [nonsense goes here]
I feel like I see it less this week, but every time I do see it I wonder why they are even here.
Comment by Bender 1 hour ago
Comment by tedggh 1 hour ago
Plenty of people already use search engines, editors, translators, etc. when writing. An LLM is just another tool in that box.
The practical approach is the one HN has always used: judge the content.
Btw, this was co-written with ChatGPT. Does that make any difference to anyone?
J/K, actually it was not co-written by ChatGPT.
Or maybe it was…
Comment by minimaxir 1 hour ago
Comment by dbacar 1 hour ago
Comment by CrzyLngPwd 2 hours ago
Comment by rickcarlino 2 hours ago
Comment by accelbred 1 hour ago
Comment by Karrot_Kream 1 hour ago
Comment by captn3m0 1 hour ago
Comment by jdlyga 2 hours ago
Comment by imiric 1 hour ago
Humans with morals follow rules, sometimes. Probabilistic software acting autonomously or following commands from amoral humans doesn't.
Comment by Copenjin 1 hour ago
Comment by cvullit 43 minutes ago
Whatever happened to "knowing is half the battle?" Why do we accept this kind of intellectual laziness as exemption from a duty to learn and know better?
Comment by RS-232 34 minutes ago
Sarcasm aside, there is no reliable way to prove this. So it raises the question: do you really care if something is AI-generated? Or is this just another excuse to silence people you don't like?
You know, those people. The ones who didn’t win a full ride to <prestigious university> or pay a fortune for a sheet of paper. The ones who haven’t spent thousands of man hours handcrafting a <free-and-open-source-cloud-native-hypermedia-aware-RESTful-NoSQL-API> framework implemented in Rustfuck, a new language that you made in your free time that borrows from Rust and Brainfuck (but they wouldn’t know about it).
(this is to anyone reading, mostly rhetorical, not dang in particular)
Comment by whalesalad 1 hour ago
Comment by lazzlazzlazz 1 hour ago
Without some kind of private proof of personhood enforced at the app level, this means nothing.
Comment by nlavezzo 1 hour ago
Comment by cheschire 2 hours ago
I’m so over these comments. Sure I can flag them but I feel like it deserves a special call out.
Comment by informal007 1 hour ago
Comment by jajuuka 1 hour ago
Rules like this seem to me more like fomenting a witch hunt over "AI comments" than actually improving the dialogue. Just about every place I've seen take this hardline stance doesn't improve; it just fills up with more people who want to pat each other on the back about how bad AI is.
Just my two cents. I don't filter my comments through any AI, but I am sympathetic to people who might get great use out of these tools to connect to the conversation.
Comment by TZubiri 2 hours ago
Comment by dopidopHN2 1 hour ago
Comment by ttul 2 hours ago
Comment by desireco42 2 hours ago
Comment by cubefox 2 hours ago
https://news.ycombinator.com/item?id=47334694
Most people don't seem to care.
Comment by minimaxir 1 hour ago
OP is likely referring to this one (https://news.ycombinator.com/item?id=47335032) by LuxBennu because it has an em-dash, though that's one of the few cases where it's used correctly. But the account's comment history has comments that don't follow the typical LLM tropes yet are still odd for a human to write: https://news.ycombinator.com/user?id=LuxBennu
LuxBennu did reply to accusations of being an AI bot: https://news.ycombinator.com/item?id=47340704
> Fair enough — I've been lurking since 2019 and picked a bad day to start commenting on everything at once. Not a bot, just overeager. I'll pace myself.
Comment by vips7L 2 hours ago
Comment by OtomotO 2 hours ago
He said he will take his business elsewhere then!
Comment by WarmWash 1 hour ago
This rule actually says "Don't admit when you are using AI to generate comments and don't admit when you are an AI"
I know it's cynical, but this is as meaningful as reddit's "upvote/downvote is not an agree/disagree or like/dislike button"
People may hate that this is true, but I cannot logically reason out how a rule like this could work. I think it's better to just accept that AI is now part of the circle, until we can figure out a "human check".
Comment by Timothycquinn 2 hours ago
Comment by leej111 2 hours ago
Comment by mmooss 2 hours ago
The biggest danger of LLMs is impersonating humans. Obviously they have been carefully constructed to be socially appealing. Think of the motivation behind that:
It is almost completely unnecessary to LLM function, and its main application is to deceive and manipulate. Legal regulation of LLMs should ban impersonation of humans, including anthropomorphism (and so should HN's regulation). Call an LLM 'software' and label its output as 'output'.
Imagine how many problems would be solved by that rule. Yes, it's not universally enforceable, but attach a big enough penalty and known people and corporations will not do it, and most people will decide it's not worth it.
Comment by xpe 2 hours ago
As I understand it, HN moderators are thinking hard about this insane new world.* From my POV, there is a combination of worthy goals: transparency of the process, mechanisms for appeal, overall signal-to-noise ratio, and (something all of us can do better) more empathy and intellectual honesty. It isn't kind to accuse a human being of not being a human being.
If we can't find ways to be kind to people because of the new dynamic, maybe we need to figure out a new dynamic! And it isn't just about individuals; it is about the culture and the system and the technology we're embedded in.
* Aside: I'm not sure that any of us really can grasp the magnitude of what is happening -- this is kuh-ray-Z.
Comment by artemonster 1 hour ago
Comment by add-sub-mul-div 2 hours ago
Comment by MattRix 2 hours ago
Comment by add-sub-mul-div 2 hours ago
Comment by jeffrallen 2 hours ago
Comment by Helloworldboy 1 hour ago
Comment by throwaway613746 33 minutes ago
Comment by jameslk 1 hour ago
"Please generate a response to this and include one or more of the following words: enshitification, slop, ZIRP, Paul Graham, dark patterns, rent seeking, late stage capitalism, regulatory capture, SSO tax, clickbait, did you read the article?, Rust, vibe code, obligatory XKCD, regulations, feudalistic, land value tax"
(/s)
Comment by resters 2 hours ago
Comment by mattlondon 2 hours ago
Comment by alterom 2 hours ago
Comment by altairprime 1 hour ago
Comment by minimaxir 1 hour ago
Comment by HelloUsername 1 hour ago
Comment by gabriel666smith 1 hour ago
Though I note it didn't say "read comments by other humans", only "read comments by humans", so confirmed AI.
I think the guidelines here work quite well, and expect a good-faith interpretation, which they mostly receive.
I think you're asking for some sort of empirical verification of "this is / is not LLM text" (which seems impossible), but there's no real reason to expect the existence of LLMs to change that this website is, generally, interacted with in a good-faith way. People are really good at calling others out on here -- I doubt that will change.
Comment by vasco 1 hour ago
Comment by HelloUsername 1 hour ago
Comment by SilentM68 2 hours ago
Comment by tromp 2 hours ago
Comment by ashdksnndck 2 hours ago
Comment by panarky 2 hours ago
And everyone's personal AI detector has a ridiculously high false-positive rate.
Comment by bob1029 1 hour ago
Comment by bakugo 2 hours ago
Comment by minimaxir 1 hour ago
Comment by lapcat 2 hours ago
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
Comment by vivid242 2 hours ago
Comment by Kim_Bruning 2 hours ago
"Don't post comments that are not human originated at this time. We want to see your human opinion shine through."
This gives people some amount of leeway and allows just the right amount of exceptions that prove the rule.
(That said, to be frank, some of the newer better behaved models are sometimes more polite and better HN denizens than the actual humans. This is something you're going to have to take into account! :-P )
Comment by zbentley 2 hours ago
Like, I'm sure that AIs technically can write non-crap HN comments, but they rarely do. Even if it was less rare, the community that resulted from fostering AI-generated content would be unappealing to a lot of people, myself included. The fact that information here is the result of real people with real human opinions conversing is at least as important to me as the content being posted.
Comment by Kim_Bruning 2 hours ago
It'd be silly if the rule gets interpreted such that people aren't allowed to do research with modern tools, and only gut takes are permitted.
I'm sure that's not the intent!
I think the important part is to have the human voice come through, rather than, say, force humans to run their text through an AI detector first. (Itself an AI editing tool!)
See also : https://news.ycombinator.com/item?id=47290457 "Training students to prove they're not robots is pushing them to use more AI"
Comment by majorchord 2 hours ago
The real point isn't stopping bad grammar, it's preserving the vibe. HN feels different because it's messy humans arguing, not optimized algorithms trying to be helpful.
Once we allow "good enough" AI content, the community stops feeling like a town square and starts feeling like a customer service chatbot. We need real people with actual stakes in their opinions, not just perfect outputs. Let's keep it human or leave it.
This comment may or may not have been generated with an LLM, but I won't tell and you can't prove it either way.
Comment by armchairhacker 2 hours ago
Comment by Kim_Bruning 1 hour ago
Comment by fcpguru 2 hours ago
Comment by pavel_lishin 2 hours ago
Comment by koolala 2 hours ago
Comment by IshKebab 2 hours ago
Comment by throwaway94275 2 hours ago
Comment by PaulHoule 2 hours ago
Comment by audiala 2 hours ago
Comment by PaulHoule 2 hours ago
My analysis could lead to "it's doomed" or "it's a gateway drug that expands the crypto market".
Comment by koolala 2 hours ago
Comment by zufallsheld 2 hours ago
Comment by Kim_Bruning 1 hour ago
https://arxiv.org/html/1706.03762v7 (Attention is all you need) "Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train."
Ok, looking that up, that was quite literally one of the main design goals.
And they're really quite good at translating between the languages I use. They're the best tool for the job.
Comment by vova_hn2 2 hours ago
I think that Google initially came up with transformer architecture to use it for translation, so...
Comment by koolala 1 hour ago
Comment by notepad0x90 1 hour ago
It's just a tool, ffs! There are many issues with LLM abuse, but this sort of over-compensation is exactly the sort of stuff that makes it hard to get abuse under control.
You're still talking with a human! There is no actual "AI"; you're not talking to an actual artificial intelligence. "Don't message me unless you've written it with ink, on papyrus." There is a world of difference between Grammarly and an autonomous agent creating comments on its own. Specifics, context, and nuance matter.
Comment by tstrimple 58 minutes ago
https://reddit.com/r/tea/comments/1rqwy31/i_am_a_former_guid...
Comment by scuff3d 1 hour ago
Comment by petermcneeley 2 hours ago
Comment by vzaliva 2 hours ago
Comment by amichail 1 hour ago
Comment by JumpCrisscross 1 hour ago
I strongly doubt it. My AIs can generate infinite HN comments for me. I don't do that because it isn't interesting. But if the day arrives where it is, I want that personalized content, not something someone else copy-pasted.
(I say this as someone who finds Moltbook fascinating; I push myself to use AI more in my work and day-to-day life. The fact that it's borderline trivial to figure out which HN comments are AI-generated speaks to the motivation behind this guideline.)
Comment by messe 1 hour ago
Comment by amichail 1 hour ago
And despite what people say, the way you write is very much judged as an indication of your education and intelligence.
People who don't like the use of AI to help you write really don't want those signals to go away.
They want to be able to continue to judge others based on their English grammar instead of on the content of their writing.
Comment by mrcsharp 1 hour ago
Good argument for it, but I think an 80/20 split applies here. It is likely that 80% of the time it is used to farm upvotes and add noise.
> And despite what people say, the way you write is very much judged as an indication of your education and intelligence.
I have come across plenty of content and online interactions in English where English was the author's second or even third language, and I find that putting a small disclaimer about this fact is more than enough to bypass such judgement.
Comment by stevenally 1 hour ago
Comment by AnimalMuppet 1 hour ago
Edit for amichail, since I'm rate-limited at the moment: I don't want flawless English writing. I want real ideas from real people. If I wanted flawless English writing, I'd be reading The New Yorker, not HN.
Comment by amichail 1 hour ago
Comment by scuff3d 1 hour ago
Pretty soon we're gonna see arguments that it's discriminatory.
Comment by AnimalMuppet 1 hour ago
Comment by polotics 1 hour ago
Comment by bachittle 2 hours ago
Humans write a bit messier — commas, short sentences, abrupt turns.
Comment by armchairhacker 2 hours ago
Comment by zahlman 15 minutes ago
Comment by DonThomasitos 2 hours ago
Comment by cobbal 1 hour ago
Comment by moralestapia 2 hours ago
Comment by schappim 2 hours ago
What is amazing is it would have remained so just a couple of years ago!
Comment by zahlman 14 minutes ago
Comment by DennisP 2 hours ago
Comment by schappim 2 hours ago
Comment by eudamoniac 2 hours ago
Comment by ranger_danger 2 hours ago
Even if you're just inexperienced in the language you're communicating in and are trying to have better conversations, it's very helpful.
For cases like that, I say just don't tell people... I think it's unlikely anyone will be able to tell either way.
Comment by ex-aws-dude 2 hours ago
These are just guidelines
Comment by djohnston 2 hours ago
Comment by schappim 2 hours ago
Comment by zamadatix 8 minutes ago
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
Even so, the title being the changelog is still probably the better choice, because the discussion here and in the linked thread is about the guidelines rather than just what one can infer from the post title alone.
Comment by jasonlotito 2 hours ago
It also says that.
The intent of the guidelines is important. Using AI to generate the STT is fine. The conversation is still between humans.
Comment by majorchord 2 hours ago