AI Usage Policy
Posted by mefengl 1 day ago
Comments
Comment by Version467 1 day ago
Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.
Comment by mg794613 1 day ago
You on the other hand have honed your craft for many years. The more you learn, the more you discover there is to learn; that is, you realize how little you know. They don't have this. _At all_. They see this as a "free ticket to the front row", and when we politely push back (we should be way harsher in this, it's the only language they understand) all they hear is "he doesn't like _me_," which is an escape.
You know how much work you're asking of me when you open a PR on my project; they don't. They just see it as "why won't you let me join? Since I have AI I should have the same skill as you"... unironically.
In other words, these "other people" we talk about haven't worked a day in the field in their lives, so they simply don't understand much of it, yet they feel they understand everything about it.
Comment by nlh 1 day ago
Comment by __loam 1 day ago
Comment by B1FIDO 14 hours ago
Many artists through the ages have learned to work in various mediums: sculpture in different materials, oil painting, watercolors, fresco, or whatever. There are myriad ways to express your visual art using physical materials.
Likewise, a girlfriend of mine was a college-educated artist. She had great output in all sorts of media, and a solid grasp of paints, paper, canvas, and what-have-you.
But she was also an Amiga aficionado, and then worked on the PCs I had, and ultimately the item she wanted most in life was a Wacom Tablet. This tablet was a force-multiplier for her art, and allowed her some real creative freedom to work in digital mediums and create art with ease that was unheard-of for messy oil paintings or whatever on canvas in your garage (we actually lived in a converted garage anyway.)
So, digital art was her saving grace, but also a significant leveler of playing fields. What would distinguish her original creativity from A.I.-generated stuff later on? Really not much. You could still make an oil or watercolor painting that is obviously hand-made. Forgeries of great artists have been perpetrated, but most of us can't explain, e.g. the Shroud of Turin anyway.
So generative A.I. is competing in these digital mediums, and perhaps 3D printing is competing in the realm of physical objects. It's unfortunate for artists that their choices have narrowed so far that they are practically required to work in digital media exclusively and master those apps, and therefore compete with gen A.I. in the virtual realm. That's just how it's gonna be, until folks go back to sculpting marble and painting soup cans.
Comment by TeMPOraL 7 hours ago
It's basically like GenAI, but running on protein substrate instead of silicon one.
And even in the digital realm, artists already spent the last decade+ competing with equivalent "factory art", too. Advertising stands on art, and most of that isn't commissioned, it's rented or bought for cheap from stock art providers, and a lot of supply there comes from people and organizations who specialize in producing art for them. The OG slop art, before AI.
EDIT: there's some irony here, in that people like to talk about how GenAI might start (or might already have started) putting artists out of work. But I haven't seen anyone mention that AI has already put slop creators out of work.
Comment by __loam 12 hours ago
Comment by BoorishBears 9 hours ago
And here's your response to what felt like a pretty good faith response that deserved at most an equally earnest answer, and at worst no response.
Instead they got worse than no response lol.
Comment by andrekandre 8 hours ago
> All while being completely ignorant to the medium or the process.
also ignorant that the art they generated was made possible by those people who "wasted their time"...
Comment by johnnyanmac 20 hours ago
But that care isn't even evident here. People submitting PRs that don't even compile, bug reports for issues that may not even exist. The minimum I'd expect is to check the work of whatever you vibe coded. We can't even get that. It's some odd form of clout chasing, as if repos are a factor of success, not what you contribute to them.
Comment by BlackjackCF 19 hours ago
Comment by ThrowawayR2 23 hours ago
Comment by alfalfasprout 22 hours ago
The humility of understanding what you don't know, and the limits that come with it, is out the window for many people now. I see time and time again the idea that "expertise is dead". Yet it's crystal clear it's not. But those people cannot understand why.
It all boils down to a simple reality: you can't understand why something is fundamentally bad if you don't understand it at all.
Comment by njhnjhnjh 1 day ago
Comment by monegator 1 day ago
ever had a client second-guess you by replying with a screenshot from GPT?
ever asked anything in a public group only to have a complete moron reply with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?
no, people have no shame. they have a need for a little bit of (borrowed) self importance and validation.
Which is why i applaud every code of conduct that has public ridicule as punishment for wasting everybody's time
Comment by Sharlin 1 day ago
Comment by pera 1 day ago
Comment by TeMPOraL 1 day ago
Comment by Suzuran 1 day ago
Comment by tzs 1 day ago
Comment by buggy6257 1 day ago
Comment by Suzuran 1 day ago
Comment by direwolf20 22 hours ago
Comment by nathanaldensr 1 day ago
Comment by pluralmonad 1 day ago
Comment by OGEnthusiast 1 day ago
Comment by ryandrake 22 hours ago
Comment by johnnyanmac 20 hours ago
Maybe a million dollar company needs to be compliant. A billion dollar company can start to ward off any loopholes with lawsuits instead of compliance.
A trillion dollar company will simply change the law and fight governments over the law to begin with, rather than worrying about compliance.
Comment by TeMPOraL 1 day ago
So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.
The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.
So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.
--
[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.
[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.
[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!
Comment by ryandrake 22 hours ago
We are currently facing a political climate trying to tear many of these safeguards down. Some people really think "caveat emptor" is some kind of natural, efficient, ideal way of life.
Comment by miki123211 1 day ago
Cybersecurity is also an exception here.
"men with guns" only work for cases where the criminal must be in the jurisdiction of the crime for the crime to have occurred.
If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somewhere else, the British police don't care. If you hack a bank in London, though, you may very well be in North Korea.
Comment by TeMPOraL 7 hours ago
Comment by saratogacx 21 hours ago
There's so much CYA because there is an A that needs C'ing
Comment by rsynnott 1 day ago
Comment by breakingcups 1 day ago
Comment by tveita 1 day ago
E.g.
"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"
Comment by TheSpiceIsLife 22 hours ago
Comment by direwolf20 22 hours ago
Comment by TheSpiceIsLife 14 hours ago
Comment by anon_anon12 1 day ago
Comment by wpietri 1 day ago
Comment by cess11 1 day ago
Comment by IgorPartola 1 day ago
Comment by TeMPOraL 1 day ago
Consider: GP would've been much more correct if they said "It's just a person on a chip." Still wrong, but much less, in qualitative fashion, than they are now.
Comment by cess11 2 hours ago
It's a person in the same sense as a Markov chain is one, or the bot in the reception on Starship Titanic, i.e. not at all.
Comment by NoGravitas 1 day ago
Comment by dematz 1 day ago
Comment by TeMPOraL 7 hours ago
FWIW, I prefer my "little people on a chip" because this is a deliberate riff on SoC, aka. System on a Chip, aka. an actual component you put when designing computer systems. The implication being, when you design information processing systems, the box with "LLM" on it should go where you'd consider putting a box with "Person" on it, not where you'd put "Database" or any other software/hardware box.
Comment by cess11 19 hours ago
Comment by IgorPartola 14 hours ago
Edit: unless we are talking about MongoDB. It will only keep your data if you are lucky and might lose it. :)
Comment by cess11 2 hours ago
It's not just the weirdness in Mongo that could exhibit non-deterministic behaviour, some common indexing techniques do not guarantee order and/or exhaustiveness.
Let it go, LLMs and related compression techniques aren't very special, and neither are chatbots or copy-paste-oriented software development. Optimising them for speed or manipulation does not change this, at least not from a technical perspective.
Comment by KronisLV 1 day ago
It's like a JPEG. Except instead of lossy compression on images giving you a pixel soup that only vaguely resembles the original when you're resource-bound (and even modern SOTA models are, when it comes to LLMs), you get stuff that looks more or less correct but just isn't.
Comment by cess11 2 hours ago
Comment by derrida 1 day ago
Comment by the_af 1 day ago
An LLM chatbot is not like querying a database. Postgres doesn't have a human-like interface. Querying SQL is highly technical; when you get nonsensical results out of it (which is more often than not), you immediately suspect the JOIN you wrote or whatever. There's no "confident vibe" in results spat out by the DB engine.
Interacting with a chat bot is highly non-technical. The chat bot seems to many people like a highly competent person-like robot that knows everything, and it knows it with a high degree of confidence too.
So it makes sense to talk about "hallucinations", even though it's a flawed analogy.
I think the mistake people make when interacting with LLMs is similar to what they do when they read/watch the news: "well, they said so on the news, so it must be true."
Comment by cess11 19 hours ago
It's precisely like a database. You might think the query interface is special, but that's all it is and if you let it fool you, fine, go ahead, keep it public that you do.
Comment by Cthulhu_ 1 day ago
But (as someone else described), GPTs and other current-day LLMs are probabilistic. Yet 99% of what they produce seems feasible enough.
Comment by pjc50 1 day ago
Comment by pousada 1 day ago
Unless I have been reading very different science fiction I think it’s definitely not that.
I think it’s more the confidence and seeming plausibility of LLM answers
Comment by oneeyedpigeon 1 day ago
Comment by direwolf20 22 hours ago
I'm sorry. That was a terrible joke.
Comment by rsynnott 1 day ago
Comment by TheSpiceIsLife 22 hours ago
Comment by Sharlin 1 day ago
Comment by TeMPOraL 1 day ago
Comment by johnnyanmac 20 hours ago
But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat the blatant lies that can be disproven even by the same AI tools they laud.
Comment by TeMPOraL 1 day ago
Comment by slfreference 1 day ago
delicate feelers are like octopus arms
Comment by TeMPOraL 7 hours ago
Still, I meant that in the other direction: not request, but a gift/favor. "Guess culture" would be going out of your way to make the gift valuable for the receiver - matching what they need, and not generating extra burden. "Ask culture" would be like doing whatever's easiest that matches the explicit requirements, and throwing it over the fence.
Comment by ncruces 1 day ago
I raise an issue or PR after carefully reviewing someone else's open source code.
They ask Claude to answer me; neither them nor Claude understood the issue.
Well, at least it's their repo, they can do whatever.
Comment by monooso 1 day ago
The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.
Comment by monegator 1 day ago
They are sure they know better because they get a yes man doing their job for them.
Comment by meindnoch 1 day ago
Comment by pixl97 1 day ago
Comment by javcasas 1 day ago
Comment by positive-spite 1 day ago
I'm not looking forward to it...
Comment by Aeolun 1 day ago
Comment by flexagoon 1 day ago
Comment by nchmy 1 day ago
Comment by wccrawford 23 hours ago
Comment by 0x696C6961 1 day ago
Comment by ionwake 1 day ago
I am not saying one has to lose their shame, but at best, understand it.
Comment by pousada 1 day ago
Too little or too much shame can lead to issues.
Problem is no one tells you what too little or too much actually is and there are many different situations where you need to figure it out on your own.
So I think sometimes people just get it wrong but ultimately everyone tries their best. Truly malicious shameless people are extremely rare in my experience.
For the topic at hand I think a lot of these “shameless” contributions come from kids
Comment by ryandrake 22 hours ago
So many people now respond to "You shouldn't do that..." with one or more of:
- But, I'm allowed to.
- But, it's legal.
- But, the rules don't say I can't.
- But, nobody is stopping me.
The shared cultural understanding of right and wrong is shrinking. More and more, there's just can and can't.
Comment by pousada 21 hours ago
Fwiw I haven’t noticed either phenomenon much irl but that might just be my bubble.
Comment by amanaplanacanal 6 hours ago
Comment by Cthulhu_ 1 day ago
Basically teenagers. But it feels like the rebellious teenager phase lasts longer nowadays. Zero evidence besides vibes and anecdotes, but still.
Or maybe it's me that's getting old?
Comment by derrida 1 day ago
Just like pain is a good thing: it signals you to remove your hand from the stove.
Comment by krferriter 21 hours ago
Comment by ionwake 1 day ago
Comment by Wojtkie 21 hours ago
Think of a lot of the inflammatory content on social media, how people have made whole careers and fortunes over outrage, and they have no shame over it.
It really does begin to look like having a good sense of shame isn't rewarded in the same way.
Comment by wang_li 1 day ago
Comment by warkdarrior 1 day ago
That has NEVER led to a positive result in the whole of human history, especially since the second group is much larger than the first.
Comment by pepperball 23 hours ago
Comment by Etheryte 1 day ago
Comment by latexr 1 day ago
For those curious:
Comment by Cthulhu_ 1 day ago
Of course, the vast majority of open-source work is the same cog-in-a-machine work, and with low-effort AI-assisted contributions, the non-hero-coding work becomes more prevalent than ever.
Comment by vanderZwan 1 day ago
Just like with email spam I would expect that a big part of the issue is that it only takes a minority of shameless people to create a ton of contribution spam. Unlike email spam these people actually want their contributions to be tied to their personal reputation. Which in theory means that it should be easier to identify and isolate them.
Comment by bgro 1 day ago
Comment by kleiba 1 day ago
It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.
Comment by JDye 1 day ago
I can't imagine the level of laziness or entitlement required for a student (or any developer) to blame their tools so quickly without conducting a thorough investigation.
Comment by mr_toad 4 hours ago
Comment by benldrmn 1 day ago
Comment by zehaeva 1 day ago
Comment by direwolf20 22 hours ago
Comment by xxs 1 day ago
Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...
In the early days (bug parade times), bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.
Comment by jm4 1 day ago
Comment by Ronsenshi 1 day ago
Comment by toyg 1 day ago
Comment by latentsea 1 day ago
Comment by Ronsenshi 1 day ago
Comment by direwolf20 22 hours ago
Comment by Aurornis 1 day ago
Any smart interviewer knows that you have to look at actual code of the contributions to confirm it was actually accepted and that it was a non-trivial change (e.g. not updating punctuation in the README or something).
In my experience this is where the PR-spammers fall apart in interviews. When they proudly tell you they’re a contributor to a dozen popular projects and you ask for direct links to their contributions, they start coming up with excuses for why they can’t find them or their story changes.
There are of course lazy interviewers who will see the resume line about having contributed to popular projects and take it as strong signal without second guessing. That’s what these people are counting on.
Comment by Sharlin 1 day ago
Comment by SpecialistK 19 hours ago
I've been deep-diving into AI code generation for more niche platforms, to see if it can either fill the coding gap in my skillset, or help me learn more code. And without writing my whole blog post(s) here, it's been fairly mediocre but improving over time.
But for the life of me I would never submit PRs of this code. Not if I can't explain every line and why it's there. And in preparation of publishing anything to my own repos I have a readme which explicitly states how the code was generated and requesting not to bother any upstream or community members with issues from it. It's just (uncommon) courtesy, no?
Comment by DrewADesign 1 day ago
I’ll bet there are probably also people trying to farm accounts with plausible histories for things like anonymous supply chain attacks.
Comment by arbitrandomuser 1 day ago
Comment by kkukshtel 1 day ago
Two immediate ones I can think of:
- The yellow hue/sepia tone of any image coming out of ChatGPT
- People responding to text by starting with "Good Question!" or inserting hard-to-memorize-or-type unicode symbols like → into text where they obviously wouldn't have used that and have no history of using it.
Comment by wnevets 22 hours ago
You can expand this sentiment to everyday life. The things some people are willing to say and do in public are a never-ending supply of surprises.
Comment by hintymad 22 hours ago
My guess is that those people have different incentives. They need to build a portfolio of open-source contributions, so shame is not of their concern. So, yeah, where you stand depends on where you sit.
Comment by pil0u 1 day ago
Comment by 6LLvveMx2koXfwn 1 day ago
Comment by pixl97 1 day ago
An example I have of this is from high school where there were guys that were utterly shameless in asking girls for sex. The thing is it worked for them. Regardless of how many people turned them down they got enough of a hit rate it was an effective strategy. Simply put there was no other social mechanism that provided enough disincentive to stop them.
And to take the position as devil's advocate, why should they feel shame? Shame is typically a moral construct of the culture you're raised in and what to be ashamed for can vary widely.
For example, if you're raised in the culture of Abrahamic religions, it's very likely you're told to be ashamed of being gay. Whereas a non-religious upbringing is more likely to say why the hell would you be ashamed of being gay.
TL;DR: shame is not an effective mechanism on the internet because you're dealing with far too many cultures that have wildly different views on shame, and any particular viewpoint on shame is apt to have millions to billions of people who don't believe the same.
Comment by quanwinn 1 day ago
Comment by slfreference 1 day ago
I am seeing the doomed future of AI math: just received another set theory paper by a set theory amateur with an AI workflow and an interest in the continuum hypothesis.
At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself.
We are all going to be suffering under this kind of garbage, which is not easily recognizable for the slop it is without effort. It is our regrettable fate.
Comment by lm28469 1 day ago
Comment by OGEnthusiast 1 day ago
My guess is it's mostly people from countries with a culture that reward shameless behavior.
Comment by guerrilla 1 day ago
I think this is interesting too. I've noticed the difference in dating/hook-up contexts. The people you're talking about also end up getting laid more, but that group also has a very large intersection with sex pests and other shitty people. The thing they have in common, though, is that they just don't care what other people think about them. That leads some of them to be successful if they are otherwise good people... or to become borderline or actual criminals if not. I find it fascinating, actually: how does this difference come about, and can it actually be changed, or is it something we get early in life or from the genetic lottery?
Comment by GardenLetter27 1 day ago
The grift culture has changed that completely, now students face a lot of pressure to spam out PRs just to show they have contributed something.
Comment by nobodywillobsrv 1 day ago
i.e. imagine a change that is literally a small diff, that is easy to describe as a mere user and not a developer, and that requires quite a lot of deep understanding merely to submit as a PR (build the project! run the tests! write the template for the PR!).
Really a lot of this stuff ends up being a kind of failure mode of various projects, one we all fall into at some point, where "config" is in the code and what could be a simple change and test involves a lot of friction.
Obviously not all submissions are going to be like this but I think I've tried a few little ones like that where I would normally just leave whatever annoyance I have alone but think "hey maybe it's 10 min faff with AI and a PR".
The structure of the project's incentives kind of creates this. Increasing the cost of contribution is a valid strategy, of course, but from a holistic project point of view it is not always a good one, especially assuming you are not dealing with adversarial contributors but only slightly incompetent ones.
Comment by micromacrofoot 1 day ago
it's easy to not have shame when you have no skin in the game... this is similar to how narcissists think so highly of themselves, it's never their fault
Comment by blell 1 day ago
Comment by postepowanieadm 1 day ago
Comment by MrBuddyCasino 1 day ago
Comment by weinzierl 1 day ago
And this is one half of why I think
"Bad AI drivers will be [..] ridiculed in public."
isn't a good clause. The other is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.
Comment by anonymous908213 1 day ago
Shaming people for violating valid social norms is absolutely decent behaviour. It is the primary mechanism we have to establish social norms. When people do bad things that are harmful to the rest of society, shaming them is society's first-level corrective response to get them to stop doing bad things. If people continue to violate norms, then society's higher levels of corrective behaviour can involve things like establishing laws and fining or imprisoning people, but you don't want to start with that level of response. Although putting these LLM spammers in jail does sound awfully enticing to me in a petty way, it's probably not the most constructive way to handle the problem.
The fact that shamelessness is taking over in some cultures is another problem altogether, and I don't know how you deal with that. Certain cultures have completely abdicated the ability to influence people's behaviour socially without resorting to heavy-handed intervention, and on the internet, this becomes everyone in the world's problem. I guess the answer is probably cultivation of spaces with strict moderation to bar shameless people from participating. The problem could be mitigated to some degree if a Github-like entity outright banned these people from their platform so they could not continue to harass open-source maintainers, but there is no platform like that. It unfortunately takes a lot of unrewarding work to maintain a curated social environment on the internet.
Comment by weinzierl 1 day ago
To demand public humiliation doesn’t just put you on the same level as our medieval ancestors, who responded to violations of social norms with the pillory - it’s actually even worse: the contemporary internet pillory never forgets.
Comment by anonymous908213 1 day ago
Shame is also not the same thing as "public humiliation". They are publicly humiliating themselves. Pointing out that what they publicly chose to do themselves is bad is in no way the same as coercing them into being humiliated, which is what "public humiliation as a medieval punishment" entails. For example, the medieval practice of dragging a woman through the streets nude in order to humiliate her is indeed abhorrent, but you can hardly complain if you march through the streets nude of your own volition, against other people's desires, and are then publicly shamed for it.
Comment by wpietri 1 day ago
What negative experience do you think should instead be created for people breaking these rules?
Comment by weinzierl 1 day ago
A permanent public internet pillory isn’t just useless against the worst offenders, who are shameless anyway. It’s also permanently damaging to those who are still learning societal norms.
The Ghostty AI policy lacks any nuance in this regard. No consideration for the age or experience of the offender. No consideration for how serious the offense actually was.
Comment by wpietri 3 hours ago
I see plenty of nuance beyond the bold print. They clearly say they love to help junior developers. Your assumption that they will apply this without thought is, well, your assumption. I'd rather see what they actually do instead of getting wrapped up in your fantasies.
Comment by ryandrake 22 hours ago
Comment by conartist6 1 day ago
Tit for tat
Comment by weinzierl 1 day ago
What is written in the Ghostty AI policy lacks any nuance or generosity. It's more like a Grim Trigger strategy than Tit for Tat.
Comment by conartist6 1 day ago
It is understanding of these dynamics that led us to our current system of law: punitive justice, but forgiveness through pardons.
Comment by senko 1 day ago
"This person contributed to a lot of projects" heuristic for "they're a good and passionate developer" means people will increasingly game this using low-quality submissions. This has been happening for years already.
Of course, AI just added kerosene to the fire, but re-read the policy and omit AI and it still makes sense!
A long-term fix for this is to remove the incentive. Paradoxically, AI might help here, because this can be gamed so trivially that it's obviously no longer any kind of signal.
Comment by stephantul 1 day ago
The economics of it have changed, human nature hasn’t. Before 2023 (?) people also submitted garbage PRs just to be able to add “contributed to X” to their CV. It’s just become a lot cheaper.
Comment by TeMPOraL 1 day ago
No, this problem isn't fundamentally about AI, it's about "social" structure of Github and incentives it creates (fame, employment).
Comment by achyudh 20 hours ago
Comment by arjunbajaj 1 day ago
Covers most of the points I'm sure many of us have experienced here while developing with AI. Most importantly, AI generated code does not substitute human thinking, testing, and clean up/rewrite.
On that last point, whenever I've gotten Codex to generate a substantial feature, I've usually had to rewrite a lot of the code to make it more compact, even when it's correct. Adding indirection where it doesn't make sense is a big issue I've noticed with LLMs.
Comment by ottah 18 hours ago
It's one of those provisions that seem reasonable, but really have no justification. It's an attempt to allow something while extracting a cost. If I am responsible for my code, and am considered the author of the PR, then you as the recipient don't have a greater interest in knowing than my own personal preference not to disclose. There's never been any other requirement to disclose anything of this nature before. We don't require engineers to attest to the operating system or the licensing of the tools they use, so materially, outside your own prurient interests, how does it matter?
Comment by arjunbajaj 11 hours ago
It is of course your responsibility, but the maintainer may also want to change their review approach when dealing with AI-generated code. And currently, as the AI Usage Policy also states, because of bad actors sending pull requests without reviewing them or taking responsibility themselves, this acts as a filter that separates out PRs like yours, where you have taken that responsibility.
Comment by oblio 17 hours ago
Comment by dawnerd 1 day ago
Comment by fzaninotto 1 day ago
Comment by imiric 1 day ago
However:
> AI generated code does not substitute human thinking, testing, and clean up/rewrite.
Isn't that the end goal of these tools and companies producing them?
According to the marketing[1], the tools are already "smarter than people in many ways". If that is the case, what are these "ways", and why should we trust a human to do a better job at them? If these "ways" keep expanding, which most proponents of this technology believe will happen, then the end state is that the tools are smarter than people at everything, and we shouldn't trust humans to do anything.
Now, clearly, we're not there yet, but where the line is drawn today is extremely fuzzy, and mostly based on opinion. The wildly different narratives around this tech certainly don't help.
Comment by nicoburns 1 day ago
It seems to be the goal. But they seem very far away from achieving that goal.
One thing you should probably account for is that most of the proponents of these technologies are trying to sell you something. That doesn't mean there is no value in these tools, but the wild claims about their capabilities are just that: claims.
Comment by Terretta 1 day ago
Comment by imiric 1 day ago
Comment by TeMPOraL 1 day ago
You may hire a genius developer that's better than you at everything, and you still won't trust them blindly with work you are responsible for. In fact, the smarter they are than you, the less trusting you can afford to be.
Comment by phanimahesh 1 day ago
Comment by cmsj 1 day ago
Comment by OvbiousError 1 day ago
Comment by sjajshha 1 day ago
Comment by Lucasoato 1 day ago
Finally an AI policy I can agree with :) Jokes aside, it might sound a bit too aggressive, but it's also true that some people have really no shame in overloading you with AI-generated shit. You need to protect your attention as much as you can; it's becoming the new currency.
Comment by weinzierl 1 day ago
Comment by wpietri 1 day ago
One of the theorized reasons for junk AI submissions is reputation boosting. So maybe this will help.
And I think it will help with people who just bought into the AI hype and are proceeding without much thought. Cluelessness can look a lot like shamelessness at first.
Comment by mijoharas 1 day ago
Presumably people want this for some kind of prestige, so they can put it on their CV (contributed to ghostty/submitted security issue to curl).
If we change that equation so they think "wait, if I do this, then when employers Google me they'll see a blog post saying I'm incompetent", the calculation shifts from neutral-or-positive (depending on whether their slop gets accepted) to negative-or-positive.
Seems like it's addressing the incentives to me.
Comment by sjajshha 1 day ago
Comment by Applejinx 1 day ago
Comment by alansaber 1 day ago
Comment by Ntrails 1 day ago
I would expect this is entirely uncontroversial and the AI qualifier redundant.
Comment by Retr0id 1 day ago
Comment by verdverm 21 hours ago
Comment by bwat49 1 day ago
Comment by maxnevermind 22 hours ago
Quality of that verification matters; people who might use AI tend to cut corners. This does not completely solve the problem of AI slop or solution quality, imo. You ask Claude Code to go and implement a new feature in a complex code base, and it will; the code might even work, but the implementation might have subtle issues and might be missing the broader vision of the repo.
Comment by verdverm 21 hours ago
People do this all the time too, and it's one source of the phrase "tech debt".
It's also a biased statement. I use Ai and I cut fewer corners now, because the Ai can spam out that boring stuff for me.
Comment by njhnjhnjh 1 day ago
This sort of request may have made sense in the old days, but as the quality of generated code rapidly increases, the necessity of human intervention decreases.
Comment by rf15 7 hours ago
Comment by toraway 1 day ago
Comment by thunderfork 1 day ago
If you don't check it yourself, then you're going to own whatever your tooling misses, and also own the amount of others' time you waste through what the project has decided to categorize as negligence, which will make you look worse than if you simply made an honest mistake.
Comment by Applejinx 1 day ago
Comment by epaga 1 day ago
Comment by skybrian 1 day ago
Comment by jakozaur 1 day ago
“ Ultimately, I want to see full session transcripts, but we don't have enough tool support for that broadly.”
I have a side project, git-prompt-story, to attach Claude Code sessions to GitHub git notes. Though it is not that simple to do automatically (e.g. I need to redact credentials).
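For anyone who hasn't used git notes for this, here's a minimal sketch of the general idea (these are not git-prompt-story's actual commands, and the transcript filename is made up):

```sh
# 1. Export the Claude Code session and redact secrets (by hand or with a script),
#    producing e.g. session.redacted.md

# 2. Attach the redacted transcript to the current commit as a note in a dedicated ref
git notes --ref=ai-sessions add -F session.redacted.md HEAD

# 3. Notes aren't pushed with normal branches, so push the notes ref explicitly
git push origin refs/notes/ai-sessions

# Reviewers can then fetch and view the transcripts alongside the history:
git fetch origin refs/notes/ai-sessions:refs/notes/ai-sessions
git log --notes=ai-sessions
```

The nice property of notes is that transcripts travel with the repo without cluttering the commits themselves.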
Comment by ollien 1 day ago
Comment by radarsat1 1 day ago
Comment by simonw 1 day ago
My latest attempt at this is https://github.com/simonw/claude-code-transcripts which produces output like this: https://gisthost.github.io/?c75bf4d827ea4ee3c325625d24c6cd86...
Comment by radarsat1 1 day ago
Comment by simonw 1 day ago
Comment by radarsat1 21 hours ago
I get that, but I guess what I'm asking is, why does it matter what you did?
The result is working, documented source code, which seems to me to be the important part. What value does keeping the prompt have?
I'm not trying to needle, I just don't see it.
Comment by simonw 15 hours ago
It's also great for improving my prompting skills over time - I can go back and see what worked.
Comment by verdverm 21 hours ago
I save all of mine, including their environment, and plan to use them for iterating on my various system prompts and tool instructions.
Comment by awesan 1 day ago
At a minimum it will help you to be skeptical at specific parts of the diff so you can look at those more closely in your review. But it can inform test scenarios etc.
Comment by fragmede 1 day ago
Comment by Ronsenshi 1 day ago
Comment by verdverm 21 hours ago
To me, quality code is quality code no matter how it was arrived at. That should be the end of it
Comment by couchdb_ouchdb 1 day ago
Comment by optimalsolver 1 day ago
I think AI could help with that.
Comment by stevenhuang 1 day ago
https://simonw.substack.com/p/a-new-way-to-extract-detailed-...
Comment by empath75 1 day ago
Comment by waldrews 21 hours ago
In the old era, the combination 'it works' + 'it uses a sophisticated language' + 'it integrates with a complex codebase' implied that this was an intentional effort by someone who knew what they were doing, and therefore probably safe to commit.
We can no longer make that social assumption. So then, what can we rely on to signal 'this was thoroughly supervised and reviewed and understood and tested?' That's going to be hard and subjective.
Personal reputations and track records, pedigrees and brands, are going to become more important in the industry; and the meritocratic 'code talks no matter where you came from' ethos is at risk.
Comment by rikschennink 1 day ago
I find this distinction between media and text/code so interesting. To me it sounds like they think "text and code" are free from the controversy surrounding AI-generated media.
But judging from how AI companies grabbed all the art, images, videos, and audio they could get their hands on to train their LLMs it's naive to think that they didn't do the same with text and code.
Comment by embedding-shape 1 day ago
It really isn't. Don't you recall the "protests" against Microsoft starting to use repositories hosted at GitHub for training their own coding models? Lots of articles and sentiments everywhere at the time.
Seems to have died down though, probably because most developers use LLMs in some capacity today. Some just use them as a search engine replacement, others to compose snippets they copy-paste, and others don't type code at all anymore, just instructions, then review the result.
I'm guessing Ghostty feels like if they'd ban generated text/code, they'd block almost all potential contributors. Not sure I agree with that personally, but I'm guessing that's their perspective.
Comment by rikschennink 1 day ago
Comment by Applejinx 1 day ago
Comment by embedding-shape 1 day ago
Comment by Applejinx 20 hours ago
Comment by NiloCK 1 day ago
I've written a fair amount of open source code. On anything like a per-capita basis, I'm way above median in terms of what I've contributed (without consent) to the training of these tools. I'm also specifically "in the crosshairs" in terms of work loss from automation of software development.
I don't find it hard to convince myself that I have moral authority to think about the usage of gen AI for writing code.
The same is not true for digital art.
There, the contribution-without-consent, aka theft (I could frame it differently when I was the victim, but here I can't), is entirely from people other than me. The current and future damages won't be borne by me.
Comment by rikschennink 1 day ago
I've written _a lot_ of open source MIT licensed code, and I'm on the fence about that being part of the training data. I've published it as much for other people to use for learning purposes as I did for fun.
I also build and sell closed-source commercial JavaScript packages, and more than likely those have ended up in the training data as well. Obviously without consent. So this is why I feel strongly about making this separation between code and media; from my perspective it all has the same problem.
Comment by NiloCK 12 hours ago
Comment by Applejinx 1 day ago
Comment by tzs 1 day ago
What's the reason for this?
Media is the most likely thing I'd consider using AI for as part of a contribution to an open source project.
My code would be hand crafted by me. Any AI use would be similar to Google use: a way to search for examples and explanations if I'm unclear on something. Said examples and explanations would then be read, and after I understand what is going on I'd write my code.
Any documentation I contributed would also be hand written. However, if I wanted to include a diagram in that documentation I might give AI a try. It can't be worse than my zero talent attempts to make something in OmniGraffle or worse a photograph of my attempt to draw a nice diagram on paper.
I'd have expected this to be the least concerning use of AI.
Comment by alya 23 hours ago
Our evolving AI policy is in the same spirit as ghostty's, with more detail to address specific failure modes we've experienced: https://zulip.readthedocs.io/en/latest/contributing/contribu...
Comment by verdverm 21 hours ago
It's actually reasonable and the guidance you provide on how to best use Ai when contributing to Zulip is :chef's kiss:
truly, I'm going to copy yours as a thank you!
Comment by cranium 1 day ago
You'd need that kind of sharp rules to compete against unhinged (or drunken) AI drivers, and that's unfortunate. But at the same time, letting people DoS maintainers' time at essentially no cost is not an option either.
Comment by yomismoaqui 1 day ago
But now we have some kind of electronic brains that can also generate code, not at the level of the best human brains out there but good enough for most projects. And they are quicker and cheaper than humans, for sure.
So maybe in the end this will reduce the need for human contributions to opensource projects.
I just know that as a solo developer, AI coding agents enable me to tackle projects I didn't think about even starting before.
Comment by Sparkyte 1 day ago
Sanitization practices of AI are bad too.
Let me be clear, there's nothing wrong with AI in your workflow, just be an active participant in your code. Code is not meant to be one and done.
You will go through iteration after iteration, security fix after fix. This is how development is.
Comment by dw_arthur 1 day ago
Comment by twoodfin 22 hours ago
Comment by CrociDB 1 day ago
The fact that some people will straight up lie after submitting you a PR with lots of _that type_ of comment in the middle of the code is baffling!
Comment by nutjob2 1 day ago
Maybe a bit unlikely, but still an issue no one is really considering.
There has been a single ruling (I think) that AI generated code is uncopyrightable. There has been at least one affirmative fair use ruling. Both of these are from the lower courts. I'm still of the opinion that generative AI is not fair use because its clearly substitutive.
Comment by tpxl 1 day ago
However, at this point, the economic impact of trying to untangle this mess would be so large that the courts likely won't do anything about it. You and I don't get to infringe on copyright; Microsoft, Facebook and Google sure do, though.
Comment by Sytten 1 day ago
Comment by direwolf20 1 day ago
Comment by christoph-heiss 1 day ago
Comment by latexr 1 day ago
It’s illegal to commit fraud or murder, but if you do it and suffer no consequences (perhaps you even get pardoned by your president), does it matter that it was illegal? Laws are as strong as their enforcement.
For a less grim and more explicit example, Apple has a policy on the iOS App Store that apps may not use notifications to advertise. Yet it happens all the time, especially from big players like Uber. Apple themselves have done it too. So if you’re a bad actor and disrespectful to your users, does it matter that the rule exists?
Comment by direwolf20 23 hours ago
Licenses determine whether a copyright lawsuit is likely to happen. Most entities won't sue you if they expect to lose. But they are not the only deciding factor. Some entities never sue, which means you don't have to follow their licenses.
Sometimes they don't sue because they don't think they can prove you infringed copyright, even if you did. Even if AI is found to be copyright infringement in general, that won't mean every output is a copyright infringement of every input. Writing C code wouldn't be copyright infringement of Harry Potter. The entity suing you would still have to prove that you infringed.
Comment by consp 1 day ago
Comment by nutjob2 1 day ago
You may become a big enough target only when it's too late to undo it.
Comment by cess11 1 day ago
Comment by 101008 1 day ago
Comment by BoredomIsFun 20 hours ago
Comment by evilhackerdude 1 day ago
on a related note: i wish we could agree on rebranding the current LLM-driven never-gonna-AGI generation of "AI" to something else… now i'm thinking of when i read the in-game lore definition for VI (Virtual Intelligence) back when i played mass effect 1 ;)
Comment by vegabook 1 day ago
Comment by KolmogorovComp 1 day ago
Comment by layer8 1 day ago
Comment by KolmogorovComp 1 day ago
Comment by milancurcic 1 day ago
Comment by phanimahesh 1 day ago
Comment by milancurcic 1 day ago
Comment by njhnjhnjh 1 day ago
Comment by andy99 1 day ago
I can see this being a problem. I read a thread here a few weeks ago where someone was called out on submitting an AI slop article they wrote with all the usual tells. They finally admitted it but said something to the effect they reviewed it and stood behind every line.
The problem with AI writing is at least some people appear incapable of critically reviewing it. Writing something yourself eliminates this problem because it forces you to pick your words (there could be other problems of course).
So the AI-blind will still submit slop under the policy but believe themselves to have reviewed it and “stand behind” it.
Comment by epolanski 1 day ago
I work on a team of 5 great professionals, and there hasn't been a single instance since Copilot launched in 2022 where anybody, in any single modification, did not take full responsibility for what's been committed.
I know we all use it, to different extents, but the quality of what's produced hasn't dipped a single bit. I'd even argue it has improved, because LLMs can find answers more easily in complex codebases. We started putting `_vendor` directories with our main external dependencies as git subtrees, and it's super useful to find information about those directly in their source code and tests.
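For anyone unfamiliar with that setup, it looks roughly like this (a sketch; the dependency name and URL are made up):

```sh
# Vendor a dependency's source into the repo as a squashed subtree under _vendor/
git subtree add --prefix=_vendor/libfoo https://github.com/example/libfoo.git main --squash

# Later, pull in upstream changes the same way
git subtree pull --prefix=_vendor/libfoo https://github.com/example/libfoo.git main --squash
```

Since the dependency's sources and tests are ordinary files in the working tree, the LLM can grep and read them like any other part of the codebase.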
It's really that simple. If your teammates are producing slop, that's a human and professional problem, and these people should be fired. If you use the tool correctly, it can help you a lot in finding information and connecting dots.
Any person with a brain can clearly see the huge benefit of these tools, but also the great danger of not reviewing their output line by line and forfeiting the constant work of resolving design tensions.
Of course, open source is a different beast. The people committing may not be professionals and have no real stakes so they get little to lose by producing slop whereas maintainers are already stretched in their time and attention.
Comment by embedding-shape 1 day ago
Agree. Slop isn't "the tool is so easy to use I can't review the code I'm producing"; slop is the symptom of "I don't care how it's done, as long as it looks correct", and that's been a problem before LLMs too. The difference is how quickly you reach the "slop" state now, not that you have to gate your codebase and reject shit code.
As always, most problems in "software programming" aren't about software or programming, but about everything around it, including communication and workflows. If your workflow allows people to not be responsible for what they produce, and allows shitty code to get into production, then that's on you and your team, not on the tools that the individuals use.
Comment by altmanaltman 1 day ago
> Ghostty is written with plenty of AI assistance, and many maintainers embrace AI tools as a productive tool in their workflow. As a project, we welcome AI as a tool!
> Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem.
Basically don't write slop and if you want to contribute as an outsider, ensure your contribution actually is valid and works.
Comment by kanzure 1 day ago
Another idea is to simply promote the donation of AI credits instead of output tokens. It would be better to donate credits, not outputs, because people already working on the project would be better at prompting and steering AI outputs.
Comment by lagniappe 1 day ago
In an ideal world sure, but I've seen the entire gamut from amateurs making surprising work to experts whose prompt history looks like a comedy of errors and gotchas. There's some "skill" I can't quite put my finger on when it comes to the way you must speak to an LLM vs another dev. There's more monkey-paw involved in the LLM process, in the sense that you get what you want, but do you want what you'll get?
Comment by yellowapple 21 hours ago
Comment by hereme888 1 day ago
But I've never had the gall to let my AI agent do stuff on other people's projects without my direct oversight.
Comment by PlatoIsADisease 1 day ago
I might copy it for my company.
Comment by zzzeek 23 hours ago
Comment by cxrpx 1 day ago
Comment by gverrilla 1 day ago
Comment by lifetimerubyist 1 day ago
Surely they are incapable of producing slop because they are just so much smarter than everyone else so the rules shouldn't apply to them, surely.
Comment by antirez 1 day ago
Moreover this policy is strictly unenforceable because good AI use is indistinguishable from good manual coding. And sometimes even the reverse. I don't believe in coding policies where maintainers need to spot if AI is used or not. I believe in experienced maintainers that are able to tell if a change looks sensible or not.
Comment by sumtechguy 1 day ago
Comment by danw1979 1 day ago
There's some sensible, easily-judged-by-a-human rules in here. I like the spirit of it and it's well written (I assume by Mitchell, not Claude, given the brevity).
Comment by b3kart 1 day ago
Comment by antirez 22 hours ago
Comment by b3kart 42 minutes ago
Comment by yaront111 1 day ago
Comment by mefengl 1 day ago
Comment by postepowanieadm 1 day ago
Comment by kleiba 1 day ago
Comment by christoph-heiss 1 day ago
Comment by embedding-shape 1 day ago
https://raw.githubusercontent.com/ghostty-org/ghostty/refs/h...
Comment by flexagoon 1 day ago
Comment by embedding-shape 1 day ago
Actually, trying to load that previous platform on my phone makes it worse for readability, seems there is ~10% less width and not as efficient use of vertical space. Together with both being unformatted markdown, I think the raw GitHub URL seems to render better on mobile, at least small ones like my mini.
Comment by user34283 1 day ago
Comment by hmokiguess 1 day ago
EDIT: I'm getting downvoted with no feedback, which is fine I guess, so I am just going to share some more colour on my opinion in case I am being misunderstood
What I meant with analogous to phishing is that the intent behind the work is likely one of personal reward, and perhaps less of a desire to contribute. I was thinking they want their name on the contributors list, they want the credit, they want something, and they don't want to put effort into it.
Do they deserve to be ridiculed for doing that? Maybe. However, I like to think humans deserve kindness sometimes. It's normal to want something, and I agree that it is not okay to be selfish and lazy about it (ignoring contribution rules and whatnot), so at minimum I think respect applies.
Some people are ignorant, naive, and still maturing and growing. Bullying them may not help (though it could), and mockery is a form of aggression.
I think some genuine false positives will fall into that category and pay the price for those who are truly ill-intentioned.
Lastly, to ridicule is to care. To hate or attack requires caring about it. It requires effort, energy, and time from the maintainers. I think this just adds more waste.
Maybe those wordings are there just to 'scare' people away and maintainers won't bother engaging, though I find it is just compounding the amount of garbage at this point and nobody benefits from it.
Anyways, would appreciate some feedback from those of you that seem to think otherwise.
Thanks!
PS: What I meant with ghostty should "ghost" them was this: https://en.wikipedia.org/wiki/Shadow_banning
Comment by krzyk 23 hours ago
Are images somehow better? If one draws, is he better than the one that writes code? Why protect one and not the other? Or why protect any form at all?
Comment by KronisLV 1 day ago
Interesting requirement! Feels a bit like asking someone what IDE they used.
There shouldn't be that meaningful of a difference between the different tools/providers unless you'd consistently see a few underperform and would choose to ban those or something.
The other rules feel like they might discourage AI use due to more boilerplate needed (though I assume the people using AI might make the AI fill out some of it), though I can understand why a project might want to have those sorts of disclosures and control. That said, the rules themselves feel quite reasonable!