Redox OS has adopted a Certificate of Origin policy and a strict no-LLM policy
Posted by pjmlp 15 hours ago
Comments
Comment by ptnpzwqd 15 hours ago
It doesn't really matter what your stance on AI is, the problem is the increased review burden on OSS maintainers.
In the past, the code itself was a sort of proof of effort - you would need to invest some time and effort on your PRs, otherwise they would be easily dismissed at a glance. That is no longer the case, as LLMs can quickly generate PRs that might look superficially correct. Effort can still have been put into those PRs, but there is no way to tell without spending time reviewing in more detail.
Policies like this help decrease that review burden, by outright rejecting what can be identified as LLM-generated code at a glance. That is probably a fair bit today, but it might get harder over time, so I suspect eventually we will see a shift towards more trust-based models, where you cannot submit PRs if you haven't been approved in advance somehow.
Even if we assume LLMs would consistently generate good enough quality code, code submitted by someone untrusted would still need detailed review for many reasons - so even in that case it would likely be faster for the maintainers to just use the tools themselves, rather than reviewing someone else's use of the same tools.
Comment by stabbles 14 hours ago
* Prefer an issue over a PR (after iterating on the issue, either you or the maintainer can use it as a prompt)
* Only open a PR if the review effort is less than the implementation effort.
Whether the latter is feasible depends on the project, but in one of the projects I'm involved in it's fairly obvious: it's a package manager where the work is typically verifying dependencies and constraints; links to upstream commits etc are a great shortcut for reviewers.
Comment by zozbot234 13 hours ago
Comment by GorbachevyChase 9 hours ago
It’s fine to write things by hand, in the same way that there’s nothing wrong with making your own clothing with a sewing machine when you could have bought the same thing for a small fraction of the value of your time. Or in the same fashion, spending a whole weekend modeling and printing a part you could’ve bought for a few dollars. I think we need to be honest about differentiating between the hobby value of writing programs versus the utility value of programs. Redox is a hobby project, and, while it’s very cool, I’m not sure it has a strong utility proposition. Demanding that code be handwritten makes sense to me for the maintainer because the whole thing is just for fun anyway. There isn’t an urgent need to RIIR Linux. I would not apply this approach to projects where solving the problem is more important than the joy of writing the solution.
Comment by notpachet 9 hours ago
Is that really true? Like, if you took the time to plan it carefully, dot every i, cross every t?
The way I think of LLMs is as "median targeters" -- they reliably produce output at the centre of the bell curve from their training set. So if you're working in a language that you're unfamiliar with -- let's say I wanted to make a todo list in COBOL -- then LLMs can be a great help, because the median COBOL developer is better than I am. But for languages I'm actually versed in, the median is significantly worse than what I could produce.
So when I hear people say things like "the clanker produces better programs than me", what I hear is that you're worse than the median developer at producing programs by hand.
Comment by h3lp 7 hours ago
My go-to analogy is assembly language programming: it used to be an essential skill, but now is essentially delegated to compilers outside of some limited specialized cases. I think LLMs will be seen as the compiler technology of the next wave of computing.
Comment by Terr_ 3 hours ago
Consider calculators: Their consistency and adherence to requirements was necessary for adoption. Nobody would be using them if they gave unpredictable wrong answers, or if calculations involving 420 and 69 somehow kept yielding 5318008. (To be read upside-down, of course.)
Comment by h3lp 1 hour ago
I think LLMs will get better, as well.
Comment by aaronbrethorst 2 hours ago
Comment by torginus 8 hours ago
For example, just recently I updated a component in one of our modules. The work was fairly rote (in this project we are not allowed to use LLMs). While it was absolutely necessary to do the update here, it would have been beneficial to do it everywhere else too. I didn't do it in other places because I couldn't justify spending the effort.
There are two sides to this - with LLMs, housekeeping becomes easy and effortless, but you often err on the side of verbosity because it costs nothing to write.
But much less thought goes into every line of code, and I am often kinda amazed at how compact and rudimentary the (hand-written) logic is behind some of our stuff that I thought would be some sort of magnum opus.
When in fact the opposite should be the case - every piece of functionality you don't need right now will be trivial to generate in the future, so the principle of YAGNI applies even more.
Comment by notpachet 7 hours ago
Comment by zozbot234 3 hours ago
Comment by dnautics 4 hours ago
The clanker can produce better programs than me because it will just try shit that I would never have tried, and it can fail more times than I can in a given period of time. It has specific advantages over me.
Comment by skeeter2020 7 hours ago
Comment by hananova 8 hours ago
I’m sorry but this says more about you than about the models. It is certainly not the case for me!
Comment by xyzsparetimexyz 5 hours ago
Comment by zozbot234 8 hours ago
That's correct, because most of the cost of code is not the development but rather the subsequent maintenance, where AI can't help. Verbose, unchecked AI slop becomes a huge liability over time; you're vastly better off spending those few weekends rewriting it from scratch.
Comment by fireflash38 7 hours ago
It never picks a style; it'll alternate between exceptions and return codes.
It'll massively overcomplicate things. It'll reference things that straight up don't exist.
But boy is it brilliant at a fuzzy find and replace.
Comment by skeeter2020 7 hours ago
Comment by mech422 4 hours ago
Comment by heavyset_go 3 hours ago
No LLM can answer this question for you, it has no insight into how or why it outputted what it outputted. The reasons it gives might sound plausible, but they aren't real.
Comment by emperorxanu 10 hours ago
Despite that, you will make this argument when trying to use Copilot, the worst model in the entire industry, to do something.
If an AI can replace you at your job, you are not a very good programmer.
Comment by recursive 5 hours ago
I'll just wait and see.
Comment by calvinmorrison 5 hours ago
Me and millions of other local yokel programmers who work in regional cities at small shops, in house at businesses, etc are absolutely COOKED. No I can't leetcode, no I didn't go to MIT, no I don't know how O(n) is calculated when reading a function. I can scrap together a lot of useful business stuff but no, I am not a very good programmer.
Comment by ethbr1 5 hours ago
1. Confidently state "O(n)"
2. If they give you a look, say "O(1) with some tricks"
3. If they still give you a look, say "Just joking! O(n log n)"
Comment by calvinmorrison 4 hours ago
Comment by integralid 4 hours ago
This is really, honestly not hard. Spend a few minutes reading about this, or even better, ask an LLM to explain it to you and clear your misconceptions if regular blog posts don't do it for you. This is one of the concepts that sounds scarier than it is.
edit: To be clear there are tough academic cases where complexity is harder to compute, with weird functions in O(sqrt(n)) or O(log(log(n))) or worse, but most real-world code complexity is really easy to tell at a glance.
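To illustrate the "easy to tell at a glance" point: for everyday code, the complexity usually falls out of the loop structure alone. A minimal sketch (hypothetical functions, not from any real project):

```python
# Big-O of typical real-world code is visible from the loop nesting alone.

def find_max(items):
    # One pass over n items -> O(n)
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best

def has_duplicate_naive(items):
    # A loop nested inside a loop, both over n items -> O(n^2)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_fast(items):
    # One pass with O(1) average-case set lookups -> O(n)
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Count the nested loops over the input and you have the answer for the vast majority of code you'll ever review.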
Comment by calvinmorrison 42 minutes ago
Comment by skydhash 10 hours ago
Comment by cyanydeez 1 hour ago
Comment by hijnksforall956 11 hours ago
Comment by UqWBcuFx6NV4r 12 hours ago
Comment by jacquesm 12 hours ago
The open source world has already been ripped off by AI the last thing they need is for AI to pollute the pedigree of the codebase.
Comment by sillysaurusx 11 hours ago
Do you think your worldview is still a reasonable one under those conditions?
Comment by lkjdsklf 11 hours ago
Maybe one day it will be. People can reevaluate their stance then. Until that time, it's entirely reasonable to hold the position that you just don't.
This is especially true with how LLM generated code may affect licensing and other things. There's a lot of unknowns there and it's entirely reasonable to not want to risk your projects license over some contributions.
I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.
For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.
For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.
Comment by ndriscoll 9 hours ago
People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.
Comment by joquarky 1 hour ago
This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.
Comment by ipaddr 5 hours ago
Comment by notpachet 9 hours ago
This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.
Comment by devonkelley 7 hours ago
Comment by ndriscoll 6 hours ago
Comment by jacquesm 7 hours ago
Comment by TuxPowered 10 hours ago
That sounds very Usanian. In the meantime, transportation around me is done on foot, bicycle, bus, tram, metro, train and cars. There are good use cases for each method, including the car. If you really want to use an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are a horrible place to live.
Signed, a person who totally gets good results from coding with LLMs. Sometimes, maybe even often.
Comment by logicprog 11 hours ago
That seems like a win-win in a sense: let the agentic coders do their thing, and the artisanal coders do their thing, and we'll see who wins in the long run.
Comment by officeplant 10 hours ago
Saves the rest of us from having to tell you.
Comment by FridgeSeal 3 hours ago
Bold of you to assume that people won’t move (and their code along with it) to spaces where parasitic behaviour like this doesn’t occur, locking you out.
In addition to just being a straight-up rude, disrespectful and parasitic position to take, you’re effectively poisoning your own well.
Comment by logicprog 3 hours ago
Additionally, if they accept AI contributions, I try, when I have the time and energy, to make sure my PRs are high quality, and provide them. If they don't, then I'll go off and do my own thing, because that's literally what they asked me to do, and I wasn't going to contribute otherwise. I fail to see how that's rude or parasitic or disrespectful in any way except my assumption that the more featureful and polished forks might eventually win out.
Comment by hunterpayne 1 hour ago
Comment by logicprog 1 hour ago
Comment by skeeter2020 7 hours ago
this feels like the place where your approach breaks down. I have had very poor results trying to build a foundation that CAN be polished, or where features don't quickly feel like a jenga tower. I'm wondering if the success we've seen is because AI is building on top of existing foundations, or whether we're in the early days of "foundational" work? Is anyone aware of studies comparing longer-term structural aspects? Is it too early?
Comment by logicprog 3 hours ago
Comment by short_sells_poo 10 hours ago
And this is why eventually you are likely to run the artisanal coders who tend to do most of the true innovation out of the room.
Because by and large, agentic coders don't contribute, they make their own fork which nobody else is interested in because it is personalized to them and the code quality is questionable at best.
Eventually, I'm sure LLM code quality will catch up, but the ease with which an existing codebase can be forked and slightly tuned, instead of contributing to the original, is a double edged sword.
Comment by geoffmunn 8 hours ago
Isn't that literally how open-source works, and why there's so many Linux distros?
Code quality is a subjective term as well. I feel like all the dunking on AI coding is a defensive reaction - over time this will become an entirely acceptable concept.
Comment by short_sells_poo 8 hours ago
Vibe coders don't have to do any of this. They don't have to understand anything, they can just have their LLMs do some modifications that are completely opaque to the vibe coder.
Perhaps the long term steady state will be a goldilocks renaissance of open source where lots of new ideas and contributors spring up, made capable with AI assistance. But so far what I've seen is the opposite. These people just feed existing work into their LLMs, produce derivative works and never bother to engage with the original authors or community.
Comment by logicprog 3 hours ago
I spend time using my agent to understand existing codebases and their best practices better than I'd ever have had the time/energy to before, giving me a broader and more holistic view of whatever I'm changing, before I make a change.
Comment by Qwertious 2 hours ago
Comment by logicprog 56 minutes ago
I always find it odd that people say both that vibe coding has obvious and immediate negative consequences in terms of quality and at the same time that nobody could learn or be incentivized to produce better architecture and code quality from vibe coding when they would obviously face those consequences.
Comment by sanderjd 9 hours ago
Personally, I would not currently expect a fork of RedoxOS that is AI-implemented to become more popular than RedoxOS itself.
Comment by logicprog 3 hours ago
Comment by logicprog 3 hours ago
But if a project bans AI then yeah, they'll be run out of town because I won't bother trying to contribute.
Comment by sanderjd 9 hours ago
Start new projects using LLM tools, or maybe fork projects where that is acceptable. Don't force the volunteer maintainers of existing projects with existing workflows and cultures to review AI generated code. Create your own projects with workflows and cultures that are supportive of this, from the ground up.
I'm not suggesting this will come without downside, but it seems better to me than expecting maintainers to take on a new burden that they really didn't sign up for.
Comment by timando 1 hour ago
Comment by skeeter2020 7 hours ago
Comment by jacquesm 7 hours ago
Comment by bandrami 11 hours ago
Comment by gehdhffh 9 hours ago
There clearly should be, but that is not the world we live in.
Comment by olmo23 12 hours ago
Comment by duskdozer 11 hours ago
Comment by ChrisMarshallNY 13 hours ago
Prompts from issue text makes a lot of sense.
Comment by darkwater 12 hours ago
Comment by andrewchambers 14 hours ago
Comment by oytis 14 hours ago
Comment by swiftcoder 14 hours ago
Comment by ckolkey 12 hours ago
Comment by duskdozer 11 hours ago
>No big rewrites or anything crazy
I think those are the key points why they've been welcomed.
Comment by oytis 14 hours ago
And I would say, especially for operating systems, if the project gets any adoption, irregular contributions are pretty legit. E.g. when someone wants just one specific piece of hardware supported that no one else has or needs, without being employed by the vendor.
Comment by Muromec 13 hours ago
A potential long-time contributor is somebody who was already asking annoying questions in the IRC channel for a few months and helped with other stuff before shooting off the PR. If the PR is the first time you hear from a person -- that's pretty drive-by-ish.
Comment by DrewADesign 13 hours ago
I always provided well-documented PRs with a narrow scope and an obvious purpose.
Comment by MadameMinty 13 hours ago
Not to mention LLMs can be annoying, too. Demand this, and you'll only be inviting bots to pester devs on IRC.
Comment by swiftcoder 11 hours ago
Because if the bug is simple enough for an outsider with zero context to fix, there's a non-zero chance that the maintainers know about it and have a reason why it hasn't been addressed yet.
i.e. the bug fix may have backwards-compatibility implications for other users which you aren't aware of. Or the maintainers may be bandwidth-limited, and reviewing your PR is an additional drain on that bandwidth that takes away from fixing larger issues
Comment by CorrectHorseBat 10 hours ago
Comment by duskdozer 11 hours ago
Comment by junon 12 hours ago
Comment by CorrectHorseBat 13 hours ago
Comment by pmarreck 13 hours ago
Comment by swiftcoder 12 hours ago
Drive-by folks tend to blindly fix the issue they care about, without regard to how/whether it fits into the overall project direction
Comment by hunterpayne 1 hour ago
Comment by kpcyrd 11 hours ago
Comment by mcherm 3 hours ago
Comment by ptnpzwqd 14 hours ago
Comment by adjfasn47573 10 hours ago
Wait but under that assumption - LLMs being good enough - wouldn't the maintainer also be able to leverage LLMs to speed up the review?
Often feels to me like the current stance of arguments is missing something.
Comment by chownie 10 hours ago
This assumes that AI capable of writing passable code is also capable of a passable review. It also assumes that you save any time by trusting that review, if it missed something wrong then it's often actually more effort to go back and fix than it would've been to just read it yourself the first time.
Comment by connicpu 9 hours ago
Comment by ptnpzwqd 10 hours ago
So it becomes a bit theoretical, but I guess if we had a future where LLMs could consistently write perfect code, it would not be too far fetched to also think it could perfectly review code, true enough. But either way the maintainer would still spend some time ensuring a contribution aligns with their vision and so forth, and there would still be close to zero incentive to allow outside contributors in that scenario. No matter what, that scenario is a bit of a fairytale at this point.
Comment by Jnr 10 hours ago
I use Claude Code a lot, I generate a ton of changes, and I have to review it all because it makes stupid mistakes. And during reviews it misses stupid things. This review part is now the biggest bottleneck that can't yet be skipped.
And in an open source project, many people can generate a lot more code than a few people can review.
Comment by short_sells_poo 10 hours ago
Imagine someone vibe codes the code for a radiotherapy machine and it fries a patient (humans have made these errors). The developer won't be able to point to OpenAI and blame them for this, the developer is personally responsible for this (well, their employer is most likely). Ergo, in any setting where there is significant monetary or health risk at stake, humans have to review the code at least to show that they've done their due diligence.
I'm sure we are going to have some epic cases around someone messing up this way.
Comment by ketzu 14 hours ago
Wouldn't an agent run by a maintainer require the same scrutiny? An agent is imo "someone else" and not a trusted maintainer.
Comment by ptnpzwqd 14 hours ago
Comment by NitpickLawyer 14 hours ago
That being said, to outright ban a technology in 2026 on pure "vibes" is not something I'd say is reasonable. Others have already commented that it's likely unenforceable, but I'd also say it's unreasonable for the sake of utility. It leaves stuff on the table at a time when they really shouldn't. Things like documentation tracking, regression tracking, security, feature parity, etc. can all be enhanced with carefully orchestrated assistance. To simply ban this is ... a choice, I guess. But it's not reasonable, in my book. It's like saying we won't use ci/cd, because it's automated stuff, we're purely manual here.
I think a lot of projects will find ways to adapt. Create good guidelines, help the community to use the best tools for the best tasks, and use automation wherever it makes sense.
At the end of the day slop is slop. You can always refuse to even look at something if you don't like the presentation. Or if the code is a mess. Or if it doesn't follow conventions. Or if a PR is +203323 lines, and so on. But attaching "LLMs aka AI" to the reasoning only invites drama, if anything it makes the effort of distinguishing good content from good looking content even harder, and so on. In the long run it won't be viable. If there's a good way to optimise a piece of code, it won't matter where that optimisation came from, as long as it can be proved it's good.
tl;dr: focus on better verification instead of better identification; prove that a change is good instead of focusing on where it came from; test, learn and adapt. Dogma was never good.
Comment by ptnpzwqd 14 hours ago
Once outside contributions are rejected by default, the maintainers can of course choose whether or not to use LLMs.
I do think that it is a misconception that OSS software needs to be "viable". OSS maintainers can have many motivations to build something, and just shipping a product might not be at the top of that list at all, and they certainly don't have that obligation. Personally, I use OSS as a way to build and design software with a level of gold plating that is not possible in most work settings, for the feeling that _I_ built something, and the pure joy of coding - using LLMs to write code would work directly against those goals. Whether LLMs are essential in more competitive environments is also something that there are mixed opinions on, but in those cases being dogmatic is certainly more risky.
Comment by mapcars 14 hours ago
In my experience these things are very easily fixable by ai, I just ask it to follow the patterns found and conventions used in the code and it does that pretty well.
Comment by ZaoLahma 13 hours ago
Still haven't found a good way to keep it on course other than "Hey, remember that thing that you're required to do? Still do that please."
Comment by UqWBcuFx6NV4r 12 hours ago
Off the shelf agentic coding tools should be doing this for you.
Comment by lkjdsklf 11 hours ago
At my company, I use them all the time with the fancy models and everything. Preplanning does not solve the problem they're describing.
When claude is doing a complex task, it will regularly lose track of the rules (in either the .rules stuff or CLAUDE.md) and break conventions.
It follows it most of the time, but not all of the time.
Comment by bandrami 11 hours ago
Comment by rswail 11 hours ago
Licensing is dependent on IPR, primarily copyright.
It is very unclear whether the output of an AI tool is subject to copyright.
So if someone uses AI to refactor some code, that refactored code isn't considered a derivative work, which means that the refactored source is no longer covered by the copyright, or by the license that depends on it.
Comment by majewsky 8 hours ago
At least for those here under the jurisdiction of the US Copyright Office, the answer is rather clear. Copyright only applies to the part of a work that was contributed by a human.
See https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
For example, on page 3 there (PDF page 11): "In February 2022, the Copyright Office’s Review Board issued a final decision affirming the refusal to register a work claimed to be generated with no human involvement. [...] Since [a guidance on the matter] was issued, the Office has registered hundreds of works that incorporate AI-generated material, with the registration covering the human author’s contribution to the work."
(I'm not saying that to mean "therefore this is how it works everywhere". Indeed, I'm less familiar with my own country's jurisprudence here in Germany, but the US Copyright Office has been on my radar from reading tech news.)
Comment by mathw 14 hours ago
But you're right it's probably unenforceable. They will probably end up accepting PRs which were written with LLM assistance, but if they do it will be because it's well-written code that the contributor can explain in a way that doesn't sound to the maintainers like an LLM is answering their questions. And maybe at that point the community as a whole would have less to worry about - if we're still assuming that we're not setting ourselves up for horrible licence violation problems in the future when it turns out an LLM spat out something verbatim from a GPLed project.
Comment by ckolkey 12 hours ago
Comment by surgical_fire 13 hours ago
To outright accept LLM contributions would be as much "pure vibes" as banning it.
The thing is, those that maintain open source projects have to make a decision about where they want to spend their time. It's open source, they are not being paid for it, they should and will decide what is acceptable and what is not.
If you dislike it, you are free to fork it and make a "LLM's welcome" fork. If, as you imply, the LLM contributions are invaluable, your fork should eventually become the better choice.
Or you can complain to the void that open source maintainers don't want to deal with low effort vibe coded bullshit PRs.
Comment by ApolloFortyNine 11 hours ago
If you look back and think about what you're saying for a minute, it's that low effort PRs are bad.
Using an LLM to assist in development does not instantly make the whole work 'low effort'.
It's also unenforceable and will create AI witch hunts. Someone used an em-dash in a 500 line PR? Oh the horror that's a reject and ban from the project.
2000 line PR where the user launched multiple agents going over the PR for 'AI patterns'? Perfectly acceptable, no AI here.
Comment by surgical_fire 10 hours ago
Instantly? No, of course not.
I do use LLMs for development, and I am very careful with how I use them. I thoroughly review the code they generate (unless I am asking for throwaway scripts, because then I only care about the immediate output).
But I am not naive. We both know that a lot of people just vibe code the way through, results be damned.
I am not going to fault people devoting their free time on Open Source for not wanting to deal with bullshit. A blanket ban is perfectly acceptable.
Comment by UqWBcuFx6NV4r 12 hours ago
Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this? We all know. Nobody needs to be reminded. Nobody isn’t aware. You aren’t being clever. You aren’t adding anything to the conversation. You’re being snarky.
Comment by surgical_fire 12 hours ago
Not directly, but that's the implication.
I just did not pretend that was not the implication.
> always come back to this point is so…American
I am not American.
To be frank, this was the most insulting thing someone ever told me online. Congratulations. I feel insulted. You win this one.
> If you aren’t interested in discussing the merits of the decision, don’t bother joining the conversation.
I will join whatever conversation I want, and to my desires I addressed the merits of the discussion perfectly.
You are not the judge here, your opinion is as meaningless as mine.
> Most of all, I’m sick of the patronising “don’t forget that you can fork the project!” What’s the point of saying this?
That sounds like a "you" problem. You will be sick of it until the end of time, because that's the final right answer to any complaints of open source project governance.
> You aren’t adding anything to the conversation. You’re being snarky.
I disagree. In fact, I contributed more than you. I addressed arguments. You went on a whinging session about me.
Comment by keybored 12 hours ago
The response to a large enough amount of data is always vibes. You cannot analyze it all so you offload it to your intuition.
> It leaves stuff on the table in a time where they really shouldn't. Things like documentation tracking, regression tracking, security, feature parity, etc. can all be enhanced with carefully orchestrated assistance.
What’s stopping the maintainers themselves from doing just that? Nothing.
Producing it through their own pipeline means they don’t have to guess at the intentions of someone else.
Maintainers just doing it themselves is just the logical conclusion. Why go through the process of vetting the contribution of some random person who says that they’ve used AI “a little” to check if it was maybe really 90%, whether they have ulterior motives... just do it yourself.
Comment by devonkelley 7 hours ago
Comment by advancespace 12 hours ago
Comment by r_lee 11 hours ago
Comment by rob 10 hours ago
Dan said yesterday he was "restricting" Show HN to new accounts:
https://news.ycombinator.com/item?id=47300772
I guess he meant that literally and new accounts can still post regular submissions:
https://news.ycombinator.com/submitted?id=advancespace
That doesn't make too much sense to me, or he hasn't actually implemented this yet.
Comment by baq 11 hours ago
Comment by short_sells_poo 10 hours ago
It looks like we are going to have large numbers of people whose entire personality is projected via an AI rather than their own mind. Surely this will have a (likely deleterious) effect on people's emotional and social intelligence, no? People's language centers will atrophy because the AI does the heavy lifting of transforming their thoughts into text, and even worse, I'm not sure it'll be avoidable to have the AI's biases start to leak into the text that people like this generate.
Comment by baq 9 hours ago
I remember the first time I suspected someone of using an LLM to answer on HN, shortly after chatgpt's first release. In a few short years the tables turned, and it's increasingly difficult to read actual people's thoughts (and this has been predicted, and the predictions for the next few years are far worse).
Comment by rcruzeiro 11 hours ago
Comment by rob 10 hours ago
An em-dash might have been a good indicator when LLMs were first introduced, but that shouldn't be used as a reliable indicator now.
I'm more concerned that they keep fooling everybody on here to the point where people start questioning them and sticking up for them a lot of times.
Comment by petcat 11 hours ago
Also to, intentionally introduce random innoccuous punctuation and speling errors.
Comment by rcruzeiro 9 hours ago
Comment by AlecSchueler 10 hours ago
But everything up to that hyphen was pure slop.
Comment by amelius 11 hours ago
But the maintainers can use AI too, for their reviewing.
Comment by ptnpzwqd 11 hours ago
Comment by eyk19 14 hours ago
Maintainers could just accept feature requests, point their own agents at them using donated compute, and skip the whole review dance. You get code that actually matches the project's style and conventions, and nobody has to spend time cleaning up after a stranger's slightly-off take on how things should work.
Comment by ChadNauseam 14 hours ago
Comment by eyk19 14 hours ago
Comment by ChadNauseam 6 hours ago
Comment by eloisius 13 hours ago
Comment by oytis 14 hours ago
Comment by advancespace 12 hours ago
Comment by defmacr0 14 hours ago
Comment by eyk19 14 hours ago
Comment by layer8 13 hours ago
Secondly, it would seem that such contributions would add little value if the maintainers have to write up the detailed plans themselves, basically doing all the work to implement the change on their own.
Comment by oytis 13 hours ago
Comment by eatonphil 11 hours ago
On the other hand projects with AI assisted commits you can easily find include Linux, curl, io_uring, MariaDB, DuckDB, Elasticsearch, and so on. Of the 112 projects surveyed, 70 of them had AI assisted commits already.
https://theconsensus.dev/p/2026/03/02/source-available-proje...
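For anyone curious how surveys like that can find these commits: one low-tech way is to grep commit-message trailers, since some projects record assistance there. A hedged sketch (the trailer name "Assisted-by" and the repo are made up for illustration, not a claim about what Linux or curl specifically use):

```shell
# Build a throwaway repo with one trailer-bearing commit, then search for it.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=Dev -c user.email=dev@example.com commit --allow-empty \
    -m "feat: add retry logic" -m "Assisted-by: an-llm-tool"
git log --grep="Assisted-by:" --oneline   # lists the commit above
```

The same `git log --grep` pattern works against any clone, so a survey only needs to know which trailer conventions each project uses.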
Comment by flammafex 11 hours ago
Comment by aerhardt 11 hours ago
I find that pretty original. I think progress will march largely unimpeded. I would be wary of unhinged government intervention, but I wouldn’t begrudge private actors for not getting on with the ticket.
Comment by cptroot 7 hours ago
Comment by cmrdporcupine 7 hours ago
Opposing the machine does/did nothing.
Political organizing around unions, state regulations of the labour market, agitational political parties did (and can again).
Comment by filleduchaos 1 hour ago
Of course, there's definitely absolutely nothing about the state of the garment industry that's applicable to the current discussions about AI re: software quality and worker compensation. It's not as if this industry has not already seen its fair share of quality going to the dogs with only a small handful of people still knowing and caring enough to call it out while most others cheer for the Productivity™.
Comment by joelthelion 11 hours ago
Comment by ddtaylor 5 hours ago
Posted from my software made with AI assistance.
Comment by mfru 8 hours ago
Comment by Copyrightest 7 hours ago
Comment by lukaslalinsky 14 hours ago
Comment by konschubert 13 hours ago
* understanding the problem
* modelling a solution that is consistent with the existing modelling/architecture of the software and moves modelling and architecture in the right direction
* verifying that the implementation of the solution is not introducing accidental complexity
These are the things LLMs can't do well yet. That's where contributions will be most appreciated. Producing code won't be it, maintainers have their own LLM subscriptions.
Comment by lukaslalinsky 12 hours ago
Comment by lkjdsklf 11 hours ago
This is the assumption that has almost always failed and thus has led to the banning of AI code altogether in a lot of projects.
Comment by advancespace 12 hours ago
Comment by ZaoLahma 10 hours ago
Once you do understand the problem deep enough to know exactly what to ask for without ambiguity, the AI will produce the code that exactly solves your problem a heck of a lot quicker than you. And the time you don't spend on figuring out language syntax, you can instead spend on tweaking the code on a higher architecture level. Spend time where you, as a human, are better than the AI.
Comment by naasking 11 hours ago
Comment by mixedbit 13 hours ago
Comment by lukaslalinsky 12 hours ago
Comment by silverwind 12 hours ago
Comment by bandrami 11 hours ago
Comment by mfld 14 hours ago
Comment by konschubert 13 hours ago
Comment by zhangchen 12 hours ago
Comment by hijnksforall956 11 hours ago
We are inventing problems here. Fact is, an LLM writes better code than 95% of developers out there today. Yes, yes, this is Lake Wobegon, everyone here is in the 1%. But for the world at large, I bet code quality goes up.
Comment by duskdozer 10 hours ago
Comment by riffraff 11 hours ago
But I think different projects have different needs.
[0] https://github.com/mastodon/.github/blob/main/AI_POLICY.md
Comment by zigzag312 13 hours ago
This would probably be more useful to help you see what (and how) was written by LLMs. Not really to catch bad actors trying to hide LLM use.
Comment by chatmasta 12 hours ago
Comment by pjc50 13 hours ago
Of course, even then it's not reproducible and requires proprietary software!
Comment by rswail 11 hours ago
That breaks "copyleft" entirely.
Comment by dlillard0 14 hours ago
Comment by pydry 13 hours ago
This will cut off one of the genuine entry points to the industry where all you really needed was raw talent.
Comment by throwaway2037 15 hours ago
> any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed
Note the word "clearly". Weirdly, to me as a native English speaker, this term makes the policy less strict. What about submarine LLM submissions? I have no beef with Redox OS. I wish them well. This feels like the newest form of OSS virtue signaling.
Comment by layer8 13 hours ago
That would constitute an attempt to circumvent their policy, with the consequence of being banned from the project. In other words, it makes not clearly labeling any LLM use a bannable offense.
Comment by wang_li 3 hours ago
Comment by oytis 14 hours ago
Comment by BlackLotus89 14 hours ago
Comment by eesmith 14 hours ago
A submarine submission, if discovered, will result in a ban.
The phrase "virtue signalling" long ago became meaningless except as a way to indicate one's views in a culture war. 10 years ago David Shariatmadari wrote "The very act of accusing someone of virtue signalling is an act of virtue signalling in itself", https://www.theguardian.com/commentisfree/2016/jan/20/virtue... .
Comment by pjc50 13 hours ago
Comment by subjectsigma 13 hours ago
If you go by the literal definition in the article, it’s very clear what OP meant when he said the AI policy is virtue-signaling, and it has absolutely nothing to do with the culture war.
Comment by eesmith 10 hours ago
You have no doubt heard claims that AI "democratizes" software development. This is an argument that AI use for that case is virtuous.
You have no doubt heard claims that AI "decreases cognition ability." This is an argument that not using AI for software development is virtuous.
Which is correct depends strongly on your cultural views. If both are correct then the term has little or no weight.
From what I've seen, the term "virtue signalling" is almost always used by someone in camp A to disparage the public views of someone in camp B as being dishonest and ulterior to the actual hidden reason, which is to improve in-group social standing.
I therefore regard it as conspiracy theory couched as a sociological observation, unless strong evidence is given to the contrary. As a strawman exaggeration meant only to clarify my point, "all right-thinking people use AI to write code, so these are really just gatekeepers fighting to see who has the longest neckbeard."
Further, I agree with the observation at https://en.wikipedia.org/wiki/Virtue_signalling that "The concept of virtue signalling is most often used by those on the political right to denigrate the behaviour of those on the political left". I see that term as part of "culture war" framing, which makes it hard to use that term in other frames without careful clarification.
Comment by khalic 15 hours ago
Comment by BlackFly 14 hours ago
> This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.
Comment by hparadiz 14 hours ago
Comment by pm215 14 hours ago
Comment by hparadiz 14 hours ago
Comment by joaohaas 13 hours ago
Comment by pm215 14 hours ago
It's similar to how I can't implement a feature by copying-and-pasting the obvious code from some commercially licensed project. But somebody else could write basically the same thing independently without knowing about the proprietary-license code, and that would be fine.
Comment by pmarreck 13 hours ago
Like, this should be enshrined as the quintessential “they simply, obstinately, perilously, refused to get it” moment.
Soon, no one is going to care about anyone's bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
Comment by bigstrat2003 9 hours ago
Well that day doesn't appear to be coming any time soon. Even after years of supposed improvements, LLMs make mistakes so frequently that you can't trust anything they put out, which completely negates any time savings from not writing the code.
Comment by notpachet 8 hours ago
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
Comment by ralferoo 13 hours ago
There are plenty of good reasons why somebody might not want your PR, independent of how good or useful to you your change is.
Comment by pjc50 13 hours ago
If the submitter is prepared to explain the code and vouch for its quality then that might reasonably fall under "don't ask, don't tell".
However, if LLM output is either (a) uncopyrightable or (b) considered a derivative work of the source that was used to train the model, then you have a legal problem. And the legal system does care about invisible "bit colour".
Comment by hparadiz 12 hours ago
For one simple reason. Intention.
Here's some code for example: https://i.imgur.com/dp0QHBp.png
Both sides written by an LLM. Both sides written based on my explicit prompts explaining exactly how I want it to behave, then testing, retesting, and generally doing all the normal software eng due diligence necessary for basic QA. Sometimes the prompts are explicitly "change this variable name" and it ends up changing 2 lines of code no different from a find/replace.
Also I'm watching it reason in real time by running terminal commands to probe runtime data and extrapolate the right code. I've already seen it fix basic bugs because an RFC wasn't adhered to perfectly. Even leaving a nice comment explaining why we're ignoring the RFC in that one spot.
Eventually these arguments are kinda exhausting. People will use it to build stuff, and the stuff they build ends up retraining it, so we're already hundreds of generations deep on the retraining, and talking about licenses at this point feels absurd to me.
Comment by rswail 11 hours ago
It doesn't matter if the "change this variable name" instruction ends up with the same result as a human operator using a text editor.
There is a big difference between "change this variable name" and "refactor this code base to extract a singleton".
Comment by hparadiz 10 hours ago
Comment by pmarreck 13 hours ago
CLEARLY, a lot of developers are not reasonable
Comment by hrmtst93837 4 hours ago
Comment by repelsteeltje 14 hours ago
Once identity is guaranteed, privileges basically come down to reputation — which in this case is a binary "you're okay until we detect content that is clearly labelled as LLM-generated".
[Added]
Note that identity (especially avoiding duplicate identity) is not easily solved.
Comment by khalic 10 hours ago
Comment by ptnpzwqd 15 hours ago
Comment by bonesss 14 hours ago
This heuristic lets the project flag problematic slop with minimal investment, avoiding the cost of reviewing low-quality, low-effort, high-volume contributions, which should be near ideal.
Much like banning pornography on an artistic photo site, the perfect application on the borderline of the rule is far less important than filtering power “I know it when I see it” provides to the standard case. Plus, smut peddlers aren’t likely to set an OpenClaw bot-agent swarm loose arguing the point with you for days then posting blogs and medium articles attacking you personally for “discrimination”.
Comment by buzzardbait 15 hours ago
Comment by Ekaros 13 hours ago
Comment by scuff3d 6 hours ago
Comment by anonnon 14 hours ago
Just require that the CLA/Certificate of Origin statement be printed out, signed, and mailed with an envelope and stamp, where, besides attesting that they appropriately license their contributions ((A)GPL, BSD, MIT, or whatever) and have the authority to do so, contributors also attest that they haven't used any LLMs for their contributions. This will strongly deter direct LLM usage. Indirect usage, where people whip up LLM-generated PoCs that they then rewrite, will still probably go on, and go on without detection, but that's less objectionable morally (and legally) than trying to directly commit LLM code.
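For context on the certificate-of-origin half: the usual lightweight version of this attestation is git's DCO sign-off trailer, which a signed-and-mailed statement would merely extend. A minimal sketch with illustrative names (Redox's exact attestation wording may differ):

```shell
# Create a throwaway repo and make a signed-off commit; -s appends the
# "Signed-off-by" trailer from the configured identity.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name="Jane Dev" -c user.email="jane@example.com" \
    commit -s --allow-empty -m "fix: correct bounds check"
git log -1 --format=%B   # ends with: Signed-off-by: Jane Dev <jane@example.com>
```

Maintainers can then reject any PR whose commits lack the trailer, which is exactly how DCO bots on other projects already work.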
As an aside, I've noticed a huge drop off in license literacy amongst developers, as well as respect for the license choices of other developers/projects. I can't tell if LLMs caused this, but there's a noticeable difference from the way things were 10 years ago.
Comment by tentacleuno 14 hours ago
What do you mean by this? I always assumed this was the case anyway; MIT is, if I'm not mistaken, one of the most widely used licenses. I typically had a "fuck it" attitude when it came to the license, and I assume quite a lot of other people shared that sentiment. The code is the fun bit.
Comment by anonnon 14 hours ago
No, it wasn't that way in the 2000s, e.g., on platforms like SourceForge, where OSS devs would go out of their way to learn the terms and conditions of the popular licenses and made sure to respect each other's license choices, and usually defaulted to GPL (or LGPL), unless there was a compelling reason not to: https://web.archive.org/web/20160326002305/https://redmonk.c...
Now the corporate-backed "MIT-EVERYTHING" mindvirus has ruined all of that: https://opensource.org/blog/top-open-source-licenses-in-2025
Comment by khalic 10 hours ago
Not being able to publish anything without sifting through all the libs licences? Remembering legalese, jurisprudence, edge cases, on top of everything else?
MIT became ubiquitous because it gives us peace of mind
Comment by anonnon 1 hour ago
Yes, as do, probably, most people who remember it.
Comment by duskdozer 12 hours ago
Comment by khalic 10 hours ago
Comment by anonnon 1 hour ago
> it's like a high tech pinky swear
So is you attesting you didn't contribute any GPL'd code (which, incidentally, you arguably can't do if you're using LLMs trained on GPL'd code), and no one seemed to have issues with that, yet when it's extended to LLMs, the concern trolling starts in earnest. It's also legally binding.
Comment by yla92 13 hours ago
Comment by pmarreck 13 hours ago
Comment by 8organicbits 13 hours ago
Comment by logicprog 11 hours ago
Comment by ptx 11 hours ago
Comment by TacticalCoder 11 hours ago
It makes lots of sense to me.
Comment by conradludgate 9 hours ago
Comment by il-b 10 hours ago
Or a human will provide the fix?
Comment by laweijfmvo 8 hours ago
Comment by justin66 8 hours ago
Comment by lpcvoid 13 hours ago
I'd gladly take a bug report, sure, but then I'd fix the issues myself. I'd never allow LLM code to be merged.
Comment by CoastalCoder 12 hours ago
Comment by lpcvoid 11 hours ago
Generating slop using LLMs takes seconds, has no human element, no work goes into it. Mistakes made by an LLM are excused without sincerity, without real learning, without consequence. I hate everything about that.
Comment by hijnksforall956 11 hours ago
Comment by foresterre 10 hours ago
For the parent there's immaterial value in knowing that it's written by a human. From what I read in your comment, you see code more as a means to an end. I think I understand where the parent is coming from. Writing code myself, and accomplishing what I set out to build, sometimes feels like a form of art, and knowing that I built it gives me a sense of accomplishment. And gives me energy. Writing code solely as a means to an end, or letting it be generated by some model, doesn't give that same energy.
This thinking has nothing to do with not caring about being a good teammate or the business. I've no idea why you put that on the same pile.
Comment by hijnksforall956 8 hours ago
Code is a means to an end.
Comment by CoastalCoder 10 hours ago
People will be more likely to engage with your main assertion if you leave out the insults.
Comment by hijnksforall956 9 hours ago
Comment by CoastalCoder 9 hours ago
I noticed your account was new, so I thought you might appreciate a likely explanation for why your post was being downvoted.
Comment by Meneth 12 hours ago
Comment by TiredOfLife 10 hours ago
Comment by mekael 8 hours ago
The underlying data that said matrices compute upon, can be racist though.
I will admit that I may be missing some context though.
Comment by orf 13 hours ago
Comment by jacquesm 12 hours ago
Why on earth would you force stuff on a party that has said they don't want that?
Comment by orf 12 hours ago
If I want to use an auto-complete then I can, and I will? Restricting that is as regressive as a project trying to specify that I write code from a specific country or… standing on my head.
Sure, if they want me to add a “I’m writing this standing on my head” message in the PR then I will… but I’m not.
Comment by jacquesm 12 hours ago
Restricting this is their right, and it is not for you to attempt to overrule that right. Besides the fact that you do not oversee the consequences it also makes you an asshole.
They're not asking for you to write standing on your head, they are asking for you to author your contributions yourself.
Comment by orf 11 hours ago
Except they don’t, won’t and can’t control that: the very request is insulting.
I’ll make a change any way I choose, upright, sideways, using AI. My choice. Not theirs.
Their choice is to accept it or reject it based purely on the change itself, because that’s all there is.
Comment by duskdozer 11 hours ago
Comment by orf 11 hours ago
Comment by duskdozer 10 hours ago
Comment by orf 10 hours ago
But if they can’t enforce their boundaries, because they can’t tell the difference between AI code and non-AI code without being told, then the boundaries they made up are unenforceable nonsense.
About as nonsense and enforceable as asking me to code upside down.
Comment by jacquesm 7 hours ago
Boundaries - of all kinds - are not unenforceable nonsense, they are rights that you willingly and knowingly violate.
Comment by orf 7 hours ago
Markdown files - of all kinds - are totally not unenforceable nonsense, they are rights of a real legal entity (the git repository) that you willingly and knowing violate every time you don’t comment in all caps.
And yes, before you ask, this discussion is definitely one in which it is appropriate to bring up rape and pedophilia.
Comment by duskdozer 10 hours ago
Comment by orf 10 hours ago
- people can just say things
- when people say things, you don’t have to listen to them
- not listening to them doesn’t make you superior or more powerful than them
We can practice: I’d like you to always comment in uppercase letters from now on please. It’s my policy.
Comment by hijnksforall956 10 hours ago
If the maintainers don't want to accept it, fine. Someone will eventually fork and advance and we move on. The Uncles can continue to play in their no AI playground, and show each other how nice their code is.
The world is moving on from the "AI is bad" crowd.
Comment by justin66 8 hours ago
Comment by flykespice 6 hours ago
You're such a law-abiding citizen, aren't you?
Tell me how many times did you lie on your tax returns?
Or how many times you submitted PR with code you don't own to your peers?
Comment by orf 4 hours ago
Comment by voidUpdate 12 hours ago
Comment by KingMob 11 hours ago
Comment by dakolli 13 hours ago
Just like when people started losing their ability to navigate without a GPS/Maps app, you will lose your ability to write solid code, solve problems, hell maybe even read well.
I want my brain to be strong in old age, and I actually love to write code unlike 99% in software apparently (like why did you people even start doing this career.. makes no sense to me).
I'm going to keep writing the code myself! Stop paying billionaires for their thinking machines, it's not going to work out well for you.
Comment by electrosphere 12 hours ago
I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched because Engineers are still responsible for the output and they are needed to interface between technical teams, UX, business people etc
Comment by jacquesm 12 hours ago
> I used a coding agent for the majority of my current project and I still got the "build stuff" itch scratched because Engineers are still responsible for the output and they are needed to interface between technical teams, UX, business people etc
Then you are the opposite of a carpenter or a craftsman, no matter what you think about it yourself.
Comment by mwigdahl 8 hours ago
Comment by jplusequalt 2 hours ago
Comment by jacquesm 8 hours ago
Comment by wccrawford 12 hours ago
And yet, I find a coding agent makes it even more fun. I spend less time working on the boilerplate crap that I hate, and a lot less time searching Google and trying to make sense of a dozen half-arsed StackOverflow posts that don't quite answer my question.
I just went through that yesterday with Unity. I did all the leg work to figure out why something didn't work like I expected. Even Google's search engine agent wasn't answering the question. It was a terrible, energy-draining experience that I don't miss at all. I did figure it out in the end, though.
Prior to yesterday, I was thinking that using AIs to do that was making it harder for me to learn things because it was so easy. But comparing what I remember from yesterday to other things I did with the AI, I don't really think that. The AI lets me do it repeatedly, quickly, and I learn by the repetition, and a lot of it. The slow method has just 1 instance, and it takes forever.
This is certainly an exciting time for coders, no matter why they're in the game.
Comment by slibhb 10 hours ago
Sure but once you learn long multiplication/division algorithms by hand there's not much point in using them. By high school everyone is using a calculator.
> Just like when people started losing their ability to navigate without a GPS/Maps app
Are you suggesting people shouldn't use Google Maps? Seems kind of nuts. Similar to calculators, the lesson here is that progress works by obviating the need to think about some thing. Paper maps and compasses work the same way, they render some older skill obsolete. The written word made memorization infinitely less valuable (and writing had its critics).
I don't think "LLMs making us dumber" is a real concern. Yes, people will lose some skills. Before calculators, adults were probably way better at doing arithmetic. But this isn't something worth prioritizing.
However, it is worth teaching people to code by hand, just like we still teach arithmetic and times tables. But ultimately, once we've learned these things, we're going to use tools that supersede them. There's nothing new or scary about this, and it will be a significant net win.
Comment by jplusequalt 2 hours ago
But it's a problem of scale.
Calculators are very specific tools. If you are trying to run a computation of some arithmetic/algebraic expression, then they are a great tool. But they're not going to get you far if you need help understanding how to file your taxes.
LLMs are multi-faceted tools. They can help with math, doing taxes, coding, doing research, writing essays, summarizing text, etc. Basically anything that can be condensed into an embedding that the LLM can work with is fair game.
If you're willing to accept that using a tool slowly erodes the skill that tool was made for, then you should also accept that you will see an erosion of MANY skills you currently have.
So the question is whether this is all worth it? Is an increase in productivity worth eroding a strong foundation of general purpose knowledge? Perhaps even the ability to learn in the first place?
I would argue no a million times over, but I'm starting to think that I'm an outlier.
Comment by iSnow 8 hours ago
I am old now, and the unfortunate truth is that my brain isn't working as fast or as precise as when I was young. LLMs help me maintain some of my coding abilities.
It's like having a non-judgemental co-coder sitting at your side, you can discuss about the code you wrote and it will point out things you didn't think of.
Or I can tap into the immense knowledge about APIs that LLMs have to keep up with change. I wouldn't be able to read that much documentation anymore and keep all of it in my head.
Comment by laweijfmvo 8 hours ago
Comment by duskdozer 11 hours ago
Comment by oerdier 11 hours ago
Comment by munk-a 3 hours ago
This is made more complex by the fact that the most senior members of organizations tend to be irrationally AI-positive, so it's difficult for the hiring layer to push back on a candidate for over-reliance on tools, even if they fail to demonstrate core skills that those tools can't supplement. The discussion has become too political[1] in most organizations, and that's going to be difficult to overcome.
1. In the classic intra-organizational meaning of politics - not the modern national meaning.
Comment by 0xbadcafebee 10 hours ago
Comment by okanat 7 hours ago
Quite a bit of the Linux userspace is already permissively licensed. Nobody has built a full-fledged open source alternative yet. Because it is hard to build an ecosystem, it is hard to test thousands of different pieces of hardware. None of that would happen without well-paid engineers contributing.
Comment by hparadiz 14 hours ago
Comment by akimbostrawman 14 hours ago
Comment by flykespice 5 hours ago
You pay taxes to a government that uses them to wage wars bombing children's schools. Will you now live in a hut in the forest because you don't consent to it?
Comment by stuaxo 15 hours ago
For instance a GPL LLM trained only on GPL code where the source data is all known, and the output is all GPL.
It could be done with a distributed effort.
Comment by ptnpzwqd 14 hours ago
Comment by nottorp 14 hours ago
Comment by rswail 11 hours ago
So "copyleft" doesn't work on any of the output. Therefore no GPL applies.
Comment by andy12_ 13 hours ago
Comment by duskdozer 11 hours ago
>Many of the most common free-software licenses, especially the permissive licenses, such as the original MIT/X license, BSD licenses (in the three-clause and two-clause forms, though not the original four-clause form), MPL 2.0, and LGPL, are GPL-compatible. That is, their code can be combined with a program under the GPL without conflict, and the new combination would have the GPL applied to the whole (but the other license would not so apply). https://en.wikipedia.org/wiki/License_compatibility#GPL_comp...
A model that contains no GPL code makes sense so that people using non-GPL licenses don't violate it.
Comment by duskdozer 14 hours ago
Comment by flykespice 5 hours ago
Comment by tkel 15 hours ago
Comment by butILoveLife 11 hours ago
It seems well intentioned, but lots of bad ideas are like this.
I was told by my customer they didn't need my help because Claude Code did the program they wanted me to quote. I sheepishly said, 'I can send an intern to work in-house if you don't want to spend internal resources on it.'
I can't really imagine what kind of code will be done by hand anymore... Even military level stuff can run large local models.
Comment by decidu0us9034 2 hours ago
Comment by hananova 4 hours ago
Comment by butILoveLife 43 minutes ago
Comment by cardanome 13 hours ago
Are they really that delusional to think that their AI slop has any value to the project?
Do they think acting like a complete prick and increasing the burden for the maintainers will get them a job offer?
I guess interacting with a sycophantic LLM for hours truly rots the brain.
To spell it out: No, your AI generated code has zero value. Actually less than that because generating it helped destroy the environment.
If the problem could be solved by using an LLM and the maintainers wanted to, they could prompt it themselves and get much better results than you do because they actually know the code. And no AI will not help you "get into open source". You don't learn shit from spamming open source projects.
Comment by 999900000999 13 hours ago
Before this it was junk like spacing changes
Comment by Anonyneko 13 hours ago
Sometimes, I'd guess, it's also because your Github profile has some kind of an advertisement.
Comment by Ekaros 13 hours ago
I think some people also like the feeling of being helpful. And they do not understand reality of LLM outputs. See comments posting AI generated summaries or answers to question. With no verification or critical checking themselves.
Comment by butILoveLife 11 hours ago
At some point your manager is going to force you to AI code. At best you can try to find some healthcare or finance company that is too cheap to buy a machine that can locally run 400B models.
Comment by hananova 7 hours ago
Comment by aerhardt 10 hours ago
Comment by inder1 7 hours ago
Comment by sbcorvus 9 hours ago
Comment by steve-chavez 10 hours ago
[1]: https://github.com/PostgREST/postgrest/blob/main/CONTRIBUTIN...
Comment by ajstars 9 hours ago
Comment by hananova 8 hours ago
Comment by singularity2001 8 hours ago
Comment by jacquesm 12 hours ago
Comment by ethin 2 hours ago
Comment by sheepscreek 9 hours ago
Comment by The-Ludwig 15 hours ago
Comment by goku12 14 hours ago
It sounds serious and strict, but it applies to content that's 'clearly labelled as LLM-generated'. So what about content that isn't as clear? I don't know what to make of it.
My guess is that the serious tone is to avoid any possible legal issues that may arise from the inadvertent inclusion of AI-generated code. But the general motivation might be to avoid wasting the maintainers' time reviewing confusing and sloppy submissions made through lazy use of AI (as opposed to finely guided and well-reviewed AI code).
Comment by hananova 7 hours ago
That’s the point.
Comment by BirAdam 12 hours ago
Comment by aleph_minus_one 14 hours ago
"any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed"
For example:
- What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?
- What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
Comment by VorpalWay 13 hours ago
Unfortunately, when I have seen this in the context of the Rust project, the result has still been the verbose word salad typical of chat-style LLMs. It is better to use a dedicated translation tool, and post the original along with the translation.
> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")?
Very good question. I myself consider this sort of AI usage benign (unlike agent-style usage), and it's the only style of AI I use myself (since I have RSI, it helps to have to type less). You could turn the feature off for just this project, though.
> Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
I don't think that follows, but what features you have active in the current project would definitely be affected. From what I have seen all IDEs allow turning AI features on and off as needed.
Comment by miningape 12 hours ago
this so many times - it's so incredibly handy to have the original message from the author, for one I may speak or understand parts of that language and so have an easier time understanding the intent of the translated text. For another I can cut and translate specific parts using whatever tools I want, again giving me more context about what is trying to be communicated.
Comment by cpburns2009 11 hours ago
How can you be sure the AI translation is accurately conveying what was written by the speaker? The reality is you can't accommodate every hypothetical scenario.
> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?
Nobody is talking about advanced autocomplete when they want to ban AI code. It's prompt-generated code.
Comment by duskdozer 11 hours ago
Firefox has direct translation built in. One can self-host libretranslate. There are many free sites to paste in language input and get a direct translation sans filler and AI "interpretation". Just write in your native language or your imperfect English.
Comment by aleph_minus_one 11 hours ago
If the native language is very different from English, this problem gets much worse.
This is a problem that LLMs claim to partially mitigate (and one reason why non-native speakers could be tempted to use them), but that hardly any classical translation tool can.
Comment by duskdozer 10 hours ago
Comment by hypeatei 13 hours ago
I've seen this excuse before, but in practice the output they copy/paste is extremely verbose and long-winded (with the bullet-point and heading soup, etc.)
Surely non-native speakers can see that structure and tell the LLM to match their natural style instead? No one wants to read a massive wall of text.
Comment by gtirloni 1 hour ago
Comment by hagen8 14 hours ago
Comment by algoth1 14 hours ago
Comment by nananana9 13 hours ago
if (foo == true) { // checking foo is true (rocketship emoji)
20 lines of code;
} else {
the same 20 lines of code with one boolean changed in the middle;
}
Description: (markdown header) Summary (nerd emoji):
This PR fixes a non-existent issue by adding an **if statement** that checks whether a variable is true. This has the following benefits:
- Improves performance (rocketship emoji)
- Increases code maintainability (rising bar chart emoji)
- Helps prevent future bugs (detective emoji)
(markdown header) Conclusion: This PR does not just improve performance, it fundamentally reshapes how we approach performance considerations. This is not just design --- it's architecture. Simple, succinct, yet powerful.
Comment by The-Ludwig 13 hours ago
Comment by cpburns2009 11 hours ago
## Summary
...
## Problem
...
## Solution
...
## Verification
...
They're too methodical, and they duplicate code when they're longer than a single-line fix. I've never received a pull request formatted like that from a human.
Comment by ok123456 4 hours ago
Comment by dana321 13 hours ago
Comment by xmodem 11 hours ago
Comment by scotty79 13 hours ago
Comment by decidu0us9034 1 hour ago
Comment by dgacmu 10 hours ago
Comment by scotty79 10 hours ago
Comment by dgacmu 10 hours ago
I assume that most of these purely llm generated unwanted contributions will just end up in dead end forks, because my impression is that a lot of them are just being generated as GitHub activity fodder. But the stuff that really solves a problem for a person - eh, good. Problem solved is problem solved. (Unless it creates new problems)
Comment by estsauver 15 hours ago
I think part of the battle is actually just getting people to identify which LLM made it, to understand whether someone's contribution is good or not. A javascript project with contributions from Opus 4.6 will probably be pretty good, but if someone is using Mistral small via the chat app, it's probably just a waste of time.
Comment by VLM 5 hours ago
It takes some human effort to set up a slop generator. Have the slop generator make 100 buckets of slop; humans will work hard accepting or rejecting the buckets, and somewhat fewer than 100 buckets will be approved. The payoff for the owner of the slop generator is that they now have "verified FOSS developer contribution" on their resume, which translates directly into job offers and salary. It's a profitable grift, profitable enough that the remaining humans are being flooded out. The ban makes successful submission to Redox even MORE valuable than before. They can expect infinite floods of PRs now that a successful PR "proves" that Redox thinks the human owner of the slop generator did the work and should therefore be offered more jobs, paid more, etc. Technically, they're hiring and paying based on the ability to set up a slop generator, which is not zero value, but not as valuable as being an Official Redox Contributor.
In the long run, this eliminates FOSS competency from the hiring process. Currently, FOSS competency and coding experience indicate a certain amount, however minimal, of human skill and ability to work with others. Soon, it'll mean the person claiming to be a contributor has no problem violating orders and rules, such as the ones forbidding AI submissions, and it'll be a strong signal that they actively work to subvert teams for their own financial reward and benefit. Which might actually be a hiring bullet point for corporate management in more dysfunctional orgs, but probably won't help individual contributors get hired.
Comment by hananova 4 hours ago
Comment by oliver_dr 5 hours ago
Comment by emperorxanu 15 hours ago
Comment by flanked-evergl 13 hours ago
Comment by api 13 hours ago
Time consuming work can be done quickly at a fraction of the cost or even almost free with open weights LLMs.
Comment by menaerus 13 hours ago
[1] https://www.datadoghq.com/blog/ai/harness-first-agents/
[2] https://www.datadoghq.com/blog/ai/fully-autonomous-optimizat...
[3] https://www.datadoghq.com/blog/engineering/self-optimizing-s...
P.S. I know this will be downvoted to death but I'll leave it here anyway for folks who want to keep their eyes wide open.
Comment by duskdozer 11 hours ago
Comment by stingraycharles 13 hours ago
“Our approach is harness-first engineering: instead of reading every line of agent-generated code, invest in automated checks that can tell us with high confidence, in seconds, whether the code is correct. “
That's literally what the whole industry has been doing for decades, and spoiler: you still need to review code! It just gives you confidence that you didn't miss anything.
Also, without understanding the code, it’s difficult to see its failure modes, and how it should be tested accordingly.
Comment by menaerus 13 hours ago
Comment by grey-area 13 hours ago
Comment by menaerus 11 hours ago
Comment by stingraycharles 12 hours ago
Comment by menaerus 11 hours ago
Comment by subjectsigma 13 hours ago
No, they’re pushing back against a world full of even more mass surveillance, corporate oligarchy, mass unemployment, wanton spam, and global warming. It is absolutely in your personal best interest to hate AI.
Comment by baq 14 hours ago
IOW I think this stance is ethically good, but technically irresponsible.
Comment by ptnpzwqd 14 hours ago
Comment by holyra 14 hours ago
Comment by qsera 12 hours ago
I think one way to frame the use of LLMs is to compare a dynamically typed language with a statically typed functional one. Functional programming languages with static typing make it harder to implement a solution without understanding the problem and developing an intuition for it.
Programming languages with dynamic typing, by contrast, will let you create a (partial) solution with a lesser understanding of the problem.
LLMs make it even easier to implement even more partial solutions while understanding even less of the problem (actually, zero understanding is required).
If I am a client who wants reliable software, then I want a competent programmer to
1. actually understand the problem,
2. and then come up with a solution.
The first part will be really important for me. Using LLM means that I cannot count on 1 being done, so I would not want the contractor to use LLMs.
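The typing contrast above can be sketched in Rust (the enum and names here are hypothetical, purely for illustration): with an exhaustive match, the compiler refuses a "partial solution" outright, so you must understand every case before the program builds at all.

```rust
// Hypothetical example: a state a programmer must fully understand.
enum PaymentState {
    Pending,
    Settled,
    Refunded,
}

fn describe(state: &PaymentState) -> &'static str {
    // Deleting any arm below is a compile error ("non-exhaustive
    // patterns"), unlike a dynamically typed language, where a
    // forgotten case only surfaces at runtime -- if ever.
    match state {
        PaymentState::Pending => "awaiting settlement",
        PaymentState::Settled => "done",
        PaymentState::Refunded => "money returned",
    }
}

fn main() {
    assert_eq!(describe(&PaymentState::Pending), "awaiting settlement");
    assert_eq!(describe(&PaymentState::Refunded), "money returned");
}
```

In the dynamic-typing (or LLM) scenario, the equivalent of a missing match arm simply sits there undetected; the static checker is what converts "partial understanding" into an immediate, visible failure.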
Comment by dev_l1x_be 12 hours ago
Comment by lifis 14 hours ago
What makes sense is that, of course, any LLM-generated code must be reviewed by a good programmer, must be correct and well written, and the AI usage must be precisely disclosed.
What they should ban is people posting AI-generated code without mentioning it or replying "I don't know, the AI did it like that" to questions.
Comment by ptnpzwqd 14 hours ago
Over time this might not be enough, though, so I suspect we will see default deny policies popping up soon enough.
Comment by duskdozer 14 hours ago
Why not?
Comment by lifis 14 hours ago
Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS, which are well represented in LLM training data) seems difficult nowadays, especially if they aren't paying them.
Comment by lpcvoid 13 hours ago
Humans have been doing this for the better part of five decades now. Don't assume others rely on LLMs as much as you do.
>Not to mention that even finding good developers willing to develop without AI (a significant handicap, even more so for coding things like an OS that are well represented in LLM training) seems difficult nowadays, especially if they aren't paying them.
I highly doubt that. In fact, I'd take a significant pay cut to move to a company that doesn't use LLMs, if I were forced to use them in my current job.
Comment by holyra 14 hours ago
Comment by vladms 13 hours ago
Comment by usrbinbash 14 hours ago
You know what else takes "a massive amount of developer work"?
"any LLM-generated code must be reviewed by a good programmer"
And this is the crux of the matter with using LLMs to generate code for everything but really simple greenfield projects: They don't really speed things up, because everything they produce HAS TO be verified by someone, and that someone HAS TO have the necessary skill to write such code themselves.
LLMs save time on the typing part of programming. Incidentally, that part is the least time-consuming.
Comment by lifis 14 hours ago
And yes of course they need to be able to write the code themselves, but that's the easy part: any good developer could write a full production OS by themselves given access to documentation and literature and an enormous amount of time. The problem is the time.
Comment by usrbinbash 8 hours ago
And how will that be assured? Everyone can open a PR or submit a bug.
> The problem is the time.
But not the time spent TYPING.
The problem is the time spent THINKING. And that's a task that LLMs, which are nothing other than statistical models trying to guess the next token, really aren't good at.
Comment by duskdozer 13 hours ago
Comment by usrbinbash 14 hours ago
Every single production OS, including the one you use right now, was made before LLMs even existed.
> What makes sense if that of course any LLM-generated code must be reviewed by a good programmer
The time of good programmers, especially ones working for free in their spare time on OSS projects, is a limited resource.
The ability to generate slop using LLMs, is effectively unlimited.
This discrepancy can only be resolved in one way: https://itsfoss.com/news/curl-ai-slop/
Comment by lifis 14 hours ago
And a new OS needs to be significantly better than those to overcome the switching costs.
Comment by swiftcoder 14 hours ago
Feel like you are using a very narrow definition of "success" here. Is BSD not successful? It is deployed on 10s of millions of routers/firewalls/etc in addition to being the ancestor of both modern MacOS and PlaystationOS...
Comment by bigstrat2003 9 hours ago
Who cares if nobody switches to it as their daily driver? The goal you proposed was "viable", not "widely used". The former is perfectly possible without LLMs (as history has proved), and the latter is unrelated to how you choose to make the OS.
Comment by usrbinbash 14 hours ago
Comment by lifis 14 hours ago
Comment by usrbinbash 8 hours ago
Erm... no? That's exactly what that means.
Earth ovens haven't been in widespread use for hundreds of years. People can still use them to bake bread, however: https://www.youtube.com/watch?v=WAJqGVxuJPo
Comment by eqvinox 14 hours ago
Comment by sh4zb0t 1 hour ago
Terry Davis built a full OS with his own editor, compiler and language. I think Redox can survive just fine without LLMs
Comment by bigstrat2003 9 hours ago
Perhaps the same way that every other viable OS was made without use of LLMs.
Comment by dagi3d 14 hours ago
Comment by sh4zb0t 14 hours ago