Vibe-Coded Ext4 for OpenBSD
corbet
65 points
57 comments
March 27, 2026
Related Discussions
- The "Vibe Coding" Wall of Shame wa5ina · 122 pts · March 29, 2026 · 52% similar
- They're Vibe-Coding Spam Now raybb · 67 pts · March 22, 2026 · 48% similar
- I Fuzzed, and Vibe Fixed, the Vibed C Compiler luu · 17 pts · March 02, 2026 · 46% similar
- My spicy take on vibe coding for PMs dmckinno · 42 pts · March 03, 2026 · 45% similar
- Why I love FreeBSD enz · 393 pts · March 16, 2026 · 43% similar
Discussion Highlights (17 comments)
bitwizeshift
Paywalled article on something vibe-coded? That seems like a bold strategy.
LeFantome
Vibe coding and OpenBSD. The perfect combination.
nurettin
It is amusing that the only concern seems to be confusion around licensing, not the validity or maintainability of the code itself.
throwatdem12311
Can someone just copyright wash Windows already.
FeepingCreature
> So as of today, the Copyright system does not have a way for the output of a non-human produced set of files to contain the grant of permissions which the OpenBSD project needs to perform combination and redistribution.

This seems extremely confused. The copyright system does not have a way to grant these permissions because the material is not covered under copyright! You can distribute it at will, not due to any sort of legal grant but simply because you have the ability and the law says nothing to stop you.
g0xA52A2A
Wow, that thread just kept going. Whilst the LWN article covered most of the "highlights", I think this reply from Theo is pretty succinct on the topic at large [1]. [1] https://marc.info/?l=openbsd-tech&m=177425035627562&w=2
LeFantome
The article is largely about the copyright concerns of LLM generated code that was almost certainly trained on the GPL original. Also, it is essentially an ext2 filesystem as it does not support journaling.
charcircuit
> incorporate knowledge carrying an illiberal license.

Copyright prevents copying. It doesn't prevent using knowledge.
CodeWriter23
Well, this is ironic: GPL advocate(s) declaring a clean implementation based on specifications to be infringing because someone/something read specs provided under license. Didn't Oracle lose that argument in court as it pertains to Android's implementation of the Java libraries?
longislandguido
~20 years ago, the Linux camp accused OpenBSD of importing GPL'd code (a wireless driver IIRC) and cried foul. The code was removed. Fast forward to 2026, Theo says no to vibe-coded slop, prove to me your magic oracle LLM didn't ingest gobs of GPL code before spitting out an answer. People are big mad of course, but you want me to believe Theo is the bad guy here for playing it conservatively?
hypeatei
> This obsession with copyrights between different free software ecosystems - who put the lawyers in charge?

This comment on the article is spot on. I don't vibe code or care about AI really, but it's so exhausting to see people playing lawyer in threads about LLM-generated code. No one knows, a ton of people are using LLMs, the companies behind these models torrented content themselves, and why would you spend your time defending copyright / use it as a tool to spread FUD? Copyright is a made-up concept that exists to kill competition and protect those who suck at executing on ideas.
ethin
> Lacking Copyright (or similarly a Public Domain declaration by a human), we don't receive sufficient rights grants which would permit us to include it into the aggregate body of source code, without that aggregate body becoming less free than it is now.

Can someone explain this to me? I was under the impression that if a work of authorship was not copyrightable because it was AI-generated and not authored by a human, it was in the public domain and therefore you could do whatever you wanted with it. Normal copyright restrictions would not apply here.
cachius
I'd like to see it AFL-fuzzed and compared to the original. It took two hours to the first bug ten years ago, in 2016. Discussion then: https://news.ycombinator.com/item?id=11469535 Mirror of the slides: https://events.static.linuxfound.org/sites/events/files/slid...
joshstrange
> Who is the copyright holder in this case? It clearly draws heavily from an existing work, and it's clear the human offering the patch didn't do it. It's not the AI, because only persons can own copyright. Is it the set of people whose work was represented in the training corpus? Was it the set of people who wrote ext4 and whose work was in the training corpus? The company that owns the AI that wrote the code? Someone else?

I don't love this take. Specifically:

> it's clear the human offering the patch didn't do it

I find it hard to believe that there wasn't a good bit of "blood, sweat, and tears" invested by a human directing the LLM to make this happen. Yes, LLMs can spit out full projects in one prompt, but that's not what happened here. From his blog, the work on this spanned at least five months. And while he probably wasn't working on it exclusively during that time, I find it hard to believe it was him sending "continue" periodically to an LLM.

Anyone who has built something large or complicated with LLM assistance knows that it takes more than just asking the LLM to accomplish your end goal; saying "it's clear the human offering the patch didn't do it" is insulting. I've done a number of things with the help of LLMs, and in all but the most contrived of cases it required knowledge, input from me, and careful guidance to accomplish. Multiple plans, multiple rollbacks, the knowledge of when we needed to step back and when to push forward. The LLM didn't bring that to the table. It brought the ability to crank out code to test a theory, to implement a plan only after we had gone 10+ rounds, or to function as grep++ or google++. LLMs are tools; they aren't a magic "Make me ext4 for OpenBSD" button (or at least they sure as hell aren't that today, or 5 months ago when this was started).
kgeist
Binaries are copyrightable in both the US and the EU, and they are not technically produced by a human either; they're produced by a computer program. I honestly don't understand why this isn't extended to AI-generated code. Isn't it the same thing?

One could argue that compilers merely transform source code into binaries "as is," while AI models have some "knowledge" baked in that they extract and paste as code. But there are compilers that also generate binaries by selecting ready-to-use binary patches authored by compiler developers and combining them into a program. One could also argue that, in the case of compilers, at least the input source code is authored by a human. But why can't we treat prompts as "source code in natural language" too? Where is the line between authorship and non-authorship, and how is the line defined? "Your prompt was too basic to constitute authorship" doesn't sound like an objective criterion.

Maybe for lawyers, AI is some kind of magical thing on its own. But having successfully created a working inference engine for Qwen3, and seeing how the core loop is just ~50 lines of very simple matrix multiplication code, I can't see LLMs as anything more than pretty simple interpreters that process "neural network bytecode," which can output code from pre-existing templates just like some compilers. And I'm not sure how this is different from transpilers or autogenerated code (like server generators based on an OpenAPI schema).

Sure, if an LLM was trained on GPL code, it's possible it may output GPL-licensed code verbatim, but that's a different matter from the question of whether AI-generated code is copyrightable in principle.

Interestingly, I found an opinion here [0] that binaries technically shouldn't be copyrightable, and that they currently are because the copyright office listened to software publishers, who wanted binaries protected by copyright so they could sell them that way.

[0] https://freesoftwaremagazine.com/articles/what_if_copyright_...
ptidhomme
I liked this reply in the thread:

> There's another issue surrounding developer skill atrophy or stunting that I find particularly concerning on an existential level. If we allow people to use LLMs to write code for a given project/platform, experience in that platform will potentially atrophy or under-develop as contributors increasingly rely on outsourcing their applicable skills and decisions to "AI". Even if you believe outsourcing the minutiae of coding is a net positive, the "enshittification" principle in general should give you pause; as soon as the net developer skill for a project has degraded to a point of reliance, even somewhat, I think we can be confident those AI tools will NOT get less expensive. I'd rather be independently less productive than dependent on some MegaCorp(TM)'s good will to rent us back access to our brains at a fair price.

- achaean https://marc.info/?l=openbsd-tech&m=177430829313972&w=2
hulitu
> Vibe-Coded Ext4 for OpenBSD

Who wants to test it? Preferably on real hardware. /s