Over-editing refers to a model modifying code beyond what is necessary
pella
336 points
188 comments
April 22, 2026
Related Discussions
Found 5 related stories in 62.8ms across 5,335 title embeddings via pgvector HNSW
- The Perils of an Over-Optimized Life jethronethro · 13 pts · April 13, 2026 · 44% similar
- Why Over-Engineering Happens zuhayeer · 34 pts · April 05, 2026 · 43% similar
- Show HN: Revise – An AI Editor for Documents artursapek · 64 pts · March 22, 2026 · 43% similar
- Clean code in the age of coding agents yanis_t · 49 pts · April 09, 2026 · 40% similar
- Show HN: Editing 2000 photos made me build a macOS bulk photo editor om202 · 11 pts · April 11, 2026 · 40% similar
Discussion Highlights (20 comments)
whinvik
Yeah I have always felt GPT 5.4 does too much. It is amazing at following instructions precisely but it convinces itself to do a bit too much. I am surprised Gemini 3.1 Pro is so high up there. I have never managed to make it work reliably so maybe there's some metric not being covered here.
eterm
It's funny, because the wisdom that was often taught (but essentially never practiced) was "Refactor as you go". The idea being that if you're working in an area, you should refactor and tidy it up and clean up "tech debt" while there. In practice, it was seldom done, and here we have LLMs actually doing it, and we're realising the drawbacks.
jstanley
Conversely, I often find coding agents privileging the existing code when they could do a much better job if they changed it to suit the new requirement. I guess it comes down to how ossified you want your existing code to be. If it's a big production application that's been running for decades then you probably want the minimum possible change. If you're just experimenting with stuff and the project didn't exist at all 3 days ago then you want the agent to make it better rather than leave it alone. Probably they just need to learn to calibrate themselves better to the project context.
anonu
Here, the author means the agent over-edits code. But agents also do "too much": they touch multiple files, run tests, do deployments, run smoke tests, etc. And all of this gets abstracted away. On one hand, it's incredible. But on the other hand I have deep anxiety over this:
1. I have no real understanding of what is actually happening under the hood. The ease of just accepting a prompt to run some script the agent has assembled is too enticing. But I've already wiped a DB or two just because the agent thought it was the right thing to do. I've also caught it sending my AWS credentials to deployment targets when it should never do that.
2. I've learned nothing. So the cognitive load of doing it myself, even assembling a simple docker command, is just too high. Thus, I repeatedly fall back to the "crutch" of using AI.
itopaloglu83
I always described it as over-complicating the code, but doing too much is a better diagnosis.
slopinthebag
I think the industry has leaned waaay too far into completely autonomous agents. Of course there are reasons why corporations would want to completely replace their engineers with fully autonomous coding agents, but for those of us who actually work developing software, why would we want less and less autonomy? Especially since it alienates us from our codebases, requiring more effort in the future to gain an understanding of what is happening. I think we should move to semi-autonomous steerable agents, with manual and powerful context management. Our tools should graduate from simple chat threads to something more akin to the way we approach our work naturally. And a big benefit of this is that we won't need expensive locked down SOTA models to do this, the open models are more than powerful enough for pennies on the dollar.
lo1tuma
I’m not sure I share the author's opinion. When I was hand-writing code I also followed the boy-scout rule and did smaller refactorings along the way.
exitb
As mentioned in the article, prompting for minimal changes does help. I find GPT models to be very steerable, but that doesn't mean much when you take your hands off the wheel. These types of issues should be solved at the planning stage.
Almured
I feel ambivalent about it. In most cases I fully agree with the overdoing assessment, and then I have to spend 30 minutes correcting and fixing. But I also agree that sometimes the system misses out on more comprehensive changes (context limitations, I suppose)! I am starting to be very strict when coding with these tools, but I'm still not quite getting the level of control I would like to see.
lopsotronic
When asked to show their development-test path in the form of a design document or test document, I've also noticed variance between the document generated and what the chain-of-thought thingy shows during the process. The version it puts down into documents is not the thing it was actually doing. It's a little anxiety-inducing. I go back to review the code with big microscopes. "Reproducibility" is still pretty important for those trapped in the basements of aerospace and defense companies. No one wants the Lying Machine to jump into the cockpit quite yet. Soon, though. We have managed to convince the Overlords that some teensy non-agentic local models - sourced in good old America and running local - aren't going to All Your Base their Internets. So, baby steps.
aerhardt
I'm building a website in Astro, and today I've been scaffolding localization. I asked Codex 5.4 x-high to follow the official guidelines for localization, and from that perspective the implementation was good. But then it decided to rewrite the copy and layout of all pages. They were placeholders, but still? Codex also has a tendency to apply unwanted styles everywhere. I see similar tendencies in backend and data work, but I somehow find it easier to control there. I'm pretty much all in on AI coding, but I still don't know how to give these things large units of work, and I still feel like I have to read everything but throwaway code.
pilgrim0
Like others mentioned, letting the agent touch the code makes learning difficult and induces anxiety. By introducing doubt it actually increases the burden of revision, negating the fast apparent progress. The way I found around this is to use LLMs for designing and auditing, not programming per se. Even more so because it’s terrible at keeping the coding style. Call it skill issue, but I’m happier treating it as a lousy assistant rather than as a dependable peer.
pyrolistical
I attempt to solve most agent problems by treating the agent as a dumb human. In this case I would ask for smaller changes and have it justify every change. Then have it look back on those changes and ask itself whether they are truly justified or could be simplified.
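[Editor's note: the self-review loop described above can be sketched as a second-pass prompt built from the agent's own diff. The function name and wording below are illustrative, not taken from any particular tool.]

```python
def build_self_review_prompt(diff: str) -> str:
    """Build a second-pass prompt asking the agent to justify its own diff."""
    return (
        "Below is the diff you just produced.\n"
        "For each hunk, state the requirement that justifies it.\n"
        "If a hunk has no justification, or can be simplified, revert or simplify it.\n\n"
        f"```diff\n{diff}\n```"
    )

# Feed the agent's diff back to it as a review task.
prompt = build_self_review_prompt("- old_line\n+ new_line")
```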
graybeardhacker
I use Claude Code every day and have for as long as it has been available. I use git add -p to ensure I'm only adding what is needed. I review all code changes and make sure I understand every change. I prompt Claude never to make whitespace-only changes, and I ask it to make the minimal change needed to fix a bug. Too many people are treating these tools as a complete replacement for a developer. When you are typing a text to someone and Google changes a word you misspelled to a completely different word, changing the whole meaning of the message, do you shrug and send it anyway? If so, maybe LLMs aren't for you.
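[Editor's note: the whitespace-only changes mentioned above can also be caught mechanically before staging. A minimal sketch, assuming a unified diff where a removed line is immediately followed by its replacement; the pairing heuristic is deliberately naive and illustrative.]

```python
def whitespace_only_pairs(diff_lines):
    """Return (removed, added) line pairs that differ only in whitespace.

    Naive pairing: a '-' line immediately followed by a '+' line.
    """
    pairs = []
    for prev, cur in zip(diff_lines, diff_lines[1:]):
        if prev.startswith("-") and cur.startswith("+"):
            old, new = prev[1:], cur[1:]
            # Flag only lines that changed, but are identical once
            # all whitespace is stripped out.
            if old != new and "".join(old.split()) == "".join(new.split()):
                pairs.append((old, new))
    return pairs

diff = [
    "-def f(x):",
    "+def f( x ):",       # whitespace-only reformat: should be flagged
    "-    return x+1",
    "+    return x + 2",  # real change: should not be flagged
]
```

Running `whitespace_only_pairs(diff)` flags only the first pair, so a pre-commit hook could reject or strip such hunks before they reach `git add`.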
dbvn
Don't forget the non-stop unnecessary comments.
tim-projects
> The model fixes the bug but half the function has been rewritten. The solution to this is to use quality gates that loop back and check the work. I'm currently building a tool with gates and a diff regression check. I haven't seen these problems for a while now. https://github.com/tim-projects/hammer
Isolated_Routes
I think building something really well with AI takes a lot of work. You can certainly ask it to do things and it will comply, and produce something pretty good. But you don't know what you don't know, especially when it speaks to you authoritatively. So checking its work from many different angles and making sure it's precise can be a challenge. Will be interesting to see how all of this iterates over time.
simonw
I've not seen over-editing in Claude Code or Codex in quite a while, so I was interested to see the prompts being used for this study. I think they're in here, last edited 8 months ago: https://github.com/nreHieW/fyp/blob/5a4023e4d1f287ac73a616b5...
jollyllama
It's called code churn. Generally, LLMs generate code churn.
ricardorivaldo
Duplicate? https://news.ycombinator.com/item?id=47866913