Technical, cognitive, and intent debt

theorchid 242 points 62 comments April 22, 2026
martinfowler.com · View on Hacker News

Discussion Highlights (12 comments)

kvisner

I see what Martin is saying here, but you could make that argument for moving up the abstraction layers at any point. Assembly to Python creates a lot of Intent & Cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it. My counter is that technical intent, in the way he is describing it, only exists because we needed to translate human intent into machine language. You can still think deeply about problems without needing to formulate them as domain-driven abstractions in code. You could mind map it, or journal about it, or put post-it notes all over the wall. Creating object-oriented abstractions isn't magic.

PaulHoule

Hits the spot for me. I am always pushing back on AI to simplify and improve concision.

hibikir

LLMs don't lack the virtue of laziness: they have it if you want them to, by just having a base prompt that matches intent. I've had good success convincing Claude-backed agents to aim for minimal code changes, make deduplication passes, and basically every other reasonable "instinct" of a very senior dev. It's not knowledge that the models haven't integrated, but one that many don't keep at the forefront with default settings. I bet we've all seen the models that over-edit everything, and act like the crazy mid-level dev that fiddles with the entire codebase without caring one bit about anyone else's changes, or any risk of knowledge loss due to overfiddling.

And on Jess' comments on validating docs vs generating them... it's a traditional locking problem, with traditional solutions. And it's not as if the agent cannot read git, and realize when one thing is done first, in anticipation of the other by convention.

I'm quite senior: in fact, I have been a teammate of a couple of the people mentioned in this article. I suspect they'd not question my engineering standards. And yet I've not seen any of that kind of debt in my LLM workflows: if anything, by most traditional forms of evaluating software quality, the projects I work on are better than they were 5, 10 years ago, using the same metrics as back then. And it's not magic or anything, but making sure the agents running share those quality priorities. But I am getting work done, instead of spending time looking for attention at conferences.

brodouevencode

> ...to develop the powerful abstractions that then allow us to do much more, much more easily. Of course, the implicit wink here is that it takes a lot of work to be lazy

This lines up with YAGNI, but most people believe the opposite, often using YAGNI to justify NOT building the necessary abstractions.

ryanisnan

I think Martin isn't wrong here, but I've seen firsthand AI produce "lazy" code, where the answer was actually more code. A concrete example: I had a set of Python models that defined a database schema for a given set of logical concepts. I added a new logical concept to the system, very analogous to the existing logical set. Claude decided that it should just re-use the existing model set, which worked in theory, but caused the consumers to have to do all sorts of gymnastics to do type inference at runtime. It "worked", but it was definitely the wrong layer of abstraction.

mfiguiere

Wrong link. Technical, Cognitive and Intent Debt was discussed here: https://martinfowler.com/fragments/2026-04-02.html

backprop1989

> The problem is that LLMs inherently lack the virtue of laziness.

I assure you, they do not.

kippinsula

the framing as "debt" is fair but in our case the bigger pain isn't lazy code, it's overzealous code. claude will happily refactor three unrelated files because it spotted a "pattern". we've ended up with a CLAUDE.md that's basically a list of "do not touch unless asked". probably says more about us than the model but yeah.
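A hedged sketch of what such a guard-rail file might look like; the specific rules here are invented for illustration, not kippinsula's actual CLAUDE.md:

```markdown
# CLAUDE.md (illustrative excerpt)

## Scope rules
- Only edit files explicitly named in the task. Do not refactor
  unrelated files, even if you spot a repeated "pattern".
- Do not rename identifiers, reorder imports, or reformat code
  outside the lines you are changing.
- If a change seems to require touching more than a couple of
  files, stop and ask before proceeding.
```

This is the flip side of the laziness discussion above: rather than prompting the agent toward effort, the prompt constrains it toward minimal diffs.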

konovalov-nk

This is my current visualization of the problem: https://excalidraw.com/#json=y1fSSx2z8-0nFs7CDnqhp,d9Di8JdGU... I think the "cognitive bottlenecks" in software engineering live between artifacts, where code is simply one of them.

outcome → requirements → spec → acceptance criteria → executable proof → review

I'm making experimental tooling that automates the boring parts around those transitions, while keeping humans focused on validating that intent survived each step.

takihito

I heard that LLMs imitate humans. Let's add laziness, impatience, and arrogance—the virtues of programmers—to AGENTS.md and improve it.

meander_water

Unfortunately, large parts of the paper he linked to from the Wharton school are entirely AI generated, and it has yet to be peer reviewed. I realize that most researchers use AI to assist with writing, but when the topic of your paper is "cognitive surrender", I struggle to take any of its content seriously.

__mharrison__

Where's the other half of the article? What an abrupt ending...
