We should revisit literate programming in the agent era
horseradish
195 points
111 comments
March 08, 2026
Related Discussions
Found 5 related stories in 57.9ms across 3,471 title embeddings via pgvector HNSW
- Custom programming languages make agents good · matsur · 17 pts · March 12, 2026 · 63% similar
- Coding Agents Could Make Free Software Matter Again · rogueleaderr · 141 pts · March 29, 2026 · 60% similar
- Slowing Down in the Age of Coding Agents · larve · 15 pts · March 24, 2026 · 59% similar
- New Research Reassesses the Value of Agents.md Files for AI Coding · noemit · 19 pts · March 08, 2026 · 59% similar
- Agent-to-agent pair programming · axldelafosse · 34 pts · March 27, 2026 · 56% similar
Discussion Highlights (20 comments)
sublinear
> This is especially important if the primary role of engineers is shifting from writing to reading.

This was always the primary role. The only people who ever said it was about writing just wanted an easy sales pitch aimed at everyone else. Literate programming failed to take off because with that much prose it inevitably misrepresents the actual code. Most normal comments are bad enough. It's hard to maintain any writing that doesn't actually change the result. You can't "test" comments. The author doesn't even need to know why the code works to write comments that are convincing at first glance. If we want to read lies influenced by office politics, we already have the rest of the docs.
perrygeo
Considering LLMs are models of language, investing in the clarity of the written word pays off in spades. I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern. Going full "literate programming" may not be necessary. I'd maybe reframe it as a focus on communication. Notebooks, examples, scripts and such can go a long way to reinforcing the patterns. Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.
rustybolt
I have noticed a trend recently: practices that were meant to help humans (writing a decent README or architecture doc, being precise and unambiguous with language, providing context, literate programming) were never broadly adopted, on the grounds that they take too much effort. But when the same work is done to help an LLM instead of a human, a lot of people suddenly seem much more motivated to put in the effort.
gervwyk
For me this is where a config layer shines. Develop a decent framework and then let the agents spin out the configuration. This gives you a trusted, tested abstraction layer that does not shift, makes maintenance easier, makes the agent-generated code easier to review, and uses far fewer tokens. So, as always: just build better abstractions.
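The framework/config split gervwyk describes can be sketched in a few lines of Python. Everything here (the step registry, `run_pipeline`, the step names) is a hypothetical illustration: the humans maintain a small vetted framework, and the agent only emits declarative configuration against it.

```python
# --- trusted, human-reviewed framework layer (hypothetical example) ---
def clean(text: str) -> str:
    """Normalize internal and surrounding whitespace."""
    return " ".join(text.split())

def shout(text: str) -> str:
    """Upper-case the text."""
    return text.upper()

# Registry of vetted operations; agents may only pick from these names.
STEPS = {"clean": clean, "shout": shout}

def run_pipeline(config: list[str], value: str) -> str:
    """Apply the steps named in the config, in order."""
    for step_name in config:
        value = STEPS[step_name](value)  # an unknown step name fails loudly
    return value

# --- the part an agent would generate: pure configuration, easy to review ---
agent_config = ["clean", "shout"]

print(run_pipeline(agent_config, "  hello   world "))  # HELLO WORLD
```

Because the agent's output is a short list of step names rather than arbitrary code, review cost and token usage both drop, which is the trade gervwyk is pointing at.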
anotheryou
but doesn't "the code is documentation" work better for machines? and don't we have doc-blocks?
librasteve
I don't know Org, but Rakudoc https://docs.raku.org/language/pod is useful for literate programming (put the docs in the code source) and for LLMs (the code is "self documenting", so that in the LLM inversion of control, the LLM can determine how to call the code). https://podlite.org is this done in a language-neutral way (Perl, JS/TS, and Raku for now). Here's an example:

    #!/usr/bin/env raku

    =begin pod

    =head1 NAME

    Stats::Simple - Simple statistical utilities written in Raku

    =head1 SYNOPSIS

        use Stats::Simple;

        my @numbers = 10, 20, 30, 40;

        say mean(@numbers);    # 25
        say median(@numbers);  # 25

    =head1 DESCRIPTION

    This module provides a few simple statistical helper functions such as
    mean and median. It is meant as a small example showing how Rakudoc
    documentation can be embedded directly inside Raku source code.

    =end pod

    unit module Stats::Simple;

    =begin pod

    =head2 mean

        mean(@values --> Numeric)

    Returns the arithmetic mean (average) of a list of numeric values.

    =head3 Parameters

    =item @values
    A list of numeric values.

    =head3 Example

        say mean(1, 2, 3, 4);  # 2.5

    =end pod

    sub mean(*@values --> Numeric) is export {
        die "No values supplied" if @values.elems == 0;
        @values.sum / @values.elems;
    }

    =begin pod

    =head2 median

        median(@values --> Numeric)

    Returns the median value of a list of numbers. If the list length is
    even, the function returns the mean of the two middle values.

    =head3 Example

        say median(1, 5, 3);     # 3
        say median(1, 2, 3, 4);  # 2.5

    =end pod

    sub median(*@values --> Numeric) is export {
        die "No values supplied" if @values.elems == 0;
        my @sorted = @values.sort;
        my $n = @sorted.elems;
        return @sorted[$n div 2] if $n % 2;
        (@sorted[$n div 2 - 1] + @sorted[$n div 2]) / 2;
    }

    =begin pod

    =head1 AUTHOR

    Example written to demonstrate Rakudoc usage.

    =head1 LICENSE

    Public domain / example code.

    =end pod
cadamsdotcom
Test code and production code in a symmetrical pair has lots of benefits. It’s a bit like double entry accounting - you can view the code’s behavior through a lens of the code itself, or the code that proves it does what it seems to do. You can change the code by changing either tests or production code, and letting the other follow. Code reviews are a breeze because if you’re confused by the production code, the test code often holds an explanation - and vice versa. So just switch from one to the other as needed. Lots of benefits. The downside is how much extra code you end up with of course - up to you if the gains in readability make up for it.
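The "double entry" symmetry cadamsdotcom describes can be made concrete with a small production/test pair. The names here (`slugify`, `test_slugify`) are hypothetical; the point is that each assertion in the test mirrors and explains one behavior of the production code, so a confused reader of either side can consult the other.

```python
import re

# production code: a tiny URL-slug generator (hypothetical example)
def slugify(title: str) -> str:
    """Turn a title into a URL-safe slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # runs of non-alphanumerics become one hyphen
    return slug.strip("-")                   # no leading/trailing hyphens

# test code: the "second entry" in the ledger -- each assertion documents
# exactly one behavior of the production code above
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"  # punctuation collapses
    assert slugify("  padded  ") == "padded"          # surrounding whitespace trimmed
    assert slugify("---") == ""                       # degenerate input is safe

test_slugify()
```

Changing either side and "letting the other follow", as the comment puts it, then becomes a mechanical exercise: a failing assertion names the behavior that moved.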
senderista
The "test runbook" approach that TFA describes sounds like doctest comments in Python or Rust.
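For reference, Python's `doctest` module is the mechanism senderista is pointing at: the examples embedded in a docstring are executed as tests, so documentation and tests are literally the same text. A minimal sketch (the `mean` function is a hypothetical example):

```python
import doctest

def mean(values):
    """Return the arithmetic mean of a non-empty sequence.

    >>> mean([10, 20, 30, 40])
    25.0
    >>> mean([1, 2, 3, 4])
    2.5
    """
    return sum(values) / len(values)

# doctest extracts the >>> examples above and compares actual output
# against the recorded output, failing if the docs have drifted.
results = doctest.testmod()
print(results.failed)  # 0 when every embedded example still holds
```

Rust's `cargo test` does the same for fenced examples in `///` doc comments, which is why stale examples there fail the build rather than silently rotting.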
stephbook
Take it to the logical conclusion. Track the intended behavior in a proper issue tracking software like Jira. Reference the ticket in your version control system. Boring and reliable, I know. If you need guides to the code base beyond what the programming language provides, just write a directory level readme.md where necessary.
jauntywundrkind
One of the things I love most about WebMCP is the idea that it's an MCP session that exists on the page, which the user already knows. Most of these LLM things are kind of separate systems, with their own UI. The idea of agency being inlaid into existing systems the user knows like this, with immediate bidirectional feedback as the user and LLM work the page, is incredibly compelling to me. Series of submissions (descending in time): https://news.ycombinator.com/item?id=47211249 https://news.ycombinator.com/item?id=47037501 https://news.ycombinator.com/item?id=45622604
jph00
Nearly all my coding for the last decade or so has used literate programming. I built nbdev, which has let me write, document, and test my software using notebooks. Over the last couple of years we integrated LLMs with notebooks and nbdev to create Solveit, which everyone at our company uses for nearly all our work (even our lawyers, HR, etc). It turns out literate programming is useful for a lot more than just programming!
amelius
We need an append-only programming language.
cfiggers
Interesting and semi-related idea: use LLMs to flag when comments/docs have come out of sync with the code. The big problem with documentation is that if it was accurate when it was written, it's just a matter of time before it goes stale compared to the code it's documenting. And while compilers can tell you if your types and your implementation have come out of sync, before now there's been nothing automated that can check whether your comments are still telling the truth. Somebody could make a startup out of this.
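A deterministic slice of cfiggers's idea can be sketched without an LLM: compare the parameters a docstring claims to document against the function's actual signature, and flag the drift. All names below are hypothetical, and an LLM-based checker would generalize this from name matching to semantic comparison of comment and code.

```python
import inspect

def find_stale_docstrings(func):
    """Return ':param name:' entries that no longer match the signature."""
    doc = inspect.getdoc(func) or ""
    actual = set(inspect.signature(func).parameters)
    documented = {
        line.split(":")[1].removeprefix("param").strip()
        for line in doc.splitlines()
        if line.strip().startswith(":param")
    }
    return sorted(documented - actual)  # documented, but gone from the code

# Example: the docstring still describes a parameter that was since renamed.
def resize(image, scale):
    """Resize an image.

    :param image: the input image
    :param factor: the scaling factor (stale -- the parameter is now 'scale')
    """
    return image * scale

print(find_stale_docstrings(resize))  # ['factor'] -> comment has drifted
```

The hard (and startup-worthy) part is the semantic version: deciding whether "returns the mean of the two middle values" is still true of the body, which is exactly where an LLM checker would earn its keep.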
charcircuit
> I don't have data to support this

There is data showing that context files which explain code can actually reduce agent performance, so it is not straightforward that literate programming is better. Without data, this article is useless.
trane_project
I think full literate programming is overkill, but I've been doing a lighter version of this:

- Module-level comments with explanations of the purpose of the module and how it fits into the whole codebase.
- Document all methods, constants, and variables, public and private. A single terse sentence is enough, no need to go crazy.
- Document each block of code. Again, a single sentence is enough.

The goal is to be able to know what a block does in plain English without having to "read" code. Reading code is a misnomer because it is a different ability from reading human language. Example from one of my open-source projects: https://github.com/trane-project/trane/blob/master/src/sched...
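The three bullets above can be illustrated with a short hypothetical Python module: a module-level explanation, a one-sentence doc on every item, and a one-line comment per block of code.

```python
"""Scoring utilities: convert raw review counts into a 0-1 quality score.

Hypothetical module illustrating the commenting style described above:
module purpose at the top, one terse sentence per item, one per block.
"""

# The score a brand-new item receives before any reviews exist.
DEFAULT_SCORE = 0.5

def quality_score(positive: int, total: int) -> float:
    """Return the fraction of positive reviews, defaulting when there are none."""
    # No reviews yet: fall back to the neutral default.
    if total == 0:
        return DEFAULT_SCORE
    # Clamp malformed counts so the score stays in [0, 1].
    positive = max(0, min(positive, total))
    # The score is simply the positive fraction.
    return positive / total

print(quality_score(3, 4))  # 0.75
print(quality_score(0, 0))  # 0.5
```

Each block's comment states its purpose in plain English, so a reader (or an agent) can follow the function without parsing the expressions themselves.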
avatardeejay
Something in this realm covers my practice. I just keep a master prompt for the whole program, and sparsely documented code. When it's time to use LLMs in the dev process, they always get a copy of both, and it makes the whole process like 10x as coherent and continuous. Obviously, when a change deviates from or greatly expands on the spec, I update the spec.
rednafi
I think a lighter version of literate programming, coupled with languages that have a small API surface but are heavy on convention, is going to thrive in this age of agentic programming. A lighter API footprint probably also means a higher amount of boilerplate code, but these models love cranking out boilerplate.

I've been doing a lot more Go instead of dynamic languages like Python or TypeScript these days. Mostly because if agents are writing the program, they might as well write it in a language that's fast enough. Fast compilation means agents can quickly iterate on a design, execute it, and loop back.

The Go ecosystem is heavy on style guides, design patterns, and canonical ways of doing things, mostly because the language doesn't prevent obvious footguns like nil pointer errors, subtle race conditions in concurrent code, or context cancellation issues. So people rely heavily on patterns, and agents are quite good at picking those up.

My version of literate programming is ensuring that each package has enough top-level docs and that all public APIs have good docstrings. I also point agents to read the Google Go style guide [1] each time before working on my codebase. This yields surprisingly good results most of the time.

[1] https://google.github.io/styleguide/go/
Arubis
Anecdotally, Claude Opus is at least okay at literate emacs. Sometimes takes a few rounds to fix its own syntax errors, but it gets the idea. Requiring it to TDD its way in with Buttercup helps.
akater
The question posed is, “With agents, does it become practical to have large codebases that can be read like a narrative, whose prose is kept in sync with changes to the code by tireless machines?”

It's not practical to have codebases that can be read like a narrative, because that's not how we want to read them when we deal with the source code. We jump to definitions, arriving at different pieces of code by different paths, for different reasons, and presuming there is one universal, linear, book-style way to read that code is frankly just absurd from this perspective. A programming language should be expressive enough to make code read easily, and tools should make it easy to navigate.

I believe my opinion on this matters more than the opinion of an average admirer of LP. By their own admission, they still mostly write code in boring plain text files. I write programs in org-mode all the time. Literally (no pun intended) all my libraries, outside of those written for a day job, are written in Org. I think it's important to note that they are all Lisp libraries, as my workflow might not be as great for something like C.

The documentation in my Org files is mostly reduced to examples. I do like docstrings, but I appreciate an exhaustive (or at least rich enough) set of examples more, and writing them is much easier: I write them naturally as tests while I'm implementing a function. The examples are written in Org blocks, and when I install a library or push an important commit, I run all tests, of which examples are but special cases. The effect is that this part of the documentation is always in sync with the code (of course, some tests fail, and they are marked as such when tests run). I know how to sync this with docstrings too, if necessary; I haven't: it takes time to implement and I'm not sure the benefit would be that great.

My (limited, so far) experience with LLMs in this setting is nice: a set of pre-written examples provides a good entry point, and an LLM is often capable of producing a very satisfactory solution, immediately testable, of course. The general structure of my Org files with code is also quite strict.

I don't call this “literate programming”, however; I think LP is a mess of mostly wrong ideas. My approach is just a “notebook interface” to a program, inspired by Mathematica notebooks, popularly (but not representatively) imitated by the now-famous Jupyter notebooks. The terminology doesn't matter much: what I'm describing is largely what the silly.business blog post is about. The author of nbdev is in the comments here; we're basically implementing the same idea.

silly.business mentions tangling, which is a fundamental concept in LP and a good example of what I dislike about it: tangling, like several concepts behind LP, is only a thing due to limitations of the programming systems that Donald Knuth was using. When I write Common Lisp in Org, I do not need to tangle, because Common Lisp does not have many of the limitations that apparently influenced the concepts of LP. Much like the “reading like a narrative” idea is misguided, for the reasons I outlined in the beginning. That argument, however, is irrelevant if we want LLMs, rather than us, to read codebases like a book; but that's a different topic.
pjmlp
I'd rather go with formal specifications and proofs.