Do your own writing
karimf
458 points
166 comments
March 30, 2026
Related Discussions
Found 5 related stories in 156.0ms across 3,471 title embeddings via pgvector HNSW
- No one wants to read your AI slop flancian · 24 pts · March 02, 2026 · 50% similar
- AI is great at writing code. It's terrible at making decisions kdbgng · 12 pts · March 13, 2026 · 49% similar
- Show HN: Your AI Slop Bores Me mikidoodle · 12 pts · March 05, 2026 · 49% similar
- When AI writes the software, who verifies it? todsacerdoti · 192 pts · March 03, 2026 · 49% similar
- Be intentional about how AI changes your codebase benswerd · 97 pts · March 19, 2026 · 48% similar
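The related-story list above is produced by a nearest-neighbor search over title embeddings using pgvector's HNSW index. A minimal sketch of how such a lookup might be written, assuming a hypothetical `stories` table with a `title_embedding` column (table name, column names, and dimensions are illustrative, not the site's actual schema):

```python
# Hypothetical sketch of a pgvector HNSW related-story lookup.
# Assumes a table like:
#   CREATE TABLE stories (id bigserial PRIMARY KEY, title text, title_embedding vector(384));
#   CREATE INDEX ON stories USING hnsw (title_embedding vector_cosine_ops);
import psycopg

def find_related(conn: psycopg.Connection, story_id: int, limit: int = 5):
    """Return the most similar story titles by cosine distance on their embeddings."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT s.id, s.title,
                   1 - (s.title_embedding <=> t.title_embedding) AS similarity
            FROM stories s,
                 (SELECT title_embedding FROM stories WHERE id = %s) t
            WHERE s.id != %s
            ORDER BY s.title_embedding <=> t.title_embedding
            LIMIT %s
            """,
            (story_id, story_id, limit),
        )
        return cur.fetchall()
```

The `<=>` operator is pgvector's cosine distance, so `1 - distance` gives the "% similar" figures shown above; the HNSW index makes the `ORDER BY ... LIMIT` scan approximate but fast.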
Discussion Highlights (20 comments)
PaulRobinson
Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission. This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age. I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved. Same with writing. There's an old joke in the writing business that most people want to be published authors than they do through the process of writing. People who say they want to write don't actually want to do the work of writing, they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost. When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking. Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey. Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that. So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this , but do that consciously and own it and talk about it up-front.
Aurornis
> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas. This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM. The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.
CharlesW
The title and of this article is Don't Let AI Write For You , when its point seems to be closer to Don't Let AI Think For You (see "Thinking"). This distinction is important, because (1) writing is not the only way to faciliate thinking, and (2) writing is not neccessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation. Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4 This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say. From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.
TrianguloY
> Letting an LLM write for you is like paying somebody to work out for you. This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok. But I guess some people either choose a wrong job or had no other option. I'm happy to not be in that group.
fraywing
>Letting an LLM write for you is like paying somebody to work out for you. It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique). With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_. That's why I think we're seeing an inverse relationship between ideation output and coherence (and perhaps unoriginality) and a decline in creative thinking and creativity[0] [0] https://time.com/7295195/ai-chatgpt-google-learning-school/
roadside_picnic
I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea, that was crystal clear in my mind, fell apart the moment I started writing and I realize there were major contradictions I needed to resolve. Likewise I also have numerous times where writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking. However, there is a lot of writing that is basically just an old school from of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought. For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume. Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, then people groaned at the idea of having to read this. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context. I think it's going to be awhile before the full impact of AI really works it's way through how we work. In the mean time we'll continue to have AI written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).
janalsncm
Well said. The most important part of writing is thinking. LLMs cannot do the thinking for you. This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point? Unless the idea is trivial, LLMs are probably just getting in the way.
fleebee
You quote this: > LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten. I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.
nerevarthelame
I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas." It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.
bluepeter
Nowadays my writing (and maybe all of ours) has totally devolved into "prompt-ese." Much like days of yore where we all approached Google searches with acrobatic language knowing how to specifically get something done. Now? I am pushing so much of my writing into prompts into AI where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar is mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to AI but may be lost on people. Or at least pre-prompt-writing people.
paulpauper
Letting an LLM write for you is like paying somebody to work out for you. The problem with writing is the feedback tends to be inconsistent. With going to the gym you can track your progress quantitatively such as how fast or far you can run or weight lifted, but it's sometimes hard to know if you're improving at writing.
gbro3n
I fully agree with the sentiment of the article. I will say that I feel I've had some success in having an LLM outline a document, provided that I then go through and read / edit thoroughly. I think there's even an argument that this a) possibly catches areas you I have forgotten to write about, and b) hooks into my critique mode which feels more motivated than author mode sometimes (I'm slightly ashamed to say). This does come at the cost however of not putting my self in 'researcher' mode, where I go back through the system I'm writing about and follow the threads, reacquainting myself and judging my previous decisions.
firefoxd
I'm 100% an advocate for not using LLM for writing... But I'll tell you were I use them just for that. For ceremonies. A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate, need approval from different stake holders. Everybody stamps their name on it, without ever reading it. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, they are filed for the records. I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.
windowliker
>Don't Let AI Write For You >Essay structured like LLM output Hmmm...
drnick1
> They are particularly good at generating ideas. I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.
jonathaneunice
Agree with the underlying point: "don't let an LLM do your thinking, or interfere with processes essential to you thinking things clearly through." My own experience, however, is that the best models are quite good and helping you with those writing and thinking processes. Finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it, but you didn't. While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. Would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while sure, I often have to "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance. The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.
D13Fd
> LLMs are useful for research and checking your work. I have to disagree that it's good for LLMs to do the research, depending on the context. If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great. If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing to.
bboynton97
There's a lot of ways to use an LLM, the least effective is automating an entire process- yet it's the most compelling. To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.
locusofself
I had an interesting experience the other day. I've been struggling with some lyrics to a song I am writing. I asked Claude to review them, and it did an amazing job of finding the weak lines and best lines, and nearly perfectly articulating to my why they were weak or strong. It was strange because the output of the analysis almost perfectly mirrored my own thoughts. When I asked it for alternatives/edits, they were not good however.
6thbit
Writing down specs for technical projects is a transformational skill. I've had projects that seemed tedious or obvious in my head only to realize hidden complexity when trying to put their trivial-ness into written words. It really is a sort of meditation on the problem. In the most important AI assisted project I've shipped so far I wrote the spec myself first entirely. But feeding it through an LLM feedback loop felt just as transformational, it didn't only help me get an easier to parse document, but helped me understand both the problem and my own solution from multiple angles and allowed me to address gaps early on. So I'll say: Do your own writing, first.