Allow me to get to know you, mistakes and all
sebi_io
74 points
24 comments
March 14, 2026
Related Discussions
Found 5 related stories in 60.7ms across 3,471 title embeddings via pgvector HNSW
- Let yourself fall down more · Brajeshwar · 41 pts · March 11, 2026 · 41% similar
- Show HN: Adentris (YC P25) – Find mistakes in your medical records · digitaltzar · 11 pts · April 02, 2026 · 38% similar
- Prompt Injecting Contributing.md · statements · 112 pts · March 19, 2026 · 38% similar
- Brute-forcing my algorithmic ignorance · qikcik · 101 pts · March 22, 2026 · 37% similar
- Don't make me talk to your chatbot · pkilgore · 229 pts · March 03, 2026 · 37% similar
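The related-story lookup above is a nearest-neighbor search over title embeddings. A minimal sketch of the same idea in Python, using brute-force NumPy cosine similarity instead of pgvector's HNSW index, and with toy titles and made-up low-dimensional vectors standing in for real embeddings:

```python
import numpy as np

def top_k_similar(query_vec, title_vecs, titles, k=5):
    """Rank stored titles by cosine similarity to a query embedding.

    pgvector's HNSW index answers (approximately) this same query in
    sub-linear time; this exhaustive version is the exact equivalent
    for small corpora.
    """
    q = query_vec / np.linalg.norm(query_vec)
    m = title_vecs / np.linalg.norm(title_vecs, axis=1, keepdims=True)
    sims = m @ q  # cosine similarity of each stored title to the query
    order = np.argsort(-sims)[:k]
    return [(titles[i], float(sims[i])) for i in order]

# Toy 3-dimensional "embeddings" (real ones have hundreds of dimensions).
titles = ["Let yourself fall down more",
          "Don't make me talk to your chatbot",
          "Brute-forcing my algorithmic ignorance"]
vecs = np.array([[0.9, 0.1, 0.0],
                 [0.2, 0.8, 0.1],
                 [0.1, 0.2, 0.9]])
query = np.array([1.0, 0.2, 0.0])

for title, score in top_k_similar(query, vecs, titles, k=2):
    print(f"{score:.2f}  {title}")
```

The "41% similar" figures on the list would then correspond to these cosine scores, rescaled to percentages.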
Discussion Highlights (12 comments)
charlie0
This is becoming my latest pet peeve: people using Claude to write their messages in Slack. I'm going to just stop communicating via text with these people. It's one thing to have Claude polish a message and another for it to write out an entire message.
jay_kyburz
There are two ways to write an email. One is to keep it so short and to the point that there are obviously no errors; the other is to waffle on and obfuscate the message with an LLM so that the reader's eyes glaze over... or something like that.
stingraycharles
Yeah, some colleagues started using ChatGPT for internal communication as well. While we don’t like to mandate or prohibit any tools, we did need to make it really clear to everyone that this is not productive. Using Grammarly to make small corrections in messages to external recipients is fine. Using ChatGPT to “polish” your message is not. If you’re not sure about your English abilities, we offer free English lessons and encourage giving each other feedback in chats. LLMs shouldn’t be used for communication at all if you want any form of authenticity.
rexpop
> It robs me of getting to know you.

Ugh, you are not entitled to get to know me. There is a threshold between all that I share with the world and the rest of me. Hell, not every person gets the same picture, and that's deliberate and healthy: my customers don't get to know what my proctologist knows. My mother doesn't get to know what my wife knows. You don't get to know all of me, because I don't trust you.

This post comes across as sweet and innocent. It also comes across as absurdly self-entitled, and that's not an OK posture to take towards the world. It's not OK when the police take this posture, it's not OK when private companies take it, and it's not OK when strangers on the internet take it.

You are entitled to withdraw from relationships that don't fulfill your emotional needs. A reasonable audience for this missive is your girlfriend, your child (who relies on you), or your employer (to whom you are vulnerable).
DrammBA
It feels so disrespectful sometimes too: having to read a long paragraph that conveys so little meaning, knowing full well the original prompt was probably very short, and that I'm now wasting extra time parsing the hollow LLM text expansion.
Havoc
In emails... whatever. I can tell it's there, but fine, whatever; we're just trying to get a message across, LLM or otherwise. But this was the first year I saw it in performance review write-ups, which frankly was jarring. Here is feedback, supposedly 1:1, that massively affects this person's life and their perception of "worth" so to speak... and it's just AI.

Notably it was split by geography: EU countries closest to organic, India a slop trainwreck, the US in the middle.

Sorta made me conclude, "OK, I guess that's the end of performance reviews that vaguely mean anything and actually get read."
arjie
I really don't mind text filtered through an LLM per se. But I prefer high signal-to-token, so to speak. The way humans talk and write means that the seemingly extraneous text they add often provides an interesting insight into the thought patterns of the person, and therefore mistakes or even pointless monologues can be interesting.

This is not always true. Once there was an online reaction to short content that made people treat "long-form" content as desirable entirely due to its length. I rather like reading books, and the New Yorker's fiction section when I still subscribed, but much of this "long-form" content was token-expansion of a formulaic nature which I did not enjoy. LLMs have mastered this kind of long-form token-expansion.

This is assuming people are using an LLM in good faith, obviously. One day, perhaps LLMs will learn to express what someone is saying in an elegant way that is enjoyable for people like me to read. But even then, I will have the difficulty of distinguishing whether this is a human speaking through an LLM in good faith or a human who has set up a machine to mimic a human. The latter is undesirable to me because I have access to the best such machines at a remarkably low cost. Were I to desire a conversation with an LLM, it would be trivial for me to find one. I'm not coming here for that[0]. A sufficiently insightful LLM which prompts my thinking in certain ways wouldn't be unwelcome to me, I suppose.

I have a couple of friends whose posts I still go on Twitter to read, even after I have stopped using the site routinely. If I found out the posts were entirely an LLM, I think I would still read them, simply because I find them useful and sufficiently high in signal-to-token.

0: Certainly, if every place only spoke about things I was interested in and never about things I was not, I wouldn't need separation of interest spaces at all. But the variation of interest vectors across different humans has made this impossible.
devsda
Imagine going to work or a social meeting where everyone looks and sounds the same (or drawn from a limited set), all with the same perfect tone, body language, and communication style. Sounds like a nightmare, and I would find it hard to relate and get that "perspective" when there is nothing to differentiate a person.

Everyone using LLMs for text is similar to that. If everyone uses the same LLM style, it's hard to understand where the other person is coming from. This is not a problem for technical and precise communication, though (the choice of LLMs in that context has other risks). It is also not strictly an LLM capability problem, because they can mimic or retain the original style and just "polish" with enough hints, but that takes time and investment, and people take the path of least resistance. So we all end up with similar text with typical AI-isms.

There are other reasons to dislike LLM text, like padding and effort asymmetry, that have been discussed here enough.
ahf8Aithaex7Nai
That’s exactly why I’ve refused to use autocomplete on smartphone keyboards from the very beginning. I want to express myself in my own words. In a work context, of course, things are a bit different: I want to move the project forward and not jeopardize my future paychecks. Authenticity tends to take a back seat there. However, I’d be more concerned about inefficiency. Is it really necessary to run every piece of communication through ChatGPT to refine the wording? Are you sure nothing gets lost in the process? Doesn’t that end up wasting a lot of work time without adding any real value? And on top of that, it leads to alienation and frustration. If you talk to me as if you were an LLM, don’t be surprised if I talk to you as if you were an LLM.
quectophoton
I think there was an SMBC comic about this topic, but I don't think I can find it, and the site doesn't exactly make it easy. I don't even remember if it was pre-2020 or not. It was about how people would get a thing (a robot?) that would repeat whatever they said but in a more fancy way (or something along those lines), to make them sound smarter. Then the people would start depending on these robots to communicate at all, to the point their speech degrades and they start making unintelligible noises that the robots still translate into actual speech. EDIT: Found it, from 2014: https://smbc-comics.com/index.php?id=3576
Scrapemist
When I wrote a snarky mail to the MD and I couldn’t suppress my anger, Claude did a great job smoothing it out while keeping it pointy.
Scrapemist
I once asked Claude to guess the prompt that generated a mail. Didn’t work, unfortunately.