Stop Sloppypasta
namnnumbr
213 points
100 comments
March 15, 2026
Related Discussions
Found 5 related stories in 52.7ms across 3,471 title embeddings via pgvector HNSW
- Your AI Slop Bores Me alexanderameye · 11 pts · March 06, 2026 · 61% similar
- Your AI Slop Bores Me maurycyz · 15 pts · March 07, 2026 · 60% similar
- A curated list of AI slops xiaoyu2006 · 15 pts · March 16, 2026 · 58% similar
- Show HN: Your AI Slop Bores Me mikidoodle · 12 pts · March 05, 2026 · 58% similar
- AI Slop Is Infiltrating Online Children's Content jruohonen · 14 pts · March 21, 2026 · 55% similar
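The similarity percentages above come from nearest-neighbor search over title embeddings (the site reports pgvector with an HNSW index). As a rough sketch of the underlying idea only, here is a brute-force cosine-similarity ranking in Python; the toy embeddings, titles, and function names are illustrative and not the site's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_related(query_vec, corpus, k=5):
    # Rank stored title embeddings by similarity to the query embedding.
    scored = [(title, cosine_similarity(query_vec, vec)) for title, vec in corpus]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

# Toy 3-dimensional "embeddings" for illustration only.
corpus = [
    ("Your AI Slop Bores Me", [0.9, 0.1, 0.0]),
    ("A curated list of AI slops", [0.7, 0.3, 0.1]),
    ("Unrelated database news", [0.0, 0.2, 0.9]),
]
print(top_related([1.0, 0.0, 0.0], corpus, k=2))
```

In production this linear scan is replaced by an approximate index (HNSW) so the lookup stays fast across thousands of embeddings, at the cost of occasionally missing a true nearest neighbor.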
Discussion Highlights (20 comments)
namnnumbr
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and wrote this rant to explain why it's rude, along with guidelines for what to do instead. sloppypasta: verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
stabbles
I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe of the oracle; the latter is people tired of having to look something up for others.
uniq7
This article's proposal for stopping sloppypasta is to convince the people who do it to stop, but I am more interested in what someone who receives sloppypasta can do. How do I tell my colleagues to stop contributing unverified AI output without creating tension between us? I've never done it so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
incognito124
Related: https://news.ycombinator.com/item?id=44617172
madrox
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage it in any sophisticated fashion. The internet was not a bastion of high-quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI may be what actually makes this possible.

I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral, like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize. And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't as if the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power was just being used to share cat videos.
simianwords
I've been thinking about this: what if AI ran autonomously and found claims to flag that are factually incorrect? It's easy to do on social media because the context is global, but in enterprises it's a bit harder. Something like "flagged as very likely untrue by AI" is something I would really appreciate. I see many posts and comments throughout the internet that could easily be dispelled by a single LLM prompt. But this should only fire when the confidence is really high.
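The confidence gate this comment describes could look something like the following minimal sketch. The `classify` stub, the lookup table inside it, and the 0.95 threshold are all hypothetical stand-ins for a real fact-checking model:

```python
FLAG_THRESHOLD = 0.95  # only surface a flag when the model is very sure

def classify(text):
    # Hypothetical stand-in for a real fact-checking model call.
    # Returns (verdict, confidence); here it just keys off a toy lookup.
    known_falsehoods = {"the earth is flat": 0.99}
    conf = known_falsehoods.get(text.lower().strip(), 0.0)
    return ("likely_untrue", conf) if conf else ("unknown", 0.0)

def maybe_flag(text):
    # Gate the flag on high confidence, as the comment suggests:
    # better to stay silent than to mis-flag a true statement.
    verdict, confidence = classify(text)
    if verdict == "likely_untrue" and confidence >= FLAG_THRESHOLD:
        return f"flagged as very likely untrue by AI (confidence {confidence:.2f})"
    return None

print(maybe_flag("The Earth is flat"))
print(maybe_flag("Water boils at 100 C at sea level"))
```

The design point is the asymmetry: a missed flag costs little, while a false flag on a true statement destroys trust in the feature, so the gate stays deliberately conservative.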
OptionOfT
It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel and don't question it at all. It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it. When I ask a question in Slack, I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense. And it shows up the most with people who answer questions in domains they're not 100% familiar with.
rrr_oh_man
It's ironic, because the site has all the hallmarks of an LLM generated website.
chewbacha
When you must remind someone to “think” when using a technology, because the path of least resistance is to not think, it feels like the technology isn't really helping. They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people. They (tech companies) don't want us to be smart anymore. They are commodifying intelligence.
0xbadcafebee
If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro. - now the fun part: which AI did I use to write the above?
parrellel
Ah, AI slop trying to convince you to properly edit your AI slop, how depressing.
singpolyma3
This is what slop used to mean. Then people started using it for anything an LLM assisted with. Language evolving faster than the tools...
jaimex2
I have a prompt that's basically the CIA sabotage handbook for replying to any co-worker who dares send me LLM-generated crap. It includes 4 follow-up actions, and I automate check-in messages to see how they are progressing with them.
tonymet
"Just google it" or copying from Google is just as bad. It's passive-aggressive and aims to shut down dialog. I wish there were a remedy. I block or mute the person when I can.
api
The solution is to have your bot read the sloppypasta for you!
Rapzid
This is one of my biggest pet peeves, to the point where I'm often pondering how I can leave the industry now. People who previously couldn't put in the effort or quality are now vomiting tons of slop I'm meant to read and review. PR descriptions. Documentation. Plans. Etc. Walls of sprawling text, "relevant files", linked references, unhelpful factoids, subtle inconsistencies and incoherencies. It's oppressive, like 95% humidity on a warm day.
boerseth
This reminds me of why I despise certain works/styles of art and artists. I feel cheated if I'm made to spend more time and effort interpreting a work of art than the creator put into it themselves.
anonzzzies
Talking with middle managers at Fortune 100 companies, I often get "send us the documents so we can make a decision". It used to be that we carefully wrote things and no one would read them. Now we send 3,000 pages of AI crap to make sure no one reads it, and then we get approval to start working. Not great, but the old situation was worse: no one would read anything, and they'd ask you to read it to them on a conference call with 36 people. That doesn't happen anymore.
djoldman
What's interesting is that there are probably people who could spend a year happily working with an AI "coworker" without knowing it was an AI, but then get upset and change their viewpoint after learning the truth.
GaryBluto
> "ChatGPT says" is the enshittified LLM-era equivalent of LMGTFY [...] Recipients are left to figure out whether it's AI generated

How?