Why do we tell ourselves scary stories about AI?

lschueller 40 points 96 comments April 10, 2026
www.quantamagazine.org · View on Hacker News

Discussion Highlights (20 comments)

Zigurd

If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out if I'm BSing them or not, it's a win.

afavour

It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show the inevitability of it? Or to sell themselves as the one responsible power to constrain it? It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.

mememememememo

I read and experience scary stories about AI already. It is not a future maybe thing.

zaps

Why do we tell ourselves scary stories about anything?

5asaKI

Indeed. Apart from the obvious prompt research frauds mentioned in the article, the model learned all deceptive behaviors from hundreds of Yudkowsky scenarios that are easily available. It literally plagiarizes its supposed free will like a good IP laundromat.

chrisbrandow

I don’t think the fact that the robot was instructed to lie to a human and was able to do so successfully makes the story much less scary for most people.

nalekberov

> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.” Why Harari feels an obligation to comment about everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.

Forgeties79

LLM companies’ behavior, AI evangelists, and the investment fervor around it all, are telling us the scary stories.

0x4e

Because we don't like uncertainty, and the AI future is uncertain. There are multiple high probability scenarios. Because we're seeing how its capabilities increase overtime. I find the rate at which I prefer to go to an AI than an UpWorker is scary. Because we——the people——are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).

Rzor

For regulatory capture, of course. They are not fooling me. There may be other motives, and the more ever-doom-looking crowd can find something in it for themselves as well, but you don't have to dig any deeper if you are looking for an explanation for the perspective of the people actually building it. The Chinese tech sector popularizing cheap and open source models sure did a number on that narrative, too. Llama models, a while ago, too.

ramon156

I feel like this article is more written towards non-techies. A decent amount of programmers have touched coding agents, and know it "kind of" does it's job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's not anymore broken than it already was. Win win. However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following: - Fix tedious tasks that cost me more to figure out than I care for - Research something I'm not familiar with, and give me the facts it had found, and even then I end up looking at the source myself

everdrive

One thing that strikes me that I never really see anyone discuss is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before people were quite afraid that we'd build conscious computers. This was long before there was any sense that could actually accomplish the task. I think that similarly to seeing faces in the clouds, we imagine a consciousness where none exists. (eg: a rain god rather than a complex system of physics and chemistry) Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness. So, you have an historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists combined with the fact that we created things which definitely seem conscious. (not to mention that consciousness could genuinely be on its way soon)

ACCount37

It's simple. It's because AI is the scariest technology ever made. Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence. By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.

bharat1010

The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.

ggambetta

We tell ourselves scary stories about everything new. Advances in electricity + medicine == FRANKENSTEIN!

vdelpuerto

The framing of "scary stories" misses something interesting: most of the actual operational fear isn't about consciousness or superintelligence — it's about systems that seem to work until they quietly don't.

SpicyLemonZest

The actual contents of this article are making reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomanaical goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out a HN comment does not all help achieve them - I suspect most people reading this comment are in the same boat.) It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.

GolfPopper

Why does the uncanny valley[1] exist? (If it truly does.) What in our evolutionary history gave us a reflexive rejection of things that seem human but aren't? 1. https://en.wikipedia.org/wiki/Uncanny_valley

KaoruAoiShiho

TLDR: Writer hasn't heard of agents.

jacquesm

This article would be a lot more digestible if we didn't have actual scary data rather than just stories. Not a day goes by without some prompt injection oopsie, security gotcha, deepfake or some sandbox escape artist demonstration and tbh I'm impressed but more to the point where I don't doubt this is dangerous tech, I'm sure of it. This is roughly 1995 again and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now with human language as the input stream, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field I guess.

Semantic search powered by Rivestack pgvector
4,179 stories · 39,198 chunks indexed