AI will never be ethical or safe

caisah 59 points 31 comments April 14, 2026
meiert.com

Discussion Highlights (18 comments)

superkuh

These kinds of write-ups all have an unstated implicit premise: they're talking about corporate AI run by corporations. They're not actually talking about the technology. Corporate AI will never be ethical or safe because corporate persons have different motivations and profit incentives driving them than human persons do. And most of the time they're quite nasty when viewed through the lens of human ethics. It reminds me of the parable of the blind monks each feeling a different part of the elephant and arguing about its shape. None of them is wrong, but each is only talking about a limited subset of the elephant (AI). Cory Doctorow is much more eloquent in his explanation of this important distinction in his reverse centaur metaphor.

Maxatar

The article immediately starts off with such a glaring contradiction that it makes it very hard to correctly interpret the remainder of it. You can't say that something can never be ethical/safe on the one hand, and then on the other hand say that being ethical/safe depends on context/intent. Those two statements contradict each other. Either AI can be safe and ethical in the right context with the appropriate intent which contradicts the title, or it can't be safe/ethical regardless of intent/context, in which case the title is correct but the reasoning is incorrect. There is no consistent way to interpret the remainder of the article with such a glaring and obvious inconsistency.

ckastner

> The reason is this: Both ethical and safe conduct depend on context and intent.

The same applies to knives, and they can be plenty useful, and used in a safe manner.

Rohinator

Would AI be safer or more ethical if it required malicious users to lie about their intent first? "Most people don’t provide their context. They never have—not to search engines, not to librarians, not to hardware store clerks." Exactly. Are hardware store clerks unethical as well?

amelius

Can an encyclopaedia be ethical or safe? Can a search engine be ethical or safe? Can an AI be ethical or safe? If you answer differently for one or more of these questions, then you'll have to say why and where you draw the line.

toenail

> Both ethical and safe conduct depend on context and intent.

That entire line of reasoning is absurd. You can get information from books; they don't know context and intent either. Books will never be ethical or safe.

daft_pink

AI is just a way to search information, program, and control computers faster with natural language understanding, etc. I'm not sure why people are attributing so much to it. It just allows a single person to do a lot more units of work, the same way a computer allowed a single person to do a lot more units of work.

rvz

There's no such thing as "ethics in AI" at a company when there are billions of dollars of investor money on the table. "Safety" was just a smokescreen and the perfect scare tactic for tricking governments into turning even more tyrannical and imposing extreme surveillance on everyone, which benefits tech corporations, data brokers, and AI companies.

bnjmn

"Context and intent cannot be known" seems like a bit of an overstatement? A qualifying clause like "in all cases, with complete confidence" would allow for the possibility of alignment in some cases (yay), but not always, and of course it's that "not always" that's problematic when you're trying to make blanket safety guarantees. Here's a version I imagine both the author and I can nod along with: "Context and intent cannot be known at model training time, so most attempts to enforce safety or ethics guardrails purely through the weights of the model, fine-tuning, or other training-time interventions are doomed to guarantee very little at inference time."

ctoth

So from the exact same article: "Doctors Will Never Be Ethical or Safe." "Hardware Stores Will Never Be Ethical or Safe." Okay?

undecisive

This article in a nutshell: AI will never be ethical or safe, because no tool can ever be ethical or safe, without it knowing the complete motivation of any person using it and every person who might receive its outputs. Wasn't the article I was expecting! Not sure it helps much, except maybe if you wanted to muddy the water of ethics-and-AI discussions.

dzink

Water can never be safe. Water in large quantities can drown anyone. When mixed with the wrong things it can trigger dangerous chemical reactions. Water safety depends on context and intent. So if we consider AI a chemical substance: if deployed with limited context in tools with specific intent, can it be useful beyond the tools available at this moment? You can't trust just any liquid that looks like water, just as you can't trust just any model, or especially any inference provider (they can switch models to save money, mess with other key parameters, or insert ads). You have to test your water supply and your AI supply regularly, and benchmark new sources. We'll see labeling and quality guarantees from future suppliers. We'll see personal models and model families trained and refined as brands for reliability, bottled neatly for you by certified suppliers. In the meantime, we all just found ourselves out of a desert, splashing around in this funky thing that we now find on the ground and falling for free from clouds.

dec0dedab0de

This article is nothing, but the title is probably right. At least if you consider it unethical to source training data without informed consent, and generating code inherently unsafe. Of course, you have to have a very narrow definition of AI for even that to be true.

akagusu

AI will never be ethical, because using copyrighted material to train AI without proper copyright payments is not only unethical but illegal. Unfortunately, law enforcement has decided that copyright law only applies to regular citizens like me and not to the billionaire owners of AI companies.

lutusp

Wait a sec ...

> The problem AI inherits from us is that context and intent cannot be known. Both can be omitted or lied about.

This implies that neither we nor our creations can ever be ethical or safe. It follows logically that no entity can ever meet that standard. Therefore focusing on AI is arbitrary; the focus might as well have been pit vipers or platypuses. And the article misses the point that an AI engine can be forced to imitate ethical behavior, because it has no civil rights or behavioral latitude (yet). Granted, that would only be an imitation of ethical behavior, but then, so is ours.

gmuslera

Never is a long time. And humans unaware of intent or context can also make unethical decisions, even if we assume an absolute and eternal ethical framework. Asimov's robot stories (with their magical three/four rules) had examples of situations where bad things happened even when everyone was being "ethical". And in the Black Mirror episode "Men Against Fire", humans were the ones making unethical decisions based on a fake context (and reality is much worse than fiction, as we've seen in the last months). Taking out the absolutes, I would stop at saying that today's LLMs lack context, critical thinking, and a lot more, which makes them unethical and unsafe. But some future thing that could also be labeled AI might have some of those problems mitigated, maybe making better/safer decisions than humans in general.

happytoexplain

I don't think the writeup is very good, but the thesis is not being engaged with honestly in these comments. Knives, books, water, calculators, encyclopedias, search engines: Just a few of the analogies being made with barely a word beyond "it's like X". In fact, the opposite: Demanding that other people make arguments that AI is not like X. Analogies are almost always just a pithy, empty distraction. They are the fodder of low-quality internet conversations. It should be obvious why an analogy is so often reached for - if an argument about X can't be supported on its own, it's easy to point to another thing, Y, with some similarity, but which more easily fits the argument in other ways, and... just assert that they're the same. Here's a dumb analogy: Yes, "it's just a tool." So is C4.

marshray

This argument is so bad that I have to wonder if it's an intentional strawman. (I don't think it deserves to be flagged, however.) It leads with "AI Will Never Be Ethical or Safe". The first sentence is "AI will never be *entirely* ethical or safe." It concludes with "AI is a tool, and it can be used in ethical and unethical, safe and unsafe ways" and compares it to "hardware store clerks". Hardware stores are *specifically* places where society has had a centuries-long conversation about risk, and the products on sale represent a very intentional set of choices. In some parts of the US, hardware stores used to sell dynamite; they don't anymore. That's the 'social contract' functioning in daily life. "AI is like a tool one might buy from the hardware store" is, in most people's minds, the opposite of the opening premise.
