AI Gets Wrong Woman Jailed for Six Months, Life Ruined

vaxman · 72 points · 25 comments · March 14, 2026
www.youtube.com

Discussion Highlights (10 comments)

gnabgib

Discussion (730 points, 2 days ago, 379 comments) https://news.ycombinator.com/item?id=47356968

bradley13

Really, it's more about the police not doing their job. Face recognition pointed her out, the police saw she had a rap sheet, and therefore they didn't check further. She apparently could not afford a lawyer, who would have pointed out that she was provably at home (transactions, etc.) at the time the crime was committed in another state. Really, it's not specifically AI's fault, though it made the error easier.

mvrckhckr

AI is a tool. It is humans who abdicate their responsibility (and thinking).

mannanj

Humans kill people, not AI.

hyperhello

In Oregon the courts just ruled that since defendants weren't provided a public defender within a certain amount of time, their cases were voided. There was an outcry, of course. But the ruling was sound: the pain had to be pushed to the part of the system that was failing. An honest system does not allow things like this; the accused either needs to have a competent advocate, or the case is void.

righthand

I’m sure the cops got a slap on the wrist and their lives are fine. ACAB.

odshoifsdhfs

But have they tried the latest models? I understand this is from October last year, but Opus 4.6 is night and day. I wasn't a believer, but this latest model changed everything. It hasn't sent any innocent person to jail yet and identified all my neighborhood creeps with 100% accuracy. /s

rectang

My takeaway from the huge discussion thread yesterday was that the big divide among HN commenters is whether or not purveyors of AI tech have any responsibility to account for automation bias in their users. https://en.wikipedia.org/wiki/Automation_bias

> Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.

In other words, if it is foreseeable that the tool will be misused, what does that mean for the toolmaker?

OutOfHere

Those deploying AI where it can affect individuals must ensure that the UI always prominently shows the failure rate. For example, if a person's face is matched to an ID, the UI must show not just the match percentage (which is very misleading) but also, in context, the odds of getting it wrong. For example, if there are 7 IDs whose faces are at least a 95% feature match to the thief, the odds of getting it wrong are at least 6 out of 7, meaning the chance of an accurate classification is just 14% at best!
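A minimal sketch of that arithmetic (the function name is hypothetical; the 1/N bound assumes the true person is among the candidates and that scores above the threshold carry no further discriminating power):

    # Upper bound on the chance a flagged match is the right person,
    # given N candidates that all cleared the same similarity threshold.
    # Assumes the true person is among them and that scores above the
    # threshold carry no extra discriminating power.
    def chance_match_is_correct(candidates_above_threshold: int) -> float:
        if candidates_above_threshold < 1:
            raise ValueError("need at least one candidate")
        return 1.0 / candidates_above_threshold

    # Seven IDs clear a 95% feature-match threshold:
    print(chance_match_is_correct(7))  # 0.1428... -> ~14% accurate at best

A UI that surfaced this 1-in-N figure next to the raw 95% similarity score would make the real odds much harder to ignore.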

mulosolitario

AI gets it wrong 40%, 50%, even 70% of the time. Nonetheless, Anthropic's Claude has been used (behind Palantir) by the most moral army in the world to decide who to kill in Gaza or where to drop the bombs in Tehran. AI "solves" the problem of accountability because it can fabricate all the "legitimate targets" you need. So now you can drop a bomb, kill 10 children, and claim it is moral because AI said they are terrorists.
