Essex police pause facial recognition camera use after study finds racial bias

Brajeshwar 47 points 41 comments March 20, 2026
www.theguardian.com

Discussion Highlights (10 comments)

gib444

Alternative headlines: "Essex police, well aware of all the issues before using it, pause use until the expected bad publicity dies down." Or: "Essex police chosen as the force to take some flak for the issues while other forces steam ahead."

ap99

> more likely to correctly identify men than women.

> more likely to correctly identify black participants than participants from other ethnic groups.

> AI surveillance that is experimental, untested, inaccurate or potentially biased has no place on our streets.

I wonder if they're more worried about putting too many men in prison or too many black people.

pingou

If the suspect is Black, the software should automatically return zero matches in 30% of cases. Problem solved.

bloqs

Correlation does not indicate causation

ghusto

> the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”.

I am genuinely unsure what's going on. My understanding of the article is that the system is problematic because it is more likely to correctly identify black people than "other ethnic groups". Is that right?

OJFord

This is actually more (socially/ethically/philosophically) interesting than one might assume from the headline: it's not false positives, it's that it's more effective (correctly identifies someone is on a watch-list) for one group than another within a protected characteristic. So essentially they're pausing the use of it because it works too well for group A / not well enough for group B, potentially leading to disproportionate (albeit correct) arrests of group A.
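The distinction this comment draws can be made concrete: the reported disparity is a gap in the *true positive rate* (how often actual watch-list members are correctly flagged) between demographic groups, not a false-positive problem. A minimal sketch, with entirely invented numbers for illustration:

```python
# Hypothetical sketch of a per-group true-positive-rate comparison.
# All data below is made up; nothing here reflects the Essex study's figures.

def true_positive_rate(records):
    """Fraction of actual watch-list members the system correctly flagged."""
    positives = [r for r in records if r["on_watchlist"]]
    if not positives:
        return 0.0
    hits = sum(1 for r in positives if r["flagged"])
    return hits / len(positives)

# Invented example: two groups with the same number of watch-list
# members but different hit rates.
group_a = [{"on_watchlist": True, "flagged": True}] * 9 + \
          [{"on_watchlist": True, "flagged": False}] * 1
group_b = [{"on_watchlist": True, "flagged": True}] * 6 + \
          [{"on_watchlist": True, "flagged": False}] * 4

tpr_a = true_positive_rate(group_a)  # 0.9
tpr_b = true_positive_rate(group_b)  # 0.6
print(f"TPR gap: {tpr_a - tpr_b:.2f}")  # group A's members are caught more often
```

Even with a false-positive rate of zero for everyone, a gap like this means one group's watch-listed members are disproportionately likely to be stopped, which is the "correct but disproportionate arrests" concern raised above.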

blitzar

> the system was more likely to correctly identify men than women and it was “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”

Technology has no doubt moved on a lot; however, studies were finding the opposite (and with order-of-magnitude errors) as recently as 2020. From a lazy Google literature search:

> these algorithms were found to be between 10 and 100 times more likely to misidentify a Black or East Asian face than a white face

https://jolt.law.harvard.edu/digest/why-racial-bias-is-preva...

bsenftner

Former author of one of the top 5 facial recognition servers in the world for multiple years running; here's what's going on: the industry has solved this issue, but the potential clients are seeking the lowest bidder and picking the newer companies, the nepotistically created, well-connected firms that are not really players, and those companies have terrible implementations. This is not a case of the technology not being there yet; we solved all these racial bias issues 10 years ago. But new companies with new training sets and new ML engineers who do not know any of the industry's history are now landing contracts with terrible-quality models and well-connected sales channels.

moi2388

> “statistically significantly more likely to correctly identify black participants than participants from other ethnic groups”

Great. Wasn't the problem before always that it couldn't correctly identify non-white people? It does it accurately now. That is somehow also a problem? It should make more mistakes?

glyco

This seems like an easy problem to solve - when the system informs you of a black criminal, just roll dice to ignore them and let them get away.
