Legal AI slop is becoming a real problem

dryadin 27 points 11 comments March 03, 2026
www.ctinsider.com

Discussion Highlights (6 comments)

duskdozer

> Lawyers for the Brooklyn, N.Y.-based landlord submitted a brief to a lower court containing "hallucinatory" citations created by generative AI

Oof. I really can't believe this. Well, I can, but I don't want to.

hrimfaxi

> "Unfortunately," they wrote, "Counsel did not notice that AI had intuitively made changes to the brief prior to filing."

What does "intuitively made changes" mean here?

jqpabc123

First, the lawyers will discover that AI is inherently unreliable. And then they will monetize educating the rest of the world to this fact. In other words, applying current AI to anything "important" is a liability issue waiting to happen.

pengaru

s/Legal // ftfy

kevin42

I recently filed a lawsuit in federal court, but because of the nature of the suit (an adversarial proceeding in a bankruptcy case, where I want to cut my losses knowing collection is going to be the real problem), I decided to do it pro se.

I've used a lot of AI to do this, alongside a lot of research of my own: reading documents from similar cases, verifying citations, etc. So far things are going well; I've won on all the motions to date. But I'm using critical thinking and carefully reviewing everything.

The real failure with slop filings is procedural, not technological. A competent attorney should never submit a brief built on case law they haven't verified. Legal practice has always relied on reading the sources, confirming relevance, and taking responsibility for interpretation.
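The verification step described above can be partially scripted. As a minimal sketch (the reporter list and regex are illustrative, not a complete Bluebook parser), one could extract candidate case citations from a draft brief so each can be looked up and actually read before filing:

```python
import re

# Illustrative subset of federal reporters, longest variants first so the
# alternation matches them greedily. This is not a complete Bluebook grammar.
REPORTERS = ["F. Supp. 3d", "F. Supp. 2d", "S. Ct.", "F.2d", "F.3d", "U.S."]

# Matches "volume Reporter page", e.g. "410 U.S. 113".
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+(" + "|".join(re.escape(r) for r in REPORTERS) + r")\s+(\d{1,5})\b"
)

def extract_citations(text: str) -> list[str]:
    """Return each 'volume Reporter page' string found in the text."""
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]

draft = (
    "Plaintiff relies on Roe v. Wade, 410 U.S. 113 (1973), and "
    "Twombly, 550 U.S. 544 (2007); see also 550 F.3d 1212."
)
for cite in extract_citations(draft):
    print(cite)  # each of these still has to be pulled up and read by a human
```

This only finds strings that look like citations; it cannot tell a real case from a hallucinated one, which is exactly why the human review step is the part that matters.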

andriy_koval

I've seen plenty of lawyers over the years put low-quality speculation, dressed up as fact, into various documents. It looks like that process has now just been automated by AI.
