Ars Technica newsroom AI policy
LorenDB
17 points
6 comments
April 22, 2026
Related Discussions
Found 5 related stories in 71.1ms across 5,335 title embeddings via pgvector HNSW
- Ars Technica fires reporter after AI controversy involving fabricated quotes danso · 130 pts · March 03, 2026 · 71% similar
- Senior European journalist suspended over AI-generated quotes Brajeshwar · 84 pts · March 21, 2026 · 58% similar
- MIT tech review: OpenAI is Building an Automated Researcher Bang2Bay · 13 pts · March 23, 2026 · 57% similar
- Stanford report highlights growing disconnect between AI insiders and everyone ZeidJ · 233 pts · April 13, 2026 · 56% similar
- AI Resistance: some recent anti-AI stuff that’s worth discussing speckx · 346 pts · April 20, 2026 · 56% similar
Discussion Highlights (4 comments)
spondyl
Having ads in the middle of an article about newsroom policy is pretty wacky
add-sub-mul-div
Sounds like the usual. "We don't use generative AI, except for the places we do. But forget what you know about human nature and everything you've seen from everyone else using it. We're going to use it responsibly."
klustregrif
There is a certain level of recursive irony in Ars Technica needing a formal AI policy because a senior reporter used an AI to hallucinate quotes for an article about an AI hallucinating a hit piece. Or maybe not, I don't know, I had AI write that comment. In any case for anyone who missed what led up to this AI policy here's a reference: https://news.ycombinator.com/item?id=47226608
Wowfunhappy
> Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source. Everything must be verified.

Good. This is basically treating AI as a search engine—it can lead you to the right answer, but you need to verify that answer for yourself.