DOGE canceled NC Museum grant for HVAC systems after ChatGPT flagged it as DEI

cldwalker · 70 points · 17 comments · March 18, 2026
myfox8.com

Discussion Highlights (9 comments)

cldwalker

The original article title is "DOGE canceled High Point Museum grant for HVAC systems after ChatGPT flagged it as DEI, lawsuit alleges", but I shortened it to fit the character limit.

mellosouls

Somebody tweeted the other day that they'd had their $600k grant cancelled because of the word "polarization". It was a physics grant for research on the polarization of light.
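
For illustration, a hypothetical keyword screen of the sort that anecdote implies. The blocklist terms and the matching rule are assumptions, not anything DOGE has published; the point is that a context-free word match cannot tell optics research from political rhetoric.

```python
# Hypothetical keyword screen. Blocklist terms and matching rule are
# assumptions; the failure mode is that a context-free word match flags
# optics research as readily as political rhetoric.
BLOCKLIST = {"diversity", "equity", "inclusion", "polarization"}

def keyword_flag(abstract: str) -> bool:
    words = {w.strip(".,;:()").lower() for w in abstract.split()}
    return not BLOCKLIST.isdisjoint(words)

print(keyword_flag("We measure the polarization of light in birefringent crystals."))
# -> True: a $600k optics grant gets flagged on a single word.
```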

techblueberry

ChatGPT determined that this was related to DEI, responding, “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.” Well, case closed.
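
For context, a hypothetical reconstruction of the kind of one-shot flagging pipeline the lawsuit describes: a single leading yes/no prompt, no rubric, no confidence score, no human review. The function, model name, and prompt wording are illustrative assumptions, not DOGE's actual code.

```python
# Hypothetical reconstruction of a naive one-shot flagging pipeline. Model
# name, prompt wording, and the decision rule are assumptions; the failure
# mode is that a leading yes/no question invites "Yes" plus a post-hoc
# rationalization, as in the HVAC answer quoted above.
from openai import OpenAI

client = OpenAI()

def flag_as_dei(grant_description: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; the article only says "ChatGPT"
        messages=[{
            "role": "user",
            "content": "Is this grant related to DEI? " + grant_description,
        }],
    )
    # Brittle decision rule: any answer beginning with "yes" cancels the grant.
    return resp.choices[0].message.content.strip().lower().startswith("yes")
```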

josefritzishere

AI is so stupid it enables third parties to be stupider. It's a new type of cognitive deficit.

goatlover

If the goal of DOGE was really to reduce "waste, fraud and abuse", wouldn't a human be checking for false positives? If anything, mistakenly cancelling a grant sounds like waste or abuse.

JumpCrisscross

Interesting they found ChatGPT more useful than Grok.

ChrisArchitect

Related: Another DOGE staffer explaining how he flagged grants at NEH for "DEI" https://news.ycombinator.com/item?id=47352819

jfengel

The grant was for $349,000. There's an old joke about a billion here and a billion there, but this is 0.00035 billion. DOGE asserted it would save $2 trillion; its own website claims about 10% of that, and even that number is likely exaggerated by as much as an order of magnitude. Nor is this a work in progress: DOGE has already disbanded. It was clearly about harassing ideological opponents, and they couldn't even get that right. Meanwhile, US federal budget outlays have gone up by $400 billion, from $7 trillion in 2025 to $7.4 trillion in 2026.
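
Working through the figures that comment cites (all numbers come from the comment itself):

```python
# Scale check using only the figures jfengel cites.
grant = 349_000                        # the cancelled museum grant, in dollars
promised = 2e12                        # DOGE's asserted $2 trillion in savings
claimed = 0.10 * promised              # ~10% of that claimed on DOGE's own site
skeptical = claimed / 10               # "exaggerated by ... an order of magnitude"

print(grant / 1e9)                     # 0.000349 -> the "0.00035 billion"
print(claimed / 1e9, skeptical / 1e9)  # 200.0 vs 20.0 billion
print((7.4e12 - 7.0e12) / 1e9)         # 400.0 billion growth in outlays
```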

OutOfHere

The problem is broader -- it's a risk for anyone using LLMs for classification. A shallow application fails to model uncertainty, fails to calibrate it even when it is modeled, and its users fail to weigh the costs and consequences of misclassification.
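
A minimal sketch of the alternative OutOfHere is gesturing at, assuming an OpenAI-style API: make the model emit a label with a confidence, and route anything below a cost-derived threshold to human review instead of auto-cancelling. The JSON contract, threshold, and model name are assumptions, and self-reported confidences would still need calibration against labelled data.

```python
# Sketch: LLM classification with explicit uncertainty and an abstention path.
# Schema, threshold, and model are illustrative assumptions. The key idea is
# that the cost of a false positive (wrongly cancelling a legitimate grant)
# sets the threshold, and low-confidence cases go to a person, not a script.
import json
from openai import OpenAI

client = OpenAI()

AUTO_ACTION_THRESHOLD = 0.95  # assumed; would be tuned on a labelled holdout set

def classify_grant(description: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Classify this grant as DEI-related or not. Reply in JSON: "
                '{"label": "dei" or "not_dei", "confidence": 0.0-1.0, '
                '"rationale": "..."}\n\n' + description
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

def route(description: str) -> str:
    result = classify_grant(description)
    # Self-reported confidence is not calibrated out of the box -- exactly
    # OutOfHere's point -- so even "flag" cases deserve periodic spot checks.
    if result["confidence"] < AUTO_ACTION_THRESHOLD:
        return "human_review"
    return "flag" if result["label"] == "dei" else "clear"
```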
