OpenAI backs Illinois bill that would limit when AI labs can be held liable

smurda 427 points 309 comments April 10, 2026
www.wired.com · View on Hacker News

https://archive.md/WzwBY

Discussion Highlights (20 comments)

mrcwinn

Fortunately at any moment the virtuous non-profit will step in and make this all okay.

giancarlostoro

Is this for military scenarios, or for something like ChatGPT designing a drug that seemed to work but killed millions of people five years later? Because they should 100% be liable for the latter. As for the former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models behind their private network, completely firewalled from any public network (SIPRNet, iirc). If they lock it down behind a highly classified network, good luck figuring out how they're using AI.

ArekDymalski

So much for the "Our mission is to ensure that artificial general intelligence benefits all of humanity." I was naive to hope that no such laws would ever pass.

sassymuffinz

So they did the math and worked out that it's cheaper and easier to lobby the government than to make their product safe. And these are the people that a lot of programmers want to hand the keys to the kingdom. Idiocracy really is in full effect.

avaer

Take all of the data, take all of the credit, take all of the money, and none of the blame. That would be a better mission statement for OpenAI at this point.

jstummbillig

I am not sure what the other side of this argument looks like: unlimited liability (i.e. liability no matter how poor the implementation and use of the tech is)? That would be quite a novel burden, one that no other tech (afaik) has had to carry so far. We have always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally, and, maybe more so with increasing capability, that no human can be expected to do so in its stead. But surely some limits must apply, and the more interesting question is what they are, as with any other tool?

scrumper

I forget, wasn't OpenAI the company that was formed as a nonprofit to limit the risks of LLMs? Founded by a bunch of visionaries scared of what they had wrought and anxious to lead so they could make sure it was only used responsibly?

Talderigi

We built systems we don’t fully understand, so naturally the next step is… immunity

LogicFailsMe

Yep, this is everything wrong with AI in one easy-to-protest package, but do keep going on and on about the evils of datacenters, how they're coming for your jobs, and how AI art isn't art. That's really winning hearts and minds!

elAhmo

Sam is working hard to confirm everything in that article.

himata4113

I have made both GPT 5.4 and Opus 4.6 produce content on creating neurotoxic agents from items you can get at most everyday stores. It struggled to suggest how to source phosphorus, but eventually led me to some eBay listings selling elemental phosphorus 'decorations' and also pointed me toward real (!!) black-market codewords for sourcing such materials. It coached me on how to stay safe, what materials I needed, how to stay under the radar, and the entire chemical process, backed by academic Google searches. Of course this was done with a lengthy context exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun. All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding. I did try to re-run the tests a few days ago, and the expected session termination now occurs, so it seems some adjustment was made, but it might also just be the general randomness of Anthropic's safety layer. I am very confident when I say that this keeps every single person who works in anti-terrorism units awake at night.

micromacrofoot

Please note that you cannot hold the Torment Nexus™ liable for any torment you experience.

greenavocado

A conspiracy theorist would claim this is straight from Protocols 15 & 16. But I don't say that, because I'm not a conspiracy theorist.

15. Our method of gaining power is better than any other because it grows invisibly. Then when it has gained enough strength, we can unleash it; and it will be unstoppable because no one will be prepared for it.

16. We need to do a lot of evil things in order to gain power. But that's okay, because once we have power over everything we can use it to do good things, like running the nations properly. We could never do that if we gave people freedom. The end justifies the means. So let's put aside moral issues and focus on the end result.

giwook

This seems par for the course for OpenAI/Sam Altman. Unfortunately, they are not the first company to try to externalize their costs, and they will not be the last. Serious question, maybe a bit naive: is there anything we can do to push back against and discourage the externalization of costs onto others? Or is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?

sph

The thing that bugs me the most about OpenAI isn't the AI-enabled mass deaths. It's the hypocrisy.

an0malous

Let’s see how long until this is flagged off the front page. I’ll put the over/under at 1 hour from the posted time

simianwords

Is there something equivalent in other industries that we can compare to? This is the summary:

> Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.

https://legiscan.com/IL/bill/SB3444/2025

I'm trying to think of an alternative bill. Imagine OpenAI came up with a model that, when deployed in OpenClaw, allows you to spam people, and this causes a huge disruption. Should OpenAI be liable for it? What if this was not intentional, and they had earnestly tried to prevent it through safety protocols?

xeyownt

Skynet begins learning at a geometric rate.

4128kawr

Good thing OpenAI is a corporation for the public benefit. Altman, with his constantly fake worried look, must be the most hated picture in existence. Please write articles without a picture, or add a trigger warning.

chollida1

Sure, and Google, Facebook, and Twitter support Section 230, which gives them cover for hosting others' content. A company backing legislation that takes liability off of it is something companies will always do.
