OpenAI has thrown its weight behind a bill in Illinois that would shield AI labs from liability if their technology causes mass deaths or financial catastrophes. The move marks a shift in the company's stance on AI regulation, away from strict oversight and toward reducing risk while letting the technology proliferate.
The legislation defines "critical harms" as any incident causing at least 100 deaths or $1 billion in property damage. It comes at a time when federal and state legislatures are grappling with how to regulate AI without inhibiting its advancement.
While the bill has a slim chance of passing given Illinois' stringent tech regulations, it highlights the ongoing debate over AI liability. As powerful models like Anthropic's Claude push boundaries, so too do the questions surrounding their safety and accountability.
The shift in OpenAI’s stance is part of a broader Silicon Valley strategy: avoid patchwork state rules in favour of clear federal guidelines that maintain US competitiveness in the global AI race.
However, critics argue this could set a dangerous precedent in which companies are shielded from liability for catastrophic outcomes. As AI development accelerates, these debates will only become more pertinent.