When do we take AI doomers seriously? That question was the subtext of Elon Musk's attempt to shut down OpenAI's for-profit business. Musk argues that the organization, founded as a charity focused on AI safety, veered off course in pursuit of profit.
The only expert witness called by Musk was Stuart Russell, a professor at UC Berkeley who has long studied the dangers of AI. He testified about risks ranging from cybersecurity threats to misalignment and the winner-takes-all nature of AGI development. However, his testimony was curtailed by objections from OpenAI's attorneys.
Russell co-signed an open letter calling for a pause in AI research. Musk signed it too, even as he was launching xAI, his own for-profit lab. That contradiction illustrates the arms-race dynamic of the race to reach AGI first, which has prompted calls for tighter government regulation.
The core tension is clear: OpenAI's founders have repeatedly warned about the risks of AI while also pushing hard for rapid development and profit. The founding team's fear of a single entity controlling AGI drove them to seek ever more capital, ultimately splintering the team and setting the stage for today's lawsuit.







