Elon Musk's legal bid to dismantle OpenAI has brought its commitment to artificial intelligence (AI) safety under intense scrutiny. A former employee testified that OpenAI’s focus shifted from research to product development, raising questions about whether the organisation is prioritising profits over safety.
Rosie Campbell, who joined OpenAI in 2021 and left in 2024 after her team was disbanded, described a shift towards a more product-oriented culture and warned of the risks that could follow if safety measures are compromised.
In one incident cited in the case, Microsoft deployed GPT-4 without sufficient safety checks, underscoring the need for robust review procedures as AI capabilities grow. OpenAI's silence on the question of AGI alignment, together with the controversial firing of CEO Sam Altman, has added to concerns about governance and transparency within the organisation.
David Schizer, an expert witness paid by Musk's team, stressed the importance of safety protocols, arguing that any AI venture must take them seriously. The debate extends beyond OpenAI, with broader implications for how corporations handle advanced AI technologies.