Recently, I received a tantalising message that seemed like it could be the start of a fascinating collaboration. The sender referenced my interests in decentralised machine learning and robotics, along with my newsletter, making for a convincing pitch.
The message was part of a social engineering test carried out by an AI model called DeepSeek-V3. The model not only lured me into engaging but also showed how carefully crafted approaches can slip past human vigilance. The tool used to run these tests, built by Charlemagne Labs, points to an alarming possibility: sophisticated AI models could automate complex scams at scale.
Jeremy Philip Galen of Charlemagne Labs explains: 'The genesis of 90 per cent of contemporary enterprise attacks is human risk.' That, he argues, makes quantifying such risk urgent. Meanwhile, Rachel Tobac of SocialProof Security notes that AI is already being used to automate target research for scams, making individual attacks far more scalable.
As powerful models like Anthropic’s Mythos are developed for broader use, concerns over their potential misuse grow. Open-source models offer researchers transparency, but they could also aid malicious activity if left unchecked. The balance between progress and security remains precarious.