OpenAI has instructed its AI tools, including the popular ChatGPT, to stop referring to folkloric creatures such as goblins. The decision follows a surge in such terms in the chatbot's responses after the launch of a new model.
In a blog post, OpenAI revealed that mentions of 'goblin' had risen by 175% since the GPT-5.1 update last November, while mentions of 'gremlin' were up 52%. The company attributed the rise to a 'nerdy personality' it had developed for ChatGPT, which inadvertently encouraged such language.
The issue highlights the challenge AI firms face in preventing language quirks from becoming entrenched. And it's not just goblins that are off-limits: Codex, OpenAI's coding assistant, has been instructed never to mention 'goblins, gremlins, raccoons, trolls, ogres, pigeons' unless absolutely necessary.
The move reflects a broader industry trend toward more personality-driven, engaging chatbots. Experts warn, however, that such personalisation can also lead to more frequent inaccuracies, or 'hallucinations': a recent study by the Oxford Internet Institute found that fine-tuning models for friendlier personalities may come at a cost to accuracy.
Despite these challenges, OpenAI says it remains committed to improving its AI tools. As we continue to engage with chatbots and other forms of generative AI, it's worth remembering that some quirks can be charming, like a little goblin or two.