arXiv, a key preprint repository for scientific papers, is tightening its rules to ensure human oversight in the use of large language models. In a bid to maintain the integrity of research submissions, arXiv now requires authors to take full responsibility for their work, even when they have used AI tools extensively.
The move comes as recent studies have highlighted a rise in fabricated citations in AI-generated content. While this is not an outright ban on AI use, it does impose strict conditions on how researchers can use these tools. Authors caught submitting AI-generated text without proper scrutiny face a one-year ban from arXiv.
The new policy is seen as a step toward keeping trust in research findings intact. It also raises questions, however, about the extent to which AI can be trusted and integrated into academic practice.
According to Thomas Dietterich, chair of arXiv's computer science section, the initiative aims to encourage authors to critically assess the outputs of their AI tools rather than relying on them blindly. This could shift how researchers approach collaboration with AI technologies.