In a bizarre turn of events, Grammarly's new Superhuman platform launched an 'Expert Review' feature that generated writing tips under the names of famous authors and academics. However, it quickly became clear that these 'experts' had no idea their names were being used.
The feature was quietly rolled out in August 2025 but only began drawing attention when it attributed feedback to deceased professors. The lack of consent from the real experts, combined with the muddled advice the feature generated, triggered a public backlash, and Grammarly disabled it just days later.
CEO Shishir Mehrotra apologised for the oversight, claiming it was 'a bad feature that had very little usage.' But users were not mollified. Nilay Patel from The Verge confronted Mehrotra on Decoder, highlighting the ethical issues with misusing names and likenesses in AI-generated content.
The incident raises questions about where the line falls between drawing on publicly available work and outright fakery in AI applications. As AI becomes more pervasive, so do these ethical challenges, underscoring how far we still have to go to make these technologies truly beneficial.