ICLR 2026 faces "AI Ghost Review" crisis—over half of reviews tainted by LLMs
Source: Saiyp | Date: 2025-11-29 20:11:00
The peer review process for ICLR 2026 is under fire after third-party analysis revealed that 56% of its 76,000 reviews involved AI: 21% were fully auto-generated by large language models (LLMs), and another 35% were AI-edited to varying degrees. Only 43% appear to be entirely human-written.
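For scale, the reported percentages translate into rough review counts. A minimal sketch (the rounding treatment is my assumption; the source gives only the percentages and the 76,000 total):

```python
# Approximate review counts behind the reported ICLR 2026 breakdown.
TOTAL_REVIEWS = 76_000  # total reviews analyzed, per the report

shares = {
    "fully AI-generated": 0.21,
    "AI-edited": 0.35,
    "entirely human-written": 0.43,
}

# Convert each reported share into an approximate count of reviews.
counts = {label: round(TOTAL_REVIEWS * p) for label, p in shares.items()}

# Note: the reported shares sum to 99%, not 100% -- the published
# figures are evidently rounded, so these counts are estimates.
total_accounted = sum(counts.values())
```

On these numbers, roughly 15,960 reviews would be fully machine-written and 26,600 AI-edited, with about 760 reviews unaccounted for by the rounded figures.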
These AI-assisted reviews tend to be longer and to give higher ratings, but they often include fabricated citations or falsely accuse papers of numerical errors that do not exist. Frustrated authors have taken to social media to protest what they call "hallucinated peer review."
In response, the ICLR organizing committee has enacted its strictest policy yet:
- For authors: Papers using LLMs without disclosure will be desk-rejected immediately.
- For reviewers: AI may be used as a tool, but reviewers bear full responsibility for the content they submit. Reviews containing fabricated references or "AI nonsense" can lead to rejection of the reviewer's own papers.
- For transparency: Authors can confidentially flag suspicious reviews; the program chairs will investigate and publish findings within two weeks.
The conference chair acknowledged that the explosion in AI research has overwhelmed reviewers: each is now expected to assess five papers in just two weeks, a workload far beyond what is sustainable. This pressure is fueling reliance on AI "ghostwriting."
The ICLR 2026 incident highlights an urgent challenge: as LLMs infiltrate peer review, the academic community must establish clear rules and detection tools to stop "ghost votes", or risk turning scientific evaluation into an unaccountable, automated charade.