
Source
Times Higher Education
Summary
George Chalhoub argues that as AI becomes more entrenched in research and publication, the academic community urgently needs clear, enforceable guidelines for its use in scientific writing and peer review. He cites evidence of undeclared AI involvement in manuscripts and reviews, hidden prompts, and inflated submission volume. To maintain credibility, journals must require authors and reviewers to disclose AI use, forbid AI as a co-author, and ensure human oversight. Chalhoub frames AI as a tool—not a decision-maker—and insists that accountability, transparency, and common standards must guard against erosion of trust in the scientific record.
Key Points
- Significant prevalence of AI content: e.g. 13.5% of 2024 abstracts bore signs of LLM use, with some fields reaching 40%.
- Up to ~17% of peer-review sentences may already be generated by AI, per studies of review corpora.
- Some authors embed hidden prompts (e.g. white-text instructions) to influence AI-powered reviewing tools.
- Core requirements: disclosure of AI use (tools, versions, roles), human responsibility for verification, no listing of AI as author.
- Journals should adopt policies that include audits and sanctions for misuse, and develop shared frameworks through organisations such as COPE and STM.
Keywords
URL
Summary generated by ChatGPT 5