A new white paper from Frontiers shows that AI is rapidly becoming part of daily peer review, with 53% of reviewers using AI tools. The findings of “Unlocking the Untapped Potential of AI: Responsible Innovation in Research and Publishing” show that we are at a pivotal time for research publishing.
Based on insights from 1,645 researchers around the world, this white paper identifies a global community committed to using AI confidently and responsibly. While many reviewers now use AI to write reports and summarize findings, the report highlights the huge untapped potential of AI to support rigor, reproducibility, and deeper methodological insight.
The study was conducted in May and June 2025 and is the first large-scale study to examine the adoption, trust, training, and governance of AI in authoring, review, and editing workflows.
Kamila Markram, CEO and co-founder of Frontiers, said: “AI is transforming the way science is written and reviewed, opening up new possibilities for quality, collaboration and global participation. This white paper is a call to action for the entire research ecosystem to embrace that potential. With aligned policies and responsible governance, AI will strengthen the integrity of science and accelerate discovery.”
However, this transformation reveals both promise and limitations. Most uses remain limited, with reviewers primarily relying on AI to write reports, improve clarity, and summarize manuscripts. Only about 19% are using AI to assess methodology, statistical validity, and experimental design, areas traditionally considered the intellectual core of peer review.
The survey shows widespread enthusiasm for more effective use of AI, particularly among early-career researchers, 87% of whom report using AI tools, and in fast-growing research regions such as China (77%) and Africa (66%).
“AI is already improving the efficiency and clarity of peer review, but its greatest value is yet to come. With the right governance, transparency, and training, AI can be a powerful partner in enhancing the quality of research and increasing the trustworthiness of the scientific record,” said Elena Vicario, Director of Research Integrity at Frontiers.
The study revealed what experts call the “paradox of trust.” Although most scientists agree that AI improves the quality of manuscripts, 57% say they would be dissatisfied if reviewers used AI to generate peer review reports for their manuscripts. If AI is used only to augment reports, this number drops to 42%.
Furthermore, 72% of respondents believe they could accurately detect an AI-generated peer review report on their own manuscripts, but the white paper’s findings suggest this confidence may be misplaced.
Analyzed by career stage, the responses show that junior researchers take a more positive view of generative AI’s impact than senior researchers: 48% of junior researchers believed AI would have a positive impact on peer review, compared with 34% of senior researchers.
In the paper’s preface, Markram notes that AI in peer review is often confined to superficial tasks such as polishing language, crafting sentences, and handling administrative work, rather than the deeper analytical and methodological work that could truly enhance rigor, reproducibility, and scientific discovery.
The report calls for concerted action across the research ecosystem: publishers are urged to build transparency, disclosure, and human oversight into editorial workflows; universities and research institutions are encouraged to incorporate AI literacy into formal training; and funders and policymakers are urged to harmonize standards internationally.
Frontiers’ position is that clear boundaries, human responsibility, and well-managed, secure tools are more effective at protecting and enhancing research integrity than outright bans. The company notes that greater risks to the quality of peer review come from unregulated, opaque, or private AI use, which is already occurring across the research ecosystem.
A quiet revolution within peer review is already reshaping how science is evaluated, paper by paper, reviewer by reviewer. Whether it strengthens scientific integrity or weakens public trust will depend on whether the global research community can manage AI with the same rigor it demands of the evidence itself.