PeerAI.org
President Trump's executive order of May 23, 2025, aims to restore scientific integrity and public trust in government-funded research. It establishes a "Gold Standard Science" framework to ensure that federal agencies' scientific activities are transparent, reproducible, and free from conflicts of interest.
The NIH policy, released July 17, 2025, states:
NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants.
NIH will only accept six new, renewal, resubmission, or revision applications from an individual Principal Investigator/Program Director or Multiple Principal Investigator for all council rounds in a calendar year.
The NIH notice, released June 23, 2023, states:
Maintaining security and confidentiality in the NIH peer review process is essential for safeguarding the exchange of scientific opinions [...], and thus NIH is revising its Confidentiality Agreements for Peer Reviewers to clarify that reviewers are prohibited from using AI tools in analyzing and critiquing NIH grant applications and R&D contract proposals.
Henry Han, Baylor University, examines the factors influencing the reproducibility of AI models in biomedical data science, highlighting the conflict between reproducibility standards and researchers' personal goals. (Jan 2025)
Howard Bauchner, Boston University, and Frederick Rivara, University of Washington, discuss how large language models could assist editors in triaging manuscripts and make peer review more effective by detecting research misconduct and assessing adherence to reporting guidelines. (May 2024)
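For illustration only, here is a minimal Python sketch of the kind of LLM-assisted triage the commentary envisions, assuming the OpenAI Python client; the model name, prompt, and checklist items are hypothetical placeholders, not a tool described by the authors.

```python
# Illustrative sketch of LLM-assisted editorial triage; not a tool from the
# commentary. Assumes the OpenAI Python client (openai>=1.0); the model name,
# prompt, and checklist are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHECKLIST = (
    "1. Is the study design clearly stated?\n"
    "2. Are sample sizes and statistical tests reported?\n"
    "3. Is there a data-availability statement?"
)

def triage(abstract: str) -> str:
    """Ask the model to flag missing reporting items in a submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are an editorial assistant screening manuscripts."},
            {"role": "user",
             "content": f"Check this abstract against the checklist:\n"
                        f"{CHECKLIST}\n\nAbstract:\n{abstract}"},
        ],
    )
    return response.choices[0].message.content
```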
Shai Farber, Max Stern Yezreel Valley College, assesses the strengths and weaknesses of human and AI-based peer review in the Humanities and Social Sciences. He concludes that a hybrid approach merging the complementary skills of both could lead to a more rigorous, impartial, and streamlined academic publishing system. (Apr 2024)
Tim Hannigan, University of Alberta, Ian P. McCarthy, Simon Fraser University, and Andre Spicer, City University London, introduce the concept of "botshit": untruthful, inaccurate, or fabricated content that is generated by large language models (so-called "AI hallucinations") and then used by humans for work tasks. To help users manage the epistemic risks of AI-generated content, the authors propose a typology based on two factors: how verifiable an AI's response is, and how important its accuracy is. (Jan 2024)
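The typology can be read as a simple decision rule. Below is a minimal Python sketch, assuming the two factors given in the summary (verifiability and importance of accuracy); the four recommended strategies are illustrative paraphrases, not the paper's own labels.

```python
# Sketch of the two-factor typology for managing AI-generated content.
# The factors follow the summary above; the recommended strategies are
# illustrative paraphrases, not the paper's terminology.
from dataclasses import dataclass

@dataclass
class Response:
    text: str
    verifiable: bool          # can a human check the claim against a source?
    accuracy_critical: bool   # do errors carry real consequences?

def handling_strategy(r: Response) -> str:
    """Map a chatbot response onto one of four risk-handling strategies."""
    if r.accuracy_critical and r.verifiable:
        return "verify every claim before using it"
    if r.accuracy_critical and not r.verifiable:
        return "avoid relying on the model; consult authoritative sources"
    if not r.accuracy_critical and r.verifiable:
        return "spot-check when convenient"
    return "treat as brainstorming material"

print(handling_strategy(Response("The drug was approved in 2019.", True, True)))
```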
The Reproducibility Project: Cancer Biology (RPCB) was created to address concerns about irreproducible preclinical cancer research. The project set out to replicate experiments from 53 high-profile papers but completed replications for only 23 of them, largely because the original publications lacked sufficient experimental detail and statistics. Among the completed replications, the median effect size was 85% smaller than in the original studies, and only 40% of positive findings could be replicated. (Dec 2021)
Alessandro Checco, Trinity College Dublin, Stephen Pinfield, University of Sheffield, and Giuseppe Bianchi, University of Roma Tor Vergata, with colleagues, built an AI tool that successfully predicted the peer-review outcomes of manuscripts from simple metrics such as formatting and readability. (Jan 2021)
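For intuition, here is a toy Python sketch of this kind of prediction, using a logistic regression on synthetic data; the features, labels, and model are fabricated stand-ins and do not reproduce the study's corpus or pipeline.

```python
# Toy illustration: predict accept/reject from shallow manuscript metrics.
# The features, data, and model here are synthetic stand-ins, not the
# original study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-manuscript features: [readability score,
# mean sentence length, rate of formatting errors].
X = rng.normal(size=(n, 3))

# Synthetic labels loosely tied to the features, mimicking the finding
# that surface metrics alone carry signal about review outcomes.
y = (X @ np.array([0.8, -0.5, -1.2]) + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))
```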
C. Glenn Begley, formerly of Amgen, and Lee Ellis, University of Texas MD Anderson Cancer Center, highlight the alarmingly low success rate of clinical oncology trials, arguing that a major contributing factor is the poor quality and irreproducibility of published preclinical data. They discuss an Amgen study that could reproduce only 11% of "landmark" cancer research findings, and a Bayer HealthCare study with a similar result (see the following entry). (Mar 2012)
Florian Prinz, Thomas Schlange, and Khusru Asadullah analyzed 67 drug research projects at Bayer HealthCare and found that the published data could be reproduced in only 20-25% of cases, pointing to a significant reproducibility problem in the scientific literature. (Aug 2011)
Daniele Fanelli showed that US states with higher academic productivity are more likely to publish research with "positive" results, suggesting that the pressure to publish may be linked to an increase in scientific bias. (Aug 2011)