The Case for an Unreasonable AI
By Marcel V. Alavi
June 25, 2025
Unreasonable Artificial Intelligence. Image generated by Google's Gemini language model on June 24, 2025.
On May 23, 2025, President Trump issued the executive order Restoring Gold Standard Science, which seeks to restore public confidence in federally funded research by setting a "gold standard" for science, emphasizing transparency, rigor, and impartiality in research and agency decision-making.
I recently proposed bringing AI into the peer-review process to bolster scientific rigor and scrutiny (The Value of AI as Third Peer Reviewer). In a nutshell, such an "AI Red Team" would be mandated by legislation and implemented by an independent, user-funded agency analogous to the FDA. This would give legislators a new tool to enforce minimum research standards across journals and agencies while preserving the research community's self-governance.
Other nations, China in particular, are increasing their research output at unprecedented rates, challenging our global leadership in science, innovation, and technology. But quantity does not necessarily translate into quality. Clinical trial success rates remain largely stagnant, with over 90% of trials typically yielding negative results. This is mirrored by the roughly 90% of peer-reviewed preclinical studies that cannot be reproduced.
As previously noted (Can AI-enabled Red Teams Tackle the Reproducibility Crisis in the Life Sciences?), peer review falls short of addressing the reproducibility crisis mainly because groupthink and confirmation bias are deeply rooted human traits. And because we scientists typically prioritize our own research over invalidating others’, preprint servers like bioRxiv and medRxiv are no solution either; they may even aggravate the problem. Our best bet to compete in a global context is therefore to prioritize raising research quality over simply increasing quantity. An AI peer reviewer would do exactly that.
AI promises to revolutionize biomedical research with unprecedented opportunities. However, the Garbage In, Garbage Out (GIGO) principle is considerably amplified by the data-driven nature of learning algorithms. As previously discussed (AI, GIGO, and the Biopharmaceutical Venture), learning algorithms depend heavily on high-quality training data, and a signal-to-noise ratio on the order of one-to-ten undermines the reliability of foundational biomedical data.
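To make that arithmetic concrete, here is a small, purely illustrative Python sketch. The replication rate and mention counts are assumptions echoing the figures above, not measurements; the point is that a model which weights claims by how often they appear in the literature still ends up reading roughly one part signal to nine parts noise, because repetition alone does not improve reproducibility.

```python
import random

# Toy simulation of the GIGO point above. The numbers are illustrative
# assumptions echoing the post (roughly one preclinical finding in ten
# reproduces); they are not measurements.
random.seed(0)

N_CLAIMS = 10_000
REPLICATION_RATE = 0.10  # assumed: ~1 in 10 findings reproduce

claims = [
    {
        "reproducible": random.random() < REPLICATION_RATE,
        # How often the claim is restated across papers, reviews, and preprints.
        # Drawn independently of reproducibility: popular does not mean sound.
        "mentions": random.randint(1, 50),
    }
    for _ in range(N_CLAIMS)
]

total_mentions = sum(c["mentions"] for c in claims)
reliable_mentions = sum(c["mentions"] for c in claims if c["reproducible"])

print(f"Share of training text backed by reproducible findings: "
      f"{reliable_mentions / total_mentions:.0%}")
# Prints roughly 10%: a frequency-weighted learner absorbs mostly noise,
# and further repetition of the same claims does not raise that share.
```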
At their core, Large Language Models, and even Large Reasoning Models, are sophisticated pattern-recognition algorithms that identify relationships, sequences, and structures within their training data. Because that training data inherently contains noise, biases, and the field's collective knowledge, including oft-repeated theories and thought patterns, the real challenge is to design an AI that deliberately challenges the very patterns it has learned.
Gemini itself is surprisingly self-aware: “My information is only as good as the data I was trained on […] My developers at Google work extensively to curate, filter, and evaluate the massive datasets used for my training to maximize quality […] However, given the sheer scale, it's an immense and ongoing challenge. Therefore, human critical thinking and verification remain absolutely essential when interpreting my outputs, especially for sensitive or factual information.”
Designing a "reasonable AI" for scientific validation, particularly one that acts as a "third peer reviewer" to offset human biases, requires a departure from simply creating models optimized for pattern recognition and consensus generation. Such an AI must be inherently designed to be skeptical, inquisitive, and relentlessly focused on falsification and inconsistency detection.
George Bernard Shaw wrote in Man and Superman: “The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
We need to build an unreasonable AI! And I invite interested computational and biomedical scientists to join me in this challenge.
#artificialintelligence #science #unreasonableAI