Generalist Large Language Models Outperform Clinical Tools on Medical Benchmarks
Abstract
Generalist LLMs outperform specialized clinical AI assistants in a multi-task benchmark, highlighting the need for independent evaluation of clinical decision support tools.
Specialized clinical AI assistants are rapidly entering medical practice, often framed as safer or more reliable than general-purpose large language models (LLMs). Yet, unlike frontier models, these clinical tools are rarely subjected to independent, quantitative evaluation, creating a critical evidence gap despite their growing influence on diagnosis, triage, and guideline interpretation. We assessed two widely deployed clinical AI systems (OpenEvidence and UpToDate Expert AI) against three state-of-the-art generalist LLMs (GPT-5, Gemini 3 Pro, and Claude Sonnet 4.5) using a 1,000-item mini-benchmark combining MedQA (medical knowledge) and HealthBench (clinician-alignment) tasks. Generalist models consistently outperformed clinical tools, with GPT-5 achieving the highest scores, while OpenEvidence and UpToDate demonstrated deficits in completeness, communication quality, context awareness, and systems-based safety reasoning. These findings reveal that tools marketed for clinical decision support can lag behind frontier LLMs, underscoring the urgent need for transparent, independent evaluation before deployment in patient-facing workflows.
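The abstract does not describe the evaluation harness itself, so the following is only a minimal, hypothetical Python sketch of how a MedQA-style multiple-choice accuracy check over such a mini-benchmark might be run. The Hugging Face dataset ID (`GBaker/MedQA-USMLE-4-options`), its field names, the sample size, and the `ask_model` callable wrapping a given LLM or clinical tool are illustrative assumptions, not the authors' pipeline.

```python
"""Hypothetical sketch only: a MedQA-style multiple-choice accuracy check.

Nothing here comes from the paper. The dataset ID, its field names
("question", "options", "answer_idx"), the sample size, and the
`ask_model` callable are assumptions made for illustration.
"""
import random
from typing import Callable

from datasets import load_dataset  # pip install datasets


def score_medqa(ask_model: Callable[[str], str],
                n_items: int = 500,
                seed: int = 0) -> float:
    """Return multiple-choice accuracy of `ask_model` on a random MedQA subset."""
    # Assumed community mirror of MedQA (USMLE-style, 4 options per item).
    ds = load_dataset("GBaker/MedQA-USMLE-4-options", split="test")
    rng = random.Random(seed)
    idxs = rng.sample(range(len(ds)), k=min(n_items, len(ds)))

    correct = 0
    for i in idxs:
        row = ds[i]
        options = "\n".join(f"{letter}. {text}" for letter, text in row["options"].items())
        prompt = (
            f"{row['question']}\n\n{options}\n\n"
            "Answer with the single letter of the best option."
        )
        reply = ask_model(prompt).strip().upper()
        if reply[:1] == row["answer_idx"]:  # gold answer stored as a letter, e.g. "C"
            correct += 1
    return correct / len(idxs)


if __name__ == "__main__":
    # Trivial stand-in "model" that always answers "A", just to show the call shape.
    print(score_medqa(lambda prompt: "A", n_items=20))
```

The HealthBench portion of the benchmark would additionally require rubric-based grading of free-text responses (typically with a separate grader model), which this sketch does not attempt.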
Community
Several high-profile generative AI tools have been introduced directly into clinical workflows, including OpenEvidence and UpToDate Expert AI. Unlike frontier LLMs such as GPT-5, Gemini 3 Pro (Preview), and Claude Sonnet 4.5, which have undergone extensive benchmarking, these clinician-facing systems have seen almost no independent evaluation. OpenEvidence, reportedly used by 40% of U.S. physicians, claims “perfect” USMLE performance, yet that claim cannot be verified because the tool provides neither open weights nor an accessible API. Similarly, the newly released UpToDate Expert AI, already deployed in an estimated 70% of major enterprise health systems, has not been subjected to public scrutiny or third-party testing.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MedBench v4: A Robust and Scalable Benchmark for Evaluating Chinese Medical Language Models, Multimodal Models, and Intelligent Agents (2025)
- Can Large Language Models Function as Qualified Pediatricians? A Systematic Evaluation in Real-World Clinical Contexts (2025)
- Small Language Models for Emergency Departments Decision Support: A Benchmark Study (2025)