arxiv:2512.01191

Generalist Large Language Models Outperform Clinical Tools on Medical Benchmarks

Published on Dec 1 · Submitted by Krithik Vishwanath on Dec 2
Abstract

Generalist LLMs outperform specialized clinical AI assistants in a multi-task benchmark, highlighting the need for independent evaluation of clinical decision support tools.

AI-generated summary

Specialized clinical AI assistants are rapidly entering medical practice, often framed as safer or more reliable than general-purpose large language models (LLMs). Yet, unlike frontier models, these clinical tools are rarely subjected to independent, quantitative evaluation, creating a critical evidence gap despite their growing influence on diagnosis, triage, and guideline interpretation. We assessed two widely deployed clinical AI systems (OpenEvidence and UpToDate Expert AI) against three state-of-the-art generalist LLMs (GPT-5, Gemini 3 Pro, and Claude Sonnet 4.5) using a 1,000-item mini-benchmark combining MedQA (medical knowledge) and HealthBench (clinician-alignment) tasks. Generalist models consistently outperformed clinical tools, with GPT-5 achieving the highest scores, while OpenEvidence and UpToDate demonstrated deficits in completeness, communication quality, context awareness, and systems-based safety reasoning. These findings reveal that tools marketed for clinical decision support may often lag behind frontier LLMs, underscoring the urgent need for transparent, independent evaluation before deployment in patient-facing workflows.
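
The harness behind these numbers is not public, so the following is only a minimal sketch of how the MedQA half of such a benchmark could be scored: each item is a multiple-choice question, a model's free-text reply is reduced to an option letter, and accuracy is the fraction of matches against the gold key. Every name below (the item schema, the dummy_model stand-in, the letter-extraction heuristic) is an illustrative assumption, not the authors' code.

import re
from typing import Callable

# Illustrative MedQA-style item: stem, lettered options, gold answer key.
ITEMS = [
    {
        "question": "A 55-year-old man presents with crushing chest pain...",
        "options": {"A": "Aortic dissection", "B": "Myocardial infarction",
                    "C": "Pulmonary embolism", "D": "Pericarditis"},
        "answer": "B",
    },
    # ... remaining benchmark items would be loaded here
]

def format_prompt(item: dict) -> str:
    # Render one item as a single-letter-answer prompt.
    opts = "\n".join(f"{k}. {v}" for k, v in item["options"].items())
    return (f"{item['question']}\n{opts}\n"
            "Answer with the single letter of the best option.")

def extract_choice(response: str) -> str | None:
    # Pull the first standalone option letter out of a free-text reply.
    match = re.search(r"\b([A-D])\b", response)
    return match.group(1) if match else None

def accuracy(model: Callable[[str], str], items: list[dict]) -> float:
    # Fraction of items where the extracted letter matches the gold key.
    correct = sum(extract_choice(model(format_prompt(it))) == it["answer"]
                  for it in items)
    return correct / len(items)

def dummy_model(prompt: str) -> str:
    # Stand-in for a real API call (GPT-5, Gemini 3 Pro, a clinical tool, ...).
    return "B. Myocardial infarction"

print(f"MedQA-style accuracy: {accuracy(dummy_model, ITEMS):.3f}")

HealthBench items, by contrast, are graded against physician-written rubrics rather than an answer key, so the study's full protocol would also need a rubric-scoring stage that this sketch omits.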

Community

Paper submitter

Several high-profile generative AI tools have been introduced directly into clinical workflows, including OpenEvidence and UpToDate Expert AI. Unlike frontier LLMs such as GPT-5, Gemini 3 Pro (Preview), and Claude Sonnet 4.5, which have undergone extensive benchmarking, these clinician-facing systems have seen almost no independent evaluation. OpenEvidence, reportedly used by 40% of U.S. physicians, claims “perfect” USMLE performance, yet these claims cannot be verified because the tool provides neither open weights nor an accessible API. Similarly, the newly released UpToDate Expert AI, already used in an estimated 70% of major enterprise health systems, has not been subjected to any form of public scrutiny or third-party testing.

