Preview — agent benchmark scores (null = not reported):

| name | provider | osWorld | tau2Bench | browseComp | termBench2 | gdpvalAA | swePro | screenSpotPro | androidWorld |
|---|---|---|---|---|---|---|---|---|---|
| GPT-5.4 | OpenAI | 75 | null | 82.7 | null | 83 | null | null | null |
| Claude Opus 4.6 | Anthropic | 72.7 | 91.9 | 84 | 74.7 | 1,606 | 45 | null | null |
| Claude Sonnet 4.6 | Anthropic | 72.5 | null | null | 53 | 1,633 | null | null | null |
| Gemini 3.1 Pro | Google | null | 99.3 | 85.9 | 78.4 | 1,317 | null | null | null |
| GPT-5.2 | OpenAI | 38.2 | 82 | 77.9 | 64.9 | null | null | null | null |
| GPT-5.3 Codex | OpenAI | null | null | null | 77.3 | null | 57 | null | null |
| Gemini 3 Flash | Google | null | null | null | 64.3 | null | null | null | null |
| Qwen3.5-9B | Alibaba | null | 79.9 | null | null | null | null | 66.1 | 57.8 |
| Qwen3.5-4B | Alibaba | null | 79.1 | null | null | null | null | 50.3 | 58.6 |
| MiniMax-M2.5 | MiniMax | null | null | null | 42.2 | null | null | null | null |

🏆 ALL Bench Leaderboard 2026

The only AI benchmark dataset covering LLM · VLM · Agent · Image · Video · Music in a single unified dataset.


Dataset Summary

ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for 90+ AI models across 6 modalities. Every numerical score is tagged with a confidence level (cross-verified, single-source, or self-reported) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.

| Category | Models | Benchmarks | Description |
|---|---|---|---|
| LLM | 41 | 32 fields | MMLU-Pro, GPQA, AIME, HLE, ARC-AGI-2, Metacog, SWE-Pro, IFEval, LCB, Union Eval, etc. |
| VLM Flagship | 11 | 10 fields | MMMU, MMMU-Pro, MathVista, AI2D, OCRBench, MMStar, HallusionBench, etc. |
| Agent | 10 | 8 fields | OSWorld, τ²-bench, BrowseComp, Terminal-Bench 2.0, GDPval-AA, SWE-Pro |
| Image Gen | 10 | 7 fields | Photo realism, text rendering, instruction following, style, aesthetics |
| Video Gen | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
| Music Gen | 8 | 6 fields | Quality, vocals, instrumental, lyrics, duration |


What's New — v2.2.1

🏅 Union Eval ★NEW

ALL Bench's proprietary integrated benchmark. Fuses the discriminative core of 10 existing benchmarks (GPQA, AIME, HLE, MMLU-Pro, IFEval, LiveCodeBench, BFCL, ARC-AGI, SWE, FINAL Bench) into a single 1000-question pool with a season-based rotation system.

Key features:

  • 100% JSON auto-graded — every question requires mandatory JSON output with verifiable fields. Zero keyword matching.
  • Fuzzy JSON matching — tolerates key name variants, fraction formats, text fallback when JSON parsing fails.
  • Season rotation — 70% new questions each season, 30% anchor questions for cross-season IRT calibration.
  • 8 rounds of empirical testing — v2 (82.4%) → v3 (82.0%) → Final (79.5%) → S2 (81.8%) → S3 (75.0%) → Fuzzy (69.9/69.3%).
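The fuzzy JSON matching described above can be sketched roughly as follows. This is an illustrative reimplementation, not the official Union Eval grader; the function names (`fuzzy_match`, `_norm_key`, `_norm_val`) and the exact normalization rules are assumptions based on the feature list (key-name variants, fraction formats, text fallback):

```python
import json
import re
from fractions import Fraction

def _norm_key(k: str) -> str:
    """Collapse key-name variants: 'final_answer', 'final-answer', 'finalAnswer' all match."""
    return re.sub(r"[^a-z0-9]", "", k.lower())

def _norm_val(v):
    """Normalize values: accept fraction strings like '3/4' and numeric strings."""
    if isinstance(v, str):
        v = v.strip()
        if re.fullmatch(r"-?\d+/\d+", v):
            return float(Fraction(v))
        try:
            return float(v)
        except ValueError:
            return v.lower()
    if isinstance(v, (int, float)):
        return float(v)
    return v

def fuzzy_match(answer_text: str, expected: dict) -> bool:
    """Grade a model answer against expected fields, tolerating formatting noise."""
    try:
        parsed = json.loads(answer_text)
    except json.JSONDecodeError:
        # Text fallback: extract the first {...} span if the model wrapped JSON in prose
        m = re.search(r"\{.*\}", answer_text, re.DOTALL)
        if not m:
            return False
        try:
            parsed = json.loads(m.group(0))
        except json.JSONDecodeError:
            return False
    got = {_norm_key(k): _norm_val(v) for k, v in parsed.items()}
    want = {_norm_key(k): _norm_val(v) for k, v in expected.items()}
    return all(got.get(k) == v for k, v in want.items())
```

Under this sketch, `{"final_answer": "3/4"}` matches an expected `{"finalAnswer": 0.75}`, and JSON embedded in surrounding prose still grades correctly.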

Key discovery: "The bottleneck in benchmarking is not question difficulty — it's grading methodology."

Empirically confirmed LLM weakness map:

  • 🔴 Poetry + code cross-constraints: 18-28%
  • 🔴 Complex JSON structure (10+ constraints): 0%
  • 🔴 Pure series computation (Σk²/3ᵏ): 0%
  • 🟢 Metacognitive reasoning (Bayes, proof errors): 95%
  • 🟢 Revised science detection: 86%

Current scores (S3, 20Q sample, Fuzzy JSON):

| Model | Union Eval |
|---|---|
| Claude Sonnet 4.6 | 69.9 |
| Claude Opus 4.6 | 69.3 |

Other v2.2 changes

  • Fair Coverage Correction: composite scoring ^0.5 → ^0.7
  • +7 FINAL Bench scores (15 total)
  • Columns sorted by fill rate
  • Model Card popup (click model name) · FINAL Bench detail popup (click Metacog score)
  • 🔥 Heatmap, 💰 Price vs Performance scatter tools

Live Leaderboard

👉 https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard

Interactive features: composite ranking, dark mode, advanced search (e.g. GPQA > 90, open models only, price < 1), Model Finder, Head-to-Head comparison, Trust Map heatmap, Bar Race animation, Model Card popup, FINAL Bench detail popup, and a downloadable Intelligence Report (PDF/DOCX).

Data Structure

data/
├── llm.jsonl           # 41 LLMs × 32 fields (incl. unionEval ★NEW)
├── vlm_flagship.jsonl  # 11 flagship VLMs × 10 benchmarks
├── agent.jsonl         # 10 agent models × 8 benchmarks
├── image.jsonl         # 10 image gen models × S/A/B/C ratings
├── video.jsonl         # 10 video gen models × S/A/B/C ratings
└── music.jsonl         # 8 music gen models × S/A/B/C ratings
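Each `.jsonl` file holds one JSON object per line, so any split can be read directly with pandas. A minimal sketch, using an inline two-row sample shaped like the agent split (the second row's values are illustrative, not real scores):

```python
import io
import pandas as pd

# Inline stand-in for data/agent.jsonl: one JSON object per line
sample = io.StringIO(
    '{"name": "Claude Opus 4.6", "provider": "Anthropic", "osWorld": 72.7}\n'
    '{"name": "ExampleModel", "provider": "ExampleOrg", "osWorld": null}\n'
)
agent = pd.read_json(sample, lines=True)  # null becomes NaN

# Keep only models with a reported OSWorld score
print(agent.dropna(subset=["osWorld"])["name"].tolist())  # → ['Claude Opus 4.6']
```

To read the real file, replace `sample` with the path `"data/agent.jsonl"`.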

LLM Field Schema

| Field | Type | Description |
|---|---|---|
| name | string | Model name |
| provider | string | Organization |
| type | string | open or closed |
| group | string | flagship, open, korean, etc. |
| released | string | Release date (YYYY.MM) |
| mmluPro | float \| null | MMLU-Pro score (%) |
| gpqa | float \| null | GPQA Diamond (%) |
| aime | float \| null | AIME 2025 (%) |
| hle | float \| null | Humanity's Last Exam (%) |
| arcAgi2 | float \| null | ARC-AGI-2 (%) |
| metacog | float \| null | FINAL Bench Metacognitive score |
| swePro | float \| null | SWE-bench Pro (%) |
| bfcl | float \| null | Berkeley Function Calling (%) |
| ifeval | float \| null | IFEval instruction following (%) |
| lcb | float \| null | LiveCodeBench (%) |
| sweV | float \| null | SWE-bench Verified (%) — deprecated |
| mmmlu | float \| null | Multilingual MMLU (%) |
| termBench | float \| null | Terminal-Bench 2.0 (%) |
| sciCode | float \| null | SciCode (%) |
| unionEval | float \| null | ★NEW Union Eval S3 — ALL Bench integrated benchmark (100% JSON auto-graded) |
| priceIn / priceOut | float \| null | USD per 1M tokens |
| elo | int \| null | Arena Elo rating |
| license | string | Prop, Apache2, MIT, Open, etc. |


Composite Score

Score = Avg(confirmed benchmarks) × (N/10)^0.7

10 core benchmarks across the 5-Axis Intelligence Framework: Knowledge · Expert Reasoning · Abstract Reasoning · Metacognition · Execution.

v2.2 change: Exponent adjusted from 0.5 to 0.7 for fairer coverage weighting. Models with 7/10 benchmarks receive ×0.78 (was ×0.84), while 4/10 receives ×0.53 (was ×0.63).
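The formula above can be sketched in a few lines. This is a minimal illustration, assuming `None` marks an unconfirmed benchmark; `composite_score` is a name invented for this example, not an API in the dataset:

```python
def composite_score(scores, n_core=10, exponent=0.7):
    """Average of confirmed core-benchmark scores, scaled by coverage^exponent.

    scores: list of length n_core; None = benchmark not confirmed for this model.
    """
    confirmed = [s for s in scores if s is not None]
    if not confirmed:
        return 0.0
    coverage = len(confirmed) / n_core  # fraction of core benchmarks covered
    return sum(confirmed) / len(confirmed) * coverage ** exponent

# A model averaging 80 on 7 of 10 core benchmarks:
# 80 × (7/10)^0.7 ≈ 80 × 0.78 ≈ 62.3
print(round(composite_score([80.0] * 7 + [None] * 3), 1))
```

The exponent controls how hard missing coverage is penalized: at 0.5 a 4/10-coverage model kept ×0.63 of its average, while at 0.7 it keeps only ×0.53.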

Confidence System

Each benchmark score in the confidence object is tagged:

| Level | Badge | Meaning |
|---|---|---|
| cross-verified | ✓✓ | Confirmed by 2+ independent sources |
| single-source | | One official or third-party source |
| self-reported | ~ | Provider's own claim, unverified |

Example:

"Claude Opus 4.6": {
  "gpqa": { "level": "cross-verified", "source": "Anthropic + Vellum + DataCamp" },
  "arcAgi2": { "level": "cross-verified", "source": "Vellum + llm-stats + NxCode + DataCamp" },
  "metacog": { "level": "single-source", "source": "FINAL Bench dataset" },
  "unionEval": { "level": "single-source", "source": "Union Eval S3 — ALL Bench official" }
}
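Given a confidence object shaped like the example above, filtering a model down to its most trustworthy scores is straightforward. A sketch assuming that nested layout (the helper name `cross_verified` is invented here):

```python
def cross_verified(confidence: dict) -> dict:
    """Return only the benchmarks confirmed by 2+ independent sources."""
    return {
        bench: meta["source"]
        for bench, meta in confidence.items()
        if meta["level"] == "cross-verified"
    }

# Entries copied from the example above
opus = {
    "gpqa": {"level": "cross-verified", "source": "Anthropic + Vellum + DataCamp"},
    "metacog": {"level": "single-source", "source": "FINAL Bench dataset"},
}
print(cross_verified(opus))  # → {'gpqa': 'Anthropic + Vellum + DataCamp'}
```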

Usage

from datasets import load_dataset

# Load LLM data
ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")
df = ds["train"].to_pandas()

# Top 5 LLMs by GPQA
ranked = df.dropna(subset=["gpqa"]).sort_values("gpqa", ascending=False)
for _, m in ranked.head(5).iterrows():
    print(f"{m['name']:25s} GPQA={m['gpqa']}")

# Union Eval scores
union = df.dropna(subset=["unionEval"]).sort_values("unionEval", ascending=False)
for _, m in union.iterrows():
    print(f"{m['name']:25s} Union Eval={m['unionEval']}")


Union Eval — Integrated AI Assessment

Union Eval is ALL Bench's proprietary benchmark designed to address three fundamental problems with existing AI evaluations:

  1. Contamination — Public benchmarks leak into training data. Union Eval rotates 70% of questions each season.
  2. Single-axis measurement — AIME tests only math, IFEval only instruction-following. Union Eval integrates arithmetic, poetry constraints, metacognition, coding, calibration, and myth detection.
  3. Score inflation via keyword matching — Traditional rubric grading gives 100% to "well-written" answers even if content is wrong. Union Eval enforces mandatory JSON output with zero keyword matching.

Structure (S3 — 100 Questions from 1000 Pool):

| Category | Questions | Role | Expected Score |
|---|---|---|---|
| Pure Arithmetic | 10 | Confirmed Killer #1 | 0-57% |
| Poetry/Verse IFEval | 8 | Confirmed Killer #2 | 18-28% |
| Structured Data IFEval | 7 | JSON/CSV verification | 0-70% |
| FINAL Bench Metacognition | 20 | Core brand | 50-95% |
| Union Complex Synthesis | 15 | Extreme multi-domain | 40-73% |
| Revised Science / Myths | 5 | Calibration traps | 50-86% |
| Code I/O, GPQA, HLE | 19 | Expert + execution | 50-100% |
| BFCL Tool Use, Anchors | 16 | Cross-season calibration | varies |

Note: The 100-question dataset is not publicly released to prevent contamination. Only scores are published.

FINAL Bench — Metacognitive Benchmark

FINAL Bench measures AI self-correction ability. Error Recovery (ER) explains 94.8% of metacognitive performance variance. 15 frontier models evaluated.

Changelog

| Version | Date | Changes |
|---|---|---|
| v2.2.1 | 2026-03-10 | 🏅 Union Eval ★NEW — integrated benchmark column (unionEval field). Claude Opus 4.6: 69.3 · Sonnet 4.6: 69.9 |
| v2.2 | 2026-03-10 | Fair Coverage (^0.7), +7 Metacog scores, Model Cards, FINAL Bench popup, Heatmap, Price-Perf |
| v2.1 | 2026-03-08 | Confidence badges, Intelligence Report, source tracking |
| v2.0 | 2026-03-07 | All blanks filled, Korean AI data, 42 LLMs cross-verified |
| v1.9 | 2026-03-05 | +3 LLMs, dark mode, mobile responsive |

Citation

@misc{allbench2026,
    title={ALL Bench Leaderboard 2026: Unified Multi-Modal AI Evaluation},
    author={ALL Bench Team},
    year={2026},
    url={https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard}
}

#AIBenchmark #LLMLeaderboard #GPT5 #Claude #Gemini #ALLBench #FINALBench #Metacognition #UnionEval #VLM #AIAgent #MultiModal #HuggingFace #ARC-AGI #AIEvaluation #VIDRAFT.net
