Gemma Judge Collection
This is a collection of compact yet highly capable LLM-as-a-judge models fine-tuned from Gemma 3 4B.
This model is a compact yet highly capable LLM-as-a-judge model, fine-tuned from Gemma 3 4B. It can be used for both direct feedback evaluations and A/B preference evaluations.
It was obtained by merging two models fine-tuned separately on feedback and preference tasks.
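The snippet below is a minimal usage sketch for a direct (pointwise) feedback evaluation. It assumes the merged checkpoint is the altaidevorg/gemma-judge-v0.1 id listed in the benchmark table and that it loads with the standard transformers text-generation pipeline; the judge prompt shown is illustrative, not the exact template used during fine-tuning.

```python
# Minimal usage sketch: direct feedback evaluation via the transformers
# text-generation pipeline. The model id comes from the benchmark table below;
# the judge prompt is illustrative, not the official fine-tuning template.
from transformers import pipeline

judge = pipeline(
    "text-generation",
    model="altaidevorg/gemma-judge-v0.1",
    torch_dtype="auto",
    device_map="auto",
)

instruction = "Summarize the water cycle in two sentences."
response = "Water evaporates, condenses into clouds, and falls back as precipitation."

messages = [
    {
        "role": "user",
        "content": (
            "You are an impartial judge. Rate the response to the instruction "
            "on a 1-5 scale and briefly justify the score.\n\n"
            f"Instruction: {instruction}\n"
            f"Response: {response}"
        ),
    }
]

# For an A/B preference evaluation, present two candidate responses in the
# prompt and ask the judge to pick the better one instead of assigning a score.
result = judge(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```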
Benchmark results for the merged model, compared against Prometheus 2, are shown below:
| Model | Feedback Bench: Exact Match (%) | Feedback Bench: Pearson r | Feedback Bench: Spearman ρ | Preference Bench: Pairwise Accuracy (%) | Notes |
|---|---|---|---|---|---|
| 🟪 altaidevorg/gemma-judge-v0.1 | 73.0 | 0.9198 | 0.9210 | 94.54 | Strong unified performance across both tasks |
| 🟨 Prometheus 2 (8×7B) (Kim et al., 2024) | – | ≈ 0.898 | ≈ 0.90 | 90.65 | – |
This model is released under the Apache 2.0 License.
However, because it is derived from Google’s Gemma 3, your use of this model must also comply with the Gemma Terms of Use.
By using this model, you agree to those terms. For full details, see: https://ai.google.dev/gemma/terms
Base model: google/gemma-3-4b-pt