Adding Evaluation Results
#1 opened by leaderboard-pr-bot
README.md CHANGED
@@ -334,4 +334,17 @@ I am purposingly leaving this license ambiguous (other than the fact you must co
 
 Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
 
-Either way, by using this model, you agree to completely indemnify me.
+Either way, by using this model, you agree to completely indemnify me.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 46.57                     |
+| ARC (25-shot)         | 52.9                      |
+| HellaSwag (10-shot)   | 78.53                     |
+| MMLU (5-shot)         | 45.09                     |
+| TruthfulQA (0-shot)   | 39.45                     |
+| Winogrande (5-shot)   | 71.11                     |
+| GSM8K (5-shot)        | 3.18                      |
+| DROP (3-shot)         | 35.69                     |
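For anyone who wants to inspect the linked detailed results programmatically rather than through the leaderboard UI, below is a minimal sketch using the Hugging Face `datasets` library. It is not part of the bot's diff, and the per-benchmark config layout of the details repository is an assumption, so the sketch lists the available configs before loading one.

```python
# Minimal sketch: pull the detailed evaluation results linked in the README diff.
# Assumption: the details repo exposes one config per benchmark run.
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_jondurbin__airoboros-l2-7b-gpt4-2.0"

# List the configs available in the details repo to see which benchmarks exist.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load the first config as an example; pick the benchmark you care about instead.
details = load_dataset(repo_id, configs[0])
print(details)
```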