# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 as the base. DELLA-linear stochastically drops low-magnitude delta parameters and rescales the survivors before combining the models in a weighted linear merge.
### Models Merged
The following models were included in the merge:
* [jpacifico/Chocolatine-14B-Instruct-DPO-v1.1](https://huggingface.co/jpacifico/Chocolatine-14B-Instruct-DPO-v1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
    parameters:
      weight: 0.5
      density: 0.8
  - model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.1
    parameters:
      weight: 0.5
      density: 0.8
merge_method: della_linear
base_model: jpacifico/Chocolatine-14B-Instruct-DPO-v1.2
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: float16
tokenizer_source: union
```
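For readers who want to reproduce the merge, the sketch below runs a config like the one above through mergekit's library interface (`MergeConfiguration` and `run_merge`). The `config.yml` path, the output directory name, and the `MergeOptions` flags are illustrative assumptions, and the exact options may vary across mergekit versions.

```python
# Minimal sketch: reproducing this merge with mergekit's Python API.
# Assumes the YAML above is saved as "config.yml" (hypothetical path)
# and that mergekit is installed (pip install mergekit).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Ph3della5-14B",  # output directory (assumption)
    options=MergeOptions(
        cuda=False,            # set True to run the merge on GPU
        copy_tokenizer=True,   # write a tokenizer into the output dir
        lazy_unpickle=True,    # reduce peak memory while loading shards
    ),
)
```

The same config can also be run with the `mergekit-yaml` command-line entry point instead of the Python API.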
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 29.92 |
| IFEval (0-Shot) | 47.99 |
| BBH (3-Shot) | 48.41 |
| MATH Lvl 5 (4-Shot) | 14.35 |
| GPQA (0-shot) | 12.30 |
| MuSR (0-shot) | 14.36 |
| MMLU-PRO (5-shot) | 42.08 |
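For completeness, here is a minimal inference sketch with 🤗 Transformers. The prompt, generation length, and device settings are illustrative, and it assumes the merged model inherits a usable chat template from its Chocolatine parents.

```python
# Minimal inference sketch (settings are illustrative, not prescriptive).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Ph3della5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a model merge is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```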