
Quantization made by Richard Erkhov.

Github

Discord

Request more models

NeuralDaredevil-7B - bnb 4bits
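"bnb 4bits" in the title means the weights were quantized to 4 bits with bitsandbytes. As a rough sketch of how such a quantization can be produced with transformers (the exact settings used for this repo are not documented, so the config below is an assumption):

```python
# Hedged sketch: quantize the original fp16 model to bnb 4-bit and save it.
# The quant type and compute dtype are assumptions, not the exact settings
# used for this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed: NF4 is the common default
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/NeuralDaredevil-7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mlabonne/NeuralDaredevil-7B")

# Serializing 4-bit weights requires a recent transformers/bitsandbytes.
model.save_pretrained("NeuralDaredevil-7B-4bits")
tokenizer.save_pretrained("NeuralDaredevil-7B-4bits")
```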

Original model description:

```yaml
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- dpo
- rlhf
- mlabonne/example
base_model: mlabonne/Daredevil-7B
model-index:
- name: NeuralDaredevil-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.88
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.12
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 66.85
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.08
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.16
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDaredevil-7B
      name: Open LLM Leaderboard
```

NeuralDaredevil-7B

NeuralDaredevil-7B is a DPO fine-tune of mlabonne/Daredevil-7B using the argilla/distilabel-intel-orca-dpo-pairs preference dataset and my DPO notebook from this article.

Thanks to Argilla for providing the dataset and the training recipe here. 💪
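For reference, a minimal sketch of such a DPO stage with TRL's DPOTrainer is shown below. The column mapping and hyperparameters are assumptions for illustration, not the exact recipe from the notebook.

```python
# Minimal DPO sketch with TRL, assuming the dataset's "input" column holds the
# prompt; beta and batch size are illustrative only.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "mlabonne/Daredevil-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPOTrainer expects "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = dataset.rename_column("input", "prompt")
dataset = dataset.select_columns(["prompt", "chosen", "rejected"])

args = DPOConfig(output_dir="NeuralDaredevil-7B", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL versions name this `tokenizer`
)
trainer.train()
```

The actual notebook may differ (for example, by using parameter-efficient fine-tuning); see the linked article for the real recipe.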

🏆 Evaluation

Nous

The evaluation was performed using LLM AutoEval on the Nous suite.

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| mlabonne/NeuralDaredevil-7B 📄 | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| mlabonne/Beagle14-7B 📄 | 59.4 | 44.38 | 76.53 | 69.44 | 47.25 |
| argilla/distilabeled-Marcoro14-7B-slerp 📄 | 58.93 | 45.38 | 76.48 | 65.68 | 48.18 |
| mlabonne/NeuralMarcoro14-7B 📄 | 58.4 | 44.59 | 76.17 | 65.94 | 46.9 |
| openchat/openchat-3.5-0106 📄 | 53.71 | 44.17 | 73.72 | 52.53 | 44.4 |
| teknium/OpenHermes-2.5-Mistral-7B 📄 | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |

You can find the complete benchmark on YALL - Yet Another LLM Leaderboard.

Open LLM Leaderboard

Detailed results can be found here.

| Metric | Value |
|---|---:|
| Avg. | 74.12 |
| AI2 Reasoning Challenge (25-Shot) | 69.88 |
| HellaSwag (10-Shot) | 87.62 |
| MMLU (5-Shot) | 65.12 |
| TruthfulQA (0-shot) | 66.85 |
| Winogrande (5-shot) | 82.08 |
| GSM8k (5-shot) | 73.16 |

💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralDaredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in fp16 and spread it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
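Because this repository hosts the bnb 4-bit weights rather than the original fp16 model, they can also be loaded directly. A minimal sketch, assuming bitsandbytes and accelerate are installed; the repo id is a placeholder for this repository's id:

```python
# Sketch: load the pre-quantized 4-bit checkpoint. Pre-quantized bnb weights
# carry their quantization config, so no BitsAndBytesConfig is needed here.
# Requires: pip install -qU transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # placeholder: substitute this repository's id
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```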

Built with Distilabel
