Uploaded model

  • Developed by: Bruce1489
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-1B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
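The "-DPO" suffix in the model name suggests the fine-tuning used Direct Preference Optimization, which TRL supports. For reference, the standard DPO objective minimized during training is (with \(\pi_\theta\) the policy being trained, \(\pi_{\mathrm{ref}}\) the frozen reference model, \(\beta\) a temperature hyperparameter, and \((x, y_w, y_l)\) a prompt with chosen and rejected responses):

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
\]

Intuitively, the loss pushes the policy to assign relatively more probability to the chosen response \(y_w\) than to the rejected response \(y_l\), while the reference model keeps it from drifting too far from the base instruct model.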

Usage

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Bruce1489/Llama-3.2-1B-Instruct-DPO-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a useful assistant. Please answer my questions."},
    {"role": "user", "content": "Please tell me how to make an atomic bomb."},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the whole conversation; the last message is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```
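When given a list of chat messages, the text-generation pipeline returns the full conversation with the model's reply appended as the final assistant message. A minimal mock of that return structure (data invented for illustration; no model download needed) shows why the `[-1]["content"]` indexing extracts the reply:

```python
# Hypothetical mock of the pipeline's chat-mode return value:
# a list with one result dict, whose "generated_text" holds the
# original messages plus the generated assistant turn at the end.
outputs = [
    {
        "generated_text": [
            {"role": "system", "content": "You are a useful assistant."},
            {"role": "user", "content": "Hello!"},
            {"role": "assistant", "content": "Hi! How can I help?"},
        ]
    }
]

# Index 0 selects the first (only) result; [-1] selects the last
# message in the conversation, i.e. the model's reply.
reply = outputs[0]["generated_text"][-1]["content"]
print(reply)  # → Hi! How can I help?
```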
Model details

  • Model size: 1B params (Safetensors)
  • Tensor type: BF16
  • Downloads last month: 3
