# Model Card for Qwen3-0.6B-Alpaca
This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B). It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "wesjos/Qwen3-0.6B-Alpaca"
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Alpaca-style prompt template used for fine-tuning.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Complete the following coding request",  # instruction
            "Write a transformer neural network in Python",  # input
        )
    ],
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    use_cache=True,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
)
print(tokenizer.batch_decode(outputs)[0])
```
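The decoded string includes the prompt itself. A minimal post-processing step (not part of the original card) is to split on the response marker from the template to keep only the completion:

```python
# Keep only the generated answer after the "### Response:" marker.
# Falls back to the full text if the marker is absent.
full_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
answer = full_text.split("### Response:")[-1].strip()
print(answer)
```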
## Training procedure
This model was trained with supervised fine-tuning (SFT).
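For reference, the sketch below shows a minimal TRL SFT setup for this kind of run. The base model `Qwen/Qwen3-0.6B`, the `yahma/alpaca-cleaned` dataset, and the hyperparameters are illustrative assumptions; the actual training configuration is not documented in this card.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Same Alpaca template as in the quick start above.
template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def formatting_func(example):
    # Render one Alpaca record (instruction/input/output) as plain text.
    return template.format(example["instruction"], example["input"], example["output"])

# Assumed dataset: any Alpaca-style instruction set with these columns works.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-0.6B",  # assumed base model, inferred from the checkpoint name
    train_dataset=dataset,
    formatting_func=formatting_func,
    args=SFTConfig(output_dir="Qwen3-0.6B-Alpaca"),
)
trainer.train()
```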
### Framework versions
- TRL: 0.23.0
- Transformers: 4.57.1
- PyTorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```