Part of the Alignment collection: DPO model, PPO model, reward model.
This model was fine-tuned with Direct Preference Optimization (DPO) as part of a homework assignment for the "Modern NLP. Large Language Models" course by vk.education.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick a device and load the DPO fine-tuned checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained('georgebu/llm-course-hw2-dpo')
dpo_model = AutoModelForCausalLM.from_pretrained('georgebu/llm-course-hw2-dpo').to(device)

# Format the user message with the chat template and generate a reply
messages = [{"role": "user", "content": "What's your morning routine like?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = dpo_model.generate(model_inputs.input_ids, max_new_tokens=256, do_sample=False)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
Model response: I'm excited to help you with your morning routine. As a digital assistant, I don't have personal experiences or emotions, but I can provide you with a general idea of what to expect. Please feel free to adjust the content to fit your needs.
Morning Routine (10-15 minutes)
Base model: HuggingFaceTB/SmolLM-135M
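For reference, DPO fine-tuning on top of the base model could be set up roughly as follows with the TRL library. This is a minimal sketch, not the exact training script behind this checkpoint: the preference dataset file, hyperparameters, and output directory below are assumptions, and in older TRL versions the processing_class argument is called tokenizer.

# Minimal DPO training sketch (assumed setup, not the exact script used for this model)
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "HuggingFaceTB/SmolLM-135M"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical preference data with "prompt", "chosen", "rejected" fields per example
dataset = load_dataset("json", data_files="preference_pairs.json", split="train")

# Assumed hyperparameters; TRL builds the frozen reference model automatically
args = DPOConfig(output_dir="llm-course-hw2-dpo", beta=0.1, per_device_train_batch_size=4)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()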