georgebu's Collections

Alignment

Updated May 16, 2025

DPO model, PPO model, reward model
georgebu/reward_model
Text Classification • 0.1B • Updated Mar 28, 2025 • 4

georgebu/dpo_model
Text Generation • 0.1B • Updated Mar 28, 2025 • 2

georgebu/ppo_model
Text Generation • 0.1B • Updated Mar 28, 2025 • 2