Active filters: dpo
argilla/OpenHermesPreferences (989k rows, 784 downloads, 214 likes)
llamafactory/DPO-En-Zh-20k (20k rows, 406 downloads, 101 likes)
jondurbin/gutenberg-dpo-v0.1 (918 rows, 1.16k downloads, 163 likes)
CyberNative/Code_Vulnerability_Security_DPO (4.66k rows, 2.42k downloads, 158 likes)
mlabonne/orpo-dpo-mix-40k (44.2k rows, 1.82k downloads, 302 likes)
inclusionAI/Ling-Coder-DPO (253k rows, 353 downloads, 13 likes)
manaf1234/synthetic_leather (1.84k rows, 668 downloads, 1 like)
clarkkitchen22/pokemon-roleplay-synthetic-5k (11 downloads, 1 like)
Agnuxo/p2pclaw-training-dataset (1 download)
d0rj/synthetic-instruct-gptj-pairwise-ru (33.1k rows, 78 downloads, 2 likes)
d0rj/rlhf-reward-datasets-ru (81.4k rows, 31 downloads, 4 likes)
d0rj/oasst1_pairwise_rlhf_reward-ru (18.9k rows, 34 downloads, 1 like)
xzuyn/mmlu-auxilary-train-dpo (101k rows, 36 downloads, 2 likes)
AlexHung29629/stack-exchange-paired-128K (128k rows, 10 downloads, 1 like)
flyingfishinwater/ultrafeedback_clean (175k rows, 21 downloads, 2 likes)
efederici/alpaca-vs-alpaca-orpo-dpo (49.2k rows, 105 downloads, 7 likes)
mlabonne/chatml_dpo_pairs (12.9k rows, 109 downloads, 55 likes)
argilla/ultrafeedback-binarized-preferences-cleaned (60.9k rows, 9.07k downloads, 162 likes)
ThWu/dpo_highest_n_random (182k rows, 15 downloads, 2 likes)
BramVanroy/orca_dpo_pairs_dutch (11k rows, 134 downloads, 6 likes)
argilla/ultrafeedback-multi-binarized-preferences-cleaned (158k rows, 176 downloads, 7 likes)
HuggingFaceH4/orca_dpo_pairs (12.9k rows, 8.94k downloads, 30 likes)
5CD-AI/Vietnamese-Intel-orca_dpo_pairs-gg-translated (12.9k rows, 60 downloads, 35 likes)
pszemraj/SHP-2-dpo-100k_sample (200k rows, 107 downloads, 2 likes)
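Most datasets in this listing are preference datasets in the pair format DPO training expects: one prompt with a preferred ("chosen") and a dispreferred ("rejected") completion. A minimal sketch of that record layout follows; the prompt/chosen/rejected field names are the common convention only, and individual datasets may instead use chat-message lists, extra score columns, or different column names, so check each dataset card before loading.

```python
# Sketch of the common DPO preference-pair record layout (an assumption
# about the typical schema, not the exact schema of any listed dataset).
from typing import TypedDict


class DPORecord(TypedDict):
    prompt: str    # the input shown to the model
    chosen: str    # the preferred completion
    rejected: str  # the dispreferred completion


def is_usable(record: DPORecord) -> bool:
    """A row carries a preference signal only if both completions exist
    and actually differ; identical pairs give DPO a zero gradient."""
    return (
        bool(record["prompt"])
        and bool(record["chosen"])
        and bool(record["rejected"])
        and record["chosen"] != record["rejected"]
    )


# Toy rows standing in for real dataset contents (not real data).
rows: list[DPORecord] = [
    {"prompt": "Name a prime number.", "chosen": "2", "rejected": "4"},
    {"prompt": "Name a prime number.", "chosen": "3", "rejected": "3"},
]

usable = [r for r in rows if is_usable(r)]
print(len(usable))  # only the first toy row passes the filter
```

To inspect a real entry from the list, something like `datasets.load_dataset("HuggingFaceH4/orca_dpo_pairs")` from the `datasets` library would fetch it from the Hub; the resulting column names vary per dataset.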