CodeGoat24 committed · Commit f4a9771 · verified · Parent(s): 272af68

Update README.md

Files changed (1): README.md +22 −0

README.md CHANGED
@@ -32,6 +32,28 @@ For further details, please refer to the following resources:
  - 🤗 Dataset Collections: https://huggingface.co/collections/CodeGoat24/unifiedreward-training-data-67c300d4fd5eff00fa7f1ede
  - 👋 Point of Contact: [Yibin Wang](https://codegoat24.github.io)

+ # 🔥 News
+ [2025/10/23] 🔥🔥🔥 We release **UnifiedReward-Edit**-[[3b](https://huggingface.co/CodeGoat24/UnifiedReward-Edit-qwen-3b)/[7b](https://huggingface.co/CodeGoat24/UnifiedReward-Edit-qwen-7b)/[32b](https://huggingface.co/CodeGoat24/UnifiedReward-Edit-qwen-32b)], a unified reward model for **both Text-to-Image and Image-to-Image generation**, trained on approximately 700K unified image generation and editing reward samples!!
+ For the image editing reward task, our models support:
+
+ >1. Pairwise Rank — directly judge which of two edited images is better.
+ >
+ >2. Pairwise Score — assign a separate score to each image in a pair.
+ >
+ >3. Pointwise Score — rate a single image on two axes: instruction following and overall image quality.
+
+ 🚀 The image editing reward inference code is available in the [`UnifiedReward-Edit/`](https://github.com/CodeGoat24/UnifiedReward/tree/main/UnifiedReward-Edit) directory, while the T2I inference code is unchanged from previous models. The editing training data is preprocessed from [EditScore](https://huggingface.co/datasets/EditScore/EditScore-Reward-Data) and [EditReward](https://huggingface.co/datasets/TIGER-Lab/EditReward-Data) and will be released soon. We sincerely appreciate all contributors!!
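The three task formats above differ only in what the judge is asked to output. As a toy sketch of how they relate (hypothetical placeholder scores and an arbitrary two-axis average, not the repo's actual inference code or the model's real outputs), one can derive a pairwise rank from pointwise scores:

```python
# Toy illustration of the three reward-task output formats described above.
# The scores are hypothetical placeholders; averaging the two axes is an
# arbitrary aggregation chosen only for illustration.

def pointwise_score(instruction_following: float, image_quality: float) -> float:
    """Pointwise Score: rate one image on two axes, aggregated here by mean."""
    return (instruction_following + image_quality) / 2

def pairwise_score(img_a: dict, img_b: dict) -> tuple:
    """Pairwise Score: a separate scalar for each image in the pair."""
    return (pointwise_score(**img_a), pointwise_score(**img_b))

def pairwise_rank(img_a: dict, img_b: dict) -> str:
    """Pairwise Rank: directly judge which of the two edited images is better."""
    score_a, score_b = pairwise_score(img_a, img_b)
    if score_a == score_b:
        return "tie"
    return "A" if score_a > score_b else "B"

# Hypothetical per-axis judgments for two edited images.
edit_a = {"instruction_following": 4.0, "image_quality": 3.0}
edit_b = {"instruction_following": 2.0, "image_quality": 3.5}

print(pairwise_score(edit_a, edit_b))  # (3.5, 2.75)
print(pairwise_rank(edit_a, edit_b))   # A
```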
+
+ [2025/9/25] 🔥🔥🔥 We release **UnifiedReward-2.0**-qwen-[[3b](https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-3b)/[7b](https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-7b)/[32b](https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-32b)/[72b](https://huggingface.co/CodeGoat24/UnifiedReward-2.0-qwen-72b)].
+ This version introduces several new capabilities:
+ >
+ >1. **Pairwise scoring** for image and video generation assessment along the **_Alignment_**, **_Coherence_**, and **_Style_** dimensions.
+ >
+ >2. **Pointwise scoring** for image and video generation assessment along the **_Alignment_**, **_Coherence/Physics_**, and **_Style_** dimensions.
+ >
+ The added inference code is available in the [`inference_qwen/UnifiedReward-2.0-inference`](https://github.com/CodeGoat24/UnifiedReward/tree/main/inference_qwen/UnifiedReward-2.0-inference) directory. The newly added training data has been released [here](https://huggingface.co/datasets/CodeGoat24/UnifiedReward-2.0-T2X-score-data) 😊.
+
+

  ## 🏁 Compared with Current Reward Models