Dauka-transformers committed · verified
Commit 92d21e6 · 1 Parent(s): 58833c7

Update README.md

Files changed (1)
  1. README.md +4 -9
README.md CHANGED
@@ -16,20 +16,15 @@ This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggin
  The model is designed to:
 
  - Evaluate alignment of image and caption
- - Provide justification scores for noisy web-scale data
- - Support local deployment for cost-efficient filtering
+ - Provide image/caption alignment scores and textual justification for noisy web-scale data
+ - Support local deployment for cost-efficient training data filtration
 
  ## 🏋️ Training Details
 
  - Base model: `Qwen/Qwen2-VL-2B-Instruct`
- - Fine-tuning objective: in-context scoring + justification
- - Dataset: ~4.8K samples with score, justification, text, and image
+ - Fine-tuning objective: in-context evaluation of alignment, quality, and safety
+ - Dataset: ~4.8K samples with score, justification, caption, and image
 
- ## 📁 Files
-
- - `model.safetensors` – fine-tuned weights
- - `processor` – image and text processor
- - `README.md` – this card
 
  ## 🤝 Acknowledgements
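
The updated card describes local scoring of image/caption pairs but adds no usage snippet. Below is a minimal sketch of that workflow, assuming the fine-tuned checkpoint loads with the stock `transformers` Qwen2-VL classes; the repo id `Dauka-transformers/qwen2-vl-2b-caption-scorer` and the scoring prompt are hypothetical placeholders, not part of the card.

```python
# Minimal local-deployment sketch (assumptions noted above): load the fine-tuned
# checkpoint and ask it to score how well a caption matches an image.
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Dauka-transformers/qwen2-vl-2b-caption-scorer"  # hypothetical repo id
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("sample.jpg")
caption = "A dog catching a frisbee in a park."

# Assumed prompt; the exact fine-tuning prompt format is not documented in the card.
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Score the alignment of this caption with the image "
                                 f"and justify the score.\nCaption: {caption}"},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Drop the prompt tokens so only the generated score/justification is decoded.
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```

Loading in bfloat16 with `device_map="auto"` keeps the 2B model on a single consumer GPU, which is the cost-efficiency point the "local deployment" bullet is making.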