Add project page URL

#2
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -1,23 +1,25 @@
  ---
+ base_model: Qwen/Qwen2.5-VL-7B
+ datasets:
+ - HaochenWang/TreeBench
+ - HaochenWang/TreeVGR-RL-37K
+ - HaochenWang/TreeVGR-SFT-35K
+ library_name: transformers
  license: apache-2.0
  pipeline_tag: image-text-to-text
- library_name: transformers
  tags:
  - visual-question-answering
  - visual-grounding
  - visual-reasoning
  - qwen
- base_model: Qwen/Qwen2.5-VL-7B
- datasets:
- - HaochenWang/TreeBench
- - HaochenWang/TreeVGR-RL-37K
- - HaochenWang/TreeVGR-SFT-35K
  ---

  # TreeVGR-7B: Traceable Evidence Enhanced Visual Grounded Reasoning Model

  This repository contains the **TreeVGR-7B** model, a state-of-the-art open-source visual grounded reasoning model, as presented in the paper [Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology](https://huggingface.co/papers/2507.07999).

+ Project page: https://haochenwang409.github.io/TreeVGR/
+
  <p align="center">
  <a href="https://huggingface.co/papers/2507.07999">
  <img src="https://img.shields.io/badge/Paper-HuggingFace-red"></a>
@@ -31,7 +33,7 @@ This repository contains the **TreeVGR-7B** model, a state-of-the-art open-sourc

  ## Abstract

- Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, TreeBench consists of 405 challenging visual question-answering pairs, even the most advanced models struggle with this benchmark, where none of them reach 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce TreeVGR (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise localization and reasoning jointly with reinforcement learning, enabling accurate localizations and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing vision-grounded reasoning.
+ Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, just like human "thinking with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose **TreeBench** (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning to test object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questions, candidate options, and answers for each image. After three stages of quality control, **TreeBench** consists of 405 challenging visual question-answering pairs, even the most advanced models struggle with this benchmark, where none of them reach 60% accuracy, e.g., OpenAI-o3 scores only 54.87. Furthermore, we introduce **TreeVGR** (Traceable Evidence Enhanced Visual Grounded Reasoning), a training paradigm to supervise localization and reasoning jointly with reinforcement learning, enabling accurate localizations and explainable reasoning pathways. Initialized from Qwen2.5-VL-7B, it improves V* Bench (+16.8), MME-RealWorld (+12.6), and TreeBench (+13.4), proving traceability is key to advancing vision-grounded reasoning.

  ![TreeBench Overview](https://github.com/Haochen-Wang409/TreeVGR/raw/main/assets/treebench.png)

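The updated metadata (`library_name: transformers`, `pipeline_tag: image-text-to-text`, `base_model: Qwen/Qwen2.5-VL-7B`) implies the checkpoint loads through the standard Transformers image-text-to-text path. Below is a minimal sketch, assuming a recent `transformers` release with Qwen2.5-VL support; the repository id `HaochenWang/TreeVGR-7B` and the prompt are placeholders, not taken from this PR.

```python
# Minimal loading sketch based on the card's metadata (library_name: transformers,
# pipeline_tag: image-text-to-text, base_model: Qwen/Qwen2.5-VL-7B).
# NOTE: the repo id below is an assumption; it is not stated in this PR.
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HaochenWang/TreeVGR-7B"  # placeholder repo id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Any RGB image works; this reuses the TreeBench overview figure as a stand-in input.
url = "https://github.com/Haochen-Wang409/TreeVGR/raw/main/assets/treebench.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Which object is closest to the camera? "
                                     "Give the bounding box of the evidence."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens and decode only the newly generated answer.
answer = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```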
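The abstract's "traceable evidence via bounding box evaluation" means predicted evidence regions are scored against annotated ground-truth boxes. This card does not spell out the exact metric; the sketch below shows intersection-over-union (IoU), the standard measure for this kind of box comparison, purely as an illustration.

```python
# Illustrative only: IoU between two axis-aligned boxes in (x1, y1, x2, y2) format.
# The exact localization metric used by TreeBench/TreeVGR is not specified in this card.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted evidence box vs. an annotated ground-truth box.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.143
```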