---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- qwen2_5_vl
- multimodal-llm
- multimodal-reasoning
- math-reasoning
datasets:
- MMR1/MMR1-SFT
- MMR1/MMR1-RL
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
model-index:
- name: MMR1-7B-RL
  results:
  - task:
      type: multimodal-reasoning
      name: Mathematics-related Multimodal Reasoning Benchmarks (Average)
    metrics:
    - type: accuracy
      value: 58.4
      name: Average Accuracy
      verified: false
---
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/logo.png?raw=true" width="150" style="margin-bottom: 0.2;"/>
</p>
# MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources
[📄 arXiv](https://arxiv.org/abs/2509.21268)
[🤗 Paper page](https://huggingface.co/papers/2509.21268)
[💻 GitHub](https://github.com/LengSicong/MMR1)
This repository hosts **MMR1-7B-RL**, part of the **MMR1** family of multimodal reasoning models introduced in the paper [MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources](https://huggingface.co/papers/2509.21268).
## Paper Abstract
Large multimodal reasoning models have achieved rapid progress, but their advancement is constrained by two major limitations: the absence of open, large-scale, high-quality long chain-of-thought (CoT) data, and the instability of reinforcement learning (RL) algorithms in post-training. Group Relative Policy Optimization (GRPO), the standard framework for RL fine-tuning, is prone to gradient vanishing when reward variance is low, which weakens optimization signals and impairs convergence. This work makes three contributions: (1) We propose Variance-Aware Sampling (VAS), a data selection strategy guided by Variance Promotion Score (VPS) that combines outcome variance and trajectory diversity to promote reward variance and stabilize policy optimization. (2) We release large-scale, carefully curated resources containing ~1.6M long CoT cold-start data and ~15k RL QA pairs, designed to ensure quality, difficulty, and diversity, along with a fully reproducible end-to-end training codebase. (3) We open-source a family of multimodal reasoning models in multiple scales, establishing standardized baselines for the community. Experiments across mathematical reasoning benchmarks demonstrate the effectiveness of both the curated data and the proposed VAS. Comprehensive ablation studies and analyses provide further insight into the contributions of each component. In addition, we theoretically establish that reward variance lower-bounds the expected policy gradient magnitude, with VAS serving as a practical mechanism to realize this guarantee. Our code, data, and checkpoints are available at https://github.com/LengSicong/MMR1.
## 📰 News
* **[2025.09.25]** 🔥🔥 Release [technical report](https://huggingface.co/papers/2509.21268)!
* **[2025.09.25]** 🚀🚀 Release the MMR1-SFT (~1.6M) and MMR1-RL (15k) datasets!
* **[2025.09.25]** 🚀🚀 Release MMR1-3B and MMR1-7B; the 32B checkpoints are on the way!
* **[2025.09.25]** The old repo has been moved to the branch [mmr1_v0](https://github.com/LengSicong/MMR1/tree/mmr1_v0?tab=readme-ov-file).
* **[2025.03.11]** 🔥🔥 Release MMR1-Math-v0-7B, achieving SOTA with only **6k public training samples**!
## 🌟 Introduction
This repository introduces our work on enhancing multimodal reasoning models. Current progress is limited by:
- ❌ **Lack of open, large-scale, high-quality long chain-of-thought (CoT) data**
- ❌ **Instability of RL fine-tuning**, where standard GRPO often suffers from *gradient vanishing* under low reward variance
### 🔑 Our Contributions
- **Variance-Aware Sampling (VAS):**
  A new data selection strategy guided by the *Variance Promotion Score (VPS)*. VAS combines outcome variance and trajectory diversity to promote reward variance, stabilize policy optimization, and improve convergence.
- **Large-scale curated resources:**
  - ~1.6M long CoT cold-start trajectories with verified short answers
  - ~15k RL QA pairs
  - Designed for **quality, difficulty, and diversity**
- **Open-source codebase & models:**
  - Fully reproducible end-to-end training pipeline
  - Released models at multiple scales as standardized baselines for multimodal reasoning
Please refer to our [TRAIN.md](https://github.com/LengSicong/MMR1/blob/main/TRAIN.md) for detailed instructions on training with VAS.
## 💡 Methodology Overview
Our method introduces **Variance-Aware Sampling (VAS)** to address the *gradient vanishing problem* in reinforcement learning with Group Relative Policy Optimization (GRPO).
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/fig1.png?raw=true" alt="Overview of the VAS framework" width="700"/>
</p>
### 🔹 Framework
As illustrated in **Figure 1**, training begins with a pool of prompts from the dataset:
1. A **random sampler** provides uniform coverage of data.
2. A **weighted sampler**, guided by Variance Promotion Score (VPS), prioritizes prompts with higher reward variance and trajectory diversity.
3. These two sources are combined to form training batches, balancing exploration and coverage.
4. The policy model generates rollouts, which are evaluated with rewards and used to update the policy. VPS scores are periodically re-estimated as the model improves, ensuring dynamic adaptation.
This design ensures that training consistently focuses on prompts that provide strong learning signals, while still maintaining sufficient randomness for coverage.
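To make the batch-construction step concrete, here is a minimal Python sketch of mixing the two samplers. The function name, the toy prompt pool, and the VPS values are illustrative assumptions, not the released implementation; only the idea of mixing a VPS-weighted sampler with a uniform one (controlled by a mixture ratio λ) comes from the description above.

```python
import random

def build_vas_batch(prompts, vps, batch_size, lam=0.5, rng=random.Random(0)):
    """Mix VPS-weighted sampling with uniform random sampling.

    `lam` is the mixture ratio: the fraction of the batch drawn by the
    weighted sampler; the rest comes from the random sampler for coverage.
    """
    n_weighted = int(round(lam * batch_size))
    # Weighted sampler: pick prompts with probability proportional to VPS.
    weighted = rng.choices(prompts, weights=[vps[p] for p in prompts], k=n_weighted)
    # Random sampler: uniform coverage over the remaining slots.
    uniform = rng.choices(prompts, k=batch_size - n_weighted)
    batch = weighted + uniform
    rng.shuffle(batch)
    return batch

# Toy example: 8 prompts with hypothetical VPS values.
prompts = [f"q{i}" for i in range(8)]
vps = dict(zip(prompts, [0.1, 0.9, 0.4, 0.05, 0.7, 0.2, 0.5, 0.3]))
print(build_vas_batch(prompts, vps, batch_size=4, lam=0.5))
```

Setting `lam = 1.0` recovers pure VPS-weighted sampling, while `lam = 0.0` falls back to uniform sampling.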
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/algo1.png?raw=true" alt="algo" width="700"/>
</p>
### 🔹 Algorithm
**Algorithm 1** provides a step-by-step description of VAS within the GRPO framework:
- **Initialization:** For each prompt, multiple rollouts are sampled to estimate pass rate, outcome variance (OVS), trajectory diversity (TDS), and VPS.
- **Periodic VPS update:** At specified intervals, these statistics are refreshed to reflect the evolving policy.
- **Batch construction:** A mixture of prompts is drawn—some uniformly at random, others proportionally to VPS—controlled by the mixture ratio λ.
- **Policy optimization:** Rollouts are generated for the selected prompts, GRPO loss is computed, and the policy parameters are updated accordingly.
By adaptively steering training toward prompts with higher reward variance, VAS effectively stabilizes optimization and amplifies gradient signals, enabling more efficient and robust learning.
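The sketch below illustrates how a per-prompt VPS could be estimated from rollout statistics. The normalization of the outcome variance score and the convex weight `alpha` are assumptions made for illustration; the paper's exact formulation of OVS, TDS, and their combination may differ.

```python
from dataclasses import dataclass

@dataclass
class PromptStats:
    pass_rate: float             # fraction of rollouts with a correct answer
    trajectory_diversity: float  # e.g., average pairwise dissimilarity of rollouts, in [0, 1]

def outcome_variance(pass_rate: float) -> float:
    # For binary correctness rewards, per-prompt reward variance is p * (1 - p):
    # it peaks at p = 0.5 and vanishes when a prompt is always or never solved.
    return pass_rate * (1.0 - pass_rate)

def variance_promotion_score(stats: PromptStats, alpha: float = 0.5) -> float:
    # Hypothetical combination: a convex mix of outcome variance (OVS) and
    # trajectory diversity (TDS), both scaled to [0, 1].
    ovs = outcome_variance(stats.pass_rate) / 0.25  # 0.25 is the max of p * (1 - p)
    tds = stats.trajectory_diversity
    return alpha * ovs + (1.0 - alpha) * tds

# A half-solved prompt with diverse rollouts scores high; a nearly-saturated one scores low.
print(variance_promotion_score(PromptStats(pass_rate=0.5, trajectory_diversity=0.8)))
print(variance_promotion_score(PromptStats(pass_rate=0.95, trajectory_diversity=0.2)))
```

Prompts solved about half the time with diverse rollouts score highest, which is exactly the regime where GRPO's group-relative advantages carry the most signal.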
## 📦 Open Resources
We release the following resources for the community:
- **[MMR1-SFT](https://huggingface.co/datasets/MMR1/MMR1-SFT) (~1.6M):** Supervised fine-tuning dataset with ~1.6M long CoT cold-start trajectories (distilled from Gemini 2.5 Pro/Flash), with short answers verified by GPT-4o
- **[MMR1-RL](https://huggingface.co/datasets/MMR1/MMR1-RL) (15k):** RL dataset with 15k question-answer pairs (GPT-4o)
- **[MMR1-3B-SFT](https://huggingface.co/MMR1/MMR1-3B-SFT):** 3B checkpoint trained with MMR1-SFT
- **[MMR1-3B-RL](https://huggingface.co/MMR1/MMR1-3B-RL):** 3B checkpoint trained with MMR1-SFT and MMR1-RL
- **[MMR1-7B-SFT](https://huggingface.co/MMR1/MMR1-7B-SFT):** 7B checkpoint trained with MMR1-SFT
- **[MMR1-7B-RL](https://huggingface.co/MMR1/MMR1-7B-RL):** 7B checkpoint trained with MMR1-SFT and MMR1-RL
- **[MMR1-32B-SFT](https://huggingface.co/MMR1/MMR1-32B-SFT):** 32B checkpoint trained with MMR1-SFT
- **[MMR1-32B-RL](https://huggingface.co/MMR1/MMR1-32B-RL):** 32B checkpoint trained with MMR1-SFT and MMR1-RL (On the way!)
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/data.png?raw=true" alt="data" width="700"/>
</p>
The dataset spans diverse domains—including mathematics, science, charts/figures, document tables, and general understanding—covering ~1.6M math samples and an additional ~37K samples across other domains. It integrates existing public resources (e.g., MathVerse, ScienceQA, ChartQA, DocVQA, GQA) together with newly curated and self-collected data, ensuring quality, difficulty, and diversity. This collection establishes one of the most comprehensive open resources for multimodal reasoning models.
We hope these resources can serve as a benchmark for the community and facilitate research on multimodal reasoning.
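For reference, both datasets can be loaded with the Hugging Face `datasets` library. The split name and the streaming usage below are assumptions; please check the dataset cards for the exact schema.

```python
from datasets import load_dataset

# Stream the SFT data to avoid downloading the full corpus at once;
# the "train" split name and column layout should be confirmed on the dataset card.
sft = load_dataset("MMR1/MMR1-SFT", split="train", streaming=True)
print(next(iter(sft)).keys())

# The RL set is small (~15k pairs) and can be loaded in full.
rl = load_dataset("MMR1/MMR1-RL", split="train")
print(rl)
```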
## 📊 Evaluation Results
We evaluate our models on a suite of **mathematics-related multimodal reasoning benchmarks** (MathVerse, MathVista, MathVision, LogicVista, and ChartQA).
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/result.png?raw=true" alt="result" width="700"/>
</p>
- **MMR1-7B-RL** achieves an average score of **58.4**, establishing new state-of-the-art performance among 7B-scale reasoning models.
- **MMR1-3B-RL** performs competitively with **52.7**, showing strong reasoning ability even at smaller scale.
- Our models consistently outperform or match larger baselines, demonstrating the effectiveness of **Variance-Aware Sampling (VAS)** and our curated **long CoT training data**.
## 🔍 Analysis of VAS Training Dynamics
We further analyze the effectiveness of **Variance-Aware Sampling (VAS)** through training efficiency and the evolution of **Variance Promotion Score (VPS)**.
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/anal1.png?raw=true" alt="anal1" width="700"/>
</p>
**Training Efficiency (Fig. 2).**
- **Gradient norm**: VAS substantially amplifies gradient magnitudes compared to the vanilla baseline, mitigating the gradient vanishing issue. This indicates that VAS consistently provides stronger optimization signals.
- **Clip fraction**: Higher clipping fractions in VAS runs suggest that policy updates are closer to the trust-region boundary, enabling more effective utilization of the learning signal without destabilizing training.
- **Validation accuracy**: Both full VAS (λ = 1.0) and mixed VAS–random sampling (λ = 0.5) converge faster and achieve higher final accuracy than the baseline, demonstrating that VAS improves both efficiency and performance. Notably, the mixed strategy achieves competitive results while maintaining broader data coverage.
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/anal2.png?raw=true" alt="anal2" width="700"/>
</p>
**VPS Dynamics (Fig. 3).**
- **Score distribution**: VPS distributions evolve from relatively uniform at the beginning of training to more concentrated in the middle bins, suggesting convergence in identifying consistently informative prompts.
- **Weight transitions**: Transition matrices show that many prompts shift across bins over time, with both upward and downward movements, reflecting the dynamic nature of reward variance as the policy evolves. Early transitions are more widespread, while later updates become more stable, consistent with convergence.
- **Interpretation**: This dynamic reweighting ensures that the model continually prioritizes prompts with higher variance while still allowing redistribution as learning progresses, preventing overfitting to a static subset of data.
👉 Together, these analyses highlight how **VAS effectively mitigates gradient vanishing, improves sample efficiency, and adapts dynamically to the evolving training landscape.**
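As a concrete illustration of the bin-transition analysis above, the following sketch counts how prompts move between VPS bins across two re-estimation steps. The number of bins, the bin edges over [0, 1], and the toy scores are illustrative assumptions, not the paper's exact binning.

```python
import numpy as np

def bin_transition_matrix(vps_old, vps_new, n_bins=5):
    """Count how prompts move between VPS bins across two re-estimation steps.

    Rows index the old bin, columns the new bin; a strongly diagonal matrix
    means VPS assignments have stabilized.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    old_bins = np.clip(np.digitize(vps_old, edges) - 1, 0, n_bins - 1)
    new_bins = np.clip(np.digitize(vps_new, edges) - 1, 0, n_bins - 1)
    matrix = np.zeros((n_bins, n_bins), dtype=int)
    for o, n in zip(old_bins, new_bins):
        matrix[o, n] += 1
    return matrix

# Toy example: VPS for 6 prompts before and after a periodic re-estimation.
print(bin_transition_matrix([0.1, 0.4, 0.9, 0.5, 0.2, 0.7],
                            [0.2, 0.5, 0.8, 0.5, 0.6, 0.7]))
```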
## 🎨 Qualitative Demo
To illustrate the reasoning capability of our models, we provide qualitative examples from **MathVerse**.
The demo showcases how the model carefully analyzes the problem, plans a structured solution, executes step-by-step reasoning, verifies results, and even provides alternative solution paths.
<p align="center">
<img src="https://github.com/LengSicong/MMR1/blob/main/assets/demo.png?raw=true" alt="demo" width="700"/>
</p>
This demonstrates the model’s ability to maintain logical consistency, perform reflective verification, and present human-readable reasoning traces.
## 🚀 Quick Start (Inference)
You can use the MMR1 models with the Hugging Face `transformers` library. Qwen2.5-VL support requires a recent `transformers` release (4.49 or later), along with `torch` and `Pillow`; `requests` is only needed for downloading images from URLs.
First, install the necessary libraries:
```bash
pip install "transformers>=4.49.0" torch Pillow requests
```
Here's a quick inference code example using the `MMR1/MMR1-7B-RL` checkpoint:
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from PIL import Image
import torch
import requests  # For downloading images from URLs
import io  # For handling image bytes

# Load model and processor
# Replace "MMR1/MMR1-7B-RL" with your desired checkpoint, e.g., MMR1/MMR1-3B-RL or MMR1/MMR1-7B-SFT
model_id = "MMR1/MMR1-7B-RL"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Prepare the query and fetch an example image
text_query = "Generate a comprehensive and detailed description for this image."
image_url = "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pond.jpg"  # Example image URL
response = requests.get(image_url)
image = Image.open(io.BytesIO(response.content))

# The Qwen2.5-VL chat template expects content as a list of dictionaries for multimodal inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": text_query},
        ],
    }
]

# `apply_chat_template` builds the text prompt (including vision placeholder tokens);
# the processor then combines it with the image tensors.
text_inputs = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = processor(text=[text_inputs], images=[image], return_tensors="pt")
inputs = inputs.to(model.device)

# Generate a response and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
trimmed_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
response_text = processor.batch_decode(trimmed_ids, skip_special_tokens=True)[0]

print(f"User: {text_query}")
print(f"Assistant: {response_text}")
```
## 🤝 Contribution and Contact
This project is still under active development. Community feedback and contributions are highly appreciated. If you want to contribute, please feel free to make a pull request or create an issue.
## 👍 Acknowledgement
MMR1 is built on top of [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory), and [EasyR1](https://github.com/hiyouga/EasyR1/tree/main).
MMR1 also benefits from many other open-source efforts. We sincerely appreciate them and compile a list in [ACKNOWLEDGEMENT.md](https://github.com/LengSicong/MMR1/blob/main/ACKNOWLEDGEMENT.md) to express our gratitude. If your work is used in MMR1 but not mentioned in either this repo or the technical report, feel free to let us know ❤️.
## 📑 Citation
If you find MMR1 useful for your research and applications, please cite using this BibTeX:
```bibtex
@misc{leng2025mmr1,
title={MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources},
author={Sicong Leng and Jing Wang and Jiaxi Li and Hao Zhang and Zhiqiang Hu and Boqiang Zhang and Yuming Jiang and Hang Zhang and Xin Li and Lidong Bing and Deli Zhao and Wei Lu and Yu Rong and Aixin Sun and Shijian Lu},
year={2025},
eprint={2509.21268},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2509.21268},
}
```
## 🔒 License
This project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/LengSicong/MMR1/blob/main/LICENSE) file.
This project is a research preview intended for **non-commercial use ONLY**, subject to the model licenses of Qwen, the terms of use for data generated by OpenAI and Gemini, and the privacy practices of ShareGPT. Please get in touch with us if you find any potential violations.
## Star History
[Star History Chart](https://star-history.com/#LengSicong/MMR1&Date)