---
license: apache-2.0
datasets:
- lmms-lab/LLaVA-One-Vision-1.5-Mid-Training-85M
base_model:
- Qwen/Qwen3-8B-Base
- DeepGlint-AI/rice-vit-large-patch14-560
pipeline_tag: image-text-to-text
library_name: transformers
---

# LLaVA-OneVision-1.5: Fully Open-Source State-of-the-Art VLM

**LLaVA-OneVision-1.5** introduces a novel family of **fully open-source** Large Multimodal Models (LMMs) that achieves **state-of-the-art performance** at substantially **lower cost** through training on **native-resolution** images.

- **Superior Performance**
  A family of fully open-source large multimodal models demonstrating:
  - superior performance across multiple multimodal benchmarks
  - stronger results than **Qwen2.5-VL** on most evaluation tasks

- **High-Quality Data at Scale**
  Meticulously curated **pre-training and SFT data** with rigorous filtering and quality control, achieving **superior data efficiency** with only **64B tokens**.
  - Concept-balanced, highly diverse, high-quality caption data
  - Comprehensive instruction fine-tuning data covering a wide range of tasks

- **Ultra-Efficient Training Framework**
  Complete end-to-end training framework designed for maximum efficiency:
  - $16,000 total budget for full model training on A100 GPUs at $0.60 per GPU-hour (see the cost sketch after this list)
  - 45% hardware FLOPs utilization (HFU) at 8K context length
  - Built on **Megatron-LM** with support for **MoE**, **FP8**, and **long sequence parallelization**
  - Optimized codebase for cost-effective scaling

- **Fully Open Framework** for community access and reproducibility:
  - High-quality pre-training & SFT data
  - Complete training framework & code
  - Training recipes & configurations
  - Comprehensive training logs & metrics

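The training-cost bullet above implies a concrete GPU-hour budget. The back-of-the-envelope check below is a minimal sketch of that arithmetic only; the total budget and per-hour price come from this card, while the cluster size and wall-clock split are illustrative assumptions, not figures reported by the authors.

```python
# Back-of-the-envelope check of the reported training budget.
# Only total_budget_usd and price_per_gpu_hour come from the card above;
# the 128-GPU cluster size is a hypothetical example.
total_budget_usd = 16_000
price_per_gpu_hour = 0.60  # A100 rental price quoted in the card

gpu_hours = total_budget_usd / price_per_gpu_hour
print(f"Implied compute: {gpu_hours:,.0f} GPU-hours")  # ~26,667 GPU-hours

assumed_gpus = 128  # hypothetical cluster size, for illustration only
days = gpu_hours / assumed_gpus / 24
print(f"On {assumed_gpus} A100s that is roughly {days:.1f} days of training")
```
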
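The card's metadata declares `pipeline_tag: image-text-to-text` and `library_name: transformers`, so the checkpoint is intended to be loaded through the standard Transformers image-text-to-text pipeline. The snippet below is a minimal sketch under that assumption: the repository id and the image URL are placeholders, and `trust_remote_code=True` may be unnecessary depending on how the model code is packaged.

```python
# Minimal inference sketch based on the card metadata (pipeline_tag / library_name).
# "lmms-lab/LLaVA-OneVision-1.5-8B-Instruct" is a placeholder; substitute the id of
# the repository hosting this card.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="lmms-lab/LLaVA-OneVision-1.5-8B-Instruct",  # placeholder repo id
    trust_remote_code=True,  # may not be required if the architecture ships with transformers
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/cat.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])
```
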
## Citation

If you find *LLaVA-OneVision-1.5* useful in your research, please consider citing the following paper:

```bibtex
@misc{an2025llavaonevision15fullyopenframework,
      title={LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training},
      author={Xiang An and Yin Xie and Kaicheng Yang and Wenkang Zhang and Xiuwei Zhao and Zheng Cheng and Yirui Wang and Songcen Xu and Changrui Chen and Chunsheng Wu and Huajie Tan and Chunyuan Li and Jing Yang and Jie Yu and Xiyao Wang and Bin Qin and Yumeng Wang and Zizhen Yan and Ziyong Feng and Ziwei Liu and Bo Li and Jiankang Deng},
      year={2025},
      eprint={2509.23661},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.23661},
}
```