How to use with vLLM
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "TIGER-Lab/VisCoder2-14B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "TIGER-Lab/VisCoder2-14B",
		"messages": [
			{
				"role": "user",
				"content": [
					{
						"type": "text",
						"text": "Describe this image in one sentence."
					},
					{
						"type": "image_url",
						"image_url": {
							"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
						}
					}
				]
			}
		]
	}'
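
The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch, assuming the server started above is reachable at localhost:8000 (the prompt and sampling settings are illustrative, not a recommended configuration):

# pip install openai
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the api_key value is a local placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="TIGER-Lab/VisCoder2-14B",
    messages=[
        {
            "role": "user",
            "content": "Write Python code using matplotlib to plot a sine wave and save the figure as sine.png.",
        }
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)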
Use Docker
docker model run hf.co/TIGER-Lab/VisCoder2-14B
VisCoder2-14B

Quick Links: 🏠 Project Page | 📖 Paper | 💻 GitHub | 🤗 VisCode2

VisCoder2-14B is a lightweight multi-language visualization coding model trained for executable code generation, rendering, and iterative self-debugging.


🧠 Model Description

VisCoder2-14B is trained on the VisCode-Multi-679K dataset, a large-scale instruction-tuning dataset for executable visualization tasks across 12 programming languages. It addresses a core challenge in multi-language visualization: generating code that not only executes successfully but also produces semantically consistent visual outputs, aligning natural-language instructions with the rendered results.
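
As an illustration of this usage, the model can be prompted locally with Hugging Face Transformers. A minimal sketch, where the prompt and generation settings are illustrative assumptions rather than the authors' evaluation setup:

# pip install transformers torch accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/VisCoder2-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write R code using ggplot2 to draw a bar chart of counts per category."}
]
# Build the chat prompt and generate a visualization-code response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))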


📊 Main Results on VisPlotBench

We evaluate VisCoder2-14B on VisPlotBench, which includes 888 executable visualization tasks spanning 8 languages, supporting both standard generation and multi-turn self-debugging.

[Figure: main results of VisCoder2-14B and baselines on VisPlotBench]

VisCoder2-14B shows consistent performance across multiple languages and achieves notable improvements under the multi-round self-debug setting.
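
The multi-round self-debug setting can be approximated outside the benchmark with a simple loop: run the generated code, and if it fails, return the traceback to the model and request a fix. A rough sketch under those assumptions (the actual VisPlotBench prompts, code extraction, and round limits may differ); client is an OpenAI-compatible client such as the one in the vLLM section above:

import os
import subprocess
import tempfile

def run_python(code: str):
    """Execute generated code in a temporary file; return (ok, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=60)
        return proc.returncode == 0, proc.stderr
    finally:
        os.remove(path)

def generate_with_self_debug(client, task: str, max_rounds: int = 3) -> str:
    messages = [{"role": "user", "content": task}]
    code = ""
    for _ in range(max_rounds):
        reply = client.chat.completions.create(
            model="TIGER-Lab/VisCoder2-14B", messages=messages
        ).choices[0].message.content
        code = reply  # assumes the reply is plain code; a real harness would extract code blocks
        ok, err = run_python(code)
        if ok:
            break
        # Feed the error back and ask for a corrected version (one self-debug round).
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": f"The code failed with this error:\n{err}\nPlease return a corrected version."},
        ]
    return code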


📁 Training Details

  • Base model: Qwen2.5-Coder-14B-Instruct
  • Framework: ms-swift
  • Tuning method: Full-parameter supervised fine-tuning (SFT)
  • Dataset: VisCode-Multi-679K
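
The training data can be inspected with the datasets library. A minimal sketch, assuming the dataset is published on the Hub as TIGER-Lab/VisCode-Multi-679K with a train split (check the dataset card for the exact id and splits):

# pip install datasets
from datasets import load_dataset

# Assumed Hub id and split; adjust to match the dataset card.
ds = load_dataset("TIGER-Lab/VisCode-Multi-679K", split="train")
print(ds[0])  # one instruction-tuning example for executable visualization code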

📖 Citation

If you use VisCoder2-14B or related datasets in your research, please cite:

@article{ni2025viscoder2,
  title={VisCoder2: Building Multi-Language Visualization Coding Agents},
  author={Ni, Yuansheng and Cai, Songcheng and Chen, Xiangchao and Liang, Jiarong and Lyu, Zhiheng and Deng, Jiaqi and Zou, Kai and Nie, Ping and Yuan, Fei and Yue, Xiang and others},
  journal={arXiv preprint arXiv:2510.23642},
  year={2025}
}

@article{ni2025viscoder,
  title={VisCoder: Fine-Tuning LLMs for Executable Python Visualization Code Generation},
  author={Ni, Yuansheng and Nie, Ping and Zou, Kai and Yue, Xiang and Chen, Wenhu},
  journal={arXiv preprint arXiv:2506.03930},
  year={2025}
}

For evaluation scripts and more information, see our GitHub repository.
