---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-VL-30B-A3B-Thinking
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- agent
---

# Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/NyZR7QqxeifhCXKLQUapu.png)

## Overview

**Jan-v2-VL-max** extends the Jan-v2-VL family with a **30B-parameter** vision-language model focused on **long-horizon execution**. This release scales model capacity and applies **LoRA-based RLVR** to improve stability across many steps with **low error accumulation**.

For evaluation, we continue to use **[The Illusion of Diminishing Returns: Measuring Long-Horizon Execution in LLMs](https://arxiv.org/pdf/2509.09677)**, which emphasizes execution length rather than knowledge recall.

### Intended Use

Tasks where the plan and/or knowledge can be provided up front, and success hinges on stable, many-step execution with minimal drift:

* **Agentic automation & UI control:** Stepwise operation in browsers and desktop apps with screenshot grounding and tool calls via **[Jan Browser MCP](https://chromewebstore.google.com/detail/jan-browser-mcp/mkciifcjehgnpaigoiaakdgabbpfppal?pli=1)**.

## Model Performance

Evaluated under FP8 inference, **Jan-v2-VL-max** shows **no regressions** and **small gains** over **[Qwen3-VL-30B-A3B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-30B-A3B-Thinking)** on several tasks, with the largest improvements in **long-horizon execution**. Our FP8 build maintains accuracy while reducing memory footprint and latency.
![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/pRHKkb-YRyQmimKiQuO4w.png)

## Local Deployment

### Jan Web

Hosted on **Jan Web**: use the model directly at **[chat.jan.ai](https://chat.jan.ai/)**.

![image/gif](demo.gif)

### vLLM

We recommend **vLLM** for serving and inference. All reported results were run with **vLLM 0.12.0**. For **FP8** deployment, we used **llm-compressor** built from source. Please pin `transformers==4.57.1` for compatibility.

```bash
# Exact versions used in our evals
pip install vllm==0.12.0
pip install transformers==4.57.1
pip install "git+https://github.com/vllm-project/llm-compressor.git@1abfd9eb34a2941e82f47cbd595f1aab90280c80"
```

```bash
vllm serve Menlo/Jan-v2-VL-max-FP8 \
  --host 0.0.0.0 \
  --port 1234 \
  -dp 1 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --reasoning-parser deepseek_r1
```

### Recommended Parameters

For optimal performance on agentic and general tasks, we recommend the following inference parameters:

```yaml
temperature: 1.0
top_p: 0.95
top_k: 20
repetition_penalty: 1.0
presence_penalty: 1.5
```

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-max-FP8/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```
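Once the server is up, the recommended parameters above can be passed in an OpenAI-compatible chat request. The sketch below is a minimal, hedged example: it assumes the `vllm serve` command above is running on `localhost:1234`, and the image URL and prompt are placeholders. Note that `top_k` and `repetition_penalty` are vLLM-specific extras rather than standard OpenAI fields.

```python
import json

# Sampling parameters from the "Recommended Parameters" section above.
# top_k and repetition_penalty are vLLM extensions to the OpenAI schema.
RECOMMENDED_PARAMS = {
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 20,
    "repetition_penalty": 1.0,
    "presence_penalty": 1.5,
}

def build_chat_payload(image_url: str, prompt: str) -> dict:
    """Build a /v1/chat/completions payload with one image and one text part."""
    return {
        "model": "Menlo/Jan-v2-VL-max-FP8",  # must match the served model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        **RECOMMENDED_PARAMS,
    }

# Placeholder inputs; substitute a real screenshot URL and instruction.
payload = build_chat_payload(
    "https://example.com/screenshot.png",
    "Describe the next UI action to take.",
)
print(json.dumps(payload, indent=2))
```

The payload can then be sent with any HTTP client, e.g. `requests.post("http://localhost:1234/v1/chat/completions", json=payload)` (port per the serve command above).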