---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-VL-8B-Thinking
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- agent
---
# Jan-v2-VL: Multimodal Agent for Long-Horizon Tasks

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan) 
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/) 

![image/gif](demo.gif)

## Overview

**Jan-v2-VL** is an 8B-parameter vision–language model for long-horizon, multi-step tasks in real software environments (e.g., browsers and desktop apps). It combines language reasoning with visual perception to follow complex instructions, maintain intermediate state, and recover from minor execution errors.

Long-horizon execution matters for real-world tasks: small per-step accuracy gains compound into much longer successful chains, so **Jan-v2-VL** is built for stable, many-step execution. For evaluation, we use **[The Illusion of Diminishing Returns: Measuring Long-Horizon Execution in LLMs](https://arxiv.org/pdf/2509.09677)**, which measures how many steps a model can execute correctly before drifting. This property (steady, low-drift step execution) is widely seen as a hallmark of strong coding and agent models, and robust long-horizon ability on this benchmark closely tracks better user experience.

**Variants**

* **Jan-v2-VL-low** — efficiency-oriented, lower latency
* **Jan-v2-VL-med** — balanced latency/quality
* **Jan-v2-VL-high** — deeper reasoning; higher think time

### Intended Use
Tasks where the plan and/or knowledge can be provided up front, and success hinges on stable, many-step execution with minimal drift:

* **Agentic automation & UI control:** Stepwise operation in browsers/desktop apps with screenshot grounding and tool calls (e.g., BrowserMCP).
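
For tool-calling setups such as the one above, the model is typically given OpenAI-style function schemas alongside each screenshot. The sketch below shows one such tool definition; the tool name, parameters, and pixel-coordinate convention are illustrative assumptions, not a fixed Jan or BrowserMCP schema.

```python
# Sketch: an OpenAI-style tool definition an agent loop might expose to
# Jan-v2-VL for UI control. The "click" tool and its (x, y) pixel
# parameters are hypothetical examples, not an official schema.
click_tool = {
    "type": "function",
    "function": {
        "name": "click",
        "description": "Click at pixel coordinates on the current screenshot.",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "integer", "description": "X coordinate in pixels"},
                "y": {"type": "integer", "description": "Y coordinate in pixels"},
            },
            "required": ["x", "y"],
        },
    },
}

print(click_tool["function"]["name"])  # sanity check on the schema
```

A list of such schemas would be passed in the `tools` field of each chat request, with the model's tool calls parsed by the server (see the `--tool-call-parser hermes` flag below).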

## Model Performance

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/bruqlcVK87KMQE99JsS0c.png)

Compared with its base (**[Qwen-3-VL-8B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-8B-Thinking)**), **Jan-v2-VL** shows **no degradation** on standard text-only and vision tasks—and is **slightly better on several**—while delivering stronger long-horizon execution on the *Illusion of Diminishing Returns* benchmark.

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/q4DzuOjmcZOik2c8ZQSCN.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/JdA1kFh2IEJesQsOAOTrh.png)

![image](https://cdn-uploads.huggingface.co/production/uploads/655e3b59d5c0d3db5359ca3c/fuuZ5pMOGsbbEpKCM5xy8.png)

## Local Deployment

### Integration with Jan App

Jan-v2-VL is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Serving with vLLM or llama.cpp

**Using vLLM:**
```bash
vllm serve Menlo/Jan-v2-VL-high \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes \
    --reasoning-parser qwen3
```

**Using llama.cpp:**
```bash
llama-server --model Jan-v2-VL-high-Q8_0.gguf \
    --mmproj mmproj-Jan-v2-VL-high.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```

### Recommended Parameters
For optimal performance in agentic and general tasks, we recommend the following inference parameters:
```yaml
temperature: 1.0
top_p: 0.95
top_k: 20
repetition_penalty: 1.0
presence_penalty: 1.5
```
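
As a minimal sketch, the parameters above can be sent to the local OpenAI-compatible endpoint started by the commands in the previous section. The model name, port, and image payload here are assumptions taken from those commands; note that `top_k` and `repetition_penalty` are not standard OpenAI fields, though vLLM accepts them as extra sampling parameters (llama.cpp uses `top_k` and `repeat_penalty`).

```python
import base64
import json

# Sketch: build a chat-completions payload for a locally served Jan-v2-VL
# model using the recommended sampling parameters. Endpoint, model name,
# and non-standard sampling fields are assumptions; adjust for your server.
def build_request(image_bytes: bytes, instruction: str) -> dict:
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "Menlo/Jan-v2-VL-high",
        "temperature": 1.0,
        "top_p": 0.95,
        "top_k": 20,                 # vLLM extra sampling parameter
        "repetition_penalty": 1.0,   # vLLM extra sampling parameter
        "presence_penalty": 1.5,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                    {"type": "text", "text": instruction},
                ],
            }
        ],
    }

payload = build_request(b"fake-image-bytes", "Describe the next UI action.")
print(json.dumps(payload)[:40])
```

The payload would then be POSTed to `http://localhost:1234/v1/chat/completions` (matching the `--port 1234` used above).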

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-v2-VL-8B/discussions) 
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation
```bibtex
Updated Soon
```