---
title: voyager
app_file: app.py
sdk: gradio
sdk_version: 5.49.1
---
[Read in Chinese](README_zh.md)
# **HunyuanWorld-Voyager**
<p align="center">
<img src="assets/teaser.png">
</p>
<div align="center">
<a href="https://3d-models.hunyuan.tencent.com/world/" target="_blank"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green" height=22px></a>
<a href="https://3d-models.hunyuan.tencent.com/voyager/voyager_en/assets/HYWorld_Voyager.pdf" target="_blank"><img src="https://img.shields.io/static/v1?label=Tech%20Report&message=arxiv&color=red" height=22px></a>
<a href="https://huggingface.co/tencent/HunyuanWorld-Voyager" target="_blank"><img src="https://img.shields.io/static/v1?label=HunyuanWorld-Voyager&message=HuggingFace&color=yellow" height=22px></a>
</div>
-----
We introduce HunyuanWorld-Voyager, a novel video diffusion framework that generates world-consistent 3D point-cloud sequences from a single image along a user-defined camera path. Voyager can generate 3D-consistent scene videos for world exploration following custom camera trajectories, and it also produces aligned depth and RGB video for efficient, direct 3D reconstruction.
## 🔥🔥🔥 News!!
* October 22, 2025: 🚀 We release [HunyuanWorld-1.1 (WorldMirror)](https://github.com/Tencent-Hunyuan/HunyuanWorld-Mirror), supporting 3D world creation from videos or multi-view images!
* October 16, 2025: 🚀 We propose [FlashWorld](https://github.com/imlixinyang/FlashWorld), enabling 3DGS world generation in 5-10 seconds on a single GPU!
* September 2, 2025: 🚀 We release the code and model weights of HunyuanWorld-Voyager. [Download](ckpts/README.md).
> Join our **[WeChat](#)** and **[Discord](https://discord.gg/dNBrdrGGMa)** groups to discuss the model and get help from the team.
| WeChat Group | Xiaohongshu | X | Discord |
|--------------------------------------------------|-------------------------------------------------------|---------------------------------------------|---------------------------------------------------|
| <img src="assets/qrcode/wechat.png" height=140> | <img src="assets/qrcode/xiaohongshu.png" height=140> | <img src="assets/qrcode/x.png" height=140> | <img src="assets/qrcode/discord.png" height=140> |
## 🎥 Demo
### Demo Video
<div align="center">
<video src="https://github.com/user-attachments/assets/2eb844c9-30ba-4770-8066-189c123affee" width="80%" poster=""> </video>
</div>
### Camera-Controllable Video Generation
| Input | Generated Video |
|:----------------:|:----------------:|
| <img src="assets/demo/camera/input1.png" width="80%"> | <video src="https://github.com/user-attachments/assets/2b03ecd5-9a8f-455c-bf04-c668d3a61b04" width="100%"> </video> |
| <img src="assets/demo/camera/input2.png" width="80%"> | <video src="https://github.com/user-attachments/assets/45844ac0-c65a-4e04-9f7d-4c72d47e0339" width="100%"> </video> |
| <img src="assets/demo/camera/input3.png" width="80%"> | <video src="https://github.com/user-attachments/assets/f7f48473-3bb5-4a30-bd22-af3ca95ee8dc" width="100%"> </video> |
### Multiple Applications
- Video Reconstruction
| Generated Video | Reconstructed Point Cloud |
|:---------------:|:--------------------------------:|
| <video src="https://github.com/user-attachments/assets/72a41804-63fc-4596-963d-1497e68f7790" width="100%"> </video> | <video src="https://github.com/user-attachments/assets/67574e9c-9e21-4ed6-9503-e65d187086a2" width="100%"> </video> |
- Image-to-3D Generation
| | |
|:---------------:|:---------------:|
| <video src="https://github.com/user-attachments/assets/886aa86d-990e-4b86-97a5-0b9110862d14" width="100%"> </video> | <video src="https://github.com/user-attachments/assets/4c1734ba-4e78-4979-b30e-3c8c97aa984b" width="100%"> </video> |
- Video Depth Estimation
| | |
|:---------------:|:---------------:|
| <video src="https://github.com/user-attachments/assets/e4c8b729-e880-4be3-826f-429a5c1f12cd" width="100%"> </video> | <video src="https://github.com/user-attachments/assets/7ede0745-cde7-42f1-9c28-e4dca90dac52" width="100%"> </video> |
## ⏯️ **HunyuanWorld-Voyager Introduction**
### Architecture
Voyager consists of two key components:
(1) **World-Consistent Video Diffusion**: A unified architecture that jointly generates aligned RGB and depth video sequences, conditioned on existing world observations to ensure global coherence.
(2) **Long-Range World Exploration**: An efficient world cache with point culling, combined with auto-regressive inference and smooth video sampling, enabling iterative scene extension with context-aware consistency.
To train Voyager, we propose a scalable data engine: a video reconstruction pipeline that automates camera pose estimation and metric depth prediction for arbitrary videos, enabling large-scale, diverse training-data curation without manual 3D annotations. Using this pipeline, we compile a dataset of over 100,000 video clips that combines real-world captures and synthetic Unreal Engine renders.
<p align="center">
<img src="assets/backbone.jpg" height=500>
</p>
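To make the world-cache idea concrete, here is a minimal sketch, assuming a pinhole camera model, of how an RGB-D frame could be unprojected into a shared point cache and how cached points could be culled against a new view's frustum. Function names and conventions are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def unproject_rgbd(depth, rgb, K, cam2world):
    """Lift an H x W depth map (meters) and RGB image into world-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0
    # Pixel -> camera coordinates via the pinhole intrinsics K.
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)[valid]
    pts_world = (cam2world @ pts_cam.T).T[:, :3]
    colors = rgb.reshape(-1, 3)[valid]
    return pts_world, colors

def cull_to_frustum(points, colors, K, world2cam, image_hw, z_near=0.1, z_far=100.0):
    """Keep only cached points that land inside the new view's frustum."""
    h, w = image_hw
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    pts_cam = (world2cam @ pts_h.T).T
    z = pts_cam[:, 2]
    in_depth = (z > z_near) & (z < z_far)
    # Project only points in the valid depth range to avoid dividing by ~0.
    u = np.full(len(z), -1.0)
    v = np.full(len(z), -1.0)
    u[in_depth] = K[0, 0] * pts_cam[in_depth, 0] / z[in_depth] + K[0, 2]
    v[in_depth] = K[1, 1] * pts_cam[in_depth, 1] / z[in_depth] + K[1, 2]
    keep = in_depth & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return points[keep], colors[keep]
```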
### Performance
<table class="comparison-table">
<thead>
<tr>
<th>Method</th>
<th>WorldScore Average</th>
<th>Camera Control</th>
<th>Object Control</th>
<th>Content Alignment</th>
<th>3D Consistency</th>
<th>Photometric Consistency</th>
<th>Style Consistency</th>
<th>Subjective Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td>WonderJourney</td>
<td>🟡63.75</td>
<td>🟡84.6</td>
<td>37.1</td>
<td>35.54</td>
<td>80.6</td>
<td>79.03</td>
<td>62.82</td>
<td>🟢66.56</td>
</tr>
<tr>
<td>WonderWorld</td>
<td>🟢72.69</td>
<td>🔴92.98</td>
<td>51.76</td>
<td>🔴71.25</td>
<td>🔴86.87</td>
<td>85.56</td>
<td>70.57</td>
<td>49.81</td>
</tr>
<tr>
<td>EasyAnimate</td>
<td>52.85</td>
<td>26.72</td>
<td>54.5</td>
<td>50.76</td>
<td>67.29</td>
<td>47.35</td>
<td>🟡73.05</td>
<td>50.31</td>
</tr>
<tr>
<td>Allegro</td>
<td>55.31</td>
<td>24.84</td>
<td>🟡57.47</td>
<td>🟡51.48</td>
<td>70.5</td>
<td>69.89</td>
<td>65.6</td>
<td>47.41</td>
</tr>
<tr>
<td>Gen-3</td>
<td>60.71</td>
<td>29.47</td>
<td>🟢62.92</td>
<td>50.49</td>
<td>68.31</td>
<td>🟢87.09</td>
<td>62.82</td>
<td>🟡63.85</td>
</tr>
<tr>
<td>CogVideoX-I2V</td>
<td>62.15</td>
<td>38.27</td>
<td>40.07</td>
<td>36.73</td>
<td>🟢86.21</td>
<td>🔴88.12</td>
<td>🟢83.22</td>
<td>62.44</td>
</tr>
<tr class="voyager-row">
<td><b>Voyager</b></td>
<td>🔴77.62</td>
<td>🟢85.95</td>
<td>🔴66.92</td>
<td>🟢68.92</td>
<td>🟡81.56</td>
<td>🟡85.99</td>
<td>🔴84.89</td>
<td>🔴71.09</td>
</tr>
</tbody>
<caption>Quantitative comparison on the <i>WorldScore</i> benchmark. 🔴 indicates 1st place, 🟢 indicates 2nd, 🟡 indicates 3rd.</caption>
</table>
## 📜 Requirements
The following table shows the requirements for running Voyager (batch size = 1) to generate videos:
| Model | Resolution | GPU Peak Memory |
|:----------------:|:-----------:|:----------------:|
| HunyuanWorld-Voyager | 540p | 60GB |
* An NVIDIA GPU with CUDA support is required.
* The model has been tested on a single 80GB GPU.
* **Minimum**: 60GB of GPU memory for 540p generation.
* **Recommended**: a GPU with 80GB of memory for better generation quality.
* Tested operating system: Linux
## 🛠️ Dependencies and Installation
Begin by cloning the repository:
```shell
git clone https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager
cd HunyuanWorld-Voyager
```
### Installation Guide for Linux
We recommend CUDA 12.4 or 11.8 for the manual installation.
```shell
# 1. Create conda environment
conda create -n voyager python==3.11.9
# 2. Activate the environment
conda activate voyager
# 3. Install PyTorch and other dependencies using conda
# For CUDA 12.4
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
# 4. Install pip dependencies
python -m pip install -r requirements.txt
python -m pip install transformers==4.39.3
# 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above)
python -m pip install flash-attn
# 6. Install xDiT for parallel inference (It is recommended to use torch 2.4.0 and flash-attn 2.6.3)
python -m pip install xfuser==0.4.2
```
If you run into a floating point exception (core dump) on specific GPU types, try the following fix:
```shell
# Make sure you have installed CUDA 12.4, cuBLAS>=12.4.5.8, and cuDNN>=9.0 (or simply use our CUDA 12 docker image).
pip install nvidia-cublas-cu12==12.4.5.8
export LD_LIBRARY_PATH=/opt/conda/lib/python3.8/site-packages/nvidia/cublas/lib/
```
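To confirm which CUDA and cuDNN builds your PyTorch installation is actually using, a quick diagnostic like the following can help (a generic check, not specific to Voyager):

```python
import torch

# Report the CUDA toolkit PyTorch was built against and the cuDNN it loads.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```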
To create your own input conditions, you also need to install the following dependencies:
```shell
pip install --no-deps git+https://github.com/microsoft/MoGe.git
pip install scipy==1.11.4
pip install git+https://github.com/EasternJournalist/utils3d.git@c5daf6f6c244d251f252102d09e9b7bcef791a38
```
## 🧱 Download Pretrained Models
A detailed guide for downloading pretrained models is provided [here](ckpts/README.md). Briefly:
```shell
huggingface-cli download tencent/HunyuanWorld-Voyager --local-dir ./ckpts
```
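If you prefer Python, the equivalent download with the `huggingface_hub` library (assuming it is installed, e.g. `pip install huggingface_hub`) is:

```python
from huggingface_hub import snapshot_download

# Download the full model repository into ./ckpts; resumes if interrupted.
snapshot_download(
    repo_id="tencent/HunyuanWorld-Voyager",
    local_dir="./ckpts",
)
```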
## 🚀 Inference
### Create Input Condition
We provide several input examples in the `examples` folder. You can find the corresponding input text in the `prompt.txt` file. If you'd like to use your own input image, you can run the following command:
```bash
cd data_engine
python3 create_input.py --image_path "your_input_image" --render_output_dir "examples/case/" --type "forward"
```
We provide the following types of camera path:
- forward
- backward
- left
- right
- turn_left
- turn_right
You can also modify the camera path in the `create_input.py` file, as sketched below.
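For intuition, here is a minimal, hypothetical sketch of a "forward" trajectory represented as camera-to-world matrices; the function name and pose convention are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def forward_trajectory(num_frames=49, step=0.05):
    """Camera-to-world poses that move the camera forward along its z axis.

    Assumed convention (hypothetical): x right, y down, z forward; 4x4 poses.
    """
    poses = []
    for i in range(num_frames):
        pose = np.eye(4)
        pose[2, 3] = i * step  # translate `step` meters forward per frame
        poses.append(pose)
    return np.stack(poses)

poses = forward_trajectory()
print(poses.shape)  # (49, 4, 4)
```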
### Single-GPU Inference
```bash
cd HunyuanWorld-Voyager
python3 sample_image2video.py \
--model HYVideo-T/2 \
--input-path "examples/case1" \
--prompt "An old-fashioned European village with thatched roofs on the houses." \
--i2v-stability \
--infer-steps 50 \
--flow-reverse \
--flow-shift 7.0 \
--seed 0 \
--embedded-cfg-scale 6.0 \
--use-cpu-offload \
--save-path ./results
```
You can add "--use-context-block" to add the context block in the inference.
### Parallel Inference on Multiple GPUs by xDiT
[xDiT](https://github.com/xdit-project/xDiT) is a scalable inference engine for Diffusion Transformers (DiTs) on multi-GPU clusters.
It has provided low-latency parallel inference for a variety of DiT models, including Mochi-1, CogVideoX, Flux.1, and SD3. This repo adopts the [Unified Sequence Parallelism (USP)](https://arxiv.org/abs/2405.07719) APIs for parallel inference of the HunyuanVideo-I2V backbone.
For example, to generate a video with 8 GPUs, you can use the following command:
```bash
cd HunyuanWorld-Voyager
ALLOW_RESIZE_FOR_SP=1 torchrun --nproc_per_node=8 \
sample_image2video.py \
--model HYVideo-T/2 \
--input-path "examples/case1" \
--prompt "An old-fashioned European village with thatched roofs on the houses." \
--i2v-stability \
--infer-steps 50 \
--flow-reverse \
--flow-shift 7.0 \
--seed 0 \
--embedded-cfg-scale 6.0 \
--save-path ./results \
--ulysses-degree 8 \
--ring-degree 1
```
The number of GPUs must equal the product of `--ulysses-degree` and `--ring-degree` (for example, `--ulysses-degree 4 --ring-degree 2` also targets 8 GPUs). Feel free to adjust these parallel configurations to optimize performance.
<p align="center">
<table align="center">
<thead>
<tr>
<th colspan="4">Latency (Sec) for 512x768 (49 frames 50 steps) on 8 x H20 GPU</th>
</tr>
<tr>
<th>1</th>
<th>2</th>
<th>4</th>
<th>8</th>
</tr>
</thead>
<tbody>
<tr>
<th>1925</th>
<th>1018 (1.89x)</th>
<th>534 (3.60x)</th>
<th>288 (6.69x)</th>
</tr>
</tbody>
</table>
</p>
### Gradio Demo
We also provide a Gradio demo for the HunyuanWorld-Voyager model.
<p align="center">
<img src="assets/gradio.png" height=500>
</p>
You can run the following command to start the demo:
```bash
cd HunyuanWorld-Voyager
python3 app.py
```
First upload an image and choose a camera direction to create a condition video; then type a text prompt and generate the final RGB-D video.
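For orientation, the following is a minimal sketch of that image-plus-direction-to-video flow in Gradio; the stub function is a hypothetical placeholder, and the real UI lives in `app.py`:

```python
import gradio as gr

CAMERA_PATHS = ["forward", "backward", "left", "right", "turn_left", "turn_right"]

def generate(image_path, direction, prompt):
    # Stub: the real app builds a condition video from the image and camera
    # path, then runs the diffusion sampler to produce an RGB-D video.
    return None

with gr.Blocks(title="HunyuanWorld-Voyager (sketch)") as demo:
    image = gr.Image(type="filepath", label="Input image")
    direction = gr.Dropdown(CAMERA_PATHS, value="forward", label="Camera direction")
    prompt = gr.Textbox(label="Text prompt")
    video = gr.Video(label="Generated RGB-D video")
    gr.Button("Generate").click(generate, [image, direction, prompt], video)

if __name__ == "__main__":
    demo.launch()
```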
### Export Point Cloud
After generating RGB-D video content, you can export a `.ply` point-cloud file as follows:
```bash
cd data_engine
python3 convert_point.py --folder_path "your_input_condition_folder" --video_path "your_output_video_path"
```
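For reference, here is a minimal sketch of writing an ASCII PLY file from world-space points and colors (for instance, points recovered by an unprojection step like the one sketched earlier); it is an illustrative stand-in, not the repository's `convert_point.py`:

```python
import numpy as np

def write_ply(path, points, colors):
    """Write an ASCII PLY with float xyz coordinates and uint8 RGB colors."""
    assert points.shape[0] == colors.shape[0]
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(points, colors.astype(np.uint8)):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

# Example: a random 100-point cloud.
write_ply("cloud.ply", np.random.rand(100, 3), np.random.randint(0, 255, (100, 3)))
```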
## ⚙️ Data Engine
We also release the data engine of HunyuanWorld-Voyager, which can be used to generate scalable data for RGB-D video training. Please refer to [data_engine](data_engine/README.md) for more details.
<p align="center">
<img src="assets/data_engine.jpg" height=500>
</p>
## 📚 BibTeX
If you find [Voyager](https://arxiv.org/abs/2506.04225) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@article{huang2025voyager,
title={Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation},
author={Huang, Tianyu and Zheng, Wangguandong and Wang, Tengfei and Liu, Yuhao and Wang, Zhenwei and Wu, Junta and Jiang, Jie and Li, Hui and Lau, Rynson WH and Zuo, Wangmeng and Guo, Chunchao},
journal={arXiv preprint arXiv:2506.04225},
year={2025}
}
```
## 📧 Contact
Please send an email to [email protected] if you have any questions.
## Acknowledgements
We would like to thank [HunyuanWorld](https://github.com/Tencent-Hunyuan/HunyuanWorld-1.0), [Hunyuan3D](https://github.com/Tencent-Hunyuan/Hunyuan3D-2), and [HunyuanVideo](https://github.com/Tencent-Hunyuan/HunyuanVideo-I2V). We also thank [VGGT](https://github.com/facebookresearch/vggt), [MoGe](https://github.com/microsoft/MoGe), and [Metric3D](https://github.com/YvanYin/Metric3D) for their open research and exploration.