# AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset

This repository is the official PyTorch implementation of [AccVideo](https://arxiv.org/abs/2503.19462). AccVideo is a novel and efficient distillation method that accelerates video diffusion models using a synthetic dataset. Our method is 8.5x faster than HunyuanVideo.

[arXiv Paper](https://arxiv.org/abs/2503.19462) | [Project Page](https://aejion.github.io/accvideo/) | [HuggingFace Model](https://huggingface.co/aejion/AccVideo)

## π₯π₯π₯ News

* Jun 3, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo-WanX-I2V-480P-14B) of AccVideo based on WanX-I2V-480P-14B.
* May 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo-WanX-T2V-14B) of AccVideo based on WanX-T2V-14B.
* Mar 31, 2025: [ComfyUI-Kijai (FP8 inference)](https://huggingface.co/Kijai/HunyuanVideo_comfy/blob/main/accvideo-t2v-5-steps_fp8_e4m3fn.safetensors): ComfyUI integration by [Kijai](https://huggingface.co/Kijai).
* Mar 26, 2025: We release the inference code and [model weights](https://huggingface.co/aejion/AccVideo) of AccVideo based on HunyuanT2V.

## π₯ Demo (Based on HunyuanT2V)

https://github.com/user-attachments/assets/59f3c5db-d585-4773-8d92-366c1eb040f0

## π₯ Demo (Based on WanXT2V-14B)
|
| 24 |
+
|
| 25 |
+
|
| 26 |
+
https://github.com/user-attachments/assets/ff9724da-b76c-478d-a9bf-0ee7240494b2
|
| 27 |
+
|
| 28 |
+
## π₯ Demo (Based on WanXI2V-480P-14B)
|
| 29 |
+
|
| 30 |
+
|
| 31 |
+
|
| 32 |
+
## π Open-source Plan

- [x] Inference
- [x] Checkpoints
- [ ] Multi-GPU Inference
- [ ] Synthetic Video Dataset, SynVid
- [ ] Training

## π§ Installation

The code has been tested with Python 3.10.0, CUDA 11.8, and an NVIDIA A100 GPU.

```bash
conda create -n accvideo python==3.10.0
conda activate accvideo

pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
pip install "huggingface_hub[cli]"
```
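
As a quick sanity check (a minimal sketch, not part of the official setup), you can verify that PyTorch sees your GPU and that flash-attn imports cleanly before downloading any checkpoints:

```bash
# Optional environment check; both commands should print without errors,
# and the first should print True for CUDA availability.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import flash_attn; print(flash_attn.__version__)"
```
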
## π€ Checkpoints

To download the checkpoints (based on HunyuanT2V), use the following command:

```bash
# Download the model weights
huggingface-cli download aejion/AccVideo --local-dir ./ckpts
```

To download the checkpoints (based on WanX-T2V-14B), use the following command:

```bash
# Download the model weights
huggingface-cli download aejion/AccVideo-WanX-T2V-14B --local-dir ./wanx_t2v_ckpts
```

To download the checkpoints (based on WanX-I2V-480P-14B), use the following command:

```bash
# Download the model weights
huggingface-cli download aejion/AccVideo-WanX-I2V-480P-14B --local-dir ./wanx_i2v_ckpts
```
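
After downloading, you can confirm that the HunyuanT2V weight file is in place (the path below comes from the `--dit-weight` argument of the inference command later in this README):

```bash
# The distilled DiT weights should exist at this path before running inference.
ls -lh ./ckpts/accvideo-t2v-5-steps/diffusion_pytorch_model.pt
```
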
## π Inference

We recommend using a GPU with 80 GB of memory. We use AccVideo to distill HunyuanVideo and WanX.

### Inference for HunyuanT2V

To run inference, use the following command:

```bash
export MODEL_BASE=./ckpts
python sample_t2v.py \
    --height 544 \
    --width 960 \
    --num_frames 93 \
    --num_inference_steps 5 \
    --guidance_scale 1 \
    --embedded_cfg_scale 6 \
    --flow_shift 7 \
    --flow-reverse \
    --prompt_file ./assets/prompt.txt \
    --seed 1024 \
    --output_path ./results/accvideo-544p \
    --model_path ./ckpts \
    --dit-weight ./ckpts/accvideo-t2v-5-steps/diffusion_pytorch_model.pt
```
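
The command reads its prompts from `--prompt_file`. A minimal sketch of writing your own prompt file, assuming the plain-text format is one prompt per line (an assumption; check `./assets/prompt.txt` in the repository for the authoritative format):

```bash
# Hypothetical prompt file; each line is assumed to be one text-to-video prompt.
cat > ./assets/my_prompts.txt << 'EOF'
A cat walks on the grass, realistic style.
A drone shot flying over a snowy mountain range at sunrise.
EOF
# Then pass it via: --prompt_file ./assets/my_prompts.txt
```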

The following table compares inference time on a single A100 GPU:

| Model        | Setting (height × width × frames) | Inference Time (s) |
|:------------:|:---------------------------------:|:------------------:|
| HunyuanVideo | 720 × 1280 × 129                  | 3234               |
| Ours         | 720 × 1280 × 129                  | 380 (8.5× faster)  |
| HunyuanVideo | 544 × 960 × 93                    | 704                |
| Ours         | 544 × 960 × 93                    | 91 (7.7× faster)   |
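
The speedup figures above are simply the baseline time divided by ours, for example:

```bash
# 3234 s / 380 s ≈ 8.5x for the 720 × 1280 × 129 setting.
python -c "print(f'{3234/380:.1f}x')"
```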

### Inference for WanX-T2V

To run inference, use the following command:

```bash
python sample_wanx_t2v.py \
    --task t2v-14B \
    --size 832*480 \
    --ckpt_dir ./wanx_t2v_ckpts \
    --sample_solver 'unipc' \
    --save_dir ./results/accvideo_wanx_14B \
    --sample_steps 10
```

The following table compares inference time on a single A100 GPU:

| Model | Setting (height × width × frames) | Inference Time (s) |
|:-----:|:---------------------------------:|:------------------:|
| WanX  | 480 × 832 × 81                    | 932                |
| Ours  | 480 × 832 × 81                    | 97 (9.6× faster)   |

### Inference for WanX-I2V-480P

To run inference, use the following command:

```bash
python sample_wanx_i2v.py \
    --task i2v-14B \
    --size 832*480 \
    --ckpt_dir ./wanx_i2v_ckpts \
    --sample_solver 'unipc' \
    --save_dir ./results/accvideo_wanx_i2v_14B \
    --sample_steps 10
```

The following table compares inference time on a single A100 GPU:

| Model    | Setting (height × width × frames) | Inference Time (s) |
|:--------:|:---------------------------------:|:------------------:|
| WanX-I2V | 480 × 832 × 81                    | 768                |
| Ours     | 480 × 832 × 81                    | 112 (6.8× faster)  |

## π VBench Results

We report VBench evaluation results for our distilled models. Videos were generated using the augmented prompts provided by the VBench team: [HunyuanVideo augmented prompts](https://github.com/Vchitect/VBench/blob/master/prompts/augmented_prompts/hunyuan_all_dimension.txt) for AccVideo-HunyuanT2V and [WanX augmented prompts](https://github.com/Vchitect/VBench/blob/master/prompts/augmented_prompts/Wan2.1-T2V-1.3B/all_dimension_aug_wanx_seed42.txt) for AccVideo-WanX-T2V.

| Model | Setting (height × width × frames) | Total Score | Quality Score | Semantic Score | Subject Consistency | Background Consistency | Temporal Flickering | Motion Smoothness | Dynamic Degree | Aesthetic Quality | Image Quality | Object Class | Multiple Objects | Human Action | Color | Spatial Relationship | Scene | Appearance Style | Temporal Style | Overall Consistency |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| AccVideo-HunyuanT2V | 544 × 960 × 93 | 83.26% | 84.58% | 77.96% | 94.46% | 97.45% | 99.18% | 98.79% | 75.00% | 62.08% | 65.64% | 92.99% | 67.33% | 95.60% | 94.11% | 75.70% | 54.72% | 19.87% | 23.71% | 27.21% |
| AccVideo-WanX-T2V | 480 × 832 × 81 | 85.95% | 86.62% | 83.25% | 95.02% | 97.75% | 99.54% | 97.95% | 93.33% | 64.21% | 68.42% | 98.38% | 86.58% | 97.40% | 92.04% | 75.68% | 59.82% | 23.88% | 24.62% | 27.34% |
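
To reproduce scores like these, VBench ships a command-line evaluator. A minimal sketch, assuming the flags documented in the VBench README (they may differ across VBench versions, and the table above aggregates all dimensions rather than the single one shown here):

```bash
pip install vbench
# Evaluate generated videos on one dimension, e.g. subject consistency.
vbench evaluate --videos_path ./results/accvideo-544p --dimension subject_consistency
```
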
## π BibTeX

If you find [AccVideo](https://arxiv.org/abs/2503.19462) useful for your research and applications, please cite using this BibTeX:

```BibTeX
@article{zhang2025accvideo,
  title={AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset},
  author={Zhang, Haiyu and Chen, Xinyuan and Wang, Yaohui and Liu, Xihui and Wang, Yunhong and Qiao, Yu},
  journal={arXiv preprint arXiv:2503.19462},
  year={2025}
}
```
## Acknowledgements

The code is built upon [FastVideo](https://github.com/hao-ai-lab/FastVideo) and [HunyuanVideo](https://github.com/Tencent/HunyuanVideo); we thank all the contributors for open-sourcing their work.