Update README.md
README.md
@@ -15,7 +15,7 @@ tags:

# 🎬 LongLive: Real-time Interactive Long Video Generation

-[](https://arxiv.org/abs/
+[](https://arxiv.org/abs/2509.22622)
[](https://github.com/NVlabs/LongLive)
[](https://huggingface.co/Efficient-Large-Model/LongLive-1.3B)
[](https://www.youtube.com/watch?v=CO1QC7BNvig)
@@ -29,14 +29,14 @@ tags:

## 💡 TLDR: Turn interactive prompts into long videos—instantly, as you type!

-**LongLive: Real-time Interactive Long Video Generation [[Paper](https://arxiv.org/abs/
+**LongLive: Real-time Interactive Long Video Generation [[Paper](https://arxiv.org/abs/2509.22622)]** <br />
[Shuai Yang](https://andysonys.github.io/), [Wei Huang](https://aaron-weihuang.com/), [Ruihang Chu](https://ruihang-chu.github.io/), [Yicheng Xiao](https://easonxiao-888.github.io/), [Yuyang Zhao](https://yuyangzhao.com/), [Xianbang Wang](https://peppaking8.github.io/), [Muyang Li](https://lmxyy.me/), [Enze Xie](https://xieenze.github.io/), [Yingcong Chen](https://www.yingcong.me/), [Yao Lu](https://scholar.google.com/citations?user=OI7zFmwAAAAJ&hl=en), [Song Han](http://songhan.mit.edu/), [Yukang Chen](https://yukangchen.com/) <br />

We present LongLive, a frame-level autoregressive (AR) framework for real-time and interactive long video generation. Long video generation presents challenges in both efficiency and quality. Diffusion and Diffusion-Forcing models can produce high-quality videos but suffer from low efficiency due to bidirectional attention. Causal-attention AR models support KV caching for faster inference but often degrade in quality on long videos due to memory challenges during long-video training. In addition, beyond static prompt-based generation, interactive capabilities, such as streaming prompt inputs, are critical for dynamic content creation, enabling users to guide narratives in real time. This interactive requirement significantly increases the complexity, especially in ensuring visual consistency and semantic coherence during prompt transitions. To address these challenges, LongLive adopts a causal, frame-level AR design that integrates a KV-recache mechanism that refreshes cached states with the new prompt for smooth, adherent switches; streaming long tuning to enable long-video training and to align training and inference (train-long–test-long); and short window attention paired with a frame-level attention sink, preserving long-range consistency while enabling faster generation. With these key designs, LongLive fine-tunes a 1.3B-parameter short-clip model to minute-long generation in just 32 GPU-days. At inference, LongLive sustains 20.7 FPS on a single NVIDIA H100 and achieves strong performance on VBench in both short- and long-video settings. LongLive supports up to 240-second videos on a single H100 GPU.
With FP8 quantization, LongLive boosts inference to 24.8 FPS with marginal quality loss.

## News
-- [x] [2025.9.25] We release [Paper](https://arxiv.org/abs/
+- [x] [2025.9.25] We release the [Paper](https://arxiv.org/abs/2509.22622), this GitHub repo [LongLive](https://github.com/NVlabs/LongLive) with all training and inference code, the model weights [LongLive-1.3B](https://huggingface.co/Efficient-Large-Model/LongLive-1.3B), and the demo page [Website](https://nvlabs.github.io/LongLive).

## Highlights
1. **Long Video Gen**: LongLive supports up to 240s video generation, with visual consistency.
@@ -128,7 +128,7 @@ Please consider to cite our paper and this framework, if they are helpful in you
title={LongLive: Real-time Interactive Long Video Generation},
author={Shuai Yang and Wei Huang and Ruihang Chu and Yicheng Xiao and Yuyang Zhao and Xianbang Wang and Muyang Li and Enze Xie and Yingcong Chen and Yao Lu and Song Han and Yukang Chen},
year={2025},
-eprint={
+eprint={2509.22622},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
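
As a rough illustration of two mechanisms named in the abstract above — the short-window KV cache with a frame-level attention sink, and the KV-recache step at a prompt switch — here is a minimal, self-contained sketch. It is an assumption-laden toy, not the LongLive codebase's actual API: `DummyModel`, `FrameARGenerator`, `encode_prompt`, `step`, and `recompute_kv` are all hypothetical names.

```python
# Hypothetical sketch of a frame-level AR loop with (1) a sink + short-window
# KV cache and (2) KV re-cache at a prompt switch. Not LongLive's real code.
import torch


class DummyModel:
    """Stand-in model so the sketch runs end to end; a real generator is far larger."""

    def encode_prompt(self, prompt: str) -> torch.Tensor:
        torch.manual_seed(abs(hash(prompt)) % (2**31))
        return torch.randn(8)  # toy prompt embedding

    def step(self, prompt_emb, kv_cache):
        # Produce one "frame" from the prompt plus whatever KV context is cached.
        context = torch.stack(kv_cache).mean(0) if kv_cache else torch.zeros(8)
        frame = prompt_emb + 0.5 * context
        return frame, 0.9 * frame  # (frame, per-frame KV state)

    def recompute_kv(self, frame_kv, prompt_emb):
        # Re-express a cached frame state under the new prompt.
        return 0.5 * frame_kv + 0.5 * prompt_emb


class FrameARGenerator:
    """Frame-level AR loop with a sink + short-window KV cache and KV re-cache."""

    def __init__(self, model, window: int = 12, sink: int = 1):
        self.model, self.window, self.sink = model, window, sink
        self.kv_cache = []

    def _trim(self):
        # Short window attention: keep the first `sink` frames (attention sink)
        # plus only the most recent `window` frames; drop the middle.
        if len(self.kv_cache) > self.sink + self.window:
            self.kv_cache = self.kv_cache[: self.sink] + self.kv_cache[-self.window:]

    def switch_prompt(self, new_prompt: str):
        # KV re-cache: refresh cached states with the new prompt instead of
        # clearing them, so the switch stays visually consistent yet adherent.
        emb = self.model.encode_prompt(new_prompt)
        self.kv_cache = [self.model.recompute_kv(kv, emb) for kv in self.kv_cache]
        return emb

    def generate(self, prompt_emb, num_frames: int):
        frames = []
        for _ in range(num_frames):
            frame, kv = self.model.step(prompt_emb, self.kv_cache)
            self.kv_cache.append(kv)
            self._trim()
            frames.append(frame)
        return frames


gen = FrameARGenerator(DummyModel())
emb = gen.switch_prompt("a sailboat drifting at sunset")
clip1 = gen.generate(emb, num_frames=16)
emb = gen.switch_prompt("the sailboat sails into a storm")  # streaming prompt input
clip2 = gen.generate(emb, num_frames=16)
print(len(clip1), len(clip2), len(gen.kv_cache))  # 16 16 13
```

The only point of `switch_prompt` here is the control flow: cached per-frame states are rewritten under the new prompt rather than dropped, which is the behaviour the abstract attributes to KV-recache.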