🚀 Introducing Helios: a 14B real-time long-video generation model!
It's completely wild: faster than 1.3B models, and it achieves this without self-forcing. Welcome to the new era of video generation! 🚀🚀
💻 Code: https://github.com/PKU-YuanGroup/Helios
📄 Page: https://pku-yuangroup.github.io/Helios-Page
📑 Paper: Helios: Real Real-Time Long Video Generation Model (2603.04379)
🔹 True Single-GPU Extreme Speed ⚡️
No need to rely on traditional workarounds like KV-cache, quantization, sparse/linear attention, or TinyVAE. Helios hits an end-to-end 19.5 FPS on a single H100!
Training is also highly accessible: a single 80 GB GPU can fit four 14B models in VRAM.
🔹 Solving Long-Video "Drift" at the Core 🔥
Tired of visual drift and repetitive loops? We ditched traditional hacks (like error banks, self-forcing, or keyframe sampling).
Instead, our innovative training strategy simulates & eliminates drift directly, keeping minute-long videos incredibly coherent with stunning quality. ✨
🔹 3 Model Variants for Full Coverage 🛠️
With a unified architecture natively supporting T2V, I2V, and V2V, we are open-sourcing 3 flavors:
1️⃣ Base: single-stage denoising for extremely high fidelity.
2️⃣ Mid: pyramid denoising + CFG-Zero for the perfect balance of quality & throughput.
3️⃣ Distilled: adversarial distillation (DMD) for ultra-fast, few-step generation.
🔹 Day-0 Ecosystem Ready 🚀
We wanted deployment to be a breeze from the second we launched. Helios drops with comprehensive Day-0 hardware and framework support:
✅ Huawei Ascend-NPU
✅ HuggingFace Diffusers
✅ vLLM-Omni
✅ SGLang-Diffusion
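If the Diffusers integration follows the standard pipeline API, loading Helios should look roughly like the sketch below. To be clear about assumptions: the repo id `PKU-YuanGroup/Helios-Distilled` and the generation arguments are hypothetical, and the exact pipeline class is whatever the release registers; check the official repo for the real model ids.

```python
# Minimal, hypothetical sketch of loading Helios via Hugging Face Diffusers.
# The repo id below is a guess, not a confirmed release name.

def load_helios(repo_id: str = "PKU-YuanGroup/Helios-Distilled"):
    """Load a Helios pipeline, moving it to the GPU if one is available."""
    # Imports are kept inside the function so the sketch can be read
    # (and the function defined) without diffusers/torch installed.
    import torch
    from diffusers import DiffusionPipeline

    # from_pretrained resolves the concrete pipeline class from the
    # repo's model_index.json, so the same call should work for any
    # of the Base / Mid / Distilled variants.
    pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
    if torch.cuda.is_available():
        pipe = pipe.to("cuda")
    return pipe


if __name__ == "__main__":
    pipe = load_helios()
    # Prompt and call signature are illustrative, not Helios defaults.
    result = pipe(prompt="a sailboat gliding past a lighthouse at sunset")
```

The same `from_pretrained` entry point is how most Diffusers video pipelines are loaded, so swapping in a different variant should only mean changing the repo id.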
Try it out and let us know what you think!