Collections including paper arxiv:2412.07730

- WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens
  Paper • 2401.09985 • Published • 18
- CustomVideo: Customizing Text-to-Video Generation with Multiple Subjects
  Paper • 2401.09962 • Published • 9
- Inflation with Diffusion: Efficient Temporal Adaptation for Text-to-Video Super-Resolution
  Paper • 2401.10404 • Published • 10
- ActAnywhere: Subject-Aware Video Background Generation
  Paper • 2401.10822 • Published • 13

- tencent/HunyuanVideo
  Text-to-Video • Updated • 2.18k • 2.06k
- SNOOPI: Supercharged One-step Diffusion Distillation with Proper Guidance
  Paper • 2412.02687 • Published • 113
- STIV: Scalable Text and Image Conditioned Video Generation
  Paper • 2412.07730 • Published • 74
- Improving Video Generation with Human Feedback
  Paper • 2501.13918 • Published • 52

- Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
  Paper • 2405.08748 • Published • 24
- Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection
  Paper • 2405.10300 • Published • 30
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 131
- OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
  Paper • 2405.11143 • Published • 41

- One-Minute Video Generation with Test-Time Training
  Paper • 2504.05298 • Published • 110
- MoCha: Towards Movie-Grade Talking Character Synthesis
  Paper • 2503.23307 • Published • 138
- Towards Understanding Camera Motions in Any Video
  Paper • 2504.15376 • Published • 158
- Antidistillation Sampling
  Paper • 2504.13146 • Published • 59

- Understanding Diffusion Models: A Unified Perspective
  Paper • 2208.11970 • Published
- Tutorial on Diffusion Models for Imaging and Vision
  Paper • 2403.18103 • Published • 2
- Denoising Diffusion Probabilistic Models
  Paper • 2006.11239 • Published • 6
- Denoising Diffusion Implicit Models
  Paper • 2010.02502 • Published • 4