🔥 News
- Oct 24, 2025: We release the first unified semantic video generation model, Video-As-Prompt (VAP)!
- Oct 24, 2025: 🤗 We release VAP-Data, the largest semantic-controlled video generation dataset, with more than 100K samples!
- Oct 24, 2025: We present the technical report of Video-As-Prompt. Please check out the details and join the discussion!
Video-As-Prompt
Core idea: given a reference video with the desired semantics as a video prompt, Video-As-Prompt animates a reference image to follow the same semantics as the reference video.
E.g., different reference videos + the same reference image → new videos with different semantics.
See our project page for more results!
Dataset Info
This is a rich semantic-controlled video generation dataset containing 100 types of control semantics across four categories: concept, style, motion, and camera. Each semantic label corresponds to multiple video clips, and these clips consistently reflect that specific condition.
This dataset can be used for a wide range of controllable video generation tasks, including in-context video generation, video editing, visual effects (VFX), and more. The original collection contains over 100,000 video clips, and after careful filtering and quality control, we publicly release over 90,000 high-quality clips.
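The category → semantic label → clips hierarchy described above can be sketched as a small indexing routine. This is a minimal illustration only; the field names and example values below are hypothetical placeholders, not the released metadata schema:

```python
from collections import defaultdict

# Hypothetical per-clip metadata records (illustrative field names):
# each clip carries one of the four top-level categories
# (concept, style, motion, camera) and a semantic label.
clips = [
    {"id": "clip_0001", "category": "concept", "semantic": "levitation"},
    {"id": "clip_0002", "category": "style", "semantic": "clay_style"},
    {"id": "clip_0003", "category": "concept", "semantic": "levitation"},
]

# Build a category -> semantic label -> clip ids index, mirroring the
# dataset's organization: each semantic label maps to multiple clips
# that consistently reflect that specific condition.
index = defaultdict(lambda: defaultdict(list))
for clip in clips:
    index[clip["category"]][clip["semantic"]].append(clip["id"])

# Clips sharing a semantic label group together under one key.
print(index["concept"]["levitation"])  # ['clip_0001', 'clip_0003']
```

An index like this makes it straightforward to sample all reference clips for one semantic condition, e.g. when pairing a video prompt with reference images for in-context generation.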
We warmly welcome the community to explore, research, and build on this dataset. If you find it helpful, please consider giving us an upvote ❤️; your support means a lot!
BibTeX
❤️ If you find this repository helpful, please give us a star and cite our report:
@article{bian2025videoasprompt,
title = {Video-As-Prompt: Unified Semantic Control for Video Generation},
author = {Yuxuan Bian and Xin Chen and Zenan Li and Tiancheng Zhi and Shen Sang and Linjie Luo and Qiang Xu},
journal = {arXiv preprint arXiv:2510.20888},
year = {2025},
url = {https://arxiv.org/abs/2510.20888}
}