---
emoji: 🔥
title: RunAsh Wan 2.1
short_description: Real-time video generation
sdk: gradio
sdk_version: 5.34.2
license: apache-2.0
colorFrom: yellow
colorTo: red
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6799f4b5a2b48413dd18a8dd/nxPqZaXa6quMBU4ojqDzC.png
---
# 🎬 RunAsh Real Time Video Generation

A real-time video generation model based on Wan 2.1, optimized for low-latency inference and interactive applications.
## 🧠 Model Overview
RunAsh Real Time Video Generation is a fine-tuned and optimized version of Wan 2.1, designed for real-time video synthesis from text prompts or image inputs. It leverages efficient architecture modifications and inference optimizations to enable smooth, interactive video generation at the edge or in-browser environments.
## 🚀 Features
- ✅ Real-time generation (target: <500ms per frame on GPU)
- ✅ Text-to-Video and Image-to-Video modes
- ✅ Low VRAM usage optimizations
- ✅ Interactive Gradio UI for live demos
- ✅ Supports variable-length outputs (2s–8s clips)
- ✅ Plug-and-play with Hugging Face Spaces
## 🛠️ Technical Details
- Base Model: Wan-2.1-RealTime (by the original authors)
- Architecture: Diffusion Transformer + Latent Consistency Modules
- Resolution: 576x320 (16:9) or 320x576 (9:16), configurable
- Frame Rate: 24 FPS (adjustable)
- Latency: ~300–600ms per generation step (RTX 3090 / A10G)
- Max Duration: 8 seconds (configurable in code)
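At ~300–600ms per step, a multi-step generation schedule still lands in interactive territory for short clips. A rough back-of-the-envelope helper (hypothetical names, assuming only the per-step figures above):

```python
def latency_range_s(num_steps: int, ms_low: float = 300.0, ms_high: float = 600.0):
    """Rough end-to-end latency bounds, assuming ~300-600 ms per generation step."""
    return (num_steps * ms_low / 1000.0, num_steps * ms_high / 1000.0)

# e.g. a hypothetical 4-step latent-consistency schedule:
low, high = latency_range_s(4)
print(f"{low:.1f}-{high:.1f} s")  # 1.2-2.4 s
```

Actual latency depends on resolution, scheduler, and hardware; treat this as an estimate, not a guarantee.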
## 🖼️ Example Usage

### Text-to-Video

```python
# `pipeline` is assumed to be the loaded RunAsh Wan 2.1 pipeline (see the Space's app.py).
prompt = "A cyberpunk cat riding a neon scooter through Tokyo at night"
video = pipeline(prompt, num_frames=48, guidance_scale=7.5)
```

### Image-to-Video

```python
# `load_image` is assumed to be a helper that returns a PIL image from a file path.
init_image = load_image("cat.png")
video = pipeline(init_image, motion_prompt="zoom in slowly", num_frames=24)
```
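The `num_frames` argument together with the frame rate determines clip length; for instance, the 24-frame image-to-video call above yields a one-second clip at the default 24 FPS. A trivial sketch of that relationship (hypothetical helper, not part of the Space's API):

```python
def clip_duration_s(num_frames: int, fps: int = 24) -> float:
    """Clip length in seconds for a given frame count and frame rate."""
    return num_frames / fps

print(clip_duration_s(48))          # text-to-video example above: 2.0 s
print(clip_duration_s(24))          # image-to-video example above: 1.0 s
print(clip_duration_s(24, fps=12))  # same frames at a lower (adjustable) rate: 2.0 s
```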
## 🧪 Try It Out

🔗 Live Demo: https://huggingface.co/spaces/rammurmu/runash-realtime-video

Try generating short video clips in real time: no queue, no wait!
## ⚙️ Installation & Local Use

```shell
git clone https://huggingface.co/spaces/rammurmu/runash-realtime-video
cd runash-realtime-video
pip install -r requirements.txt
python app.py
```

Requires: Python 3.9+, PyTorch 2.0+, xFormers (optional), and a GPU with ≥8GB VRAM.
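Before launching, it can help to verify that the interpreter and (if already installed) PyTorch meet the minimums above. A small sketch using only the standard library (hypothetical helper; `requirements.txt` remains the source of truth):

```python
import sys

def version_ok(actual: tuple, minimum: tuple) -> bool:
    """True if a (major, minor) version tuple meets the minimum."""
    return actual >= minimum

# Python 3.9+ as stated above
assert version_ok(sys.version_info[:2], (3, 9)), "Python 3.9+ required"

# PyTorch 2.0+ if it is importable; skip the check otherwise
try:
    import torch
    major, minor = (int(x) for x in torch.__version__.split(".")[:2])
    assert version_ok((major, minor), (2, 0)), "PyTorch 2.0+ required"
except ImportError:
    print("PyTorch not installed yet; install it via requirements.txt")
```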
## 📄 License
This Space is a derivative of Wan 2.1 Real-time Video Generation.
- Original Model License: Apache 2.0
- This Space: Licensed under the same terms as the original. For commercial use, please refer to the original authors' terms.
- Disclaimer: This is a demonstration/educational fork, not affiliated with the original authors unless explicitly stated.
## 🙏 Attribution
This project is based on:
Wan 2.1 Real-time Video Generation
- Authors: Ram Murmu, RunAsh AI
- Hugging Face Link: https://huggingface.co/spaces/wan-2.1
- Paper: [Link to Paper, if available]
## 💬 Feedback & Support
Found a bug? Want a feature?
- ➡️ Open an Issue
- ➡️ Join our Discord
- ➡️ Tweet at @RunAsh AI Labs
## 🌟 Star This Repo
If you find this useful, please ⭐️ the original Wan 2.1 repo and this Space!
## 📌 Disclaimer
This is a duplicate/forked space for demonstration and community experimentation. All credit for the underlying model goes to the original Wan 2.1 authors. This space does not claim original authorship of the core model.