---
emoji: πŸŽ₯
title: RunAsh Wan 2.1
short_description: Real-time video generation
sdk: gradio
sdk_version: 5.34.2
license: apache-2.0
colorFrom: yellow
colorTo: red
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6799f4b5a2b48413dd18a8dd/nxPqZaXa6quMBU4ojqDzC.png
---

# 🎬 RunAsh Real-Time Video Generation

A real-time video generation model based on Wan 2.1 β€” optimized for low-latency inference and interactive applications.

*Demo GIF or screenshot placeholder*


## 🧠 Model Overview

RunAsh Real-Time Video Generation is a fine-tuned and optimized version of Wan 2.1, designed for real-time video synthesis from text prompts or image inputs. It uses efficient architecture modifications and inference optimizations to enable smooth, interactive video generation at the edge or in the browser.
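
To sanity-check the low-latency claim on your own hardware, here is a minimal timing sketch; it assumes a CUDA device and the `pipeline` object loaded in the Example Usage section below:

```python
import time
import torch

def time_one_call(pipeline, prompt: str, num_frames: int = 24) -> float:
    """Wall-clock seconds for one generation call, synchronized for accurate GPU timing."""
    torch.cuda.synchronize()
    start = time.perf_counter()
    pipeline(prompt, num_frames=num_frames)
    torch.cuda.synchronize()
    return time.perf_counter() - start

# Per-frame latency for a 1-second clip at 24 FPS:
# print(f"{time_one_call(pipeline, 'a sunrise over mountains') / 24 * 1000:.0f} ms per frame")
```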


## πŸš€ Features

  • βœ… Real-time generation (target: <500ms per frame on GPU)
  • βœ… Text-to-Video and Image-to-Video modes
  • βœ… Low VRAM usage optimizations
  • βœ… Interactive Gradio UI for live demos
  • βœ… Supports variable length outputs (2s–8s clips)
  • βœ… Plug-and-play with Hugging Face Spaces

πŸ› οΈ Technical Details

  • Base Model: Wan-2.1-RealTime (by original authors)
  • Architecture: Diffusion Transformer + Latent Consistency Modules
  • Resolution: 576x320 (16:9) or 320x576 (9:16) β€” configurable
  • Frame Rate: 24 FPS (adjustable)
  • Latency: ~300–600ms per generation step (RTX 3090 / A10G)
  • Max Duration: 8 seconds (configurable in code)

πŸ–ΌοΈ Example Usage

### Text-to-Video

```python
prompt = "A cyberpunk cat riding a neon scooter through Tokyo at night"
video = pipeline(prompt, num_frames=48, guidance_scale=7.5)  # 48 frames = 2 s at 24 FPS
```

### Image-to-Video

```python
from diffusers.utils import load_image  # resolves local paths and URLs

init_image = load_image("cat.png")
video = pipeline(init_image, motion_prompt="zoom in slowly", num_frames=24)
```
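
To write the result to disk, diffusers ships an `export_to_video` helper; a sketch assuming the output exposes its frames as `output.frames[0]` (the diffusers convention):

```python
from diffusers.utils import export_to_video

# Assumption: `video` follows the diffusers convention of exposing PIL frames
# as `video.frames[0]`; adjust if this pipeline returns frames directly.
export_to_video(video.frames[0], "output.mp4", fps=24)
```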

## πŸ§ͺ Try It Out

πŸ‘‰ **Live Demo:** https://huggingface.co/spaces/rammurmu/runash-realtime-video

Try generating short video clips in real time β€” no queue, no wait!


βš™οΈ Installation & Local Use

```bash
git clone https://huggingface.co/spaces/rammurmu/runash-realtime-video
cd runash-realtime-video
pip install -r requirements.txt
python app.py
```

**Requires:** Python 3.9+, PyTorch 2.0+, xFormers (optional), and a GPU with β‰₯8 GB VRAM.
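
On GPUs near the 8 GB floor, diffusers' standard memory levers can help; a sketch that applies to the `pipeline` loaded in Example Usage (xFormers only if installed):

```python
# Slice attention computation to lower peak VRAM (small speed cost).
pipeline.enable_attention_slicing()

# Stream submodules between CPU and GPU on demand (requires `accelerate`).
pipeline.enable_model_cpu_offload()

try:
    # Memory-efficient attention kernels, if xFormers is installed.
    pipeline.enable_xformers_memory_efficient_attention()
except Exception:
    pass  # optional dependency, safe to skip
```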


## πŸ“œ License

This Space is a derivative of Wan 2.1 Real-Time Video Generation.

- **Original Model License:** Apache 2.0
- **This Space:** Licensed under the same terms as the original. For commercial use, refer to the original authors' terms.
- **Disclaimer:** This is a demonstration/educational fork, not affiliated with the original authors unless explicitly stated.

πŸ™ Attribution

This project is based on:

**Wan 2.1 Real-Time Video Generation**
Authors: Ram Murmu, RunAsh AI
Hugging Face: https://huggingface.co/spaces/wan-2.1
Paper: [Link to Paper, if available]


## πŸ’¬ Feedback & Support

Found a bug? Want a feature?
β†’ Open an Issue
β†’ Join our Discord
β†’ Tweet at RunAsh AI Labs


## 🌟 Star This Repo

If you find this useful, please ⭐️ the original Wan 2.1 repo and this space!


## πŸ“Œ Disclaimer

This is a duplicate/forked space for demonstration and community experimentation. All credit for the underlying model goes to the original Wan 2.1 authors. This space does not claim original authorship of the core model.


βœ… **Updated:** April 2025
πŸ§‘β€πŸ’» **Maintained by:** RunAsh AI Labs
πŸ“¬ **Contact:** [email protected]