JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation
Abstract
This paper presents JavisGPT, the first unified multimodal large language model (MLLM) for Joint Audio-Video (JAV) comprehension and generation. JavisGPT adopts a concise encoder-LLM-decoder architecture, featuring a SyncFusion module for spatio-temporal audio-video fusion and synchrony-aware learnable queries to bridge a pretrained JAV-DiT generator. This design enables temporally coherent video-audio understanding and generation from multimodal instructions. We design an effective three-stage training pipeline consisting of multimodal pretraining, audio-video fine-tuning, and large-scale instruction-tuning, to progressively build multimodal comprehension and generation from existing vision-language models. To support this, we further construct JavisInst-Omni, a high-quality instruction dataset with over 200K GPT-4o-curated audio-video-text dialogues that span diverse and multi-level comprehension and generation scenarios. Extensive experiments on JAV comprehension and generation benchmarks show that JavisGPT outperforms existing MLLMs, particularly in complex and temporally synchronized settings.
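The abstract describes an encoder-LLM-decoder pipeline in which a SyncFusion module fuses per-timestep audio and video features and synchrony-aware learnable queries condition a pretrained JAV-DiT generator. The internals of SyncFusion and the queries are not given in the abstract, so the sketch below is a minimal NumPy stand-in under assumed shapes and a single cross-attention step, purely to illustrate the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 8, 64   # timesteps and feature dim (illustrative values, not from the paper)
N_Q = 16       # number of synchrony-aware learnable queries (assumed)

# Per-timestep video and audio features, standing in for frozen encoder outputs.
video_feats = rng.standard_normal((T, D))
audio_feats = rng.standard_normal((T, D))

def sync_fuse(v, a, w):
    """Toy stand-in for SyncFusion: project concatenated per-timestep
    video/audio features back to D so tokens stay temporally aligned."""
    return np.concatenate([v, a], axis=-1) @ w   # (T, 2D) @ (2D, D) -> (T, D)

w_fuse = rng.standard_normal((2 * D, D)) / np.sqrt(2 * D)
fused = sync_fuse(video_feats, audio_feats, w_fuse)          # (T, D)

# Learnable queries cross-attend over the fused token sequence to produce a
# fixed-size conditioning signal that would be handed to the JAV-DiT decoder.
queries = rng.standard_normal((N_Q, D)) / np.sqrt(D)
scores = queries @ fused.T / np.sqrt(D)                      # (N_Q, T)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)                     # softmax over time
condition = attn @ fused                                     # (N_Q, D)

print(condition.shape)
```

In the actual model the fused tokens would pass through the LLM before the queries attend to them; this sketch collapses that step to keep the shape bookkeeping visible.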
Community
JavisGPT
We introduce JavisGPT, a multimodal LLM that can understand audio-visual inputs and simultaneously generate synchronized sounding videos within a unified model.
We contribute JavisInst-Omni, a dataset that facilitates diverse and complex instruction tuning for comprehension and generation on sounding videos.
Paper: https://arxiv.org/abs/2503.23377
Project: https://javisverse.github.io/JavisGPT-page/
Code: https://github.com/JavisVerse/JavisGPT
The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- 3MDiT: Unified Tri-Modal Diffusion Transformer for Text-Driven Synchronized Audio-Video Generation (2025)
- DreamFoley: Scalable VLMs for High-Fidelity Video-to-Audio Generation (2025)
- MAViD: A Multimodal Framework for Audio-Visual Dialogue Understanding and Generation (2025)
- ChronusOmni: Improving Time Awareness of Omni Large Language Models (2025)
- VideoPerceiver: Enhancing Fine-Grained Temporal Perception in Video Multimodal Large Language Models (2025)
- UFVideo: Towards Unified Fine-Grained Video Cooperative Understanding with Large Language Models (2025)
- ProAV-DiT: A Projected Latent Diffusion Transformer for Efficient Synchronized Audio-Video Generation (2025)
Models citing this paper: 1
Datasets citing this paper: 4
Spaces citing this paper: 0