Abstract
Terminal Velocity Matching (TVM) generalizes flow matching for high-fidelity generative modeling, achieving state-of-the-art performance on ImageNet with minimal computational steps.
We propose Terminal Velocity Matching (TVM), a generalization of flow matching that enables high-fidelity one- and few-step generative modeling. TVM models the transition between any two diffusion timesteps and regularizes its behavior at its terminal time rather than at the initial time. We prove that TVM provides an upper bound on the 2-Wasserstein distance between the data and model distributions when the model is Lipschitz continuous. However, since Diffusion Transformers lack this property, we introduce minimal architectural changes that achieve stable, single-stage training. To make TVM efficient in practice, we develop a fused attention kernel that supports backward passes through Jacobian-Vector Products and scales well with transformer architectures. On ImageNet-256x256, TVM achieves 3.29 FID with a single function evaluation (NFE) and 1.99 FID with 4 NFEs. It similarly achieves 4.32 1-NFE FID and 2.94 4-NFE FID on ImageNet-512x512, representing state-of-the-art performance among one- and few-step models trained from scratch.
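The abstract's mention of backward passes through Jacobian-Vector Products can be illustrated with a small sketch. The snippet below is not the paper's implementation: it uses a toy MLP in place of a Diffusion Transformer, and the model signature `u(x, s, t)` is an assumption about how a transition between two diffusion timesteps might be parameterized. It only shows how `torch.func.jvp` yields the model output and its derivative along the terminal time in one forward pass, the quantity whose backward pass the fused attention kernel would need to support.

```python
import torch
import torch.nn as nn

# Toy stand-in for a velocity model u_theta(x, s, t) mapping between
# diffusion timesteps s and t. The real model is a Diffusion Transformer;
# the dimensions and conditioning scheme here are illustrative assumptions.
class ToyVelocityModel(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 2, 64), nn.SiLU(), nn.Linear(64, dim)
        )

    def forward(self, x, s, t):
        # Broadcast the scalar timesteps over the batch and concatenate
        # them as conditioning inputs.
        st = torch.stack([s, t], dim=-1).expand(x.shape[0], 2)
        return self.net(torch.cat([x, st], dim=-1))

model = ToyVelocityModel()
x = torch.randn(4, 8)          # batch of 4 latents of dimension 8
s = torch.tensor(0.3)          # initial time
t = torch.tensor(0.9)          # terminal time

# jvp returns (u, du/dt): the model output together with its directional
# derivative along the terminal time, computed in a single forward pass.
u, dudt = torch.func.jvp(
    lambda t_: model(x, s, t_), (t,), (torch.ones_like(t),)
)
```

A terminal-time regularizer would combine `u` and `dudt` inside the training loss, so gradients must flow through the JVP; that backward-through-JVP is what the paper's fused attention kernel makes efficient at transformer scale.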
Community
Generalizes flow matching to model transitions between diffusion timesteps, enabling single-stage training and state-of-the-art one/few-step image generation with efficient fused attention.
The following related papers were recommended by the Semantic Scholar API:
- Decoupled MeanFlow: Turning Flow Models into Flow Maps for Accelerated Sampling (2025)
- Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency (2025)
- AlphaFlow: Understanding and Improving MeanFlow Models (2025)
- Equilibrium Matching: Generative Modeling with Implicit Energy-Based Models (2025)
- MeanFlow Transformers with Representation Autoencoders (2025)
- Advancing End-to-End Pixel Space Generative Modeling via Self-supervised Pre-training (2025)
- Mirror Flow Matching with Heavy-Tailed Priors for Generative Modeling on Convex Domains (2025)