MeanFlow-Accelerated Multimodal Video-to-Audio Synthesis via One-Step Generation
Abstract
A key challenge in synthesizing audio from silent videos is the inherent trade-off between synthesis quality and inference efficiency in existing methods. Take flow-matching-based models as an example: because they model instantaneous velocity, they inherently require an iterative sampling process, which leads to slow inference. To address this efficiency bottleneck, we introduce a MeanFlow-accelerated model that characterizes the flow field with average velocity, enabling one-step generation and thereby significantly accelerating multimodal video-to-audio (VTA) synthesis while preserving audio quality, semantic alignment, and temporal synchronization. Furthermore, a scalar rescaling mechanism balances conditional and unconditional predictions when classifier-free guidance (CFG) is applied, effectively mitigating CFG-induced distortions in one-step generation. Since the audio synthesis network is trained jointly with multimodal conditions, we also evaluate it on the text-to-audio (TTA) synthesis task. The results demonstrate that incorporating MeanFlow into the network significantly improves inference speed without compromising perceptual quality on both the VTA and TTA tasks.
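To make the distinction concrete, the sketch below contrasts one-step sampling from an average-velocity (MeanFlow-style) field, z_r = z_t − (t − r)·u(z_t, r, t), with iterative Euler sampling along the instantaneous velocity used by flow matching. It is a minimal toy, not the paper's implementation: the network is replaced by a closed-form linear interpolation path whose average velocity is known exactly, and all names (`mean_velocity`, `X0`, `EPS`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.standard_normal(4)   # toy "data" sample (e.g. an audio latent)
EPS = rng.standard_normal(4)  # Gaussian noise endpoint

def mean_velocity(z, r, t):
    # Toy stand-in for a learned average-velocity network u(z, r, t).
    # On the linear path z_t = (1 - t) * X0 + t * EPS the instantaneous
    # velocity is the constant EPS - X0, so the average velocity over any
    # interval [r, t] is the same constant. A real model would predict
    # this from (z, r, t) alone.
    return EPS - X0

z1 = EPS  # start from pure noise at t = 1

# One-step MeanFlow sampling: z_r = z_t - (t - r) * u(z_t, r, t) with r=0, t=1.
x_onestep = z1 - (1.0 - 0.0) * mean_velocity(z1, 0.0, 1.0)

# Iterative flow-matching (Euler) sampling with N steps for comparison;
# here u(z, t, t) reduces to the instantaneous velocity.
N = 25
z = z1.copy()
for i in range(N):
    t = 1.0 - i / N
    z = z - (1.0 / N) * mean_velocity(z, t, t)
x_iterative = z

# On this constant-velocity toy path both recover X0; the point is that
# the average-velocity form needs one network evaluation instead of N.
print(np.allclose(x_onestep, X0), np.allclose(x_iterative, X0))
```

On a real, curved flow field the iterative sampler incurs discretization error that shrinks with N, while the average-velocity parameterization folds the whole trajectory into a single update, which is what enables the one-step speedup described above.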

Comparison of (a) the previous flow-matching-based method and (b) the proposed MeanFlow-accelerated method
Audio Samples
Video-to-Audio Synthesis Task (Videos from VGGSound)
Ground Truth
Frieren (step = 1)
MF-MJT (step = 1)
MMAudio (step = 25)
Frieren (step = 25)
MF-MJT (step = 25)
Ground Truth
Frieren (step = 1)
MF-MJT (step = 1)
MMAudio (step = 25)
Frieren (step = 25)
MF-MJT (step = 25)
Ground Truth
Frieren (step = 1)
MF-MJT (step = 1)
MMAudio (step = 25)
Frieren (step = 25)
MF-MJT (step = 25)
Ground Truth
Frieren (step = 1)
MF-MJT (step = 1)
MMAudio (step = 25)
Frieren (step = 25)
MF-MJT (step = 25)
Ground Truth
Frieren (step = 1)
MF-MJT (step = 1)
MMAudio (step = 25)
Frieren (step = 25)
MF-MJT (step = 25)