Video interpolation increases the temporal resolution of a video sequence by synthesizing intermediate frames between two consecutive frames. We propose a novel deep-learning-based video interpolation algorithm based on bilateral motion estimation. First, we develop the bilateral motion network with the bilateral cost volume to estimate bilateral motions accurately. Then, we approximate bi-directional motions to predict a different type of bilateral motions. We warp the two input frames using the estimated bilateral motions. Next, we develop the dynamic filter generation network to yield dynamic blending filters. Finally, we combine the warped frames using the dynamic blending filters to generate intermediate frames. Experimental results show that the proposed algorithm outperforms state-of-the-art video interpolation algorithms on several benchmark datasets.
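At a high level, the pipeline estimates bilateral motion fields toward the intermediate time instance, warps both input frames with them, and blends the warped frames using per-pixel dynamic filters. The sketch below illustrates only the warping and dynamic-blending steps in PyTorch; the function names (`backward_warp`, `blend_with_dynamic_filters`), the kernel size `k`, and the assumption that the filters are already softmax-normalized are illustrative choices, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def backward_warp(frame, flow):
    """Warp a frame toward the intermediate time using a per-pixel motion field.

    frame: (B, C, H, W) input frame; flow: (B, 2, H, W) displacement in pixels.
    (Hypothetical helper; the released BMBC code is organized differently.)
    """
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates (x, y).
    gy, gx = torch.meshgrid(torch.arange(h, dtype=frame.dtype),
                            torch.arange(w, dtype=frame.dtype), indexing="ij")
    grid = torch.stack((gx, gy), dim=0).unsqueeze(0).to(frame.device)  # (1, 2, H, W)
    coords = grid + flow                           # displaced sampling locations
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((x, y), dim=-1)      # (B, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)


def blend_with_dynamic_filters(warped0, warped1, filters, k=5):
    """Blend two warped frames with per-pixel (dynamic) filters of size k x k.

    filters: (B, 2*k*k, H, W), assumed softmax-normalized over the 2*k*k axis.
    """
    b, c, h, w = warped0.shape
    # Gather k x k neighborhoods around every pixel of both warped frames.
    patches0 = F.unfold(warped0, k, padding=k // 2).view(b, c, k * k, h, w)
    patches1 = F.unfold(warped1, k, padding=k // 2).view(b, c, k * k, h, w)
    patches = torch.cat((patches0, patches1), dim=2)   # (B, C, 2*k*k, H, W)
    weights = filters.view(b, 1, 2 * k * k, h, w)
    # Weighted sum over both neighborhoods yields the intermediate frame.
    return (patches * weights).sum(dim=2)
```

In use, the warped inputs would come from `backward_warp` applied to each input frame with its estimated bilateral motion field, and the blending filters would be produced by the dynamic filter generation network described in the paper.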
Junheum Park, Keunsoo Ko, Chul Lee, and Chang-Su Kim,
"BMBC: Bilateral Motion Estimation with Bilateral Cost Volume for Video Interpolation," accepted to Proceedings of European Conference on Computer Vision (ECCV), 2020.
[arxiv] [code]