

Poster

Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement

Ziyu Wang · Yue Xu · Cewu Lu · Yonglu Li

Arch 4A-E Poster #145
Wed 19 Jun, 5:00–6:30 p.m. PDT

Abstract:

Recently, dataset distillation has paved the way toward efficient machine learning, especially for image datasets. However, distillation for videos, which are characterized by an additional temporal dimension, remains an underexplored domain. In this work, we provide the first systematic study of video distillation and introduce a taxonomy to categorize temporal compression. Our investigation reveals that temporal information is usually not well learned during distillation, and that the temporal dimension of the synthetic data contributes little. These observations motivate our unified framework, which disentangles the dynamic and static information in videos: it first distills the videos into still images as a static memory, then compensates for the dynamic and motion information with a learnable dynamic memory block. Our method achieves state-of-the-art results on video datasets at different scales, with a notably smaller memory storage budget. Our code is available at https://github.com/yuz1wan/video_distillation.
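The abstract does not spell out the architecture, so the following is a minimal sketch of the static-dynamic idea under stated assumptions: the static memory is taken to be a set of learnable still images, and the dynamic memory a learnable per-frame residual shared across samples. The class name StaticDynamicMemory and all shapes and parameters are illustrative, not the authors' implementation; see the linked repository for the actual method.

import torch
import torch.nn as nn

class StaticDynamicMemory(nn.Module):
    # Sketch only. Assumes:
    #   static memory  = learnable still images, one per synthetic sample
    #   dynamic memory = learnable per-frame motion residuals, shared
    # The paper's dynamic memory block may be parameterized differently.
    def __init__(self, num_samples=10, frames=8, channels=3, size=64):
        super().__init__()
        # Static memory: synthetic still images optimized by distillation.
        self.static = nn.Parameter(torch.randn(num_samples, channels, size, size))
        # Dynamic memory: one motion residual per frame (hypothetical form).
        self.dynamic = nn.Parameter(torch.zeros(frames, channels, size, size))

    def forward(self):
        # Broadcast each still image over time and add frame-wise dynamics,
        # yielding synthetic clips of shape (num_samples, frames, C, H, W).
        return self.static.unsqueeze(1) + self.dynamic.unsqueeze(0)

# Usage: generate synthetic clips, compare them to real clips with a
# distillation objective (e.g., gradient or trajectory matching), and
# backpropagate into both memories.
mem = StaticDynamicMemory()
videos = mem()          # (10, 8, 3, 64, 64)
print(videos.shape)

One design point this sketch illustrates: because the per-frame residuals are shared across samples, the motion information is stored once rather than per clip, which is one plausible way a static-dynamic split could yield the smaller storage budget the abstract claims.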
