

Poster

End-to-End Spatio-Temporal Action Localisation with Video Transformers

Alexey Gritsenko · Xuehan Xiong · Josip Djolonga · Mostafa Dehghani · Chen Sun · Mario Lučić · Cordelia Schmid · Anurag Arnab

Arch 4A-E Poster #364
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract:

The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks. We propose a fully end-to-end, transformer-based model that directly ingests an input video and outputs tubelets -- a sequence of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, or post-processing in the form of non-maximal suppression. We perform extensive ablation experiments and significantly advance the state-of-the-art on five different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
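
To make the abstract's notion of a "tubelet" concrete, the sketch below shows one minimal way such an output could be represented: a per-frame sequence of bounding boxes paired with per-frame action-class scores. The Tubelet class, its field names, and the array shapes are illustrative assumptions for exposition, not the authors' implementation or API.

    # Illustrative sketch only: one possible representation of a predicted tubelet,
    # i.e. a sequence of per-frame bounding boxes with per-frame action scores.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Tubelet:
        """One predicted tubelet spanning T frames of a clip (hypothetical structure)."""
        boxes: np.ndarray   # shape (T, 4): per-frame box, e.g. (x_min, y_min, x_max, y_max)
        scores: np.ndarray  # shape (T, C): per-frame scores over C action classes

        def classes_per_frame(self) -> np.ndarray:
            """Most likely action class at each frame, shape (T,)."""
            return self.scores.argmax(axis=-1)

    # Example: a 3-frame tubelet over 5 hypothetical action classes.
    tubelet = Tubelet(boxes=np.random.rand(3, 4), scores=np.random.rand(3, 5))
    print(tubelet.classes_per_frame())

Because the model emits such tubelets directly for the whole clip, there is no separate proposal stage to consume and no non-maximal suppression step to merge overlapping per-frame detections.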
