

Tutorial

Learning Deep Low-dimensional Models from High-Dimensional Data: From Theory to Practice

Qing Qu · Zhihui Zhu · Yuqian Zhang · Yi Ma · Sam Buchanan · Beidi Chen · Mojan Javaheripi · Liyue Shen · Zhangyang Wang

Summit 442
[ Project Page ] [ Slides ]
Tue 18 Jun 9 a.m. PDT — 6 p.m. PDT

Abstract:

Over the past decade, the advent of machine learning and large-scale computing has immeasurably changed the ways we process, interpret, and predict with data in imaging and computer vision. The “traditional” approach to algorithm design, based around parametric models for specific structures of signals and measurements—say sparse and low-rank models—and the associated optimization toolkit, is now significantly enriched with data-driven learning-based techniques, where large-scale networks are pre-trained and then adapted to a variety of specific tasks. Nevertheless, the successes of both modern data-driven and classic model-based paradigms rely crucially on correctly identifying the low-dimensional structures present in real-world data, to the extent that we see the roles of learning and compression of data processing algorithms—whether explicit or implicit, as with deep networks—as inextricably linked.
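As a concrete (hedged) illustration of the "traditional" low-rank paradigm mentioned above — this example is mine, not part of the tutorial materials — a rank-r matrix corrupted by small noise can be recovered by truncated SVD, which by the Eckart–Young theorem gives the best rank-r approximation in Frobenius norm:

```python
import numpy as np

# Illustration (assumed setup): recover a rank-r matrix from a noisy
# observation via truncated SVD, the classic low-rank modeling step.
rng = np.random.default_rng(0)
r, m, n = 2, 50, 40
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
Y = L + 0.01 * rng.standard_normal((m, n))                     # noisy observation

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
L_hat = U[:, :r] * s[:r] @ Vt[:r, :]                           # best rank-r approximation of Y

rel_err = np.linalg.norm(L_hat - L) / np.linalg.norm(L)
print(rel_err)  # small relative error: the low-rank structure is recovered
```

Data-driven methods implicitly exploit the same kind of structure, which is why the tutorial treats the two paradigms as linked.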

As such, this tutorial uniquely and timely bridges low-dimensional models with deep learning in imaging and vision. It will show: 1. How low-dimensional models and principles provide a valuable lens for formulating problems and understanding the behavior of modern deep models in imaging and computer vision; 2. How ideas from low-dimensional models can provide valuable guidance for designing new parameter-efficient, robust, and interpretable deep learning models for computer vision problems in practice.
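To make the second point tangible, here is a minimal sketch (my own, not the tutorial's code) of one way low-rank structure yields parameter efficiency: a frozen dense weight adapted by a trainable low-rank correction, in the style of LoRA-type methods:

```python
import numpy as np

# Minimal sketch (assumed sizes and rank): a frozen pre-trained weight W
# adapted by a trainable low-rank correction A @ B, so only the small
# factors A and B need to be trained.
d_out, d_in, r = 128, 256, 4

rng = np.random.default_rng(1)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((d_out, r)) * 0.01  # trainable low-rank factor
B = np.zeros((r, d_in))                     # zero-init: adaptation starts as identity

def forward(x):
    """Adapted layer: mathematically (W + A @ B) @ x, computed cheaply."""
    return W @ x + A @ (B @ x)

trainable_fraction = (A.size + B.size) / W.size
print(trainable_fraction)  # ~0.047: under 5% of the dense parameter count
```

The low-rank factorization is exactly the kind of low-dimensional principle the abstract refers to: it cuts trainable parameters from `d_out * d_in` to `r * (d_out + d_in)` while leaving the pre-trained model intact.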
