Title: Deep Learning for Video Understanding
Authors: Zuxuan Wu, Yu-Gang Jiang
Publisher: Springer
Year: 2024
Pages: 194
Language: English
Format: pdf (true), epub
Size: 42.2 MB
This book presents deep learning techniques for video understanding. For deep learning basics, the authors cover machine learning pipelines and notation, along with 2D and 3D convolutional neural networks for spatial and temporal feature learning. For action recognition, they introduce classical frameworks for image classification and then elaborate on both frame-based 2D CNN and clip-based 3D CNN architectures. For action detection, they cover sliding-window and proposal-based detection methods, single-stage and two-stage approaches, and spatial and temporal action localization, followed by an introduction to benchmark datasets. For video captioning, they present language models and show how to perform sequence-to-sequence learning. For unsupervised feature learning, they discuss the motivation for shifting from supervised to unsupervised learning and introduce how to design better surrogate training tasks for learning video representations. Finally, the book introduces recent self-supervised pipelines such as contrastive learning and masked image/video modeling with Transformers. Throughout, the book highlights promising directions, with the aim of promoting future research in video understanding with deep learning.
Convolutional Neural Networks (CNNs) were first introduced in 1989 by Yann LeCun, primarily for handwritten character recognition. CNNs require fewer parameters than fully connected networks and parallelize easily on GPUs, which has made them superior to their peer deep networks. Over time, CNNs have come to dominate a wide range of tasks such as image classification, object detection, semantic segmentation, and instance segmentation. In recent years, researchers have extended CNNs to applications including natural language processing and recommendation systems. This section is organized as follows: first, we explain convolutions in neural networks, followed by an introduction to pooling, a common operation in CNNs; lastly, we present the background and detailed architectures of five classic CNNs in chronological order.
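To make the two operations mentioned above concrete, here is a minimal NumPy sketch of a valid (no-padding) 2D convolution — implemented as cross-correlation, the convention in deep learning frameworks — followed by non-overlapping max pooling. The input, the edge-detection kernel, and the window size are illustrative choices, not taken from the book:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output element is the sum of an elementwise product
            # between the kernel and the window it covers.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]  # drop rows/cols that don't fit
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy example: a 6x6 input and a simple 3x3 vertical-edge kernel.
x = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, 0.0, -1.0]] * 3)
feat = conv2d(x, edge)    # feature map of shape (4, 4)
pooled = max_pool(feat)   # pooled map of shape (2, 2)
```

Note how the spatial size shrinks at each stage: the valid convolution maps a 6x6 input to a 4x4 feature map, and 2x2 pooling halves that to 2x2 — the same weight-sharing and downsampling pattern the classic CNNs in this section rely on.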
1. Overview of Video Understanding
2. Deep Learning Basics for Video Understanding
3. Deep Learning for Action Recognition
4. Deep Learning for Video Localization
5. Deep Learning for Video Captioning
6. Unsupervised Feature Learning for Video Understanding
7. Efficient Video Understanding
8. Conclusion and Future Directions