Title: Binary Representation Learning on Visual Images: Learning to Hash for Similarity Search
Author: Zheng Zhang
Publisher: Springer
Year: 2024
Pages: 210
Language: English
Format: pdf (true), epub
Size: 50.2 MB
This book introduces pioneering developments in binary representation learning on visual images, a state-of-the-art data transformation methodology within the fields of Machine Learning and multimedia. Binary representation learning, often known as learning to hash or simply hashing, excels at converting high-dimensional data into compact binary codes while preserving semantic attributes and maintaining similarity measurements.
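To make the core idea concrete, here is a minimal sketch using a classical random-hyperplane LSH baseline (SimHash), not one of the learned methods presented in the book: the sign pattern of random projections yields a binary code whose Hamming distance approximates the angular similarity of the original vectors.

```python
# Minimal sketch: random-hyperplane hashing (SimHash), a classical LSH
# baseline, NOT a learned method from the book. Hamming distance between
# the codes approximates the angular similarity of the original vectors.
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 512, 64                    # feature dimension, code length
W = rng.standard_normal((dim, n_bits))   # random hyperplane normals

def hash_code(x):
    """Map a real-valued vector to an n_bits {0,1} binary code."""
    return (x @ W > 0).astype(np.uint8)

x = rng.standard_normal(dim)
y = x + 0.3 * rng.standard_normal(dim)   # a vector similar to x

hx, hy = hash_code(x), hash_code(y)
hamming = np.count_nonzero(hx != hy)
cosine = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(f"cosine={cosine:.3f}, Hamming distance={hamming}/{n_bits}")
```

Similar inputs agree on most bits, so nearest-neighbor search reduces to cheap bitwise operations; the learned hashing methods surveyed in the book aim to do this far better than such data-oblivious projections.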
In this book, we provide a comprehensive introduction to the theories, algorithms, and applications that cover the latest research in hashing-based visual image retrieval, with a focus on binary representations. These representations are crucial for fast and reliable feature extraction and similarity assessment on large-scale data. The book offers an insightful analysis of research methodologies in binary representation learning for visual images, ranging from basic shallow hashing and advanced high-order similarity-preserving hashing to deep hashing and adversarial, robust deep hashing techniques. These approaches empower readers to grasp the fundamental principles of both traditional and state-of-the-art methods in binary representation, modeling, and learning. The theories and methodologies of binary representation learning expounded in this book will benefit readers from diverse domains such as Machine Learning, multimedia, social network analysis, web search, information retrieval, data mining, and others.
This book comprises seven chapters: an introductory chapter and four technical parts. In Chap. 1, we introduce the research background, formalize binary representation learning, and lay out the overall storyline of the book. Part 1, comprising Chaps. 2 and 3, introduces asymmetric discrete hashing with the learning of multiple discriminative hashing functions. Specifically, Chap. 2 focuses on scalable supervised asymmetric hash code learning with maximum inner product search. Chapter 3 bridges trilateral domain gaps to perform inductive structure-consistent hashing via asymmetric hashing function construction.
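A rough illustration of the asymmetric idea behind maximum inner product search follows (a hypothetical simplification, not Chap. 2's algorithm; the sign-based codes below stand in for learned ones): database items are compressed to binary codes, while the query keeps its real-valued representation, so the query side loses no precision.

```python
# Hypothetical simplification of asymmetric hashing for maximum inner
# product search: only the database is binarized; the query stays real.
import numpy as np

rng = np.random.default_rng(1)
dim, n_items = 128, 10_000

database = rng.standard_normal((n_items, dim))
codes = np.sign(database).astype(np.int8)  # stand-in for learned codes

query = rng.standard_normal(dim)           # the query is NOT binarized

# Asymmetric score: inner product between the real query and binary codes.
scores = codes @ query
top5 = np.argsort(-scores)[:5]
print("top-5 item ids:", top5)
```

Keeping one side of the comparison real-valued is what makes the scheme "asymmetric": quantization error accrues only on the stored codes, never on the query.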
Part 2 considers high-level ordinal-preserving hashing for fast similarity search and comprises Chaps. 4 and 5. Specifically, Chap. 4 presents an ordinal-preserving hashing concept grounded in non-parametric Bayesian theory, which maximally explores sample-level high-order similarities during space transformation. In contrast, instead of using pointwise or pairwise sample relations in the visual space, Chap. 5 explores the intrinsic latent feature space by capturing the underlying topological feature structure of the data for ordinal-preserving latent graph hashing. Together, these chapters build two ordinal-preserving hashing methodologies, operating at the sample level and the feature level, respectively.
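To make "ordinal preservation" concrete, here is a toy sanity check (an illustration of the property itself, not the book's Bayesian or graph formulations; the SimHash-style codes are again a stand-in for learned ones): for triplets (q, a, b) where q is closer to a than to b in the input space, count how often the Hamming ranking of the codes agrees.

```python
# Toy check of ordinal preservation (an illustration, not the book's
# models): how often does the Hamming ranking of codes agree with the
# Euclidean ranking of the original vectors over random triplets?
import numpy as np

rng = np.random.default_rng(2)
dim, n_bits, n = 256, 48, 500

W = rng.standard_normal((dim, n_bits))
X = rng.standard_normal((n, dim))
B = X @ W > 0                              # boolean SimHash-style codes

preserved, trials = 0, 2000
for _ in range(trials):
    q, a, b = rng.choice(n, size=3, replace=False)
    in_input = np.linalg.norm(X[q] - X[a]) < np.linalg.norm(X[q] - X[b])
    in_hamming = np.count_nonzero(B[q] ^ B[a]) < np.count_nonzero(B[q] ^ B[b])
    preserved += in_input == in_hamming
print(f"triplet order preserved: {preserved / trials:.1%}")
```

An ordinal-preserving method is one that drives this agreement rate toward 100% for the triplets that matter to retrieval.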
Part 3, consisting of Chap. 6, presents deep collaborative learning for semantic-invariant hash code generation; Chap. 6 jointly considers multi-level semantics and latent space construction in discriminative hash code learning. Part 4, consisting of Chap. 7, uncovers the adversarial vulnerability of deep hashing networks and provides a feasible adversarial training solution for reliable deep hashing-based visual similarity search. We instantiate the targeted attack on deep hashing in Chap. 7 and provide a unified adversarial training scheme to improve the adversarial robustness of deep hashing networks.
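The vulnerability Part 4 studies can be sketched with a generic, untargeted FGSM-style perturbation (Chap. 7 itself develops targeted attacks and a defense; the tiny network below is a placeholder, not the book's architecture): a small input change flips bits of the predicted hash code, which silently corrupts Hamming-space retrieval.

```python
# Generic untargeted FGSM-style sketch against a hashing network
# (illustrates the vulnerability only; not Chap. 7's attack or defense).
import torch
import torch.nn as nn

torch.manual_seed(0)
n_bits = 32

# Placeholder hashing network: image -> n_bits real outputs; sign() gives
# the binary code, tanh() keeps the attack objective differentiable.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, n_bits))

image = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    clean_code = torch.sign(model(image))  # code of the clean image

# Maximize disagreement with the clean code (gradient ascent on the loss).
image_adv = image.clone().requires_grad_(True)
loss = -(torch.tanh(model(image_adv)) * clean_code).mean()
loss.backward()

epsilon = 8 / 255                          # small L-infinity budget
perturbed = (image + epsilon * image_adv.grad.sign()).clamp(0, 1)

with torch.no_grad():
    adv_code = torch.sign(model(perturbed))
flipped = (adv_code != clean_code).sum().item()
print(f"bits flipped by the perturbation: {flipped}/{n_bits}")
```

Adversarial training, as advocated in Part 4, augments training with such perturbed inputs so that the learned codes stay stable under small attacks.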
Download Binary Representation Learning on Visual Images: Learning to Hash for Similarity Search