Title: Neural Networks with Model Compression
Author: Baochang Zhang, Tiancheng Wang, Sheng Xu, David Doermann
Publisher: Springer
Series: Computational Intelligence Methods and Applications
Year: 2024
Pages: 267
Language: English
Format: pdf (true), epub
Size: 34.3 MB
Deep Learning has achieved impressive results in image classification, computer vision, and natural language processing (NLP). To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) grows dramatically with network size, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, this book focuses on CNN compression and acceleration, which are important topics for the research community. It describes numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to build neural networks automatically by searching over a vast architecture space. The book also introduces NAS because of its state-of-the-art performance in applications such as image classification and object detection, and it describes extensive applications of compressed deep models to image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Readers should have basic knowledge of Machine Learning and Deep Learning to follow the methods described in this book.
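Of the compression techniques listed above, parameter quantization is perhaps the simplest to illustrate. The sketch below shows minimal symmetric 8-bit post-training quantization of a weight tensor with NumPy; the function names and the single-scale-per-tensor scheme are illustrative assumptions, not the book's specific method.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Symmetric per-tensor quantization: map float weights to signed ints.

    Illustrative sketch only; real schemes may use per-channel scales,
    asymmetric zero points, or quantization-aware training.
    """
    qmax = 2 ** (num_bits - 1) - 1        # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax      # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize_weights(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.0], [0.25, 0.75]], dtype=np.float32)
q, scale = quantize_weights(w)
w_hat = dequantize_weights(q, scale)
# int8 storage takes 4x less memory than float32, at a small accuracy cost
```

Pruning, low-rank decomposition, and distillation trade accuracy for size in analogous ways, but each requires retraining or fine-tuning to recover most of the lost accuracy.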
Deep Learning is a subset of Machine Learning that focuses on developing and applying artificial neural networks with multiple layers, known as deep neural networks. It is inspired by the structure and function of the human brain, specifically the interconnectedness of neurons. A deep neural network comprises multiple layers of interconnected artificial neurons, called units or nodes: an input layer, one or more hidden layers, and an output layer. Each unit receives input signals, applies a mathematical transformation to them, and produces an output signal that is passed to the next layer. The weights associated with the connections between units determine the strength and impact of the signals.
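The per-unit computation described above (a weighted sum of inputs followed by a transformation) can be sketched in a few lines. The ReLU activation chosen here is one common option, an assumption for illustration rather than anything the text prescribes.

```python
import numpy as np

def unit_forward(x, w, b):
    """One artificial unit: weighted sum of inputs plus bias, then ReLU."""
    z = np.dot(w, x) + b   # each input's impact is set by its weight
    return max(0.0, z)     # ReLU nonlinearity: pass only positive signals

x = np.array([1.0, 2.0])      # input signals from the previous layer
w = np.array([0.5, -0.25])    # connection weights
b = 0.1                       # bias term
y = unit_forward(x, w, b)     # output signal passed to the next layer
```

Stacking many such units per layer, and many layers in sequence, yields the deep networks the passage describes.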
Deep Learning models can have many architectures, depending on the task and data being addressed. Common architectures include feedforward neural networks, convolutional neural networks (CNNs) for image analysis, recurrent neural networks (RNNs) for sequence data, and transformers for natural language processing (NLP) tasks. Deep Learning has revolutionized the field of Artificial Intelligence, enabling machines to learn and make intelligent decisions from vast amounts of data. Its ability to learn complex patterns and representations has driven significant advances across many domains.