Title: Adversarial Machine Learning: Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence
Authors: Aneesh Sreevallabh Chivukula, Xinghao Yang, Bo Liu
Publisher: Springer
Year: 2023
Pages: 314
Language: English
Format: pdf (true), epub
Size: 10.2 MB
A critical challenge in Deep Learning is the vulnerability of Deep Learning networks to security attacks from intelligent cyber adversaries. Even innocuous perturbations to the training data can be used to manipulate the behaviour of deep networks in unintended ways. In this book, we review the latest developments in adversarial attack technologies in computer vision, natural language processing, and cybersecurity with regard to multidimensional, textual and image data, sequence data, and temporal data. In turn, we assess the robustness properties of Deep Learning networks to produce a taxonomy of adversarial examples that characterises the security of learning systems using game theoretical adversarial Deep Learning algorithms. The state of the art in adversarial perturbation-based privacy protection mechanisms is also reviewed.
We propose new adversary types for game theoretical objectives in non-stationary computational learning environments. Proper quantification of the hypothesis set in the decision problems of our research leads to various functional problems, oracular problems, sampling tasks, and optimization problems. We also address the defence mechanisms currently available for deep learning models deployed in real-world environments. The learning theories used in these defence mechanisms concern data representations, feature manipulations, misclassification costs, sensitivity landscapes, distributional robustness, and complexity classes of the adversarial deep learning algorithms and their applications.
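The game theoretical framing above can be illustrated with a minimal zero-sum game between a learner and an adversary. The payoff matrix below is invented for the sketch and is not taken from the book; it merely shows how a worst-case (maximin) security level is computed:

```python
import numpy as np

# Illustrative payoff matrix: rows = learner strategies, columns = adversary
# strategies; entries = the learner's accuracy under that attack (made up).
payoff = np.array([[0.9, 0.2],
                   [0.6, 0.5]])

# Maximin security level: the adversary best-responds with the worst column
# for each row; the learner picks the row whose worst case is highest.
worst_case = payoff.min(axis=1)          # adversary's best response per row
learner_choice = int(worst_case.argmax())

print(learner_choice, worst_case[learner_choice])   # row 1, value 0.5
```

Here the learner prefers the robust strategy (row 1) even though row 0 has a higher best-case payoff, which is the essential trade-off in game theoretical adversarial learning.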
In closing, we propose future research directions in adversarial deep learning applications for resilient learning system design and review formalized learning assumptions concerning the attack surfaces and robustness characteristics of artificial intelligence applications so as to deconstruct the contemporary adversarial deep learning designs. Given its scope, the book will be of interest to Adversarial Machine Learning practitioners and Adversarial Artificial Intelligence researchers whose work involves the design and application of Adversarial Deep Learning.
A significant robustness gap exists between machine intelligence and human perception despite recent advances in deep learning. Deep learning is not provably secure. A critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from malicious adversaries. Even innocuous perturbations to the training data can be used to manipulate the behavior of the deep network in unintended ways. For example, autonomous AI agents in unmanned autonomous systems such as self-driving vehicles can play multistage cyber deception games with the learning algorithms. Adversarial deep learning algorithms are specifically designed to exploit such vulnerabilities in deep networks. These vulnerabilities are simulated by training the learning algorithm under various attack scenarios. The attack scenarios are assumed to be formulated by an intelligent adversary, and the optimal attack policy is obtained by solving optimization problems. The attack scenarios have led to the development of adversarial attack technologies in computer vision, natural language processing, and cybersecurity for multidimensional, textual and image data, sequence data, and spatial data.
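The idea that a small perturbation aligned with the loss gradient degrades a model can be sketched with a fast-gradient-sign-style attack on a hand-rolled logistic regression (a simplified stand-in for the deep networks discussed; the data, step sizes, and epsilon are assumptions for this sketch, not the book's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (illustrative only).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])          # labels in {-1, +1}

# Train a plain logistic-regression "victim" model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(200):
    margins = y * (X @ w + b)
    grad_coef = -y / (1 + np.exp(margins))          # d(loss)/d(margin) per sample
    w -= 0.1 * (grad_coef @ X) / len(X)
    b -= 0.1 * grad_coef.mean()

def loss(x, label):
    # Logistic loss of a single input under the trained model.
    return np.log1p(np.exp(-label * (x @ w + b)))

# Attack: nudge the input along the sign of the loss gradient w.r.t. the
# input itself, so the perturbation maximally increases the loss per unit
# of max-norm budget.
x, label = X[0], y[0]
grad_x = -label * w / (1 + np.exp(label * (x @ w + b)))
x_adv = x + 0.5 * np.sign(grad_x)                   # epsilon = 0.5

print(loss(x, label), loss(x_adv, label))           # adversarial loss is larger
```

For a linear model this perturbation provably increases the loss, which is the first-order intuition behind gradient-sign attacks on deep networks as well.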
In discriminative learning models, adversarial learning problems are formulated with deep neural networks computing statistical divergence metrics between training data features and adversarial data features. The latent space of high-dimensional training data can also be searched by deep networks to construct adversarial examples. Depending on the goal, knowledge, and capability of an adversary, adversarial examples can be crafted by prior knowledge, observation, and experimentation on the loss functions in deep learning. Adversarial examples are known to transfer between data-specific manifolds of deep learning models. Thus, the predictive performance of deep learning models under attack is an interesting area of research. Randomized adversarial algorithms for discrimination can be extended with trade-offs among efficiency, complexity, reliability, learnability, and related criteria in the game theoretical optimization. The resultant convergence properties of game theoretical optima can be investigated with adaptive dynamic programming to produce numerical computational methods for adversarial deep learning.
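The divergence-metric idea can be illustrated with a histogram-based estimate of the KL divergence between a clean feature sample and a perturbed one (a toy NumPy sketch; the distributions, binning, and smoothing constant are assumptions, not the book's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for a training-feature sample and a perturbed (adversarial) one.
clean_feats = rng.normal(0.0, 1.0, 5000)
adv_feats = rng.normal(0.4, 1.0, 5000)      # shifted by a small perturbation

def kl_divergence(p_sample, q_sample, bins=40, eps=1e-9):
    """Histogram estimate of KL(P || Q) over a shared binning."""
    lo = min(p_sample.min(), q_sample.min())
    hi = max(p_sample.max(), q_sample.max())
    p, _ = np.histogram(p_sample, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_sample, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps                  # smooth away empty bins
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

same = kl_divergence(clean_feats, clean_feats)   # zero for identical samples
shift = kl_divergence(clean_feats, adv_feats)    # strictly positive
```

A detector built on such a statistic flags inputs whose feature distribution diverges from the training distribution, which is one common shape of the divergence-based formulations mentioned above.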
Contents:
1. Adversarial Machine Learning
2. Adversarial Deep Learning
3. Adversarial Attack Surfaces
4. Game Theoretical Adversarial Deep Learning
5. Adversarial Defense Mechanisms for Supervised Learning
6. Physical World Adversarial Attacks on Images and Texts
7. Adversarial Perturbation for Privacy Preservation