Title: Practicing Trustworthy Machine Learning: Consistent, Transparent, and Fair AI Pipelines (Final Release)
Author: Yada Pruksachatkun, Matthew McAteer, Subhabrata Majumdar
Publisher: O’Reilly Media, Inc.
Year: 2023
Pages: 303
Language: English
Format: True/Retail PDF, EPUB
Size: 47.0 MB
With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable.
Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets and building models into a blueprint for building industry-grade trusted ML systems. With this book, engineers and data scientists will gain a much-needed foundation for releasing trustworthy ML applications into a noisy, messy, and often hostile world.
Why we wrote this book: We live in a world where Machine Learning (ML) systems are used in increasingly high-stakes domains like medicine, law, and defense. Model decisions can result in economic gains or losses in the millions or billions of dollars. Because of the high-stakes nature of their decisions and consequences, it is important for these ML systems to be trustworthy. This can be a problem when the ML systems are not secure, may fail unpredictably, have notable performance disparities across sample groups, and/or struggle to explain their decisions. We wrote this book to help your ML models stand up on their own in the real world.
As people who have both conducted research in ML and worked on ML systems that have been successfully deployed, we've noticed that the gap between building an initial ML model on a static dataset and deploying it is large. A major part of this gap is a lack of trustworthiness. There are many ways in which ML models that work in development can fail in production. Many large companies have dedicated responsible AI and safety teams to analyze the potential risks and consequences of both their current and potential future ML systems. Unfortunately, the vast majority of teams and companies using ML do not have the bandwidth to do this. Even where such teams exist, they are often under-resourced, and the company as a whole may be moving too fast for the safety team to keep up, for fear that a competitor will release a similar model first. We wrote this book to lower the barrier to entry to making ML systems more trustworthy.
You'll learn:
- Methods to explain ML models and their outputs to stakeholders
- How to recognize and fix fairness concerns and privacy leaks in an ML pipeline
- How to develop ML systems that are robust and secure against malicious attacks
- Important systemic considerations, like how to manage trust debt and which ML obstacles require human intervention
Who this book is for: This book is written for anyone who is currently working with Machine Learning models and wants to be sure that the fruits of their labor will not cause unintended harm when released into the real world. The primary audience of the book is engineers and data scientists who have some familiarity with Machine Learning. Parts of the book should be accessible to non-engineers, such as product managers and executives with a conceptual understanding of ML. Some of you may be building ML systems that make higher-stakes decisions than those you encountered in your previous job or in academia. We assume you are familiar with the very basics of Deep Learning, and with Python for the code samples.
Download Practicing Trustworthy Machine Learning: Consistent, Transparent, and Fair AI Pipelines (Final Release)