Vtome.ru — digital library

Practicing Trustworthy Machine Learning: Consistent, Transparent, and Safe AI Pipelines (Second Early Release)

  • Added by: literator
  • Date: 19-10-2022, 02:11
  • Comments: 0
Title: Practicing Trustworthy Machine Learning: Consistent, Transparent, and Safe AI Pipelines (Second Early Release)
Author: Yada Pruksachatkun, Matthew McAteer, Subhabrata Majumdar
Publisher: O’Reilly Media, Inc.
Year: 2022-10-14
Pages: 313
Language: English
Format: epub (true), mobi
Size: 47.3 MB

With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable.

Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating datasets and building models into a blueprint for building industry-grade trusted ML systems. With this book, engineers and data scientists will gain a much-needed foundation for releasing trustworthy ML applications into a noisy, messy, and often hostile world.

Why we wrote this book:
As people who have both conducted research in ML and worked on ML systems that have been successfully deployed, we've noticed that there is a large gap between building an initial ML model on a static dataset and deploying it. A major part of this gap is a lack of trustworthiness. There are many ways in which ML models that work in development can fail in production. Many large companies have dedicated responsible AI and safety teams to analyze the potential risks and consequences of both their current and potential future ML systems. Unfortunately, the vast majority of teams and companies using ML do not have the bandwidth to do this. Even where such teams exist, they are often under-resourced, and the company as a whole may be moving too fast for the safety team to keep up, for fear that a competitor will release a similar model first. We wrote this book to lower the barrier to entry to making ML systems more trustworthy.

You'll learn:

Methods to explain ML models and their outputs to stakeholders
How to recognize and fix fairness concerns and privacy leaks in an ML pipeline
How to develop ML systems that are robust and secure against malicious attacks
Important systemic considerations, like how to manage trust debt and which ML obstacles require human intervention
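As a small taste of the fairness topics in that list, the sketch below (not taken from the book) computes a common fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions, group labels, and binary-group assumption are all illustrative.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array-like of 0/1 predictions
    group  : array-like of 0/1 group membership labels
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Illustrative data: the model flags group 1 positive more often.
preds  = [1, 0, 1, 0, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.25
```

A value of 0 means both groups receive positive predictions at the same rate; libraries such as Fairlearn provide hardened versions of metrics like this for real pipelines.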

Who this book is for:
This book is written for anyone who is currently working with machine learning models and wants to be sure that the fruits of their labor will not cause unintended harm when released into the real world. The primary audience of the book is engineers and data scientists who have some familiarity with machine learning. Parts of the book should be accessible to non-engineers, such as product managers and executives with a conceptual understanding of ML. Some of you may be building ML systems that make higher-stakes decisions than you encountered in your previous job or in academia. We assume you are familiar with the very basics of deep learning, and with Python for the code samples.

Download Practicing Trustworthy Machine Learning (Second Early Release)
