Vtome.ru - электронная библиотека

The Developer's Playbook for Large Language Model Security

  • Добавил: literator
  • Дата: 5-09-2024, 16:33
  • Комментариев: 0
Название: The Developer's Playbook for Large Language Model Security: Building Secure AI Applications (Final Release)
Автор: Steve Wilson
Издательство: O’Reilly Media, Inc.
Год: 2024
Страниц: 378
Язык: английский
Формат: pdf, epub, mobi
Размер: 10.1 MB

Large Language Models (LLMs) are not just shaping the trajectory of Artificial Intelligence (AI), they're also unveiling a new era of security challenges. This practical book takes you straight to the heart of these threats. Author Steve Wilson, chief product officer at Exabeam, focuses exclusively on LLMs, eschewing generalized AI security to delve into the unique characteristics and vulnerabilities inherent in these models.

Complete with collective wisdom gained from the creation of the OWASP Top 10 for LLMs list—a feat accomplished by more than 400 industry experts—this guide delivers real-world guidance and practical strategies to help developers and security teams grapple with the realities of LLM applications. Whether you're architecting a new application or adding AI features to an existing one, this book is your go-to resource for mastering the security landscape of the next frontier in AI.

You'll learn:

Why LLMs present unique security challenges
How to navigate the many risk conditions associated with using LLM technology
The threat landscape pertaining to LLMs and the critical trust boundaries that must be maintained
How to identify the top risks and vulnerabilities associated with LLMs
Methods for deploying defenses to protect against attacks on top vulnerabilities
Ways to actively manage critical trust boundaries on your systems to ensure secure execution and risk minimization

Who Should Read This Book:
The primary audience for this book is development teams that are building custom applications that embed LLM technologies. Through my recent work in this area, I’ve come to understand that these teams are often large and their members include an incredibly diverse set of backgrounds. These include software developers skilled in “web app” technologies who are taking their first steps with AI. These teams may also consist of AI experts who are bringing their craft out of the back office for the first time and into the limelight, where the security risks are much different. They also include application security pros and data science specialists.

Beyond that core audience, I’ve learned that others have found much of this information useful. This includes the extended teams involved in these projects, who want to understand the underpinnings of the technologies to help mitigate the critical risks of adopting these new technologies. These include software development executives, chief information security officers (CISOs), quality engineers, and security operations teams.

Chapter 1, “Chatbots Breaking Bad”, walks through a real-world case study whereby amateur hackers destroyed an expensive and promising chatbot project from one of the world’s largest software companies. This will set the stage for your forthcoming battles in this arena.

Chapter 2, “The OWASP Top 10 for LLM Applications”, introduces a project I founded in 2023 that aims to identify and address the unique security challenges posed by LLMs. The knowledge gained working on that directly led to my writing this book.

Chapter 3, “Architectures and Trust Boundaries”, explores the structure of applications using LLMs, emphasizing the importance of controlling the various data flows within the application.

Chapter 4, “Prompt Injection”, explores how attackers can manipulate LLMs by crafting specific inputs that cause them to perform unintended actions.

Chapter 5, “Can Your LLM Know Too Much?”, dives into the risks of sensitive information leakage, showcasing how LLMs can inadvertently expose data they’ve been trained on and how to safeguard against this vulnerability.

Chapter 6, “Do Language Models Dream of Electric Sheep?”, examines the unique phenomenon of “hallucinations” in LLMs—instances where models generate false or misleading information.

Chapter 7, “Trust No One”, focuses on the principle of zero trust, explaining the importance of not taking any output at face value and ensuring rigorous validation processes are in place to handle LLM outputs.

Chapter 8, “Don’t Lose Your Wallet”, tackles the economic risks of deploying LLM technologies, focusing on denial-of-service (DoS), denial-of-wallet (DoW), and model cloning attacks. These threats exploit similar vulnerabilities to impose financial burdens, disrupt services, or steal intellectual property.

Chapter 9, “Find the Weakest Link”, highlights the vulnerabilities within the software supply chain and the critical steps needed to secure it from potential breaches that could compromise the entire application.

In Chapter 10, “Learning from Future History”, I’ll use some famous science fiction anecdotes to illustrate how multiple weaknesses and design issues can stitch together to spell disaster. By explaining these futuristic case studies, I hope to help you prevent a future like this from ever occurring.

In Chapter 11, “Trust the Process”, we’ll get down to the serious business of building LLM-savvy security practices into your software factory—without this, I do not believe you can successfully secure this type of software at scale.

Finally, in Chapter 12, “A Practical Framework for Responsible AI Security”, we’ll examine the trajectory of LLM and AI technologies to see where they’re taking us and the likely implications to security and safety requirements. I’ll also introduce you to the Responsible Artificial Intelligence Software Engineering (RAISE) framework that will give you a simple, checklist-based approach to ensuring you’re putting into practice the most important tools and lessons to keep your software safe and secure.

Скачать The Developer's Playbook for Large Language Model Security



ОТСУТСТВУЕТ ССЫЛКА/ НЕ РАБОЧАЯ ССЫЛКА ЕСТЬ РЕШЕНИЕ, ПИШИМ СЮДА!










ПРАВООБЛАДАТЕЛЯМ


СООБЩИТЬ ОБ ОШИБКЕ ИЛИ НЕ РАБОЧЕЙ ССЫЛКЕ



Внимание
Уважаемый посетитель, Вы зашли на сайт как незарегистрированный пользователь.
Мы рекомендуем Вам зарегистрироваться либо войти на сайт под своим именем.
Информация
Посетители, находящиеся в группе Гости, не могут оставлять комментарии к данной публикации.