Title: Distributional Reinforcement Learning
Authors: Marc G. Bellemare, Will Dabney, Mark Rowland
Publisher: The MIT Press
Series: Adaptive Computation and Machine Learning
Year: 2023
Pages: 379
Language: English
Format: epub (true)
Size: 13.4 MB
The first comprehensive guide to Distributional Reinforcement Learning, providing a new mathematical formalism for thinking about decisions from a probabilistic perspective.
Distributional Reinforcement Learning is a new mathematical formalism for thinking about decisions. Going beyond the standard expected-value approach to reinforcement learning, it focuses on the total reward, or return, obtained as a consequence of an agent's choices, and specifically on how this return behaves from a probabilistic perspective. In this first comprehensive guide to distributional reinforcement learning, Marc G. Bellemare, Will Dabney, and Mark Rowland, who spearheaded development of the field, present its key concepts and review some of its many applications. They demonstrate its power to account for many complex, interesting phenomena that arise from interactions with one's environment.
How is distributional reinforcement learning different? In classical reinforcement learning, the value function describes the expected return one would obtain from beginning in any given state. Its fundamental object of interest, the expected return, is a scalar, and algorithms that operate on value functions operate on collections of scalars (one per state). The fundamental object of distributional reinforcement learning, by contrast, is a probability distribution over returns: the return distribution. The return distribution characterizes the probability of the different returns that can be obtained as an agent interacts with its environment from a given state. Distributional reinforcement learning algorithms operate on collections of probability distributions that we call return-distribution functions (or simply return functions).
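The contrast between the two objects can be made concrete with a toy example (not taken from the book): a single-state "risky coin" MDP in which every episode ends immediately with reward +10 or -10, each with probability one half. The classical value of the start state is 0, yet the return distribution places no mass at 0 at all, which is exactly the kind of information the distributional perspective retains. A minimal Monte Carlo sketch, with all names hypothetical:

```python
import random

def sample_return(rng: random.Random) -> float:
    """Draw one Monte Carlo return from the toy 'risky coin' MDP:
    the episode ends immediately with return +10 or -10, each w.p. 0.5."""
    return 10.0 if rng.random() < 0.5 else -10.0

def empirical_return_distribution(n: int, seed: int = 0) -> dict[float, float]:
    """Estimate the return distribution of the start state from n episodes."""
    rng = random.Random(seed)
    counts: dict[float, int] = {}
    for _ in range(n):
        g = sample_return(rng)
        counts[g] = counts.get(g, 0) + 1
    return {g: c / n for g, c in counts.items()}

dist = empirical_return_distribution(10_000)

# The classical state value is the mean of the return distribution...
value = sum(g * p for g, p in dist.items())  # close to 0

# ...but the distribution itself shows the value 0 is never actually realized:
# all probability mass sits on +10 and -10.
```

The point of the sketch is only that the scalar `value` collapses information the dictionary `dist` still carries; risk-sensitive decision-making, one of the applications discussed in the book, depends on precisely that extra information.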
The authors present core ideas from classical reinforcement learning to contextualize distributional topics and include mathematical proofs pertaining to major results discussed in the text. They guide the reader through a series of algorithmic and mathematical developments that, in turn, characterize, compute, estimate, and make decisions on the basis of the random return. Practitioners in disciplines as diverse as finance (risk management), computational neuroscience, computational psychiatry, psychology, macroeconomics, and robotics are already using distributional reinforcement learning, paving the way for its expanding applications in mathematical finance, engineering, and the life sciences. More than a mathematical approach, Distributional Reinforcement Learning represents a new perspective on how intelligent agents make predictions and decisions.
Preface
1. Introduction
2. The Distribution of Returns
3. Learning the Return Distribution
4. Operators and Metrics
5. Distributional Dynamic Programming
6. Incremental Algorithms
7. Control
8. Statistical Functionals
9. Linear Function Approximation
10. Deep Reinforcement Learning
10.1. Learning with a Deep Neural Network
10.2. Distributional Reinforcement Learning with Deep Neural Networks
10.3. Implicit Parameterizations
10.4. Evaluation of Deep Reinforcement Learning Agents
10.5. How Predictions Shape State Representations
10.6. Technical Remarks
10.7. Bibliographical Remarks
10.8. Exercises
11. Two Applications and a Conclusion
Notation
References
Index
Series List