- Added by: literator
- Date: 29-05-2023, 01:32

Authors: Marc G. Bellemare, Will Dabney, Mark Rowland
Publisher: The MIT Press
Series: Adaptive Computation and Machine Learning
Year: 2023
Pages: 379
Language: English
Format: epub (true)
Size: 13.4 MB
The first comprehensive guide to distributional reinforcement learning, a new mathematical formalism for thinking about decisions from a probabilistic perspective. Going beyond the common approach to reinforcement learning based on expected values, it focuses on the total reward, or return, obtained as a consequence of an agent's choices, and specifically on how this return behaves from a probabilistic perspective.

How is distributional reinforcement learning different? In classical reinforcement learning, the value function describes the expected return that one would obtain from beginning in any given state. Its fundamental object of interest, the expected return, is a scalar, and algorithms that operate on value functions operate on collections of scalars (one per state). The fundamental object of distributional reinforcement learning, by contrast, is a probability distribution over returns: the return distribution. The return distribution characterizes the probability of the different returns an agent can obtain as it interacts with its environment from a given state. Distributional reinforcement learning algorithms therefore operate on collections of probability distributions, called return-distribution functions (or simply return functions).
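To make the scalar-versus-distribution contrast concrete, here is a minimal sketch in Python using a hypothetical toy state (not from the book): an episode started in this state yields a return of +1 with probability 0.75 and −1 otherwise. The value-function view summarizes the state by one scalar, the expected return; the distributional view keeps the whole return distribution estimated from the same episodes.

```python
import random
from collections import Counter

# Hypothetical toy state: each episode's return is +1 with probability 0.75,
# and -1 otherwise. (Illustrative assumption, not an example from the book.)
def sample_return():
    """Simulate the return of one episode started in the toy state."""
    return 1.0 if random.random() < 0.75 else -1.0

random.seed(0)
returns = [sample_return() for _ in range(10_000)]

# Value-function view: a single scalar per state, the expected return.
# Here the true value is 0.75 * 1 + 0.25 * (-1) = 0.5.
expected_return = sum(returns) / len(returns)

# Distributional view: the empirical return distribution over the same episodes,
# mapping each possible return to its observed probability.
counts = Counter(returns)
return_distribution = {r: c / len(returns) for r, c in counts.items()}

print(f"expected return ~ {expected_return:.2f}")
print(f"return distribution ~ {return_distribution}")
```

Note that the expected return can be recovered from the return distribution (as its mean), but not the other way around: two states with the same expected return can have very different return distributions, which is exactly the information distributional algorithms retain.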