Stochastic Optimization Methods for Policy Evaluation in Reinforcement Learning

ISBN-10: 1638283702
ISBN-13: 9781638283706

Book Synopsis: Stochastic Optimization Methods for Policy Evaluation in Reinforcement Learning, by Yi Zhou

This monograph, released on 2024-07-11, introduces value-based approaches for solving the policy evaluation problem in the online reinforcement learning (RL) setting, where the goal is to learn the value function associated with a specific policy under a single Markov decision process (MDP). The approaches differ in whether they are implemented in an on-policy or off-policy manner. In the on-policy setting, where the policy is evaluated using data generated by the same policy being assessed, the monograph covers classical techniques such as TD(0), TD(λ), and their extensions with function approximation and variance reduction. For off-policy evaluation, where samples are collected under a different behavior policy, it introduces gradient-based two-timescale algorithms such as GTD2, TDC, and variance-reduced TDC, which minimize the mean-squared projected Bellman error (MSPBE) as the objective function. The monograph also discusses finite-sample convergence upper bounds and sample complexity for these algorithms.
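To make the two settings concrete, the sketches below show one common form of the updates. They are minimal illustrations under stated assumptions, not code from the monograph; the function names (td0_linear, tdc_linear), the feature map phi, and the transition format are all hypothetical. The first sketch is on-policy TD(0) with linear function approximation, assuming a stream of (state, reward, next state) transitions generated by the policy being evaluated.

```python
import numpy as np

def td0_linear(transitions, phi, alpha=0.01, gamma=0.99):
    """On-policy TD(0) with linear function approximation (illustrative sketch).

    transitions: iterable of (s, r, s_next) tuples generated by the target policy.
    phi: feature map, phi(s) -> 1-D numpy array.
    Returns theta such that V(s) is approximated by phi(s) @ theta.
    """
    theta = None
    for s, r, s_next in transitions:
        x, x_next = phi(s), phi(s_next)
        if theta is None:
            theta = np.zeros_like(x, dtype=float)
        # TD error: bootstrapped one-step target minus current estimate
        delta = r + gamma * x_next @ theta - x @ theta
        # Semi-gradient update in the direction of the current state's features
        theta = theta + alpha * delta * x
    return theta
```

For the off-policy case, the two-timescale algorithms mentioned above minimize the MSPBE, which for linear features can be written (up to convention) as MSPBE(θ) = ||Φθ − Π T^π Φθ||²_D, where Π is the projection onto the span of the features and D weights states by the behavior policy's state distribution. The sketch below is one common form of the TDC update; it assumes each transition carries an importance-sampling ratio rho = π(a|s)/μ(a|s) between the target policy π and the behavior policy μ, with the auxiliary weights w updated on the faster timescale (beta larger than alpha).

```python
import numpy as np

def tdc_linear(transitions, phi, alpha=0.005, beta=0.05, gamma=0.99):
    """Off-policy TDC with linear function approximation (illustrative sketch).

    transitions: iterable of (s, r, s_next, rho) tuples collected under a
        behavior policy, where rho is the importance-sampling ratio.
    phi: feature map, phi(s) -> 1-D numpy array.
    Returns theta such that V(s) is approximated by phi(s) @ theta.
    """
    theta, w = None, None
    for s, r, s_next, rho in transitions:
        x, x_next = phi(s), phi(s_next)
        if theta is None:
            theta = np.zeros_like(x, dtype=float)
            w = np.zeros_like(x, dtype=float)
        delta = r + gamma * x_next @ theta - x @ theta
        # Slow timescale: value weights, TD term plus a gradient-correction term
        theta = theta + alpha * rho * (delta * x - gamma * (x @ w) * x_next)
        # Fast timescale: auxiliary weights tracking E[phi phi^T]^{-1} E[rho delta phi]
        w = w + beta * (rho * delta - x @ w) * x
    return theta
```

Variance-reduced variants such as variance-reduced TDC typically replace the per-sample terms above with periodically refreshed batch averages plus per-sample corrections, in the spirit of SVRG, which is the mechanism behind their sharper sample-complexity bounds.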

