Offline policy evaluation (OPE) is a fundamental and challenging problem in reinforcement learning (RL). This talk focuses on estimating the value of a target policy from pre-collected data generated by a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is semi-parametrically efficient under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points along each trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for the well-posedness of the Bellman operator in the off-policy setting, which characterizes the difficulty of OPE and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
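To make the marginal-importance-sampling idea behind the talk concrete, here is a minimal toy sketch of how a weighted value estimate is formed from pooled transitions. All names, numbers, and the placeholder weights below are illustrative assumptions, not the talk's actual method: in the proposed approach the weights would instead be learned by approximately balancing state-action features.

```python
import numpy as np

# Hypothetical toy data: rewards at n decision points pooled across
# trajectories, collected under a behavior policy. Purely illustrative.
rng = np.random.default_rng(0)
n = 1000                      # total number of decision points
rewards = rng.normal(1.0, 0.5, size=n)

# Marginal importance weights w(s, a) ~ d_target(s, a) / d_behavior(s, a).
# In the talk these come from a balancing procedure; here we use
# placeholder positive weights, self-normalized to average to 1.
weights = rng.uniform(0.5, 1.5, size=n)
weights /= weights.mean()

gamma = 0.9                   # discount factor
# Discounted value estimate: (1 / (1 - gamma)) * mean of weighted rewards.
value_estimate = np.mean(weights * rewards) / (1.0 - gamma)
print(value_estimate)
```

With rewards centered near 1 and weights averaging 1, the estimate lands near 1 / (1 - 0.9) = 10; the quality of a real estimator hinges entirely on how well the learned weights approximate the true density ratio.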
Speaker Bio
I am an Assistant Professor in the Department of Mathematical Sciences at the University of Texas at Dallas. I obtained my Ph.D. from the Department of Statistics at Texas A&M University (TAMU). Prior to TAMU, I received a B.S. in Statistics from Zhejiang University in 2017. I am broadly interested in methodology and theory in nonparametric statistics and machine learning. My recent research focuses on reinforcement learning, functional data analysis, and matrix completion.