Online Academic Talk | Dr. Jin Zhu: Variable Selection Methods for Reinforcement Learning

Academic · 2024-11-02 07:03 · Guangdong

Abstract

In real-world applications of reinforcement learning, it is often challenging to obtain a state representation that is parsimonious and satisfies the Markov property without prior knowledge. Consequently, it is common practice to construct a state which is larger than necessary, e.g., by concatenating measurements over contiguous time points. However, needlessly increasing the dimension of the state can slow learning and obfuscate the learned policy. We introduce the notion of a minimal sufficient state in a Markov decision process (MDP) as the smallest subvector of the original state under which the process remains an MDP and shares the same optimal policy as the original process. We propose a novel sequential knockoffs (SEEK) algorithm that estimates the minimal sufficient state in a system with high-dimensional complex nonlinear dynamics. In large samples, the proposed method controls the false discovery rate, and selects all sufficient variables with probability approaching one. As the method is agnostic to the reinforcement learning algorithm being applied, it benefits downstream tasks such as policy optimization. Empirical experiments verify theoretical results and show the proposed approach outperforms several competing methods in terms of variable selection accuracy and regret.
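To make the knockoff idea behind SEEK concrete, here is a minimal, self-contained sketch of a knockoff-style filter with the knockoff+ threshold that controls the false discovery rate. This is not the SEEK algorithm itself: knockoff copies are built by a crude row permutation and importance is measured by marginal correlation, both simplifications of the model-X construction and learned importance statistics the paper relies on, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def knockoff_select(X, y, fdr=0.3, rng=None):
    """Toy knockoff-style variable selection (illustrative, not SEEK).

    Knockoff copies are made by permuting rows, which breaks each
    feature's association with y while keeping its marginal
    distribution; this is a stand-in for proper model-X knockoffs.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    X_ko = X[rng.permutation(n)]            # crude knockoff copies
    Z = np.hstack([X, X_ko])
    # Importance statistic: absolute marginal correlation with y.
    stats = np.abs(Z.T @ y) / n
    # Knockoff statistic: original importance minus knockoff importance.
    W = stats[:p] - stats[p:]
    # Knockoff+ threshold: smallest t whose estimated FDP is below fdr.
    tau = np.inf
    for t in np.sort(np.abs(W[W != 0])):
        fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp <= fdr:
            tau = t
            break
    return np.where(W >= tau)[0]
```

On simulated data where only the first few features drive the response, the filter recovers those features while screening out most nulls; SEEK applies this screening sequentially over the components of an MDP state rather than in a one-shot regression.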

Speaker

Jin Zhu is a postdoctoral researcher at the London School of Economics and Political Science and received his Ph.D. from Sun Yat-sen University. His research interests include reinforcement learning and high-dimensional data analysis, with results published in journals and conferences such as PNAS, JASA, JMLR, ICML, and AISTATS.


The CluBear (狗熊会) online seminar series is open to scholars and practitioners in data science and related fields, and we warmly welcome registrations and speaker recommendations. For inquiries, please contact Ying Chang: ying.chang@clubear.org

CluBear (狗熊会)
CluBear, the second classroom of statistics! Spreading statistical knowledge, cultivating statistical talent, and promoting the application of statistics in industry!