Contextual-MDPs for PAC-Reinforcement Learning with Rich Observations

https://128.84.21.199/pdf/1602.02722v1.pdf

We propose and study a new tractable model for reinforcement learning with high-dimensional observations, called Contextual-MDPs, which generalizes contextual bandits to a sequential decision-making setting.

These models require an agent to take actions based on high-dimensional observations (features) with the goal of achieving long-term performance competitive with a large set of policies. Since the size of the observation space is a primary obstacle to sample-efficient learning, Contextual-MDPs are assumed to be summarizable by a small number of hidden states. In this setting, we design a new reinforcement learning algorithm that engages in global exploration while using a function class to approximate future performance.
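To make the setting concrete, here is a minimal sketch (not taken from the paper) of the kind of environment described above: a small number of hidden states govern the dynamics and rewards, while the agent only ever sees a high-dimensional observation emitted from the current hidden state. The class name, the Gaussian observation model, and all parameter names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a Contextual-MDP-style environment (assumed details,
# not the paper's construction): M hidden states, K actions, horizon H,
# and rich observations of dimension obs_dim emitted from the hidden state.
class ContextualMDPSketch:
    def __init__(self, n_hidden_states=5, n_actions=3, horizon=10, obs_dim=1000, seed=0):
        self.rng = np.random.default_rng(seed)
        self.M, self.K, self.H = n_hidden_states, n_actions, horizon
        self.obs_dim = obs_dim
        # Dynamics and rewards live in small tables over (hidden state, action).
        self.transitions = self.rng.dirichlet(np.ones(self.M), size=(self.M, self.K))
        self.rewards = self.rng.uniform(size=(self.M, self.K))
        # Each hidden state has its own mean observation vector.
        self.obs_means = self.rng.normal(size=(self.M, obs_dim))

    def reset(self):
        self.t = 0
        self.state = 0  # fixed start state for simplicity
        return self._emit()

    def _emit(self):
        # The agent never sees self.state, only a noisy high-dimensional observation.
        return self.obs_means[self.state] + self.rng.normal(scale=0.1, size=self.obs_dim)

    def step(self, action):
        reward = self.rewards[self.state, action]
        self.state = self.rng.choice(self.M, p=self.transitions[self.state, action])
        self.t += 1
        done = self.t >= self.H
        return self._emit(), reward, done
```

An episode then consists of H steps in which a policy maps each observation to an action, and the learner's goal, as stated above, is to compete with the best policy in a given class despite never observing the hidden state.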

We also establish a sample-complexity guarantee for this algorithm, proving that it learns near-optimal behavior after a number of episodes that is polynomial in all relevant parameters, logarithmic in the number of policies, and independent of the size of the observation space. This represents an exponential improvement on the sample complexity of all existing alternative approaches and provides theoretical justification for reinforcement learning with function approximation.
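Read purely from the qualitative claims above, the guarantee has roughly the following shape; the symbols are my own labels (a schematic, not the paper's exact theorem statement):

```latex
% Schematic form of the claimed sample complexity (assumed labels):
% M = number of hidden states, K = number of actions, H = episode horizon,
% \Pi = policy class, \epsilon = suboptimality target, \delta = failure probability.
% Note the absence of any dependence on the size of the observation space.
\[
  \#\text{episodes} \;=\; \mathrm{poly}\!\left(M,\, K,\, H,\, \tfrac{1}{\epsilon}\right)
  \cdot \log\frac{|\Pi|}{\delta}
\]
```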
