Pre-training with RL: APT

The paper Behavior From the Void: Unsupervised Active Pre-Training proposes a new method for pre-training RL agents, APT, which is claimed to beat all baselines on the DMControl Suite. As the abstract points out, the key novel idea is to explore the environment by maximizing a non-parametric entropy computed in an abstract representation space. This blog takes a look at the motivation, method, and explanation of the paper, and compares it with the other AAAI paper....
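For intuition, here is a minimal sketch (not the paper's code) of such a particle-based entropy bonus: the intrinsic reward of a state grows with its distance to its k nearest neighbors in the learned representation space, so sparsely visited regions are rewarded. The batch size, k, and the exact reward shaping below are illustrative assumptions.

```python
# Sketch of a k-NN particle entropy bonus in an abstract representation space.
import torch

def knn_entropy_reward(z: torch.Tensor, k: int = 12) -> torch.Tensor:
    """z: (B, D) batch of representations; returns (B,) intrinsic rewards."""
    dists = torch.cdist(z, z)                  # (B, B) pairwise distances
    knn, _ = dists.topk(k + 1, largest=False)  # +1 because self-distance is 0
    knn = knn[:, 1:]                           # drop the self column
    # Reward grows with distance to neighbors, i.e. with local sparsity;
    # summing log-distances over states estimates entropy up to constants.
    return torch.log(1.0 + knn.mean(dim=1))

rewards = knn_entropy_reward(torch.randn(256, 64))  # random features for demo
print(rewards.shape)  # torch.Size([256])
```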

September 24, 2022 · Dibbla

RL generalization: 2 Evaluations

To pose a problem well, one has to illustrate it well. RL generalization, as the survey indicated, is a class of problems, so here we show two benchmark environments and their common experiment settings. Procgen: following CoinRun, OpenAI's team proposed a new testing environment called Procgen. Consisting of 16 games, Procgen provides a convenient way to procedurally generate environments that share the same underlying logic and reward but differ in layout and rendering....
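To make the setting concrete, here is a small usage sketch of the standard Procgen generalization protocol (train on a finite set of levels, evaluate on the full level distribution), based on the gym interface documented by the procgen package; the level counts are illustrative.

```python
import gym

# Training environment: only levels 0..199 are ever generated.
train_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=200,            # finite training set of levels
    start_level=0,
    distribution_mode="easy",
)

# Test environment: num_levels=0 means "sample from all levels",
# so the agent is evaluated on layouts it has never seen.
test_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,
    start_level=0,
    distribution_mode="easy",
)

obs = train_env.reset()
obs, reward, done, info = train_env.step(train_env.action_space.sample())
```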

September 24, 2022 · Dibbla

Representation Learning with RL: SPR

Data-Efficient Reinforcement Learning with Self-Predictive Representations. As we saw in the previous blog, the policy similarity metric (PSM) uses a specially designed bisimulation relation to force the representation network to learn the transition dynamics. This blog gives a brief overview of another method, self-predictive representations (SPR), which learns the transition dynamics in a more explicit way. The goal of SPR is to improve sample efficiency through a self-supervised process, which leverages limitless training signals from self-prediction....
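As a rough sketch of the idea (a toy architecture I am assuming for illustration, not the authors' code): an online encoder plus a latent transition model predict the representation of the next observation, and are trained to match a slowly updated target encoder via negative cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim, latent_dim = 32, 4, 64  # illustrative sizes

encoder = nn.Linear(obs_dim, latent_dim)         # online encoder
target_encoder = nn.Linear(obs_dim, latent_dim)  # EMA copy, no gradients
target_encoder.load_state_dict(encoder.state_dict())
transition = nn.Linear(latent_dim + act_dim, latent_dim)  # latent dynamics

def spr_loss(obs, action, next_obs):
    z = encoder(obs)
    z_pred = transition(torch.cat([z, action], dim=-1))  # predicted next latent
    with torch.no_grad():
        z_target = target_encoder(next_obs)              # self-supervised target
    # Negative cosine similarity between prediction and target.
    return -F.cosine_similarity(z_pred, z_target, dim=-1).mean()

def update_target(tau=0.01):
    # Exponential moving average of online weights, as in momentum encoders.
    for p, tp in zip(encoder.parameters(), target_encoder.parameters()):
        tp.data.mul_(1 - tau).add_(tau * p.data)

loss = spr_loss(torch.randn(8, obs_dim), torch.randn(8, act_dim),
                torch.randn(8, obs_dim))
loss.backward()
update_target()
```

Because the targets come from the agent's own future observations, every transition in the replay buffer yields a training signal, which is what makes the supervision effectively limitless.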

September 1, 2022 · Dibbla

Representation Learning with RL: SimCLR to PSM

Representation learning has been widely used and studied in CV and NLP, so it is not surprising that people transfer its methods and ideas to reinforcement learning, especially for generalization and data efficiency. SimCLR, a widely used self-supervised learning (SSL) method, has achieved excellent performance on CV tasks. The basic idea is to learn a representation: under ideal circumstances, an image's representation is a high-level abstraction of its content. SimCLR forces the representation network to learn invariances across augmented views of the same image with a carefully designed structure....
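For reference, here is a minimal sketch of SimCLR's NT-Xent contrastive loss, where the two augmented views of each image are positives and everything else in the batch acts as a negative; the dimensions and temperature below are illustrative.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5):
    """z1, z2: (B, D) projections of two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit norm
    sim = z @ z.t() / tau                               # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    B = z1.size(0)
    # The positive for row i is the other view of the same image.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
```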

August 30, 2022 · Dibbla

Hanabi Paper List

Dibbla: This list contains several papers about Hanabi, mostly focused on two ideas: MCTS methods and learning a protocol. Theoretical Method: Playing Hanabi Near-Optimally. From a theoretical view, this paper provides a hat-guessing strategy that reaches a nearly perfect score in some settings. Check here. Survey: The Hanabi challenge: A new frontier for AI research. Check here. The 2018 Hanabi Competition. Check here. MCTS: Re-determinizing MCTS in Hanabi. Check here...

July 16, 2022 · Dibbla