RL generalization: Generalizable LfO via Inferring Goal Proximity

Paper Here; Official Blog Here Generalizable Imitation Learning from Observation via Inferring Goal Proximity is a NeurIPS 2021 paper which focuses on the generalization problem of Learning from Observation (LfO). The idea of the paper is quite straightforward and does not require much mathematical explanation. In this blog I will show the high-level idea and the experimental setting of the paper. Preliminaries: LfO and the “Goal” idea LfO is an imitation learning setting in which we cannot access the action information in experts’ demonstrations....
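To make the setting concrete, a minimal formalization in my own notation (not taken from the paper): in ordinary imitation learning the expert demonstrations contain state-action pairs, whereas in LfO only the state sequence is observed,

$$
\mathcal{D}_{\text{LfD}} = \{(s_0, a_0, s_1, a_1, \dots)\}, \qquad
\mathcal{D}_{\text{LfO}} = \{(s_0, s_1, s_2, \dots)\},
$$

so the learner has to infer how to act purely from the observed state transitions.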

October 22, 2022 · Dibbla

Generalization & Imitation Learning: IRL Identifiability Part1

Paper reference Paper1: Towards Resolving Unidentifiability in Inverse Reinforcement Learning HERE Paper2: Identifiability in inverse reinforcement learning HERE Paper3: Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning HERE These papers are quite theoretical and not easy to read, but, at least for me, they reveal something about generalization. Preliminaries: IRL & Identifiability IRL, as a subset of Imitation Learning, aims to recover the reward function of a certain MDP, given the reward-free environment $E$ and an optimal agent policy $\pi$....
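As a rough sketch of why identifiability is an issue (my own notation, not taken from these papers): IRL searches for any reward $R$ under which the observed policy $\pi$ is optimal in the reward-free environment $E$,

$$
\mathcal{R}_\pi = \Big\{ R \;:\; \pi \in \arg\max_{\pi'} \mathbb{E}_{\pi', E}\Big[\sum_t \gamma^t R(s_t, a_t)\Big] \Big\},
$$

and this set is generally far from a singleton: for example, any constant reward makes every policy optimal, so without extra structure or multiple experts the "true" reward cannot be pinned down.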

September 30, 2022 · Dibbla

Generalization & Imitation Learning: IRL Identifiability Part1

Paper reference Paper1: Towards Resolving Unidentifiability in Inverse Reinforcement Learning HERE Paper2: Identifiability in inverse reinforcement learning HERE Paper3: Identifiability and generalizability from multiple experts in Inverse Reinforcement Learning HERE This papers are quite theoretical and not so easy to read. But they, at least for me, reveals something to do with generalization. Preliminaries: IRL & Identifiability IRL, as a subset of Imitation Learning, aims to recover the reward function of certain MDP, given the reward-free environment $E$ and an optimal agent policy $\pi$....

September 30, 2022 · Dibbla

Pre-training with RL: APT

Behavior From the Void: Unsupervised Active Pre-Training paper The paper, Behavior From the Void: Unsupervised Active Pre-Training, proposed a new method for pre-training RL agents, APT, which is claimed to beat all baselines on the DMControl Suite. As the abstract points out, the key novel idea is to explore the environment by maximizing a non-parametric entropy computed in an abstract representation space. This blog takes a look at the motivation, method, and explanation of the paper, and compares it with the other AAAI paper....
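To give a flavor of the idea, here is a minimal sketch of a particle-based (k-nearest-neighbor) entropy bonus computed over a batch of latent representations. This is my own illustrative Python, not the exact APT objective; the function name, the choice of k, and the log(1 + mean distance) form are assumptions for readability.

```python
import numpy as np

def knn_entropy_bonus(z_batch: np.ndarray, k: int = 12) -> np.ndarray:
    """Reward each latent state by the average distance to its k nearest
    neighbors in the batch: states far from all other particles in the
    representation space are treated as novel and get a larger bonus."""
    # Pairwise Euclidean distances between all latent vectors in the batch.
    dists = np.linalg.norm(z_batch[:, None, :] - z_batch[None, :, :], axis=-1)
    # Sort each row and drop the first column (distance to itself, which is 0).
    knn_dists = np.sort(dists, axis=1)[:, 1:k + 1]
    # log(1 + mean kNN distance) keeps the bonus non-negative and dampens outliers.
    return np.log(1.0 + knn_dists.mean(axis=1))

# Example: 256 observations encoded into a 64-dimensional representation space.
bonuses = knn_entropy_bonus(np.random.randn(256, 64))
```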

September 24, 2022 · Dibbla
