| Field | Value |
|---|---|
| Title | Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas |
| Publication Type | Conference Paper |
| Year of Publication | 2020 |
| Authors | Kuo, Y-L, Katz, B, Barbu, A |
| Conference Name | 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
| Conference Location | Las Vegas, NV, USA |
We demonstrate a reinforcement learning agent that uses a compositional recurrent neural network which takes as input a linear temporal logic (LTL) formula and determines satisfying actions. The input LTL formulas have never been seen during training, yet the network performs zero-shot generalization to satisfy them. This is a novel form of multi-task learning for RL agents, in which an agent learns from one diverse set of tasks and generalizes to a new set of diverse tasks. The compositional formulation of the network is what enables this capacity to generalize. We demonstrate this ability in two domains. In a symbolic domain, the agent finds a sequence of letters accepted by the formula. In a Minecraft-like environment, the agent finds a sequence of actions that conforms to the formula. While prior work could learn to execute one formula reliably given examples of that formula, we demonstrate how to encode all formulas reliably. This could form the basis of new multi-task agents that discover sub-tasks and execute them without any additional training, as well as of agents that follow more complex linguistic commands. The structures required for this generalization are specific to LTL formulas, which opens up an interesting theoretical question: what structures are required in neural networks for zero-shot generalization to different logics?
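To make the compositional idea concrete: the paper assembles a network whose structure mirrors the parse tree of the input LTL formula, one recurrent module per operator. The sketch below is purely illustrative and is not the paper's code; it mimics that compositional structure with plain Python evaluator functions over finite symbol traces, one function per LTL operator, composed in the same tree-shaped way.

```python
# Illustrative sketch (not the paper's implementation): each LTL operator is
# a small combinator that composes child evaluators, just as the paper
# composes one network module per operator in the formula's parse tree.
# Semantics here are the standard finite-trace reading of each operator.

def atom(sym):
    # Atomic proposition: holds if the trace's first symbol is `sym`.
    return lambda trace: len(trace) > 0 and trace[0] == sym

def neg(f):
    # Logical negation.
    return lambda trace: not f(trace)

def land(f, g):
    # Logical conjunction.
    return lambda trace: f(trace) and g(trace)

def next_(f):
    # X f: f holds on the trace starting at the next step.
    return lambda trace: len(trace) > 1 and f(trace[1:])

def eventually(f):
    # F f: f holds at some suffix of the trace.
    return lambda trace: any(f(trace[i:]) for i in range(len(trace)))

def always(f):
    # G f: f holds at every suffix of the trace.
    return lambda trace: all(f(trace[i:]) for i in range(len(trace)))

def until(f, g):
    # f U g: g eventually holds, and f holds at every step before then.
    return lambda trace: any(
        g(trace[i:]) and all(f(trace[j:]) for j in range(i))
        for i in range(len(trace))
    )

# Example formula F(a) & G(!b): eventually see 'a', never see 'b'.
formula = land(eventually(atom("a")), always(neg(atom("b"))))
print(formula(list("cca")))  # True: 'a' appears and 'b' never does
print(formula(list("cab")))  # False: 'b' appears
```

In the paper's setting the leaves and operators are learned recurrent modules rather than Boolean functions, and the agent selects actions so that the resulting trace satisfies the formula; the point of the sketch is only the tree-shaped composition, which is what allows a fresh, never-seen formula to be assembled and executed zero-shot.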
- CBMM Funded