Workshop on “Structure & Priors in Reinforcement Learning” (SPiRL) at ICLR 2019
Date: Monday, May 6th, 9:00 AM – 6:00 PM CDT
Location: Room R4, Ernest N. Morial Convention Center, New Orleans
Link to the workshop in the ICLR 2019 schedule.
News
- (2019/05/06) The workshop recording is now available! Hope you enjoyed attending or live-streaming :-)
- (2019/04/29) Call for challenge questions announced! Please encourage junior researchers attending ICLR to submit a question.
- (2019/03/29) Decisions mailed. Thank you to everyone who submitted!
- (2019/03/08) Call for submissions closed.
- (2019/02/06) Call for submissions formally announced!
- (2019/02/06) Reading list added!
Abstract
Generalization and sample complexity remain unresolved problems in reinforcement learning (RL), limiting the applicability of these methods to real-world problem settings. A powerful solution to these challenges lies in the deliberate use of inductive bias, which has the potential to allow RL algorithms to acquire solutions from significantly fewer samples and with greater generalization performance [Ponsen et al., 2009]. However, the question of what form this inductive bias should take in the context of RL remains an open one. Should it be provided as a prior distribution for use in Bayesian inference [Ghavamzadeh et al., 2015], learned wholly from data in a multi-task or meta-learning setup [Taylor and Stone, 2009], specified as structural constraints (such as temporal abstraction [Parr and Russell, 1998, Dietterich, 2000, Sutton et al., 1999] or hierarchy [Singh, 1992, Dayan and Hinton, 1992]), or some combination thereof?
The computational cost of recently successful applications of RL to complex domains such as gameplay [Silver et al., 2016, Silver et al., 2017, OpenAI, 2018] and robotics [Levine et al., 2018, Kalashnikov et al., 2018] has led to renewed interest in answering this question, most notably in the specification and learning of structure [Vezhnevets et al., 2017, Frans et al., 2018, Andreas et al., 2017] and priors [Duan et al., 2016, Wang et al., 2016, Finn et al., 2017]. In response to this trend, the ICLR 2019 workshop on “Structure & Priors in Reinforcement Learning” (SPiRL) aims to revitalize a multi-disciplinary approach to investigating the role of structure and priors as a way of specifying inductive bias in RL.
Beyond machine learning, other disciplines such as neuroscience and cognitive science have traditionally played, or have the potential to play, a role in identifying useful structure [Botvinick et al., 2009, Boureau et al., 2015] and priors [Trommershauser et al., 2008, Gershman and Niv, 2015, Dubey et al., 2018] for use in RL. As such, we expect attendees to come from a broad variety of backgrounds (including RL and machine learning, Bayesian methods, cognitive science, and neuroscience), which should aid the (re-)discovery of commonalities and under-explored research directions.
Code of conduct
All workshop participants must abide by the ICLR code of conduct. We empower and encourage you to report any behavior that makes you or others feel uncomfortable by contacting the ICLR Diversity and Inclusion co-chairs. You can also contact the organizing committee by email at organizers@spirl.info or by submitting this (optionally anonymous) Google form.