
Zafarali Ahmed

Authored Publications
    Understanding the Impact of Entropy on Policy Optimization
    Nicolas Le Roux
    Mohammad Norouzi
    Dale Schuurmans
    ICML (2019)
    Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to aid exploration by encouraging a more stochastic policy. In this work, we analyze this claim and, through new visualizations of the optimization landscape, observe that its effect matches that of a regularizer. We show that even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. We show qualitatively that, in some environments, entropy regularization can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. This work provides tools for understanding the underlying optimization landscape and highlights the challenge of designing general-purpose optimization algorithms in reinforcement learning.
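    A minimal NumPy sketch of the entropy-regularized policy-gradient objective the abstract refers to. The function name, array shapes, and the coefficient tau are illustrative assumptions, not details taken from the paper:

        import numpy as np

        def entropy_regularized_pg_loss(logits, actions, advantages, tau=0.01):
            """Policy-gradient surrogate loss with an entropy bonus.

            logits:     (batch, num_actions) unnormalized action scores
            actions:    (batch,) sampled action indices
            advantages: (batch,) advantage estimates
            tau:        entropy coefficient (illustrative default)
            """
            # Softmax policy over discrete actions.
            z = logits - logits.max(axis=-1, keepdims=True)
            probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
            log_probs = np.log(probs)
            # REINFORCE-style surrogate: maximize E[log pi(a|s) * A(s, a)].
            pg = np.mean(log_probs[np.arange(len(actions)), actions] * advantages)
            # Policy entropy H(pi) = -sum_a pi(a|s) log pi(a|s), averaged over states.
            entropy = -np.mean(np.sum(probs * log_probs, axis=-1))
            # The entropy bonus rewards stochastic policies; larger tau smooths
            # the optimization landscape at the cost of a more uniform policy.
            return -(pg + tau * entropy)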
    InfoBot: Structured Exploration in Reinforcement Learning Using Information Bottleneck
    Anirudh Goyal
    Riashat Islam
    Daniel Strouse
    Matthew Botvinick
    Yoshua Bengio
    Sergey Levine
    ICLR (2019)
    A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed. We postulate that, in the absence of useful reward signals, an effective exploration strategy should seek out decision states. These states lie at critical junctions in the state space, from which the agent can transition to new, potentially unexplored regions. We propose to learn about decision states from prior experience. By training a goal-conditioned policy with an information bottleneck, we can identify decision states by examining where the model actually leverages the goal state. We find that this simple mechanism effectively identifies decision states, even in partially observed settings. In effect, the model learns the sensory cues that correlate with potential subgoals. In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.
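    The bottleneck described above penalizes how much actions depend on the goal, so one natural way to read off decision states is the per-state KL divergence between the goal-conditioned policy and a goal-free default policy. Below is a minimal NumPy sketch under assumed discrete-action logits; the function and argument names are hypothetical, not the paper's implementation:

        import numpy as np

        def softmax(x):
            z = np.exp(x - x.max(axis=-1, keepdims=True))
            return z / z.sum(axis=-1, keepdims=True)

        def decision_state_score(goal_cond_logits, default_logits):
            """Score states by how much the policy leans on the goal.

            goal_cond_logits: (batch, num_actions) logits of pi(a | s, g)
            default_logits:   (batch, num_actions) logits of a goal-free
                              default policy pi_0(a | s)
            Returns per-state KL(pi(.|s,g) || pi_0(.|s)); high values flag
            candidate decision states, where the agent relies on the goal.
            """
            p = softmax(goal_cond_logits)
            q = softmax(default_logits)
            return np.sum(p * (np.log(p) - np.log(q)), axis=-1)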