GRK 2340

Graduiertenkolleg "Computational Cognition"


How to participate?

The number of participants in the crash course is limited.
Please sign up by Friday, the 22nd of January, using the following StudIP link.

You may share this link to the YouTube live stream of the talks and panel discussions with anyone who is interested.

Deep Reinforcement Learning Workshop

Online Workshop (January 27-28, 2021)

The online workshop, organized by Viviane Clay and Ashima Keshava, comprises a crash course and talk sessions with invited speakers, including panel discussions. The crash course aims to provide participants with the essential knowledge of deep reinforcement learning needed to understand its special features and advantages. It is designed as an interactive course that gets all participants on the same page, so that everyone can fully benefit from the talks and panel discussions with the international speakers, who work on highly relevant projects in this field.

Mind reading by math

Abstract:
Inverse Rational Control is a new method to calculate which hidden beliefs of an agent best explain its behavior. It works by solving a family of tasks and identifying which solution gives the highest probability for the observed behavior. This method combines aspects of Inverse Reinforcement Learning and Inverse Optimal Control. In practice we use a combination of tools from statistics, deep learning, and optimization to apply this approach to concrete problems. I'll talk about how we are using this approach to understand how animals forage for food and catch fireflies in virtual reality.
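To make the core idea concrete, here is a toy sketch in Python (illustrative only, not the speaker's implementation; the two-site foraging setup and all names are invented for this example). An agent forages at two sites under a hidden belief theta, and we recover the belief by checking which member of a family of rational policies assigns the highest likelihood to the observed choices.

```python
import numpy as np

# Toy Inverse Rational Control: infer the hidden belief theta that best
# explains observed foraging choices. theta = believed P(food at site 0).

def policy(theta, beta=5.0):
    """Softmax policy an agent would follow if it held belief theta."""
    believed_reward = np.array([theta, 1.0 - theta])
    p = np.exp(beta * believed_reward)
    return p / p.sum()

def log_likelihood(theta, choices):
    """Log-probability of the observed site choices under belief theta."""
    return np.sum(np.log(policy(theta)[choices]))

choices = np.array([0, 0, 1, 0, 0, 0, 1, 0])   # observed behavior
thetas = np.linspace(0.01, 0.99, 99)           # family of candidate beliefs
scores = [log_likelihood(t, choices) for t in thetas]
best_theta = thetas[int(np.argmax(scores))]
print(f"inferred belief P(food at site 0) ~ {best_theta:.2f}")
```

The actual method replaces this grid search over a scalar with solving a parameterized family of control problems and fitting observed trajectories using the statistical, deep-learning, and optimization tools mentioned in the abstract.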

Take a look at his research: Rational thoughts in neural codes

Assessing the Robustness of Deep RL Algorithms

Abstract:
Deep reinforcement-learning approaches have been shown to produce remarkable performance on a range of challenging control tasks. Observations of the resulting behavior give the impression that agents construct rich task representations that support insightful action decisions. Looking closely at the generalization performance of these deep Q-networks, however, we find that the learned value computations often reduce to brittle memorization, and that the network cannot handle even small non-adversarial modifications to the states it encounters during execution. We examine training methods that improve generalization capability. Our results provide strong evidence that not all deep networks learn robust behaviors, and that careful consideration must be given to training in order to achieve results to the contrary.
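As a rough illustration of such a robustness probe (hypothetical environment and network interfaces, not the speaker's code), one can compare an agent's average return on unmodified states with its return when every observation receives a small, non-adversarial perturbation:

```python
import numpy as np

def evaluate(env, q_network, perturb=None, episodes=20):
    """Average return of a greedy Q-network policy. `env` and `q_network`
    are assumed interfaces: env.step(a) -> (obs, reward, done)."""
    returns = []
    for _ in range(episodes):
        obs, total, done = env.reset(), 0.0, False
        while not done:
            if perturb is not None:
                obs = perturb(obs)              # small non-adversarial change
            action = int(np.argmax(q_network(obs)))
            obs, reward, done = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

noise = lambda obs: obs + np.random.normal(0.0, 0.01, size=obs.shape)
# A network that has merely memorized its training states shows a large gap
# between evaluate(env, q_network) and evaluate(env, q_network, perturb=noise).
```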

Take a look at his research: Reward-predictive representations generalize across tasks in reinforcement learning

Ontogeny and Phylogeny of Embodied AI Agents

Abstract:
Learning has led to significant progress in sensorimotor control; however, most successes require training agent policies from scratch for every new robot, task, or environment, and are miserable at generalization. This is in contrast to fields like computer vision or language, where learning has been instrumental in pretraining general representations that allow fast adaptation to many downstream tasks. If we are ever to create such general, pre-trainable priors for movement control, similar to those for image classification or language modeling, it is imperative that policies be applicable to a wide variety of robots as well as tasks. But how does one decide which tasks to pretrain for, which robots to pretrain on, and how robot motions should be represented so as to generalize well to unseen ones?

In this talk, I will present our initial efforts towards building a framework for learning general-purpose embodied intelligence driven by two key ingredients: curiosity and compositionality. The framework is inspired by ideas from developmental psychology (ontogeny) and evolutionary biology (phylogeny) for sensorimotor learning in robotic agents. I will present results from case studies of robots that can play video games, tie knots using rope, navigate in office environments, and display drastically diverse locomotion styles across unseen robot shapes.
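The curiosity ingredient has a compact core. In the spirit of "Curiosity-driven Exploration by Self-supervised Prediction" (linked below), the intrinsic reward is the prediction error of a learned forward model; the PyTorch sketch below is a simplified rendering that omits the paper's inverse-model feature learning:

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next state's features from current features + action."""
    def __init__(self, feat_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, phi_s, action_onehot):
        return self.net(torch.cat([phi_s, action_onehot], dim=-1))

def intrinsic_reward(model, phi_s, action_onehot, phi_next):
    """Curiosity bonus: the agent is rewarded where its model is surprised."""
    pred = model(phi_s, action_onehot)
    return 0.5 * (pred - phi_next).pow(2).sum(dim=-1)
```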

Take a look at his research:
Curiosity-driven Exploration by Self-supervised Prediction
Neural Dynamic Policies for End-to-End Sensorimotor Learning

Social Reinforcement Learning

Abstract:
Social learning helps humans and animals rapidly adapt to new circumstances, and drives the emergence of complex learned behaviors. This talk focuses on Social Reinforcement Learning, developing new RL algorithms that leverage social learning to improve single-agent learning and generalization, multi-agent coordination, and human-AI interaction. We will demonstrate how a multi-agent technique for Adversarial Environment Generation based on minimax regret can lead to the generation of a complex curriculum of training environments, which improves an agent’s zero-shot transfer to unknown, single-agent test tasks. We also propose a novel Offline RL technique for learning from intrinsic social cues during interaction with humans in an open-domain dialog setting. Together, this work argues that Social RL is a valuable approach for developing more general, sophisticated, and human-compatible AI.
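To illustrate the minimax-regret objective behind Adversarial Environment Generation (hypothetical interfaces; see the paper linked below): the environment designer is scored by the protagonist's regret relative to a stronger antagonist on the same proposed level, which steers generation toward environments that are solvable yet still challenging.

```python
import numpy as np

def regret(env_params, protagonist, antagonist, rollout, episodes=5):
    """Estimated regret: how much better the antagonist does than the
    protagonist on the environment proposed by the designer.
    `rollout(env_params, agent)` is an assumed helper returning a return."""
    r_pro = np.mean([rollout(env_params, protagonist) for _ in range(episodes)])
    r_ant = np.mean([rollout(env_params, antagonist) for _ in range(episodes)])
    return r_ant - r_pro

# Schematic training loop: the designer maximizes this regret, the
# protagonist learns to minimize it, and the antagonist maximizes its own
# return. An unsolvable level yields zero regret, so the designer is
# pushed toward hard-but-solvable environments.
```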

Take a look at her research:
Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design