Complexity, Self-Organization and Emergence in a Multi-Agent System through Microcosm Simulation
Julius simulates microcosms (multi-agent systems that model populations of simple organisms) to analyze the system properties of self-organization, emergence, and complexity.
Supervisors: Gordon Pipa, Elia Bruni
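The kind of dynamics such a microcosm exhibits can be illustrated with a toy update step. This is purely a hypothetical sketch with made-up rules and parameters, not the project's actual model:

```python
# Illustrative microcosm step: simple organisms on a 1-D grid move randomly,
# pay an energy cost, and split when their energy is high enough.
# All rules, names, and parameters here are hypothetical.
import random

def step(population, width=20, reproduce_at=10, cost=1):
    """Advance every agent one tick; agents are dicts with position and energy."""
    next_pop = []
    for agent in population:
        agent["x"] = (agent["x"] + random.choice([-1, 0, 1])) % width
        agent["energy"] -= cost                 # moving costs energy
        if agent["energy"] >= reproduce_at:     # split when energy is high
            agent["energy"] //= 2
            next_pop.append({"x": agent["x"], "energy": agent["energy"]})
        if agent["energy"] > 0:                 # agents with no energy die
            next_pop.append(agent)
    return next_pop

random.seed(0)
pop = [{"x": 0, "energy": 12}]
for _ in range(3):
    pop = step(pop)
```

Even with rules this simple, population-level patterns (growth, collapse, spatial clustering) emerge from purely local interactions, which is the kind of emergence such simulations are built to study.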
The neural mechanics of lifelong learners
Daniel researches why learning algorithms and artificial networks fail in continual learning scenarios. He takes inspiration from properties of biological brains to identify mechanisms and inductive biases that may enable artificial neural networks to become successful continual learners.
Supervisors: Tim C. Kietzmann, Peter König
Graph theoretical analysis of eye tracking data recorded in complex VR cities to investigate spatial navigation
Eye tracking data recorded in virtual reality with freedom of movement require new analysis approaches. In this project, Jasmin L. Walter proposes a new method to quantify characteristics of visual behavior by applying graph-theoretical measures to eye tracking data. Using this methodology, she investigates visual behavior during free exploration of a virtual city and assesses global spatial navigation characteristics.
Supervisor: Peter König
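The core idea of a graph-theoretical gaze analysis can be sketched in a few lines. This is a hypothetical illustration, not the project's method: it assumes fixations have already been mapped to labeled scene objects, builds an undirected graph whose edges link successively fixated objects, and computes two standard graph measures:

```python
# Hypothetical sketch of a gaze-graph analysis (not the project's actual code).
# Assumption: raw fixations were already assigned to labeled objects in the city.
from collections import defaultdict

def build_gaze_graph(fixation_sequence):
    """Nodes are viewed objects; an edge links objects fixated in succession."""
    adjacency = defaultdict(set)
    for a, b in zip(fixation_sequence, fixation_sequence[1:]):
        if a != b:                      # ignore refixations on the same object
            adjacency[a].add(b)
            adjacency[b].add(a)
    return dict(adjacency)

def graph_density(adjacency):
    """Fraction of possible edges that are present (undirected graph)."""
    n = len(adjacency)
    edges = sum(len(nbrs) for nbrs in adjacency.values()) // 2
    return 2 * edges / (n * (n - 1)) if n > 1 else 0.0

gaze = ["church", "cafe", "church", "station", "cafe", "park", "church"]
graph = build_gaze_graph(gaze)
# High-degree nodes act as "gaze hubs": objects that connect many others
# and may serve as landmarks during spatial navigation.
degrees = {node: len(nbrs) for node, nbrs in graph.items()}
```

On real data, measures such as node degree or density can then be compared across participants or city districts to characterize global visual behavior.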
The role of language and pragmatics in higher-level cognition: forming abstract concepts in social interaction
Using iterative, agent-based computational modeling, Kristina Kobrock aims to answer the research question: “Under which circumstances do increasingly abstract concepts evolve?”
Supervisors: Nicole Gotzner, Elia Bruni
Towards a better understanding of visual information sampling in the brain: neural correlates and deep neural network models of the exploration-exploitation dilemma
To investigate which aspects of the continuously changing neural signatures are predictive of human fixation durations, Philip Sulewski records neural activity using magnetoencephalography (MEG) combined with eye tracking while subjects visually explore natural scenes.
Supervisors: Tim C. Kietzmann, Peter König
Improving the Signal Quality of a Mobile EEG Device with Deep Learning
Laura is working on the DreamMachine, a low-cost, mobile EEG device developed in the NI research group. Her goal is to improve the signal quality and spatial resolution of the device by applying different deep learning architectures.
Supervisor: Gordon Pipa
Project Westdrive: Large-Scale VR Foundation for Immersive Experiments on Human-Computer Interaction
Would you trust a robot to drive your car? Maximilian Wächter's goal is to gain insights into human trust-building behavior and ultimately lower reservations about this technology. To this end, he developed a large-scale, highly realistic VR simulation with AI-controlled cars as an eye-tracking experiment.
Supervisors: Peter König, Gordon Pipa
Language emergence in artificial agents
Xenia Ohmer develops computational models of language learning and emergence in artificial agents. She uses these models to gain insights into the role of pragmatic reasoning in human language learning, and she works to integrate pragmatic reasoning mechanisms into artificial agents designed for language learning or communication.
Supervisors: Michael Franke, Peter König
Incorporating motion into PeriNet - a computational model for central and peripheral vision
This project helps to advance our understanding of the human visual system and to develop efficient, biologically plausible end-to-end computational models of vision. With the PeriNet computational model, Hristofor Lukanov addresses the split between central and peripheral vision.
Supervisors: Gordon Pipa, Peter König
The semantics, pragmatics, and acquisition of polarity items
Juliane Schwab studies positive and negative polarity items in natural language. Her project contributes to our understanding of the processing and learning mechanisms at the interface of syntax, semantics, and pragmatics.
Supervisors: Mingya Liu, Jutta Mueller
Semi-supervised Conceptors and Conceptor Logic
Conceptors were introduced by H. Jaeger in 2014 as a mathematical formalism to derive and manipulate internal representations of concepts in neural networks and reintroduce them into the network dynamics. Georg Schroeter further explores the theoretical foundations and possible applications of Conceptors to both recurrent and feed-forward neural network architectures.
Supervisors: Kai-Uwe Kühnberger, Gordon Pipa
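The basic conceptor computation from Jaeger's 2014 formalism can be sketched briefly. In this minimal illustration (the data and parameter values are made up), a conceptor C = R (R + α⁻² I)⁻¹ is computed from the correlation matrix R of reservoir states; its eigenvalues lie in [0, 1), softly keeping or suppressing each direction of the state space:

```python
# Minimal sketch of a conceptor matrix (Jaeger, 2014):
#   C = R (R + alpha^-2 I)^-1, with R the state correlation matrix.
# The data and aperture value below are illustrative only.
import numpy as np

def conceptor(states, alpha=10.0):
    """states: (n_neurons, n_timesteps) matrix of reservoir activations."""
    n, t = states.shape
    R = states @ states.T / t                       # state correlation matrix
    return R @ np.linalg.inv(R + alpha ** -2 * np.eye(n))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))                   # toy reservoir states
C = conceptor(X)
# Eigenvalues of C lie in [0, 1): each direction of the state space is
# retained (near 1) or suppressed (near 0) as a soft projection.
eigvals = np.linalg.eigvalsh((C + C.T) / 2)
```

The aperture parameter α controls how aggressively directions are suppressed, which is what makes conceptors usable as manipulable, logic-like representations of concepts.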