Our Research Focus

Our focus is on vision-based perception and control in multi-robot systems, especially in the context of aerial robots. Our goal is to understand how teams of flying robots can act (navigate, cooperate, and communicate) in optimal ways using only raw sensor inputs, e.g., RGB images and IMU measurements. To this end, our group studies novel methods for active perception (both classical methods such as model predictive control (MPC) and learning-based methods such as reinforcement learning), sensor fusion, and state estimation. To facilitate our research, we also design and develop novel aerial robots.

Multi-robot Active Perception -- We investigate methods for multi-robot formation control based on cooperative target perception, without relying on a pre-specified formation geometry. We have developed methods that enable teams of mobile aerial and ground robots, equipped only with RGB cameras, to maximize their joint perception of moving persons or objects in 3D space by actively steering into formations that facilitate this perception. We have introduced and rigorously tested active perception methods that combine novel detection and tracking pipelines with a nonlinear MPC-based formation controller; the receding-horizon idea is sketched below. This forms the front-end of our autonomous aerial motion capture (AirCap) system (AirCap Front-End).
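The following is a minimal, self-contained sketch of the receding-horizon idea behind such a formation controller. It is an illustrative toy, not our published formulation: it assumes single-integrator robot dynamics in the plane, a constant-velocity prediction of the target, a single teammate with a known plan, and hand-picked cost terms (a preferred viewing distance, bearing separation between the two viewpoints, and control effort).

```python
# Toy receding-horizon formation control for joint target perception.
# Assumptions (not the AirCap formulation): 2D single-integrator robots,
# constant-velocity target prediction, illustrative cost weights.
import numpy as np
from scipy.optimize import minimize

H, DT = 10, 0.2          # horizon length and time step
R_PREF = 5.0             # preferred camera-to-target distance [m]

def rollout(p0, u):
    """Integrate single-integrator dynamics over the horizon."""
    return p0 + DT * np.cumsum(u.reshape(H, 2), axis=0)

def cost(u, p0, target_traj, teammate_traj):
    traj = rollout(p0, u)
    to_tgt = traj - target_traj
    dist = np.linalg.norm(to_tgt, axis=1)
    # 1) keep the preferred viewing distance to the predicted target
    c_view = np.sum((dist - R_PREF) ** 2)
    # 2) encourage a different bearing than the teammate (joint coverage);
    #    this term is minimal when the two viewpoints are opposite
    ang_self = np.arctan2(to_tgt[:, 1], to_tgt[:, 0])
    to_tgt_mate = teammate_traj - target_traj
    ang_mate = np.arctan2(to_tgt_mate[:, 1], to_tgt_mate[:, 0])
    c_sep = np.sum(np.cos(ang_self - ang_mate) + 1.0)
    # 3) control effort
    c_u = 0.1 * np.sum(u ** 2)
    return c_view + c_sep + c_u

# predicted target path (constant velocity) and a teammate's planned path
target_traj = np.array([[t * DT * 1.0, 0.0] for t in range(1, H + 1)])
teammate_traj = target_traj + np.array([0.0, R_PREF])
p0 = np.array([-4.0, -4.0])

res = minimize(cost, np.zeros(2 * H), args=(p0, target_traj, teammate_traj))
u_now = res.x[:2]        # apply only the first control, then re-plan
print("first velocity command:", u_now)
```

As in any MPC scheme, only the first command of the optimized sequence is applied before the problem is re-solved with fresh target predictions.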

Multi-view Pose and Shape Estimation for Motion Capture (MoCap) -- For human MoCap, we develop methods to estimate 3D pose and shape from images taken by multiple, approximately calibrated cameras. Such image datasets are obtained with our AirCap front-end running an active perception method. We treat 2D joint detections as noisy sensor measurements and jointly optimize over the human pose, the shape, and the extrinsics of the cameras (AirCap Back-End); a toy version of this joint optimization is sketched below.
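The sketch below illustrates the core of such a joint optimization under strong simplifying assumptions: raw 3D joint positions stand in for a full statistical body model, intrinsics are known, the detections and the rough calibration are synthetic, and the gauge freedom of the problem is ignored. All names and numbers are illustrative.

```python
# Toy joint refinement of 3D joints and camera extrinsics from noisy
# 2D joint detections. Assumptions: known pinhole intrinsics, axis-angle
# rotations, raw 3D joints instead of a statistical body model.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

J, C = 12, 3                          # body joints and cameras
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

def project(joints3d, rvec, tvec):
    """Pinhole projection of 3D joints into one camera."""
    cam_pts = Rotation.from_rotvec(rvec).apply(joints3d) + tvec
    uv = (K @ cam_pts.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(x, detections, conf):
    """Confidence-weighted reprojection errors over all cameras."""
    joints3d = x[:3 * J].reshape(J, 3)
    cams = x[3 * J:].reshape(C, 6)    # per-camera [axis-angle | translation]
    res = []
    for c in range(C):
        err = project(joints3d, cams[c, :3], cams[c, 3:]) - detections[c]
        res.append((conf[c][:, None] * err).ravel())
    return np.concatenate(res)

# Synthetic stand-ins for detector output and a rough extrinsic calibration.
rng = np.random.default_rng(0)
gt_joints = rng.normal([0., 0., 6.], 0.4, (J, 3))
cams_rough = np.zeros((C, 6))
cams_rough[:, 3] = np.linspace(-1., 1., C)      # approximate lateral offsets
detections = np.stack([
    project(gt_joints, cams_rough[c, :3], cams_rough[c, 3:])
    + rng.normal(0., 2., (J, 2))                # detector noise in pixels
    for c in range(C)
])
conf = np.ones((C, J))                          # per-joint detector confidence

# Jointly refine joints and extrinsics. A real system would remove the gauge
# freedom (e.g., fix one camera) and add priors from a body model.
x0 = np.concatenate([rng.normal([0., 0., 6.], 1.0, (J, 3)).ravel(),
                     cams_rough.ravel()])
sol = least_squares(residuals, x0, args=(detections, conf))
print("RMS reprojection error [px]:", np.sqrt(np.mean(sol.fun ** 2)))
```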

Multi-robot Sensor Fusion -- We study and develop unified methods for sensor fusion that scale simultaneously to large environments, large numbers of sensors, and teams of robots. We have developed several methods for unified and integrated multi-robot cooperative localization and target tracking. Here, "unified" means that every robot in the team estimates the poses of all robots and targets, and "integrated" means that disagreement among sensors, inconsistent sensor measurements, occlusions, and sensor failures are all handled within a single Bayesian framework. The methods are either filter-based (filtering) or pose-graph optimization-based (smoothing). Each category has its own advantages with respect to the available computational resources and the achievable estimation accuracy, so we have also developed a novel moving-horizon technique, a hybrid method that combines the advantages of both; a toy version of the moving-horizon idea is sketched below.
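Below is a toy illustration of the moving-horizon idea: states inside a sliding window are re-estimated by linear least-squares smoothing, while older states are collapsed into a prior on the window's oldest state, as a filter would do. The 1D single-robot, single-target setup, the noise values, and the crude fixed-variance marginalization are all assumptions made for brevity, not our published method.

```python
# Toy moving-horizon estimator: smoothing inside a sliding window,
# filter-style marginalization of states that leave the window.
import numpy as np

W = 5                     # sliding-window length (smoothing part)
Q, R = 0.05, 0.2          # process / measurement noise std devs

def solve_window(z_abs, z_rel, prior_mean, prior_std):
    """Weighted linear least squares over one window of states.

    Unknowns: x = [r_0, g_0, ..., r_{W-1}, g_{W-1}] (robot, target per step).
    """
    rows, rhs = [], []
    def add(row, val, std):           # one whitened residual row
        rows.append(np.asarray(row) / std)
        rhs.append(val / std)
    n = 2 * W
    # prior on the oldest state (stands in for marginalizing older states)
    for i in range(2):
        e = np.zeros(n); e[i] = 1.0
        add(e, prior_mean[i], prior_std)
    # random-walk dynamics between consecutive steps
    for t in range(W - 1):
        for i in range(2):
            e = np.zeros(n); e[2 * t + i] = -1.0; e[2 * (t + 1) + i] = 1.0
            add(e, 0.0, Q)
    # measurements: absolute robot position and robot-to-target offset
    for t in range(W):
        e = np.zeros(n); e[2 * t] = 1.0
        add(e, z_abs[t], R)
        e = np.zeros(n); e[2 * t] = -1.0; e[2 * t + 1] = 1.0
        add(e, z_rel[t], R)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x.reshape(W, 2)

# simulate a robot drifting right and a target drifting left (1D)
rng = np.random.default_rng(1)
T = 30
true = np.cumsum(rng.normal([0.1, -0.1], Q, (T, 2)), axis=0)
z_abs = true[:, 0] + rng.normal(0, R, T)
z_rel = true[:, 1] - true[:, 0] + rng.normal(0, R, T)

prior_mean, prior_std = np.zeros(2), 1.0
for t in range(W, T + 1):
    est = solve_window(z_abs[t - W:t], z_rel[t - W:t], prior_mean, prior_std)
    prior_mean, prior_std = est[1], 0.5   # crude marginalization of old state
print("final estimate [robot, target]:", est[-1], "truth:", true[-1])
```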

New Robot Platforms -- In order to have extensive access to the hardware, we design and build most of our robotic platforms ourselves. Currently, our main flying platforms include 8-rotor octocopters and a helium-based airship (see the picture in the header of this page). More details can be found at https://ps.is.tue.mpg.de/pages/outdoor-aerial-motion-capture-system.