Aamir Ahmad

Aamir Ahmad leads the Flight Robotics and Perception Group (FRPG) at the University of Stuttgart. He is deputy director (research) of the Institute of Flight Mechanics and Controls (IFR) at the University of Stuttgart and a tenure-track professor (chair 'Flugrobotik'). He is also a research group leader at the Max Planck Institute for Intelligent Systems in Tübingen, Germany, where he was previously a research scientist (2016–2020). His main research focus is on perception and control in multi-robot systems, with a particular emphasis on autonomous aerial robots. He has attracted research funding from the European Commission (H2020), Cyber Valley and the Max Planck Institute's Grassroots program. In 2013, he was awarded the prestigious Marie Curie Intra-European Fellowship, following which he was a postdoctoral fellow at the Max Planck Institute for Biological Cybernetics, Tübingen (2014–2016). He obtained his PhD (awarded with merit) from the University of Lisbon, Portugal in 2013 and his B.Tech. (Honours) from the Indian Institute of Technology, Kharagpur (2008). Ahmad's publications at the IEEE European Conference on Mobile Robotics have twice (2011, 2015) received the top-ten paper award. He is currently serving as an associate editor for the IEEE International Conference on Robotics and Automation (ICRA).

Eric Price

Eric Price is currently a researcher at the Institute of Flight Mechanics and Controls at the University of Stuttgart. He studied computer science in Stuttgart with a focus on flight robotics and aerial SLAM, then worked in industry and on open-source projects on distributed wireless sensor systems and aerial control, before joining the Perceiving Systems department at the Max Planck Institute for Intelligent Systems as a PhD student, working on perception-driven aerial formations.

Eric works at the intersection of visual perception, distributed sensor fusion and formation control, primarily for the application of aerial motion capture, but also for similar tasks that require a tight formation of vehicles to follow and observe or study a subject, such as a human or an animal. This involves real-time on-board computer vision, needed for "seeing" the subject in the camera image, modeling the appearance and behaviour of the subject, and then sharing and fusing this information among all vehicles in a flying formation. Furthermore, the vehicles need to anticipate their own motion, the subject's motion, the environment and the flight physics to dynamically establish and maintain a formation that ensures optimal observation. His latest work involves doing this with autonomous airships, which offer economical and safe flight characteristics but are challenging to control in formation.

Nitin Saini

Nitin is a PhD student at the Max Planck Institute for Intelligent Systems, Tübingen. He completed his Master's degree in Neural Information Processing from the University of Tübingen. Before that, he worked at Samsung Research India for two years as a senior software engineer. He completed his Bachelor's degree in Electronics Engineering from the Indian Institute of Technology Varanasi, India.

Nitin's research focus is on the estimation of 3D human pose and shape in outdoor scenarios using multiple UAVs. Each UAV is equipped with an RGB camera and onboard computation capabilities to process the camera input. He is interested in developing shape and pose estimation algorithms that can execute on each UAV's computation unit with minimal inter-UAV communication. He has developed several methods in this context, ranging from optimization-based to end-to-end learning-based. The optimization-based approach considers 2D joint detections as measurements, while the end-to-end approach estimates 3D pose and shape parameters directly from images.

Yu Tang Liu

Yu Tang Liu is currently a PhD student at the Max Planck Institute for Intelligent Systems, Tübingen. He completed his Master's degree at the Max Planck Institute for Intelligent Systems and the Technical University of Cologne. Before that, he did his Bachelor's in Electrical Engineering at National Tsing Hua University, Taiwan. His research focuses on intelligent and data-driven control for autonomous robots.

Yu Tang's research focuses on designing multi-task reinforcement learning algorithms for robot control. A key aspect he explores is that RL agents can learn a new task by leveraging previous solutions. In the context of airship control, he has developed a solution in which he first employs a PID controller to provide baseline performance. Subsequently, a deep residual reinforcement learning agent learns to modify the PID decisions by interacting with the environment. Through rigorous simulation experiments, he has shown that the agent is robust to changes in wind speed and buoyancy. He has also contributed to the development of a high-fidelity blimp simulation framework based on ROS and Gazebo. Furthermore, his interests include multi-agent deep reinforcement learning methods for cooperative perception tasks, such as motion capture.
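The residual-control idea described above can be sketched in a few lines: a classical PID controller supplies a baseline action, and a learned policy adds a small correction on top. This is a minimal illustration with hypothetical names and a simplified one-dimensional setpoint task, not the actual controller used in the airship work.

```python
import numpy as np

class PID:
    """Simple PID controller providing the baseline action."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, error):
        # Standard PID law on the tracking error.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def residual_control(pid, policy, state, setpoint):
    """Combine the PID baseline with a learned residual correction.

    `policy` stands in for a trained RL agent mapping observations to a
    correction term; here it is any callable. If the policy outputs zero,
    behaviour falls back to the plain PID baseline.
    """
    baseline = pid.act(setpoint - state)
    correction = policy(np.array([state, setpoint]))
    return baseline + correction
```

With an untrained (zero-output) policy the controller behaves exactly like the PID baseline, which is what makes the residual formulation a safe starting point for learning.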

Elia Bonetto

Elia Bonetto received his B.Sc. in computer engineering from the University of Padova, Italy, in 2017, where he also obtained the M.Sc. degree in ICT for Internet and Multimedia – Cybersystems (cum laude). He is currently pursuing his Ph.D. within the IMPRS-IS program at the Max Planck Institute in Tübingen, Germany. His main research interests are computer vision and robotics and the interaction between these two fields. His current research focus is Active SLAM in dynamic environments.

Initially, Elia focused on developing an Active Visual SLAM method that could efficiently and autonomously explore an environment. This resulted in a multi-layered approach that works by optimizing the robot's heading on both the global and local scales. The method, iRotate, produced exploration paths up to 39% shorter than state-of-the-art methods. To extend iRotate to different platforms, i.e. semi- and non-holonomic ones, he also developed a custom rotational camera joint. This further lowered energy consumption by reducing wheel rotation, thanks to more flexible control. The proposed joint state estimation also proved its efficacy by reducing the trajectory error by up to 40%. Currently, he is working on generating a dataset of dynamic indoor environments to extend this work to more realistic scenarios. Besides that, he has also worked on markerless aerial human body pose and shape estimation within the AirPose and AirCap projects at FRPG.

Pascal Goldschmid

Following a lifelong interest in aviation and autonomous systems, Pascal Goldschmid obtained a Bachelor's degree in 2017 and a Master's degree in 2020 in Aerospace Engineering from the University of Stuttgart. He joined the International Max Planck Research School for Intelligent Systems in 2021 and is now pursuing a PhD in the field of aerial robotics. His work applies reinforcement learning combined with classical control approaches to enable autonomous landing and docking of multi-rotor UAVs.

Pascal's research currently focuses on the development of approaches enabling autonomous landing and docking of multi-rotor vehicles. In a first step, a reinforcement learning-based method and simulation framework has been developed to learn the landing maneuver of a multi-rotor vehicle on a platform moving in 2D, exclusively through interaction with the simulated environment. Special focus was placed on a short training time, a high rate of successful landings and interpretable hyperparameters. Future work is planned to address an extended problem, where the platform is allowed to move in 3D space. For this purpose, a combination of a non-linear model predictive controller (NMPC) and an artificial neural network (ANN) shall be explored. The purpose is to leverage the best of both worlds: the NMPC's ability to consider constraints and the learning capabilities of an ANN.
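One ingredient of such an RL formulation is the reward signal that the agent receives during training. The sketch below shows a hypothetical shaped reward for landing on a moving platform: it penalizes relative position and velocity errors and grants a bonus for a gentle, centered touchdown. The function name, weights and thresholds are illustrative assumptions, not the reward used in the actual work.

```python
import numpy as np

def landing_reward(uav_pos, platform_pos, uav_vel, platform_vel,
                   touchdown, w_pos=1.0, w_vel=0.1, success_bonus=10.0):
    """Hypothetical shaped reward for landing on a moving platform.

    Penalizes the relative position and velocity between UAV and
    platform, and adds a bonus when touchdown occurs close to the
    platform center with a small relative velocity.
    """
    pos_err = np.linalg.norm(np.asarray(uav_pos) - np.asarray(platform_pos))
    vel_err = np.linalg.norm(np.asarray(uav_vel) - np.asarray(platform_vel))
    reward = -(w_pos * pos_err + w_vel * vel_err)
    if touchdown and pos_err < 0.2 and vel_err < 0.5:
        reward += success_bonus  # reward only gentle, centered landings
    return reward
```

Shaping of this kind is one common way to keep training times short, since the agent receives informative feedback at every step rather than only at touchdown.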

Pranav Khandelwal

Pranav received his PhD from the University of North Carolina at Chapel Hill, investigating the gliding behavior of flying lizards in their natural environment. His expertise lies at the interface of physics, biology, and computer vision, allowing him to explore locomotion in biological systems in real-world situations using non-invasive measurement techniques. He is especially interested in how organisms (individually and in groups) sense and move in their complex natural habitats to perform their day-to-day activities.

In the Flight Robotics and Perception Group, he is working on translating the motion capture data collected from the field experiments using drones and blimps into insightful behavioral findings on how animals move in their natural habitat. Additionally, he hopes to further develop non-invasive motion capture techniques to study animal behavior in the wild and make them accessible for the large community of animal ecologists and biomechanists.

Christian Gall

Christian Gall is a Ph.D. student at the University of Stuttgart’s Institute of Flight Mechanics and Controls in Germany. His research interest lies in developing reinforcement learning methods for autonomous soaring. He received his B.S. and M.S. degrees in Aerospace Engineering from the University of Stuttgart, where he specialized in system dynamics and automation engineering. In addition, he spent a year at the University of Virginia in the U.S., focusing on robotics and machine learning.

There are many applications for electrically propelled unmanned aerial vehicles (UAVs), such as search and rescue. However, the high energy consumption of these UAVs limits their range. In the case of fixed-wing aircraft, the range can be significantly improved by exploiting energy from the atmospheric environment. Glider pilots, for instance, combine experience, skill, knowledge, and perception to localize and exploit updrafts, which allows them to cover long distances. The objective of Christian Gall's research is to develop methods that allow fixed-wing UAVs to maximize their range by autonomously localizing and exploiting updrafts in a similar way. This poses a complex decision-making problem, in which the UAVs need to trade off the short-term-rewarding action of tracking their path against the long-term-rewarding actions of localizing or exploiting hard-to-model updrafts. Therefore, reinforcement learning, which enables autonomous agents to learn decision-making tasks by directly interacting with the environment, shall be investigated.
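The short-term versus long-term trade-off described above can be made concrete with discounted returns, the quantity an RL agent maximizes. The toy numbers below are illustrative assumptions: tracking the path yields a steady small reward, while detouring to an updraft costs reward now but pays off later once altitude (and thus range) is gained.

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted sum of a reward sequence, as maximized by an RL agent."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

# Hypothetical per-step rewards over a 50-step horizon:
track = [1.0] * 50                      # keep tracking the planned path
detour = [-0.5] * 10 + [2.0] * 40       # short-term cost, long-term gain
```

Even though the detour is penalized for its first ten steps, its discounted return exceeds that of pure path tracking, which is exactly the kind of delayed-payoff decision that makes the problem hard for myopic controllers and well-suited to reinforcement learning.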

Nico Klar

Nico Klar is a research associate at the Centre for Solar Energy and Hydrogen Research Baden-Württemberg (ZSW) and a PhD student at FRPG. He studied electrical engineering and information technology at the University of Applied Sciences in Constance (B.Eng.) and obtained his master's degree at the University of Stuttgart. During his studies, he focused on autonomous robots and worked in research and development at Bosch GmbH and Daimler AG. In 2020 he started as a research assistant at the ZSW in Stuttgart in the Group for Artificial Intelligence Applications. His work focuses on object and pattern recognition, classification and tracking of aerial objects.

In 2023, Nico joined the Flight Robotics and Perception Group as a PhD student. His scientific work is oriented towards the project "Bird and Bat Recorder 2.0" (BBR 2.0), which is funded by the Federal Ministry for Economic Affairs and Climate Action of Germany (BMWK). The aim is to reconcile the protection of birds and nocturnal animals with the legal regulations governing the targeted shutdown of wind turbines. An intelligent system of LiDAR devices, cameras and laser range finders monitors the airspace at a distance of up to 800 meters. If a protected animal approaches, the sensors detect and classify it using deep neural networks. Specifically, his doctoral research aims to process a massive dataset of birds of prey in order to develop, train and evaluate species recognition models. It will also focus on real-time tracking of the animals, which is particularly challenging at night. Finally, the goal is to develop a network of smart agents that can be modularly adapted to wind farm layouts and offer an alternative to the legal requirements for blanket shutdown times.

Chenghao Xu

Chenghao is a Master's student in Robotics at Delft University of Technology (TU Delft), the Netherlands. He obtained his Bachelor's degree in Mechanical Engineering with Excellent Graduate Honor from the Southern University of Science and Technology (SUSTech), Shenzhen, China. Currently, he is working as a student assistant at the Robot Perception Group in the Perceiving Systems Department. His research interests lie in robot planning and perception, especially at the intersection of vision-based navigation and machine learning.