WildCap: Autonomous Non-Invasive Monitoring
of Animal Behavior and Motion
Project Goal: Continuous, accurate, and on-board inference of the behavior, pose, and shape of animals from multiple, unsynchronized, close-range aerial images acquired in the animals' natural habitat, without any sensors or markers on the animals.
Videos of Latest Results
Our autonomous tracking and following methods enable a multi-rotor drone to track various species at the Mpala Conservancy in April 2025.
Our cooperative tracking and following methods enable two multi-rotor drones to track various species at the Mpala Conservancy in April 2025.
Objective 1: Development of novel aerial platforms for tracking and following animals
Autonomous Systems for Non-Invasive Monitoring of Animal Behavior and Motion [8, 10]
Objective 2: Novel methods for animal behavior, pose and shape estimation
A Framework for Fast, Large-scale, Semi-Automatic Inference of Animal Behavior from Monocular Videos [1, 4]
Objective 3: Autonomous control methods for multiple aerial robots to maximize the accuracy of animal behavior, pose and shape inference
Perception-driven Formation Control of Airships [2]
Reinforcement Learning-based Airship Control [5, 6]
Reinforcement Learning-based Autonomous Landing on Moving Platforms [9]
Videos of WildCap's Results
Project Description
Overview: Automatically inferring animal behavior at scale, such as detecting whether animals are standing, grazing, or interacting with their environment, is crucial for addressing key ecological challenges. Achieving this across large spatial and temporal scales remains a fundamental goal in wildlife research. Additionally, real-time estimation of an animal's 3D pose and shape can aid in disease diagnosis, health profiling, and high-resolution behavior inference. However, conducting behavior analysis and pose estimation in the wild, without relying on physical markers or body-mounted sensors, presents a significant challenge. Current state-of-the-art methods often require GPS collars, IMU tags, or stationary camera traps, which are difficult to scale across vast landscapes and pose risks to the animals. In WildCap (2021-2026), a project funded by Cyber Valley in Germany, we have developed autonomous systems that estimate the behavior, pose, and shape of endangered wild animals without physical interference. We have introduced vision-based aerial robots that detect, track, and follow animals using novel control methods while performing behavior, pose, and shape estimation on board. To enhance vision-based inference, we have also developed synthetic data generation pipelines and validated our methods in extensive field missions and experiments.
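To give a flavor of what perception-driven following involves, the sketch below shows a minimal proportional controller that keeps a detected animal centered in the camera image at a roughly constant apparent size. This is purely illustrative and not the project's actual code: the class, function, and gain values (`BBox`, `follow_command`, `k_yaw`, `k_alt`, `k_fwd`, `target_area`) are all hypothetical names chosen for this example.

```python
from dataclasses import dataclass

@dataclass
class BBox:
    """Detected animal bounding box in normalized image coordinates [0, 1]."""
    cx: float    # horizontal center of the box
    cy: float    # vertical center of the box
    area: float  # fraction of the image covered by the box

def follow_command(det: BBox, target_area: float = 0.05,
                   k_yaw: float = 1.0, k_alt: float = 1.0, k_fwd: float = 2.0):
    """Map one detection to a (yaw_rate, climb_rate, forward_vel) command.

    Proportional terms drive the box center to the image center and the
    box area to a target size, i.e. a roughly constant standoff distance.
    """
    yaw_rate = k_yaw * (det.cx - 0.5)               # turn toward the animal
    climb_rate = -k_alt * (det.cy - 0.5)            # re-center it vertically
    forward_vel = k_fwd * (target_area - det.area)  # hold standoff distance
    return yaw_rate, climb_rate, forward_vel
```

With the animal centered at the target size, all commands are zero; if the animal drifts right in the image, the yaw term turns the drone toward it. Real systems of the kind described here additionally need state estimation, occlusion handling, and safety limits, which this sketch omits.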
Project Publications