WildCap

Autonomous Non-Invasive Monitoring of Animal Behavior and Motion

Funding Source: Cyber Valley

 

Open Source Code:

 

Motivation: Inferring animal behavior, e.g., whether animals are standing, grazing, running or interacting with each other and with their environment, is a fundamental requirement for addressing today's most important ecological problems. In addition, estimating the 3D pose and shape of animals in real time would directly support applications such as disease diagnosis and health profiling, and could enable behavior inference at very high resolution. Doing both in the wild, without any markers or sensors on the animal, is an extremely challenging problem. State-of-the-art methods for animal behavior, pose and shape estimation either require sensors or markers on the animals (e.g., GPS collars and IMU tags), or rely on camera traps fixed in the animal's environment. Not only do these methods endanger the animals through tranquilization and physical interference, they are also difficult to scale to larger numbers of animals, vast environments and longer time periods. In WildCap, we are developing autonomous methods for estimating the behavior, pose and shape of endangered wild animals, which will address the aforementioned issues. Our methods do not require any physical interference with the animals. Our novel approach is to develop a team of intelligent, autonomous, vision-based aerial robots that will detect, track and follow wild animals and perform the behavior, pose and shape estimation tasks.

 

Goals and Objectives:

WildCap's goal is to achieve continuous, accurate and on-board inference of the behavior, pose and shape of animal species from multiple, unsynchronized and close-range aerial images acquired in the animals' natural habitat, without any sensors or markers on the animals, and without modifying the environment. In pursuit of this goal, the key objectives of the project are:
  1. Animal behavior, pose and shape estimation methods from multiple unsynchronized images (a toy sketch follows this list).
  2. Development of novel aerial platforms for tracking and following animals.
  3. Formation control methods for multiple aerial robots to maximize the accuracy of animal behavior, pose and shape inference.
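
As a toy illustration of the first objective, the snippet below triangulates a single animal keypoint from two aerial views using the standard linear (DLT) triangulation method. This is only a sketch, not WildCap's actual method: it assumes, for simplicity, that the two views are calibrated and synchronized, whereas handling unsynchronized views is precisely one of the project's research challenges. All names and values (e.g., triangulate, the camera matrices) are illustrative placeholders.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Triangulate a 3D point seen in two calibrated views.

        P1, P2: 3x4 camera projection matrices.
        x1, x2: (u, v) pixel coordinates of the same keypoint in each view.
        """
        # Each observation gives two linear constraints on the homogeneous 3D point X.
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The least-squares solution of A @ X = 0 is the right singular vector
        # associated with the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # de-homogenize

    # Two synthetic cameras, one meter apart, both observing the point (0, 0, 1).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.0, 0.0, 1.0, 1.0])
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, x1, x2))  # approximately [0. 0. 1.]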

Methodology: Aerial robots with long flight endurance and sufficient payload capacity are critical for continuous, long-distance tracking of animals in the wild. To this end, we are developing novel systems, in particular lighter-than-air vehicles, that address these requirements. Furthermore, we are developing formation control strategies for such vehicles that maximize both the visual coverage of the animals and the accuracy of their state estimates. Finally, we are leveraging learning-in-simulation methods to develop algorithms for animal behavior, pose and shape estimation.
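
To convey the intuition behind the formation control strategies mentioned above, the sketch below assigns N robots to equally spaced bearing angles on a circle around the estimated animal position, so that their viewpoints cover the animal from complementary directions. This is a simplified illustration of the coverage idea, not the actual controller developed in the project (see [2]); the function name, radius, altitude and robot count are illustrative placeholders.

    import numpy as np

    def formation_waypoints(animal_xy, n_robots=3, radius=12.0, altitude=8.0):
        """Desired robot positions, spaced 2*pi/n_robots apart around the animal."""
        angles = 2.0 * np.pi * np.arange(n_robots) / n_robots
        x = animal_xy[0] + radius * np.cos(angles)
        y = animal_xy[1] + radius * np.sin(angles)
        z = np.full(n_robots, altitude)  # constant illustrative flight altitude
        return np.stack([x, y, z], axis=1)

    # Example: three robots around an animal estimated at (100, 40) on the ground plane.
    print(formation_waypoints(np.array([100.0, 40.0])))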

 

Publications:

 

[1] Price, E., Khandelwal, P. C., Rubenstein, D. I., & Ahmad, A. (2023). A Framework for Fast, Large-scale, Semi-Automatic Inference of Animal Behavior from Monocular Videos. bioRxiv 2023.07.31.551177. https://doi.org/10.1101/2023.07.31.551177

[2] Price, E., Black, M. J., & Ahmad, A. (2023). Viewpoint-driven Formation Control of Airships for Cooperative Target Tracking. IEEE Robotics and Automation Letters, 8(6), 3653-3660. https://doi.org/10.1109/LRA.2023.3264727 (preprint: https://arxiv.org/abs/2209.13040)
 
[3] Bonetto, E., & Ahmad, A. (2023). Synthetic Data-based Detection of Zebras in Drone Imagery. IEEE European Conference on Mobile Robots (ECMR 2023), accepted June 2023. Preprint: https://is.mpg.de/uploads_file/attachment/attachment/718/ECMR_Zebra.pdf

[4] Price, E., & Ahmad, A. (2023). Accelerated Video Annotation driven by Deep Detector and Tracker. 18th International Conference on Intelligent Autonomous Systems (IAS 18), accepted April 2023. Preprint: https://arxiv.org/abs/2302.09590

[5] Zuo, Y., Liu, Y. T., & Ahmad, A. (2023). Autonomous Blimp Control via H∞ Robust Deep Residual Reinforcement Learning. 19th IEEE International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand. Preprint: https://arxiv.org/abs/2303.13929

[6] Liu, Y. T., Price, E., Black, M. J., & Ahmad, A. (2022). Deep Residual Reinforcement Learning based Autonomous Blimp Control. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, pp. 12566-12573. https://doi.org/10.1109/IROS47612.2022.9981182
 
[7] Saini, N., Bonetto, E., Price, E., Ahmad, A., & Black, M. J. (2022). AirPose: Multi-View Fusion Network for Aerial 3D Human Pose and Shape Estimation. IEEE Robotics and Automation Letters, 7(2), 4805–4812. https://doi.org/10.1109/LRA.2022.3145494

[8] Price, E., Liu, Y. T., Black, M.J., Ahmad, A. (2022). Simulation and Control of Deformable Autonomous Airships in Turbulent Wind. In: Ang Jr, M.H., Asama, H., Lin, W., Foong, S. (eds) Intelligent Autonomous Systems 16 (IAS 2021). Lecture Notes in Networks and Systems, vol 412. Springer, Cham. https://doi.org/10.1007/978-3-030-95892-3_46