In recent years, the use of Unmanned Aerial Vehicles (UAVs) in different markets has increased significantly. Currently, the most important markets are aerial video/photography, precision farming and surveillance/monitoring. Ongoing technical development, e.g. in size, endurance and usability, enables the use of UAVs in even more areas. First prototypes for delivering goods (e.g. DHL or Amazon) or even people (e.g. eHang 184) are already available.

In order to improve usability and safety, UAVs are equipped with several sensors, such as laser scanners and cameras, to monitor the environment. One important aspect of this monitoring process is detecting objects in the flight path in order to avoid collisions.
The goal of the UAV use case in TULIPP is to estimate depth images from a stereo camera setup that is mounted on a UAV and oriented in the direction of flight. These depth images are then used to detect objects in the flight path and to avoid collisions in later processing stages. This implies onboard processing under real-time constraints. Additionally, low-weight and low-power constraints are inherent to all UAVs.

Available systems are complex, as they often require specific processing modules, an inertial measurement unit (IMU) and a state estimator to calculate a reliable local depth map. In TULIPP, we focus on simple and user-friendly, yet sophisticated and adjustable depth-map generation, running in real time on embedded hardware with low power consumption. Although, in the UAV use case, the depth image is used to perform collision avoidance, such measurements have a wide range of applications across multiple domains.
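To illustrate the principle behind stereo depth-map generation, the sketch below implements naive SAD (sum of absolute differences) block matching and the standard pinhole relation Z = f·B/d, which converts a disparity d (in pixels) into a depth Z given the focal length f (in pixels) and the stereo baseline B (in metres). This is a minimal illustrative example, not the TULIPP implementation; real systems use rectified images, sub-pixel refinement and hardware-friendly matching costs, and the focal length and baseline values shown are purely hypothetical.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, block=5):
    """Naive SAD block matching on a rectified stereo pair.

    For each pixel in the left image, search horizontal shifts d in
    [0, max_disp) of the right image and keep the shift with the
    smallest sum of absolute differences over a block x block window.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Z = f * B / d; pixels with zero disparity map to infinity."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1e-9), np.inf)

# Synthetic rectified pair: the right view sees every point shifted
# left by the true disparity (right[y, x] = left[y, x + d_true]).
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40)).astype(np.uint8)
d_true = 4
right = np.zeros_like(left)
right[:, :-d_true] = left[:, d_true:]

disp = block_match_disparity(left, right, max_disp=8, block=5)
# Hypothetical calibration: 400 px focal length, 10 cm baseline.
depth = disparity_to_depth(disp, focal_px=400.0, baseline_m=0.1)
```

With the synthetic shift of 4 pixels, interior pixels recover a disparity of 4, which the chosen (hypothetical) calibration maps to a depth of 10 m; nearby obstacles produce large disparities and small depths, which is exactly the signal collision avoidance needs.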

Figure 1: Example of image-based stereo estimation. Top: Reference frames for which the range is estimated. Bottom: Estimated range image (color-coded). Red: near. Blue: far.