Saripalli’s paper presents a technique for landing autonomous helicopters. As with Ettinger’s “Vision Guided Flight Stability and Control for Micro Air Vehicles”, I like the simplicity of the approach. The idea is essentially to take an image of the ground while in a search mode, then downsample the image so that geometric shapes stand out. This works well because near-perfect geometric shapes rarely occur in nature, so an artificial landing target is easy to distinguish. The paper addresses a real difficulty with helicopters: autonomous landing is hard because a helicopter is unstable near the ground. What is interesting about this paper is that their solution is built on a behavior-based control architecture.
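The downsampling idea can be sketched in a few lines. This is my own minimal illustration, not the paper’s actual pipeline: the block size, the threshold, and the toy image are all assumptions. Averaging coarse tiles washes out fine natural texture, so a large, regular shape survives where clutter does not.

```python
def downsample(image, block=4):
    """Average non-overlapping block x block tiles of a 2-D grid.

    Coarsening suppresses fine natural texture so that large regular
    shapes (like a painted landing pad) dominate the result.
    """
    h = len(image) // block * block      # crop to a multiple of block
    w = len(image[0]) // block * block
    coarse = []
    for r in range(0, h, block):
        row = []
        for c in range(0, w, block):
            total = sum(image[r + i][c + j]
                        for i in range(block) for j in range(block))
            row.append(total / (block * block))
        coarse.append(row)
    return coarse

# Toy example: a bright square "pad" (value 255) on a dark background.
ground = [[0] * 64 for _ in range(64)]
for r in range(24, 40):
    for c in range(24, 40):
        ground[r][c] = 255

coarse = downsample(ground, block=4)
flagged = sum(cell > 128 for row in coarse for cell in row)
print(flagged)  # 16 coarse cells cover the 16x16-pixel pad
```

After thresholding the coarse image, the surviving cells form the candidate region that later stages would test for the expected geometric shape.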

Behavior-based architectures are interesting because they move the issues into the solution space instead of the implementation space. What I mean by this is that the problems of the autonomous helicopter can be described in terms of objects and how they interact, rather than in terms of the low-level limitations of the hardware, which become separate issues delegated to the implementation. For example, if we didn’t think about the helicopter’s task in terms of actions and objects, then we would be stuck reasoning about low-level algorithmic details from the start.

The behavior architecture is composed of three main action modes: search, object-track, and land. Once these top-level actions are defined, the focus can shift to implementing the algorithms that support each behavior. The image processing can become involved because of the different conditions to account for, such as noise and skewed images. After transforming the image and filtering out the noise, the algorithm segments out the relevant features to look for the geometric shape of the landing target.
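The three modes can be viewed as a simple state machine. The transition conditions below are my own guesses for illustration; the paper’s actual triggers are more involved than a pair of booleans.

```python
SEARCH, TRACK, LAND = "search", "object-track", "land"

def next_mode(mode, pad_in_view, centered):
    """Advance the top-level behavior from current observations.

    Transition conditions here are illustrative assumptions, not the
    paper's exact criteria.
    """
    if mode == SEARCH:
        return TRACK if pad_in_view else SEARCH
    if mode == TRACK:
        if not pad_in_view:
            return SEARCH            # lost the pad: fall back to searching
        return LAND if centered else TRACK
    return LAND                      # once committed, keep descending

# Walk through a plausible flight: find the pad, center on it, descend.
mode = SEARCH
trace = []
for pad_in_view, centered in [(False, False), (True, False),
                              (True, True), (True, True)]:
    mode = next_mode(mode, pad_in_view, centered)
    trace.append(mode)
print(trace)  # ['search', 'object-track', 'land', 'land']
```

Keeping the mode logic this small is part of the appeal of the behavior-based framing: each mode’s image-processing machinery can be developed and tested independently of the switching logic.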

The architecture itself is a standard behavior-based model in which low-level behaviors handle robot functions requiring quick response, while higher-level behaviors handle less time-critical needs. This is similar to a discussion we had in class: the low-level behaviors serve instinctual, self-preservation actions, while the higher layers act as the cognitive area where complicated tasks are computed. One issue with this architecture, though, is that higher-level functions can override the lower-level ones. That could lead to adverse actions by the helicopter if a higher-level function overrides a self-preservation function. This may not be a serious problem for this helicopter in particular, because the end goal of landing safely satisfies the self-preservation criterion while also being the very reason the higher-level functions are computed. In the end, the algorithms work very well in real time, which is vital for this helicopter to be successful.
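The override concern can be made concrete with a tiny arbiter. This is my illustration of the general layered-control idea, not the paper’s controller: whichever layer is placed first in the priority ordering wins whenever it proposes a command, so putting the cognitive layer first is exactly the arrangement that can mask a reflex.

```python
def arbitrate(layers):
    """Return the command of the highest-priority layer that fires.

    `layers` is ordered from highest priority to lowest; each entry is
    (name, command_or_None), where None means the layer has nothing
    to say this cycle.
    """
    for name, command in layers:
        if command is not None:
            return name, command
    return None, "hover"   # default safe action if nothing fires

# Near the ground, a reflex wants to climb away while the planner
# wants to continue its descent. The priority ordering decides.
reflex_first = [("reflex", "climb"), ("planner", "descend")]
planner_first = [("planner", "descend"), ("reflex", "climb")]
print(arbitrate(reflex_first))   # ('reflex', 'climb')
print(arbitrate(planner_first))  # ('planner', 'descend')
```

The second ordering is the adverse case described above; it happens to be benign here only because the planner’s goal (landing safely) coincides with what the reflex is protecting.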

Reference:

  1. Saripalli, Srikanth. “Vision-based Autonomous Landing of an Unmanned Aerial Vehicle.” <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.5586&rep=rep1&type=pdf>