Baker’s paper addresses the human-computer interaction problems of current urban search and rescue (USAR) interfaces by designing the interface on and around the main video window. A central problem the new interface tackles is that, under most circumstances, it is difficult for the operator to maintain situational awareness. Observations from the surveys conducted were translated into requirements for the new interface. To enhance awareness of the surroundings, the interface includes a map indicating where the robot has been. To lower the cognitive load on the operator, the interface presents fused sensor information rather than making the operator mentally combine data from multiple sources. To increase efficiency, the interface minimizes the use of multiple windows yet remains flexible enough to support multiple robots in a single window. Finally, to help the operator choose a robot modality, the interface offers four modes: teleoperation, safe, shared, and autonomous.
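
The four modes form a sliding scale from full operator control to full robot autonomy. As a rough sketch of how such a scheme might be wired up (my own illustration in Python; the mode names come from the paper, but the guard logic, command strings, and thresholds are hypothetical):

```python
from enum import Enum

class RobotMode(Enum):
    """The four modalities named in the paper, ordered from
    full operator control to full robot control."""
    TELEOPERATION = 1  # operator drives; robot never intervenes
    SAFE = 2           # operator drives; robot vetoes imminent collisions
    SHARED = 3         # robot drives; operator can redirect it
    AUTONOMOUS = 4     # robot drives itself toward a goal

def filter_drive_command(mode, command, min_range_m, safety_margin_m=0.3):
    """Illustrative guard for SAFE mode: pass operator commands through
    unless the fused range sensors report an obstacle inside the safety
    margin. (Hypothetical logic; the paper does not specify an algorithm.)"""
    if mode is RobotMode.SAFE and command == "forward" and min_range_m < safety_margin_m:
        return "stop"
    return command

if __name__ == "__main__":
    # Operator pushes forward, but the fused sonar/laser minimum is 0.2 m.
    print(filter_drive_command(RobotMode.SAFE, "forward", 0.2))           # -> stop
    print(filter_drive_command(RobotMode.TELEOPERATION, "forward", 0.2))  # -> forward
```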

This paper relates directly to the interface I used this past summer as part of a thesis project. An autonomous robot was placed at random points on a playing surface, and as the operator I had to use the interface to guide the robot through the course, looking for “victims” while minimizing contact with the walls. Each time the robot was teleported to a random area of the map, I had to localize it manually. I did this by panning the camera through a full 360-degree rotation and matching the geometric shapes I saw against the map provided to me. I remember from this study that I often forgot to re-center the camera after panning and tilting it. The crosshairs acted as a reference point that significantly reduced operational error. I found the interface uncluttered, and I liked the idea of sensor data represented as perimeter geometries.
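
The matching I was doing by hand amounts to a brute-force rotational alignment. A minimal sketch of the idea (my own simplification, not the actual system; it assumes the camera view and the map can each be reduced to lists of wall bearings in degrees):

```python
def best_rotation(observed_bearings, map_bearings, step_deg=1):
    """Find the rotation offset (in degrees) that best aligns the
    bearings of walls seen by the camera with those on the map.
    Brute force over all offsets, scoring by summed angular error."""
    def angular_error(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)

    best_offset, best_score = 0, float("inf")
    for offset in range(0, 360, step_deg):
        score = sum(
            min(angular_error((obs + offset) % 360, m) for m in map_bearings)
            for obs in observed_bearings
        )
        if score < best_score:
            best_offset, best_score = offset, score
    return best_offset

if __name__ == "__main__":
    # Walls seen at 10 and 100 degrees; map has walls at 55 and 145 degrees.
    print(best_rotation([10, 100], [55, 145]))  # -> 45
```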

One feature that did not help me as much as I expected was the rearview mirror effect; simply switching to the rear camera view worked better. This is probably because the operator’s point of interest is the full-motion video, and everything outside it feels superfluous. Looking at the original interfaces, I can see how unnecessarily complicated they were: too much rarely accessed information was packed into one window. One promising area of related research is heads-up displays (HUDs) in video games. Game HUDs have become very sophisticated, conveying large amounts of information in very small screen areas. I would not be surprised if a GUI developer from the video game industry came up with a nice interface for USAR robots.

Reference:

  1. Baker, Michael, et al. “Improved Interfaces for Human-Robot Interaction in Urban Search and Rescue.” <http://www.academia.edu/3336138/Analysis_of_Human-Robot_Interaction_for_Urban_Search_and_Rescue>