Goodrich’s paper presents the design of a human-robot system with adjustable autonomy, describing not only the prototype interface but also the corresponding robot behaviors. The research focuses on developing a human-centered robot design concept that applies across multiple robot settings. The objective is to allow a single human operator to interact with multiple robots while maintaining a reasonable workload and team efficiency.

One of the most common issues with such a system is the time delay in communication between the robots and the human controller. As the level of neglect changes, an autonomy mode must be chosen that compensates for that neglect: schemes devised for large time delays suit conditions of high neglect, and schemes devised for small time delays suit conditions of low neglect. The rule extrapolated from these observations is that as the autonomy level increases, the breadth of tasks a robot can handle decreases.
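The relationship between neglect and autonomy can be sketched as a simple mode-selection rule. The mode names and thresholds below are illustrative assumptions, not values from Goodrich’s paper:

```python
from enum import IntEnum

class AutonomyMode(IntEnum):
    """Illustrative autonomy modes, ordered from least to most autonomous."""
    TELEOPERATION = 0       # continuous human control, tolerates little neglect
    SAFE_TELEOPERATION = 1
    SHARED_CONTROL = 2
    FULL_AUTONOMY = 3       # tolerates long neglect, narrowest task breadth

def select_mode(neglect_time_s: float) -> AutonomyMode:
    """Pick the lowest autonomy mode that compensates for the expected neglect."""
    # Hypothetical thresholds in seconds; real values would be calibrated
    # per robot and per task.
    if neglect_time_s < 1.0:
        return AutonomyMode.TELEOPERATION
    if neglect_time_s < 5.0:
        return AutonomyMode.SAFE_TELEOPERATION
    if neglect_time_s < 30.0:
        return AutonomyMode.SHARED_CONTROL
    return AutonomyMode.FULL_AUTONOMY
```

Choosing the *lowest* sufficient mode reflects the trade-off noted above: higher autonomy tolerates more neglect but narrows the breadth of tasks the robot can handle.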

By combining techniques from behavior-based robotics with human-centered automation, a usable interface that facilitates adjustable autonomy can be developed and applied to multi-human, multi-robot interaction. Because this interaction is central to the design of the system, it is useful to first reduce the scope of the problem to one human operator and one autonomous robot. Parasuraman’s “A Model for Types and Levels of Human Interaction with Automation” provides a framework for modeling this autonomy in the system. The main issue with Parasuraman’s model lies in the decision process: the human operator must understand the system’s decisions and remain aware of any wrong ones it makes. Goodrich’s paper discusses several alternatives, including one that addresses this problem.
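For context, Parasuraman’s framework builds on the Sheridan–Verplank ten-level scale of automation. The wording below is a paraphrase of that scale, not a quotation from the paper:

```python
# Paraphrase of the Sheridan–Verplank levels of automation that
# Parasuraman's model builds on, from full manual control (1) to
# full autonomy (10).
AUTOMATION_LEVELS = {
    1: "The computer offers no assistance; the human does everything.",
    2: "The computer offers a complete set of action alternatives.",
    3: "The computer narrows the selection down to a few alternatives.",
    4: "The computer suggests one alternative.",
    5: "The computer executes the suggestion if the human approves.",
    6: "The computer allows the human limited time to veto before acting.",
    7: "The computer acts automatically, then necessarily informs the human.",
    8: "The computer acts and informs the human only if asked.",
    9: "The computer acts and informs the human only if it decides to.",
    10: "The computer decides and acts autonomously, ignoring the human.",
}
```

The decision-process issue noted above arises in the middle of this scale, where authority is shared and the operator must track which choices the system is making on its own.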

The model that could resolve how the human operator interacts with the autonomous system composes the system of three agents: a human operator, an interface agent, and a robot agent. The human operator sets the bounds within which the robot has authority to initiate behaviors, and the interface agent can initiate switches within these bounds. This mediator/interface agent provides a compromise in the relationship between the human operator and the autonomous system itself. The interface could be extended to let an operator interrupt a robot’s behaviors for a time and then allow the robot to return to its previous task when ready.
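The three-agent arrangement can be sketched minimally as follows. The class and method names are assumptions for illustration, not from Goodrich’s paper:

```python
class RobotAgent:
    """Executes tasks; can be interrupted and later resumed."""
    def __init__(self):
        self.current_task = None
        self._suspended = None

    def start(self, task: str) -> None:
        self.current_task = task

    def interrupt(self) -> None:
        # Remember the task so the robot can return to it when ready.
        self._suspended = self.current_task
        self.current_task = None

    def resume(self) -> None:
        if self._suspended is not None:
            self.current_task = self._suspended
            self._suspended = None

class InterfaceAgent:
    """Mediates between operator and robot within operator-set bounds."""
    def __init__(self, robot: RobotAgent, allowed_modes):
        self.robot = robot
        self.allowed_modes = set(allowed_modes)  # bounds set by the operator
        self.mode = None

    def switch_mode(self, mode: str) -> bool:
        # The interface agent may only initiate switches inside the bounds
        # the human operator has authorized.
        if mode in self.allowed_modes:
            self.mode = mode
            return True
        return False
```

Here the operator’s authority is expressed as `allowed_modes`; the interface agent acts as the compromise layer, and the interrupt/resume pair captures the proposed extension.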

There are many possible ways to implement adjustable autonomy in an autonomous system. If a human operator is going to work in tandem with such a system, some level of decision making must be delegated to the system, while the human operator remains able to switch tasks or change a decision when the system makes a wrong choice. This combination creates a workable shared environment.
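This delegation-with-override pattern can be sketched in a few lines. The names below are hypothetical, chosen only to illustrate the idea:

```python
from typing import Optional

class DelegatedDecision:
    """A decision the system makes on its own unless the operator intervenes."""
    def __init__(self, system_choice: str):
        self.system_choice = system_choice
        self.override: Optional[str] = None

    def operator_override(self, choice: str) -> None:
        # The operator can change a decision they judge to be wrong.
        self.override = choice

    def commit(self) -> str:
        # The system's choice stands unless the operator intervened.
        return self.override if self.override is not None else self.system_choice
```

The system retains initiative (its choice is the default), while the operator keeps final authority, which is the balance the paragraph above describes.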

Reference:

  1. Goodrich, Michael A. “Experiments in Adjustable Autonomy.” <http://acs.ist.psu.edu/misc/dirk-files/Papers/HRI-papers/Experiments%20in%20adjustable%20autonomy%20.pdf>