Autonomous agents are systems that inhabit a dynamic, unpredictable environment in which they try to satisfy a set of time-dependent goals or motivations. Agents are said to be adaptive if they improve their competence at dealing with these goals based on experience. Autonomous agents are strongly inspired by biology, in particular ethology, the study of animal behavior. Examples of agents include virtual actors in interactive training and entertainment systems, interface agents, and process scheduling systems.

An agent senses its environment through sensors and acts upon it through actuators. An agent is autonomous if it is completely self-contained: it decides for itself how to relate its sensor data to motor commands in such a way that its goals are attended to successfully, and it monitors the environment to figure out by itself what the next problem or goal to be addressed is. An agent is adaptive if it becomes better at achieving its goals with experience. Agents have multiple integrated competences. The agent is directly connected to its problem domain through its sensors and actuators, and it can affect or change this domain through those actuators. The problem domain is typically dynamic, which means the system has a limited amount of time to act.
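
To make the sense-act coupling concrete, here is a minimal sketch of such a loop in Python. The `World` class, its drifting target, and all names are illustrative assumptions, not anything from the paper; the point is only that the agent maps sensor readings directly to motor commands in an environment that keeps changing underneath it.

```python
import random
from dataclasses import dataclass

# Hypothetical one-dimensional world (an assumption for illustration):
# the agent tries to stay near a target that drifts on its own.

@dataclass
class World:
    agent_pos: int = 0
    target_pos: int = 5

    def sense(self) -> int:
        """Sensor reading: signed distance from agent to target."""
        return self.target_pos - self.agent_pos

    def act(self, move: int) -> None:
        """Actuator command: move left (-1), stay (0), or right (+1)."""
        self.agent_pos += move
        # The environment is dynamic: the target drifts regardless of
        # the agent, so the agent has limited time to act on any reading.
        self.target_pos += random.choice([-1, 0, 1])


def agent_step(world: World) -> None:
    """One sense-decide-act cycle: relate sensor data to a motor command."""
    distance = world.sense()
    move = (distance > 0) - (distance < 0)  # sign of the distance
    world.act(move)


if __name__ == "__main__":
    world = World()
    for step in range(20):
        agent_step(world)
        print(f"step {step}: agent={world.agent_pos} target={world.target_pos}")
```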

Agent research places a strong emphasis on "adaptation" and on a "developmental approach": the system improves its own internal structures over time, based on its experience in the environment. The resulting agent demonstrates adaptive, robust, and effective behavior, exploring and updating its structures using an incremental, inductive learning method. Adaptive in this context means the agent improves its goal-oriented competence over time. Robust means that it never completely breaks down. Effective means that the agent is successful at achieving its goals. Two guiding principles drive this research: 1) looking at complete systems often changes the problems in a favorable way, and 2) interaction dynamics can lead to emergent complexity. What is important is that such emergent complexity is often more robust, flexible, and fault-tolerant than programmed, top-down organized complexity.

Systems built on these principles yield agents with characteristic strengths. They act quickly because 1) they have fewer layers of information processing, 2) they are more distributed and often non-synchronized, and 3) they require less expensive computation. They are robust because 1) no module is more critical than another, 2) they do not attempt to fully understand the current situation, 3) they incorporate redundant methods, and 4) they adapt over time. Specific architectures address adaptation to the environment, since building these systems by hand is often a difficult and tricky task: ethology-based models tend to have many parameters that must be tuned to obtain the desired behavior. These learning architectures can be grouped into three classes: reinforcement learning systems, classifier systems, and model learners; a minimal sketch of the first class follows below.
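
Of the three classes, reinforcement learning systems are perhaps the easiest to sketch. Below is a minimal tabular Q-learning example in Python; the two-state toy environment, reward scheme, and parameter values are assumptions made for illustration and do not come from the paper. It shows the incremental, inductive flavor of these learners: the agent improves its action-value estimates purely from experienced rewards.

```python
import random

# Illustrative two-state world (an assumption, not from the paper):
# advancing from state 1 reaches state 0 and earns a reward.
STATES = [0, 1]
ACTIONS = ["advance", "retreat"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state: int, action: str) -> tuple[int, float]:
    """Toy environment dynamics: returns (next_state, reward)."""
    if action == "advance":
        return 0, (1.0 if state == 1 else 0.0)
    return 1, 0.0

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state = 1
for _ in range(1000):
    # Epsilon-greedy selection: mostly exploit, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    next_state, reward = step(state, action)
    # Incremental update: nudge the estimate toward the observed reward
    # plus the discounted value of the best next action.
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in q.items()})
```

Even this toy example hints at the tuning burden the paper describes: the behavior depends on hand-set parameters (ALPHA, GAMMA, EPSILON), and the table of values grows with the number of states and actions, which foreshadows the scaling problem discussed next.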

A remaining issue is that the computational complexity of all the learning systems discussed is too great for them to be practically useful for building complex agents that solve real problems. The main problem identified here is that of scaling the approach to larger, more complicated systems. Research in autonomous agents has adopted very task-driven, pragmatic solutions; as a result, the agents built using this approach end up looking more like a "bag of hacks and tricks" than the embodiment of a set of more general laws and principles.

Reference:

  1. Maes, Pattie. “Modeling Adaptive Autonomous Agents.” <http://robotics.usc.edu/~maja/teaching/cs584/papers/maes94modeling.pdf>