Agent Types (COMP3411)

There are many different classifications for intelligent agents. For instance:

Reactive Agents

A reactive agent (sometimes called a 'simple reflex agent') chooses its next action using only the current percept, based on a set of simple condition-action rules.

They have no memory or internal state, so they can't use previous observations to avoid repeating actions or to prevent a past mistake from recurring.

They hence only work well in episodic environments, where each decision is independent of what came before.

E.g. Try playing 'Wumpus World' with a reactive agent - it can't remember where other breezes were perceived, so it can't narrow down which squares are safe.
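
A reactive agent can be sketched as a pure function from the current percept to an action. The percept fields and rules below are illustrative assumptions in the spirit of Wumpus World, not the course's exact formulation:

```python
def reactive_agent(percept):
    """Choose an action from the current percept alone -- no memory.

    The condition-action rules below are hypothetical examples:
    the agent reacts to what it senses right now and nothing else.
    """
    if percept.get("glitter"):    # gold is on this square
        return "grab"
    if percept.get("breeze"):     # a pit may be adjacent
        return "turn_back"
    return "move_forward"         # default rule

print(reactive_agent({"breeze": True}))   # -> turn_back
```

Because the function takes no history, giving it the same percept always produces the same action - which is exactly why it can loop forever or repeat mistakes.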

Model-Based Agents

Model-based agents keep a "map" of visited places and remember the data they perceived there.

This allows the agent to plan one step ahead (but only one).

The problem with being able to look into the past but not the future is that the agent can't handle complex tasks that require planning ahead - where the best next step isn't obvious from the current state alone.

E.g. Travelling to New York requires a long sequence of planned decisions (how to get to the airport, which flight to take, etc.).
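
A model-based agent can be sketched as a class that records what it has perceived at each visited position, and uses that record for one-step lookahead. The grid and percept representation here are illustrative assumptions:

```python
class ModelBasedAgent:
    """Keeps an internal world model: position -> percept seen there."""

    def __init__(self):
        self.world_model = {}

    def act(self, position, percept):
        self.world_model[position] = percept   # update the model
        # One-step lookahead: prefer an adjacent square not yet visited.
        x, y = position
        for neighbour in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if neighbour not in self.world_model:
                return ("move", neighbour)
        return ("wait", position)   # everything adjacent already visited

agent = ModelBasedAgent()
print(agent.act((0, 0), {"breeze": False}))   # -> ('move', (1, 0))
```

Unlike the reactive agent, this one avoids revisiting squares - but it still can't string several moves together into a plan.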

Planning Agents

Planning Agents contain a world model (map), as well as the ability to plan out a long path of decisions.

(Figure: Planning Agent)

The World Model can be represented by things such as:

  • Transition Tables
  • Dynamical Systems
  • Parametric Models
  • Knowledge Bases

The corresponding planning method for each of these world models might be:

  • State-Based Search
  • Simulation
  • Goals / utility
  • Logical Inference
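
The first pairing above (transition table as world model, state-based search as planning) can be sketched as a breadth-first search for a whole action sequence. The tiny travel map is an illustrative assumption, echoing the New York example:

```python
from collections import deque

# World model: a transition table mapping (state, action) -> next state.
transitions = {
    ("home", "bus"): "airport",
    ("home", "walk"): "station",
    ("station", "train"): "airport",
    ("airport", "fly"): "new_york",
}

def plan(start, goal):
    """State-based search (BFS): return a shortest action list, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for (s, action), nxt in transitions.items():
            if s == state and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None   # goal unreachable in the model

print(plan("home", "new_york"))   # -> ['bus', 'fly']
```

The key difference from the model-based agent: the search returns a full sequence of decisions before acting, rather than committing to a single next step.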

Learning

Learning is a set of techniques used to improve an agent's existing modules. It allows the agent to adapt to new situations, and enables us to design systems that couldn't be programmed by hand.

The complexity of learning and the complexity of application are distinct. An agent may take an extensive period of time to learn a rule or policy that is then instantaneous to apply - or vice versa.
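
The learning-vs-application distinction can be illustrated with a sketch: the learning phase scans many candidate rules (slow, done once), but the learned rule itself is a single comparison (fast, applied every step). The data and threshold rule here are made-up assumptions:

```python
# Toy labelled data: (sensor reading, label). Purely illustrative.
examples = [(1, "safe"), (2, "safe"), (8, "unsafe"), (9, "unsafe")]

def learn_threshold(data):
    """Slow phase: try every candidate threshold, keep the most accurate."""
    best, best_correct = None, -1
    for t in range(0, 11):
        correct = sum((x < t) == (label == "safe") for x, label in data)
        if correct > best_correct:
            best, best_correct = t, correct
    return best

threshold = learn_threshold(examples)   # expensive, run once

def classify(x):
    """Fast phase: applying the learned rule is one comparison."""
    return "safe" if x < threshold else "unsafe"

print(classify(2))   # -> safe
```

Here learning is linear in the number of candidates times the data size, while each application is O(1) - the two costs are independent of one another.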

See Machine Learning.