Unlike reactive agents and model-based agents, a planning agent can use search techniques to plan several steps ahead in order to reach the goal.

There are two classes of these planning agents:

- Uninformed: Can only differentiate between goal and non-goal states
- Informed: Uses heuristics to estimate how "close" a state is to the goal

# Steps to Solving a Problem

- Formulate the Goal
- Specify the Task
- Work out states
- Work out operators/actions (how to transition between states)

- Find the solution (sequence of actions)
- Execute the solution
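The steps above can be sketched as a pipeline: formulate the task, search for a solution offline, then execute it. This is a minimal sketch; `find_solution` stands in for any search procedure, and all names here are illustrative assumptions, not a fixed API.

```python
def solve_task(initial_state, is_goal, find_solution, execute):
    """Search for a solution (a sequence of actions), then execute it."""
    actions = find_solution(initial_state, is_goal)  # offline search
    if actions is None:
        return None                                  # no solution found
    for action in actions:
        execute(action)                              # act in the world
    return actions
```

Note the separation: the search happens entirely before the first action is executed.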

## Example

- Goal: Get to Bucharest in time
- Task: Drive from Arad to Bucharest
- States: Various cities
- Operators/Actions: Driving along roads

- The solution: List of cities, e.g. Arad, Sibiu, … Bucharest
- Execute the solution: Drive through the given cities
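The example above can be made concrete with a breadth-first search over a fragment of the Romania road map (the map fragment below is an assumption for illustration; road distances are ignored, so this finds *a* route, not necessarily the shortest).

```python
from collections import deque

# A small assumed fragment of the Romania road map (adjacency lists).
ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Rimnicu Vilcea", "Oradea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"],
    "Zerind": ["Arad", "Oradea"],
    "Oradea": ["Zerind", "Sibiu"],
    "Bucharest": [],
}

def find_route(start, goal):
    """Breadth-first search: returns a list of cities from start to goal."""
    frontier = deque([[start]])   # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path           # the solution: a list of cities
        for city in ROADS[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None
```

`find_route("Arad", "Bucharest")` produces a city list of exactly the form described above.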

# Single-State Task Specification

We define a task by:

- Initial State (e.g. "at Hornsby")
- State Space (all possible states) (e.g. "all stations along the North Shore line")
- Actions and Operators (or Successor Functions) (e.g. "Move one station North")
- Goal Test
  - Explicit test (e.g. x == goal)
  - Implicit test (a property the state must satisfy, e.g. distToGoal(x) == 0)

- Path Cost (e.g. the sum of distances travelled, actions taken, etc.)
- Total Cost (Search Cost + Path Cost)
- Solution (State-Action Sequence - initial state -> goal state path)
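The components above can be packaged into one task definition. This is a sketch, not a standard API: the attribute names, the station list, and the per-move cost of 1 are all assumptions.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Task:
    initial_state: Any
    successors: Callable  # state -> list of (action, next_state, step_cost)
    goal_test: Callable   # state -> bool (explicit or implicit check)

    def path_cost(self, steps):
        """Path cost as the sum of the step costs along a path."""
        return sum(cost for _action, _state, cost in steps)

# Usage: the train example above. The station order and goal are
# illustrative assumptions; each move costs 1.
stations = ["Hornsby", "Waitara", "Wahroonga"]
task = Task(
    initial_state="Hornsby",
    successors=lambda s: ([("next-station", stations[stations.index(s) + 1], 1)]
                          if s != stations[-1] else []),
    goal_test=lambda s: s == "Wahroonga",
)
```

The total cost would then be this path cost plus the search cost (time and memory spent searching).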

# Choosing States and Actions

In order to solve problems, the state space must be abstracted, because the real world is too complex. Hence the agent's model of the environment does not have to be an exact model of the real world.

## Example - 8-Puzzle

- States: Locations of the number-tiles
- Operators: Move Up, Down, Left, Right
- Goal Test: isGoal(currentState)
- Path Cost: numMoves
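The 8-puzzle specification above can be sketched as follows. Encoding the state as a tuple of nine tiles read row by row, with 0 for the blank, is an assumption (one common choice among several); here the moves describe the direction the blank slides.

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def is_goal(state):
    """Explicit goal test: compare against the goal configuration."""
    return state == GOAL

def successors(state):
    """Return (move, next_state) pairs by sliding the blank tile."""
    moves = []
    i = state.index(0)               # position of the blank
    row, col = divmod(i, 3)
    for move, dr, dc in (("Up", -1, 0), ("Down", 1, 0),
                         ("Left", 0, -1), ("Right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
        else:
            continue                 # move would leave the board
        s = list(state)
        s[i], s[j] = s[j], s[i]      # swap the blank with a neighbour
        moves.append((move, tuple(s)))
    return moves
```

The path cost is then simply the number of moves along a path, as in the list above.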

# Path Search Algorithms

To *search* is to find state-action sequences (solutions) that lead to desirable states. It's a function performed on a task.

It is an offline, simulated exploration of the state space, done by generating successors of already-seen states (i.e. by *expanding* them).

## Path Search Problems vs CSPs

In a path search problem (PSP), the easy part is knowing what the goal is; the hard part is getting there. e.g. Rubik's cube.

In a constraint satisfaction problem (CSP), the easy part is getting to the solution; the hard part is knowing what the solution is. e.g. N-Queens.
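The CSP side of this contrast can be illustrated with N-Queens: *checking* whether a placement satisfies all constraints is cheap, while *finding* one is the hard part. This sketch assumes the common encoding where `board[col]` is the row of the queen in that column.

```python
def is_nqueens_solution(board):
    """Goal test for N-Queens: no two queens share a row or diagonal."""
    n = len(board)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if board[c1] == board[c2]:                 # same row
                return False
            if abs(board[c1] - board[c2]) == c2 - c1:  # same diagonal
                return False
    return True                                        # all constraints hold
```

The check is a few lines; the search over the exponentially many candidate boards is where the work goes.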

## Generating Action-Sequences (Finding Solutions)

We're building up a search tree:

Start with the initial state; if it is not the goal, expand it by generating its successors, then repeat the process on one of them (making a choice between multiple candidates if necessary).
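That loop can be written as a generic tree search in which the strategy (which frontier node to expand next) is a parameter. This is a sketch with assumed names; note there is no visited set, so on a state space with cycles it can revisit states and run forever, which is exactly the search-tree-vs-state-space distinction discussed below.

```python
def tree_search(initial, is_goal, successors, choose_index=0):
    """Expand nodes from a frontier of (state, path-so-far) pairs."""
    frontier = [(initial, [initial])]
    while frontier:
        state, path = frontier.pop(choose_index)  # pick a node to expand
        if is_goal(state):
            return path
        # Expand: generate the successors of the chosen state.
        frontier.extend((s, path + [s]) for s in successors(state))
    return None
```

With `choose_index=0` the frontier behaves as a FIFO queue (breadth-first); with `choose_index=-1` it behaves as a stack (depth-first).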

## Search Tree vs State Space

A Search Tree is very different from a State Space.

A search tree is superimposed over a state space, and has the initial state as its root, and the states without successors as leaf nodes.

However, where a State Space contains only the possible states themselves, a Search Tree contains the (potentially infinitely many) paths between them, including the different ways of getting to each state.

Hence multiple nodes in the Search Tree can contain the same State.

### Data Structures for Search Tree Nodes

We need some kind of struct with all the information about the node and how we got there:

- Parent
- Operator applied to the parent (together with the parent link, this implicitly stores the state)
- Depth
- Path Cost

Once we have nodes, we need queuing functions that determine the order in which nodes are expanded.

We call the collection of nodes waiting to be expanded the *frontier*. This is essentially just what's in the queue.

It can be implemented as a Priority Queue.
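A sketch of such a node struct and a priority-queue frontier, using Python's `heapq`. The field names follow the list above (the state is stored explicitly here for simplicity), and the ordering key is an assumption: ordering by path cost gives uniform-cost search, but any priority function could be substituted.

```python
import heapq
import itertools
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None
    action: Any = None          # operator applied to the parent
    depth: int = 0
    path_cost: float = 0.0

    def expand_with(self, action, state, step_cost):
        """Create a child node reached by applying `action`."""
        return Node(state, self, action, self.depth + 1,
                    self.path_cost + step_cost)

class Frontier:
    """Priority queue of nodes waiting to be expanded."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # tie-breaker for equal costs

    def push(self, node):
        heapq.heappush(self._heap, (node.path_cost, next(self._tie), node))

    def pop(self):
        return heapq.heappop(self._heap)[2]  # cheapest node first
```

The counter breaks ties between nodes of equal cost, so `heapq` never has to compare `Node` objects directly.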

### Data Structures for State

The data structure for the state represents the physical configuration, but no other information about it.