CS 440/ECE 448

Margaret Fleck

When a search algorithm finds the goal, it must be able to return the best path to the goal. So, during search, each seen state must remember the best path back to the start state. To do this efficiently, each state stores a pointer to the previous state in the best path. When the goal is found, we retrieve the best path by chaining backwards until we reach the start state.
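The backpointer idea above can be sketched in a few lines. This is a minimal illustration using BFS; the function name, the toy graph, and the `neighbors` callback are my own, not from the notes:

```python
from collections import deque

def bfs_with_backpointers(start, goal, neighbors):
    """BFS that records, for each seen state, a pointer to the
    previous state on the best (shortest) path back to the start."""
    parent = {start: None}            # backpointer for each seen state
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if state == goal:
            # Chain backwards through the pointers to recover the path.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]         # reverse: start -> ... -> goal
        for nxt in neighbors(state):
            if nxt not in parent:     # first visit = shortest path in BFS
                parent[nxt] = state
                frontier.append(nxt)
    return None                       # goal is unreachable
```

Because BFS reaches each state for the first time along a shortest path, the single backpointer per state is enough to reconstruct the whole best path.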

Bounding the depth of a DFS search is reasonable when:

- The search space has a small natural depth (e.g. constraint-based problems).
- Our system resources impose a limit on how far away we can search (e.g. complex games or large robot environments).

We can dramatically improve the low-memory version of DFS by running it with a progressively increasing bound on the maximum path length. This is a bit like having a dog running around you on a leash, and gradually letting the leash out. Specifically:

For k = 1 to infinity:
    Run DFS, but stop extending any path that contains at least k states.
    Halt when we find a goal state.

This is called "iterative deepening." Its properties:

- Overall search behavior looks like BFS.
- Each iteration starts from scratch, forgetting all previous work.
- Uses very little memory.
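The loop above can be sketched directly. This is a minimal illustration, not the course's reference implementation; the names and the cycle check on the current path are my own choices:

```python
def depth_limited_dfs(state, goal, neighbors, limit, path=None):
    """DFS that refuses to extend any path beyond `limit` states."""
    if path is None:
        path = [state]
    if state == goal:
        return path
    if len(path) >= limit:            # leash fully let out: stop here
        return None
    for nxt in neighbors(state):
        if nxt not in path:           # avoid looping on the current path
            found = depth_limited_dfs(nxt, goal, neighbors, limit, path + [nxt])
            if found is not None:
                return found
    return None

def iterative_deepening(start, goal, neighbors, max_limit=50):
    # Re-run DFS from scratch with a progressively longer leash.
    for k in range(1, max_limit + 1):
        found = depth_limited_dfs(start, goal, neighbors, k)
        if found is not None:
            return found
    return None                       # no goal within max_limit
```

Like BFS, the first path returned is a shortest one, because every path of length at most k-1 was already ruled out in the previous iteration.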

Each iteration repeats the work of the previous one. So iterative deepening is most useful when the new search area at distance k is large compared to the area searched in previous iterations, so that the total work is dominated by the paths of length k.

These conditions don't hold for 2D mazes. The search area at radius k is proportional to k, whereas the area inside that radius is proportional to \(k^2\).

However, suppose that the search space looks like a 4-ary tree. We know from discrete math that there are \(4^k\) nodes at level k. Using a geometric series, we find that there are \(1/3\cdot (4^k -1)\) nodes in levels 0 through k-1. So the nodes in the lowest level dominate the search. This type of situation holds (approximately) for big games such as chess.
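A quick numeric check of the geometric-series claim (the loop bound is arbitrary):

```python
# In a 4-ary tree, the deepest level dominates all the levels above it.
for k in range(1, 8):
    lowest = 4**k                            # nodes at level k
    above = sum(4**i for i in range(k))      # nodes in levels 0 .. k-1
    assert above == (4**k - 1) // 3          # geometric-series formula
    # The deepest level always holds roughly 3x the rest combined.
    assert lowest >= 3 * above
```

So redoing the shallow levels on every iteration costs only a constant factor.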

Backwards and bidirectional search are basically what they sound like. Backwards search is like forwards search except that you start from the goal. In bidirectional search, you run two parallel searches, starting from start and goal states, in hopes that they will connect at a shared state. These approaches are useful only in very specific situations.

Backwards search makes sense when you have relatively few goal states, known at the start. This condition often fails: chess, for example, has a huge number of winning board configurations. Also, actions must be easy to run backwards (often true).

Example: James Bond is driving his Lexus in central Boston and wants to get to Dr. Evil's lair in Hawley, MA (population 337). There's a vast number of ways to get out of Boston but only one road of significant size going through Hawley. In that situation, working backwards from the goal might be much better than working forwards from the starting state.
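The bidirectional version can be sketched as two BFS frontiers that halt when they meet. This is a minimal illustration, assuming reversible actions so one `neighbors` function serves both directions; the names and toy graph are my own:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    """Run BFS from start and from goal simultaneously; stop as soon
    as the two searches reach a shared state."""
    if start == goal:
        return [start]
    parents_f = {start: None}         # backpointers, forward search
    parents_b = {goal: None}          # backpointers, backward search
    frontier_f, frontier_b = deque([start]), deque([goal])
    while frontier_f and frontier_b:
        # Expand one state from each frontier per round.
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            state = frontier.popleft()
            for nxt in neighbors(state):
                if nxt in parents:
                    continue
                parents[nxt] = state
                if nxt in other:      # the two searches have met
                    return _join(nxt, parents_f, parents_b)
                frontier.append(nxt)
    return None

def _join(meet, parents_f, parents_b):
    # Chain backwards from the meeting state on both sides.
    path, s = [], meet
    while s is not None:
        path.append(s)
        s = parents_f[s]
    path.reverse()                    # start -> ... -> meet
    s = parents_b[meet]
    while s is not None:
        path.append(s)                # meet -> ... -> goal
        s = parents_b[s]
    return path
```

Each frontier only has to cover about half the distance, which is why the two searches together can visit far fewer states than one search covering the whole distance.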