CS 440/ECE 448
Fall 2019
Margaret Fleck
Midterm 1 skills list
The first midterm will be on Friday October 4th, in class. It will
cover material through the September 25th lecture, plus a small amount
of material from the September 27th lecture.
Historical and other trivia
We've seen a lot of trivia, most of it not worth memorizing. The following
items are the exceptions. Be able to explain (very briefly) what they are and (approximately)
what time period they come from.
- Shakey and Blocks world
- Waltz line labelling
- STRIPS planning
- Boston Dynamics
- Google self-driving bike
- McCulloch and Pitts
Search
You've done an MP on this, so you can be expected to answer quite
detailed questions, e.g. about A*.
Key examples: graph search, maze search, Missionaries and Cannibals puzzle,
8-puzzle.
Basic search methods
- Search problem formulation: initial state, actions, transition model, goal state(s),
path cost, state space
- Outline for search algorithms: frontier, explored (done) states, best
path, avoiding loops, storing or reconstructing return path
- Uninformed search strategies: breadth-first, uniform cost search,
depth-first, iterative deepening, bidirectional
Edit distance (problem and algorithm)
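The edit distance algorithm from lecture is the standard dynamic program; a minimal sketch (the cost of 1 per insertion, deletion, and substitution is the usual convention, not necessarily the exact variant covered in class):

```python
def edit_distance(a, b):
    """Levenshtein edit distance via dynamic programming.
    dp[i][j] = minimum cost to transform a[:i] into b[:j]."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all i characters of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all j characters of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution / match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```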
A* search
- basic definitions and algorithm
- definitions of admissible and consistent, how do they relate? why
is consistency useful?
- returns optimal path if heuristic admissible
- what makes one heuristic better than another?
- weighted A*
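A compact A* sketch may help tie these bullets together. This version uses a priority queue keyed on f = g + h; the grid example and its Manhattan-distance heuristic are made up for illustration:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search. neighbors(s) yields (next_state, step_cost);
    h(s) is an (assumed admissible) estimate of cost-to-goal."""
    frontier = [(h(start), 0, start, [start])]      # (f, g, state, path)
    best_g = {start: 0}                              # cheapest g found so far
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                                 # stale frontier entry
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy example: shortest path on an open integer grid, unit step costs,
# Manhattan distance as an admissible heuristic for goal (3, 2).
def neighbors(p):
    x, y = p
    for q in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        yield q, 1

h = lambda p: abs(p[0] - 3) + abs(p[1] - 2)
path, cost = a_star((0, 0), (3, 2), neighbors, h)
print(cost)  # 5
```

Weighted A* would use f = g + w*h with w > 1, trading optimality for speed.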
Comparison of search methods
- Advantages and disadvantages of each of these search methods compared
to the others
Constraint satisfaction problems
Key examples (be familiar with basic rules)
- n-queens, map/graph coloring, Sudoku, Cryptarithmetic, Waltz line labelling
- Graph coloring is NP-complete.
Backtracking search (DFS)
- Variable assignments can be done in any order, search is to a known depth
- Why isn't looping a worry?
- Heuristics for variable and value selection:
most constrained/most constraining variable, least constraining value
- Forward checking, constraint propagation
- AC-3 algorithm
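A sketch of AC-3 on a made-up graph-coloring instance (the queue-of-arcs formulation; variable and value names are illustrative):

```python
from collections import deque

def ac3(domains, neighbors, constraint):
    """AC-3 arc consistency. domains: var -> set of values;
    neighbors: var -> adjacent vars; constraint(x, vx, y, vy) -> bool.
    Prunes domains in place; returns False if a domain empties."""
    queue = deque((x, y) for x in domains for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Remove values of x that have no supporting value in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(constraint(x, vx, y, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                  # inconsistency detected
            for z in neighbors[x]:
                if z != y:
                    queue.append((z, x))      # recheck arcs into x
    return True

# Triangle graph coloring: A, B, C mutually adjacent, colors must differ.
domains = {"A": {"r"}, "B": {"r", "g"}, "C": {"r", "g", "b"}}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
ok = ac3(domains, neighbors, lambda x, vx, y, vy: vx != vy)
print(ok, domains)  # True, B forced to {'g'}, C forced to {'b'}
```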
Hill-climbing
- how it works (high-level idea only)
- how it differs from backtracking search
Planning
- State and action representation
- Preconditions and effects (postconditions)
- Situation space vs. plan space planners
- Forward and backward planning
- Sussman anomaly, interleaved planning
- The Frame Problem
- Towers of Hanoi takes time exponential in input size (number of rings)
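The exponential cost of Towers of Hanoi falls out of the standard recursion, T(n) = 2*T(n-1) + 1 = 2^n - 1 moves; a minimal sketch:

```python
def hanoi(n, src, dst, aux):
    """Return the move list for n rings from src to dst.
    Length is 2**n - 1, i.e. exponential in the number of rings."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)   # park n-1 rings on aux
            + [(src, dst)]                # move the largest ring
            + hanoi(n - 1, aux, dst, src))  # bring n-1 rings back on top

print(len(hanoi(3, "A", "C", "B")))  # 7 = 2**3 - 1
```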
Partial order (plan space) planning:
- the idea
- plan modification operations
- plan defects (open preconditions, threats) and how to fix them
Configuration space
You've done an MP on this, but the math/geometry turns evil very fast.
So expect concrete 2D problems involving relatively simple situations.
- Example configuration spaces: circular robot, n-link arm,
rectangular robot with/without rotation.
- Sketch, or answer questions about,
the configuration space for a (very simple) robot motion planning problem.
General knowledge (i.e. superficial only)
- types of robots (e.g. arms, moving carts, snakes, ...)
- ideal robot path: short but also smooth, safe
- graph-based planning: visibility graphs, cell decompositions, generalized
Voronoi diagrams
Probability
- Random variables, axioms of probability
- Joint, marginal, conditional probability
Naive Bayes
Basic definitions and model:
- Bayes rule
- Likelihood, prior, posterior
- argmax operator
- Independence and conditional independence
- Maximum a posteriori (MAP) estimate, Maximum likelihood (ML) estimate,
  factoring out P(evidence)
- How does prior affect these estimates?
The Naive Bayes algorithm:
- How do we combine several conditionally independent pieces of evidence
into one estimate of P(cause | evidence)?
- How do we choose the best value for the cause/class?
- How does the size of a Naive Bayes model compare to a full joint distribution?
- Why does it matter that Naive Bayes reduces the number of parameters we need to estimate?
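The combination step can be sketched in a few lines. The spam/ham classes, word likelihoods, and priors below are made-up toy numbers, not course data; note the log-space product, P(c | words) ∝ P(c) * Π P(w | c):

```python
from math import log

# Hypothetical priors and per-class word likelihoods (made-up numbers).
prior = {"spam": 0.4, "ham": 0.6}
likelihood = {
    "spam": {"free": 0.30, "meeting": 0.05, "money": 0.25},
    "ham":  {"free": 0.02, "meeting": 0.20, "money": 0.03},
}

def classify(words):
    """argmax over classes of log P(c) + sum_w log P(w | c).
    P(evidence) is the same for every class, so it factors out."""
    def score(c):
        return log(prior[c]) + sum(log(likelihood[c][w]) for w in words)
    return max(prior, key=score)

print(classify(["free", "money"]))  # spam
print(classify(["meeting"]))        # ham
```

The model stores one number per (class, word) pair rather than one per joint assignment of all words, which is why it needs far fewer parameters than a full joint distribution.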
The MP for this is still in the future. So detailed implementation issues will
be on the second midterm.
Testing
- Roles of training, development, test datasets.
- Evaluation metrics for classification (true positive rate, accuracy, recall, ...)
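The classification metrics can be computed directly from counts of true/false positives and negatives; a small sketch on made-up labels:

```python
def metrics(gold, pred, positive):
    """Accuracy, precision, and recall for one positive class."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    recall = tp / (tp + fn)        # a.k.a. true positive rate
    precision = tp / (tp + fp)
    return accuracy, precision, recall

gold = [1, 1, 1, 0, 0, 0, 0, 0]
pred = [1, 1, 0, 1, 0, 0, 0, 0]
print(metrics(gold, pred, positive=1))  # accuracy 0.75, precision/recall 2/3
```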