Effective Classical Planning and Beyond: Dealing with Uncertainty and Reward

(slides)

Planning is concerned with the development of solvers for a wide range of problems involving the selection of actions for achieving goals. In these problems, actions may be deterministic or not, full or partial sensing may be available, costs may be associated with actions, states, or both, and so on. A range of closely related state models is used to make sense of these various classes of problems and their solutions, while planning algorithms aim to compute such solutions (plans) efficiently.
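
As a rough illustration (the notation below is a generic textbook-style sketch, not necessarily the formulation used in the talk), the classical variant of these state models can be written as a tuple

    S = \langle S, s_0, S_G, A(\cdot), f, c \rangle

where S is a finite set of states, s_0 the initial state, S_G \subseteq S the goal states, A(s) the actions applicable in state s, f(a, s) the deterministic successor state, and c(a, s) the action cost; a plan is an applicable action sequence that maps s_0 into a goal state. The other classes of problems arise by making f nondeterministic or probabilistic, and by limiting what the agent can sense.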

In the last few years, significant progress has been made in so-called classical planning, where information is complete and actions are deterministic. Classical planners can currently solve large and complex problems involving hundreds of actions quickly by focusing the search for plans with heuristics automatically obtained from the problem encoding.
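
To make the idea of heuristics derived from the problem encoding concrete, the sketch below (illustrative only; the atoms and actions are hypothetical and this is not the speaker's implementation) computes the well-known additive heuristic over a propositional STRIPS-style encoding by ignoring delete effects:

    from math import inf

    def h_add(state, goal, actions):
        """Additive heuristic: estimate goal distance from `state` by summing
        estimated costs of goal atoms under the delete relaxation.
        `state` and `goal` are sets of atoms; each action is a
        (preconditions, add_effects, cost) triple."""
        cost = {atom: 0 for atom in state}           # atoms already true cost 0
        changed = True
        while changed:                               # fixpoint over relaxed reachability
            changed = False
            for pre, add, c in actions:
                if all(p in cost for p in pre):      # action applicable in the relaxation
                    pre_cost = c + sum(cost[p] for p in pre)
                    for atom in add:
                        if pre_cost < cost.get(atom, inf):
                            cost[atom] = pre_cost
                            changed = True
        return sum(cost.get(g, inf) for g in goal)   # inf means the goal is unreachable

    # Tiny hypothetical example: move from room A to room B, then pick up a key there.
    actions = [
        (frozenset({"at-A"}), frozenset({"at-B"}), 1),
        (frozenset({"at-B", "key-at-B"}), frozenset({"have-key"}), 1),
    ]
    print(h_add({"at-A", "key-at-B"}, {"have-key"}, actions))   # prints 2

Real planners compute such estimates far more efficiently and directly from the declarative encoding, but the key point is the same: the heuristic is obtained automatically from the problem description rather than hand-coded.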

In this talk, I will present a general framework for AI planning, review such heuristics, and focus on recent extensions for handling simple forms of uncertainty and preference (reward) in a computationally effective manner.