Ph.D. Dissertation, Indiana University, 1995.
Susan E. Fox
A fundamental problem for artificial intelligence is creating systems that can operate well in complex and dynamic domains. To perform well in such domains, artificial intelligence systems must be able to learn from novel and unexpected situations. There are many well-researched learning methods for augmenting domain knowledge, but little attention has been given to learning how to manipulate that knowledge more effectively. This research develops a method for learning about reasoning methods themselves. It proposes a model for a combined system that can learn new domain knowledge and can also alter its reasoning methods when they prove inadequate.
Model-based reasoning is used as the basis of an ``introspective reasoner'' that monitors and refines the reasoning process. In this approach, a model of the desired performance of an underlying system's reasoning is compared to the actual performance to detect discrepancies. A discrepancy indicates a reasoning failure; the system explains the failure by looking for other related failures in the model, and repairs the flaw in the reasoning process that caused the failure. The framework for this introspective reasoner is general and can be transferred to different underlying systems.
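
To make the cycle concrete, the following is a minimal, hypothetical sketch (in Python; it is not the dissertation's implementation or code) of comparing a model of expected reasoning behavior against an actual reasoning trace, tracing a detected failure to related assertions in the model, and applying a repair. All names here are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Expectation:
        """One assertion in the model of how the underlying reasoner should behave."""
        description: str
        check: Callable[[Dict], bool]    # predicate over the actual reasoning trace
        repair: Callable[[Dict], None]   # adjustment to the reasoning process
        related: List["Expectation"] = field(default_factory=list)

    def monitor(trace: Dict, model: List[Expectation]) -> None:
        """Compare actual reasoning performance (trace) against the model;
        each violated assertion signals a reasoning failure to explain and repair."""
        for expectation in model:
            if not expectation.check(trace):
                explain_and_repair(expectation, trace)

    def explain_and_repair(failed: Expectation, trace: Dict) -> None:
        """Explain a failure by looking for related failed assertions in the model,
        then repair the flaw identified by the most specific failure found."""
        culprit = failed
        for other in failed.related:
            if not other.check(trace):
                culprit = other
        culprit.repair(trace)

    # Example (illustrative only): expect retrieval to produce a sufficiently
    # similar case; the repair notes that case memory should be re-indexed.
    retrieval_ok = Expectation(
        description="retrieved case is sufficiently similar",
        check=lambda t: t.get("similarity", 0.0) >= 0.6,
        repair=lambda t: t.setdefault("repairs", []).append("re-index case memory"),
    )
    monitor({"similarity": 0.3}, [retrieval_ok])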
The ROBBIE (Re-Organization of Behavior By Introspective Evaluation) system combines a case-based planner with an introspective component implementing the approach described above. ROBBIE's implementation provides insights into the kinds of knowledge and knowledge representations that are required to model reasoning processes. Experiments have shown a practical benefit to introspective reasoning as well; ROBBIE performs much better when it learns about its reasoning as well as its domain than when it learns only about its domain.
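
As a rough illustration of how such a combination might be organized (a hypothetical outline under assumed names, not ROBBIE's actual architecture or code), a case-based planner could invoke an introspective monitor like the one sketched above after each planning episode, so that a single cycle feeds both domain learning and learning about the reasoning process.

    # Hypothetical outline of one planning episode with introspective monitoring
    # layered on top of domain learning; reuses monitor() from the sketch above.
    # retrieve/adapt/execute are trivial stand-ins, not ROBBIE's actual procedures.

    def similarity(a, b):
        return 1.0 if a == b else 0.5            # placeholder similarity measure

    def retrieve(case_memory, problem):
        """Return the stored case judged most similar to the problem (stub)."""
        best = max(case_memory, key=lambda c: similarity(c[0], problem))
        return best, similarity(best[0], problem)

    def adapt(case, problem):
        return case[1]                            # placeholder: reuse the old plan

    def execute(plan):
        return "succeeded"                        # placeholder execution

    def planning_episode(problem, case_memory, introspective_model):
        case, score = retrieve(case_memory, problem)
        plan = adapt(case, problem)
        outcome = execute(plan)
        trace = {"similarity": score, "plan": plan, "outcome": outcome}
        case_memory.append((problem, plan))       # domain learning: store a new case
        monitor(trace, introspective_model)       # introspective learning: check reasoning
        return outcome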
See http://www.cs.indiana.edu/~leake/INDEX.html for additional publications in the Artificial Intelligence/Cognitive Science report and reprint archive maintained by David Leake.