Creativity by Case-Based Reasoning (CBR): SWALE Project Home Page
Photo: the racehorse Swale running the Belmont Stakes
(http://www.championsgallery.com).
The URL for this page is
http://www.cs.indiana.edu/~leake/projects/swale.
Project Description
The SWALE project explores case-based reasoning (CBR) as a basis for
creativity. In the CBR model of creativity, creativity comes from
retrieving knowledge that is not routinely applied to a situation, and
using it in a new way. In this view, the key issues for creativity
are how to retrieve appropriate knowledge for novel uses and how to
adapt it to fit novel circumstances. Depending on the retrieval and
adaptation processes used, CBR can provide solutions anywhere along a
spectrum of creativity, ranging from straightforward reapplications of
old knowledge all the way to highly novel views.
The task in which the SWALE project studies creativity is creative
explanation of anomalous events. SWALE is a story understanding
program that detects anomalous events and uses CBR to explain the
anomalies. Its explanation process is based on the retrieval and
application of cases storing prior explanations, called explanation
patterns (XPs), from its memory. This case-based reasoning process can
take place at any point along the spectrum of creativity, from the
completely non-creative application of a perfectly appropriate XP, to
the very novel use of an inappropriate XP that must be totally revised
to be usable.
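As a concrete sketch of this spectrum, the following Python fragment models an XP as stored preconditions plus a causal account. This is purely illustrative: the names and representation are hypothetical, not taken from the SWALE system (whose pedagogical version is in Scheme).

```python
# Illustrative sketch only: the XP structure and names here are
# hypothetical, not the representation used by the SWALE system.

def applicability(xp, anomaly_features):
    """Fraction of the XP's preconditions satisfied by the anomaly.
    1.0 -> the XP applies directly (non-creative reuse);
    lower -> more tweaking is required (the creative end of the spectrum)."""
    met = sum(1 for p in xp["preconditions"] if p in anomaly_features)
    return met / len(xp["preconditions"])

fixx_xp = {
    "name": "Fixx-XP",
    "preconditions": ["death", "peak-physical-condition", "exertion-as-runner"],
    "account": "hidden heart defect overtaxed by exertion",
}
swale_anomaly = ["death", "peak-physical-condition", "racehorse"]

score = applicability(fixx_xp, swale_anomaly)
print(round(score, 2))  # 0.67: two of three preconditions met, so tweaking is needed
```

A score below 1.0 marks exactly the situation described above: an XP that is relevant but must be revised before it is usable.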
The following illustrations of SWALE's reasoning are excerpted and
adapted from Schank, Roger C., and Leake, David B. Creativity and
Learning in a Case-Based Explainer. Artificial Intelligence
40(1-3):353-385, 1989, and reprinted in Carbonell, J., ed, Machine
Learning: Paradigms and Methods, MIT Press, Cambridge, MA, 1990.
A simplified version of the SWALE code,
developed in Scheme for pedagogical purposes, can be downloaded and
run to illustrate this process.
The Range of Explanations Generated by SWALE
To give an idea of the range of explanations that a system
using the case-based approach can generate, below are some remindings
that the SWALE system has, and a few of the explanations it generates
from them for its namesake example, the story of the racehorse Swale:
In 1984, Swale was the best 3-year-old racehorse, and he was winning
all the most important races. A few days after a major victory, he
returned from a light morning gallop and collapsed outside his stable.
The shocked racing community tried to figure
out why. Many hypotheses appeared, but the actual cause was never
determined.
The SWALE system detects the anomaly in Swale's premature death
(see Leake, 1992, for a discussion of the system's anomaly detection
process). It then builds explanations of the death by retrieving and
"tweaking"
remindings of XPs for other episodes of death. It picks the best of
these candidate explanations, and adds it to its XP library for future
use. Some are reasonable explanations; others are quite fanciful or
can be ruled out on the basis of other knowledge. However, they show
that a memory-based explanation system, even if it has a limited range
of XPs and of retrieval and tweaking strategies, can come up with a
variety of interesting explanations.
- Reminding: Thinking of other deaths of those in peak physical
condition causes the system to be reminded of the death of the
runner Jim Fixx, who died when his running overtaxed a hereditary heart defect.
Explanation: Swale might have had a heart defect that caused
his racing to prompt a heart attack.
- Reminding: Thinking about other deaths of young stars, the system
is reminded of Janis Joplin's death from a drug overdose.
Explanation 1: The pressure of being a superstar was too much for
Swale, and he turned to drugs to escape. He died of an overdose.
Explanation 2: Swale might have been given
performance-enhancing drugs by a trainer, and died of an accidental
overdose.
- Reminding: Thinking of folkloric causes of death causes the system to
recall the old wives' tale that "too much sex will kill you."
Explanation 1: Although racehorses are prohibited from sex during
their racing careers, Swale might have died of a heart attack
from the excitement of just thinking about life on the stud farm.
Explanation 2: Swale might have committed suicide because he
became depressed when thinking about sex.
Explanation 3: Swale might have died in an accident when he
was distracted by thinking about sex.
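The retrieve-tweak-select-store cycle behind these examples can be sketched in miniature. The following Python toy is hypothetical: the library entries, the single tweak, and the index-overlap scoring are all invented for illustration and are far simpler than the actual system's.

```python
# Hypothetical toy version of SWALE's retrieve / tweak / select / store
# cycle. The representations and scoring are invented for illustration.

def retrieve(anomaly, library):
    """Reminding: return every XP sharing at least one index with the anomaly."""
    return [xp for xp in library if set(xp["indices"]) & set(anomaly)]

def substitute_actor(xp, anomaly):
    """One simple tweak: transplant the XP's causal account onto Swale."""
    return {"indices": xp["indices"],
            "actor": "Swale",
            "account": xp["account"].replace(xp["actor"], "Swale")}

def explain(anomaly, library, tweaks, score):
    """Retrieve candidate XPs, tweak each, keep the best, store it for reuse."""
    candidates = [t(xp, anomaly) for xp in retrieve(anomaly, library) for t in tweaks]
    best = max(candidates, key=score, default=None)
    if best is not None:
        library.append(best)   # learning: the new XP is available next time
    return best

library = [
    {"indices": ["death", "peak-condition"], "actor": "Jim Fixx",
     "account": "Jim Fixx's exertion overtaxed a hidden heart defect"},
    {"indices": ["death", "young-star"], "actor": "Janis Joplin",
     "account": "Janis Joplin died of a drug overdose"},
]
anomaly = ["death", "peak-condition", "racehorse"]
best = explain(anomaly, library, [substitute_actor],
               score=lambda c: len(set(c["indices"]) & set(anomaly)))
print(best["account"])  # the Fixx XP wins and is adapted to Swale
```

Here the Fixx reminding outscores the Joplin one because it shares more indices with the anomaly; in SWALE, evaluation is of course far richer than an overlap count.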
Requirements for Creative Case-Based Explanation
The above examples show that interesting explanations arise when we try
to use an XP that doesn't quite apply. To obtain creative
explanations, an explainer might intentionally misapply
XPs. Interesting ideas can arise from using old explanations to deal
with situations where those explanations were never intended to be used.
While using an XP that doesn't apply gives a fresh perspective on a
situation, the idea of building a system to intentionally misapply XPs
raises many issues: Which XPs do you retrieve? Which tweaks should
be applied? How long should the tweaking process be continued? As our
research progresses, we hope to be able to answer these questions.
However, we can suggest some of the things that are needed to build
creative case-based explainers:
- We need heuristics for the intentional reminding of explanation patterns
XP retrieval is the process of formulating questions to memory:
we characterize an anomalous situation in terms of a set of indices,
and ask what XPs in memory explain similar situations. When no answer
is available, we must reformulate the question into one that we can
answer: people often fall back on asking standard questions
that give background information. Answers to explanation questions
like "what physical causes underlie this event?", "what special
circumstances made the event happen now?", "what motivates the actor
of this surprising action?", "how did the victim enable this bad
event?", or "what groups might the actor be trying to serve?" may
suggest relevant factors that can be used as indices for XP retrieval.
Though the XPs accessed in this way might not be directly applicable,
it may be possible to adapt them. A creative system needs a set of
explanation questions for gathering information, rules for selecting
which questions to apply in a given situation, and rules for transforming
them to fit.
- We need tweaking strategies that can do significant revisions
Rather than requiring tweaks always to maintain causal structure, we
should allow them to make broad changes. Their
revisions will not always be successful, but failures produce new
possibilities for still more revision.
- We need heuristics for knowing when to keep alive seemingly useless
hypotheses
In addition to choosing between explanations generated by the system,
the evaluation process also has a more direct part in the creative process.
We cannot tweak a candidate explanation indefinitely; the evaluator must
decide whether a hypothesis is worth pursuing. This estimation will
always be imperfect, but the better it is, the more resources the system
will be able to devote to fruitful revision of XPs. One heuristic would
be to continue tweaking an explanation as long as each tweak generates
a better explanation. But the decision whether to continue tweaking
should also depend on the availability of competing candidate XPs, and
on an estimate of how important the final explanation is to the goals of
the system (since that affects how many resources should be expended
explaining).
- We need a system with a rich memory of explanations
Finally, a creative case-based explainer must have access to a wide range
of explanation patterns. There are two ways that people or machines
might learn new XPs: by being taught them directly (as children are given
explanation patterns by parents, teachers, or friends), or by learning
new ones through creative misapplication. One step towards making a
computer creative would be to collect an extensive list of XPs that it
could use as the starting point for adaptation. Many interesting
explanations might be constructed starting with a collection of
culturally-shared XPs, such as proverbs.
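One of the heuristics above, continuing to tweak only while each tweak yields a better explanation, amounts to hill climbing over candidate explanations. A minimal Python sketch, with a toy tweak and scoring function invented for illustration:

```python
def tweak_until_no_gain(explanation, tweak, score, max_steps=10):
    """Hill-climbing control of tweaking: keep revising while each
    revision scores better than the last; stop at the first non-improvement.
    max_steps caps the effort, reflecting that resources spent should
    depend on how important the explanation is to the system's goals."""
    current, current_score = explanation, score(explanation)
    for _ in range(max_steps):
        candidate = tweak(current)
        candidate_score = score(candidate)
        if candidate_score <= current_score:
            break   # the hypothesis stopped improving; hand it to the evaluator
        current, current_score = candidate, candidate_score
    return current

# Toy example: the score favors shorter (more parsimonious) explanations,
# and the tweak prunes one unsupported clause per step.
explanation = ["heart defect", "overtaxed by racing", "bad luck", "full moon"]
result = tweak_until_no_gain(explanation,
                             tweak=lambda e: e[:-1] if len(e) > 2 else e,
                             score=lambda e: -len(e))
print(result)  # ['heart defect', 'overtaxed by racing']
```

As the text notes, a real control strategy would also weigh competing candidate XPs before committing further tweaking effort to any one of them.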
More details on the SWALE project and system can be found in the
references below.
Code On-Line
A simplified version of the SWALE code,
developed for pedagogical purposes and published in the book Inside
Case-Based Explanation, is available to illustrate SWALE's
explanation process.
Sample References
- Schank, Roger C., and Leake, David B. Creativity and Learning in a
Case-Based Explainer. Artificial Intelligence 40(1-3):353-385, 1989,
and reprinted in Carbonell, J., ed, Machine Learning: Paradigms
and Methods, MIT Press, Cambridge, MA, 1990.
- Schank, Roger C. Explanation Patterns: Understanding Mechanically
and Creatively. Erlbaum, 1986.
- Leake, David B. Focusing
Construction and Selection of Abductive Hypotheses,
Proceedings of the Thirteenth International Joint
Conference on Artificial Intelligence, 1993, pp. 24-29. This paper
describes the relationship of case-based explanation to other
explanation approaches.
Project Team
Research on the SWALE project was conducted by Alex Kass, David Leake,
and Chris Owens, advised by Roger Schank and Chris Riesbeck.
Contacts
This page is maintained by David Leake,
Computer Science Department, Indiana University.