The Nature of Explanation
XAI research at first generally assumed that “explanation” involves
only the process of providing an explanation to the user, on the
assumption that an explanation consists of a text or graphic that is
sufficient in itself. But ITS research clearly demonstrated that
explanation must be understood from the user’s perspective as a learning process, and thus from the program’s perspective
as an instructive process (which includes explaining) rather than a
“one-off,” stand-alone question-answer interaction. This is true
whether the learning process is an activity involving a person and
machine, a group of people, or a process of self-explanation by a person
or program. For some XAI applications, explanation will be part of an
activity that extends over multiple uses and interactions, especially
because a neural network program can continually evolve. XAI researchers
have thus begun to consider ways in which the user can explore how the
AI program works and its vulnerabilities (a concern ignored by ITS
programs that focus on textbook knowledge).
ITS research demonstrated that “explanation” is an interaction among
the user, the artifact, and their activity in a task context. In
particular, the format/medium, content, and timing of explanations may
differ to support different information needs for different tasks. In
critical, time-pressed situations the only practical support may be
directing the user’s attention; in activities extending over hours or days,
such as long-term care for a patient, the program may serve more as an assistant
in constructing situation-specific models and action plans.
The process of instruction, including explaining, necessarily involves
shared languages and methods for communicating. The earliest ITSs
demonstrated some form of natural language capability, such as
mixed-initiative question-answering, case-method dialogue, Socratic
discourse, or customized narrative presentations. ITSs have also used
graphic presentations and animated simulations to convey relationships
and causality. Similarly, a general consensus has emerged among XAI
researchers that the explanation process must involve the exchange of
meaningful (versus computationally formal) information.