

4.3. EMPIRICAL EXPLORATION

The ability to provide relevant and informative explanations of its reasoning has long been recognized as one of the most important features of an expert system. One way to study this empirically is the "Wizard of Oz" technique, used in expert system development as well as in user interface design and evaluation: unknown to the subject, a person simulates the behavior of the system as an expert. Experiments based on this technique have examined the usefulness of various types of explanation.

Although explanation capability is one of the distinguishing characteristics of expert systems, the explanation facilities of most existing systems are quite primitive. Justifying expert decisions is an intelligence-requiring activity, and the appropriate model for machine-produced justifications is explanations written by people. It is essential that an expert system be able to synthesize knowledge from a variety of sources and produce coherent, multisentential text similar to that produced by a domain expert.

The type of explanation an expert system provides can have a significant impact on its usability, user satisfaction, and overall acceptance, and a number of studies have addressed this issue. Three commonly used types of explanation are: (1) rule-based explanations, (2) condition-based explanations, and (3) combined rule-and-condition explanations, contrasted in the sketch below. Available evidence indicates that user satisfaction does depend on the type of explanation provided; in general, the combined rule-and-condition explanations tend to be the most satisfactory and useful.
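As a rough illustration of the three styles (the rule, conditions, and wording below are hypothetical, not taken from the studies cited), a single fired rule might be explained as follows:

    # A minimal sketch of the three explanation styles for one fired rule.
    # Rule name, conditions, and phrasing are hypothetical illustrations.

    RULE = {
        "name": "R17",
        "conditions": ["temperature > 38.5 C", "white cell count elevated"],
        "conclusion": "suspect bacterial infection",
    }

    def rule_based_explanation(rule):
        # Cites only the rule that fired.
        return f"Rule {rule['name']} concluded: {rule['conclusion']}."

    def condition_based_explanation(rule):
        # Cites only the evidence that satisfied the rule.
        facts = " and ".join(rule["conditions"])
        return f"Because {facts}, the system concluded: {rule['conclusion']}."

    def combined_explanation(rule):
        # Cites the rule and the evidence together.
        return (f"Rule {rule['name']} fired because "
                f"{' and '.join(rule['conditions'])}; "
                f"therefore: {rule['conclusion']}.")

    for explain in (rule_based_explanation,
                    condition_based_explanation,
                    combined_explanation):
        print(explain(RULE))

The combined form is the longest, but it lets the user check both the evidence and the inference step that connected it to the conclusion, which is consistent with its higher reported satisfaction.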

Knowledge-based systems that interact with humans often need to define their terminology, elucidate their behavior, or support their recommendations or conclusions; in short, they need to explain themselves. Unfortunately, current computer systems often generate explanations that are unnatural, ill-connected, or simply incoherent, and they typically have only one method of explanation, which leaves them unable to recover from failed communication. At a minimum, this can irritate end-users and decrease their productivity. More dangerously, poorly conveyed information may create misconceptions on the part of the user that lead to bad decisions or invalid conclusions, with potentially costly or even dangerous consequences. To address this problem, human-produced explanations have been studied with the aim of transferring explanation expertise to machines. It has been suggested that a domain-independent taxonomy of abstract explanatory utterances is needed as the basis for a taxonomy of multisentence explanations.

Explanatory utterances can be classified by their content and communicative function. These utterance classes, together with additional text analysis, can then be used to construct a taxonomy of text types that characterizes multisentence explanations according to the content they convey, the communicative acts they perform, and their intended effect on the addressee's knowledge, beliefs, goals, and plans. On this view, explanation presentation is itself an action-based endeavor, which motivates an integrated theory of communicative acts (rhetorical, illocutionary, and locutionary acts).

To apply this theory, several of these communicative acts can be formalized as plan operators and used by a hierarchical text planner that composes natural-language explanations, as sketched below. A plan-based approach also makes it possible to classify the range of reactions readers may have to an explanation and to show how a system can respond to them.
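A minimal sketch of the idea, assuming a toy operator set (the operator names, decompositions, and texts below are hypothetical illustrations, not the cited theory's actual operators):

    # Communicative acts as plan operators, expanded by a simple
    # hierarchical text planner into a multisentence explanation.

    OPERATORS = {
        # Each operator maps to subacts, or to literal text at the leaves.
        "explain-recommendation": ["state-conclusion", "justify-conclusion"],
        "justify-conclusion": ["cite-evidence", "cite-rule"],
        "state-conclusion": "I recommend antibiotic therapy.",
        "cite-evidence": "The patient has a fever and an elevated white cell count.",
        "cite-rule": "These findings satisfy the rule for suspected bacterial infection.",
    }

    def plan(act):
        """Recursively expand a communicative act into a list of sentences."""
        body = OPERATORS[act]
        if isinstance(body, str):   # leaf: the locutionary act, actual text
            return [body]
        sentences = []              # internal node: ordered rhetorical subacts
        for subact in body:
            sentences.extend(plan(subact))
        return sentences

    print(" ".join(plan("explain-recommendation")))

Because the explanation is built from discrete acts, a follow-up question such as "Why that rule?" can in principle be answered by re-planning from the relevant subact rather than repeating the whole text, which is how a plan-based system can recover from failed communication.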

In contrast to symbolic systems, neural networks have no explicit, declarative knowledge representation and therefore have considerable difficulty generating explanation structures: their knowledge is encoded in numeric parameters (weights) and distributed across the whole system. Connectionist systems have been found to benefit from the explicit coding of relations and the use of highly structured networks if they are to support explanation components. Connectionist semantic networks, i.e., connectionist systems with an explicit conceptual hierarchy, are a class of artificial neural networks that can be extended with an explanation component giving meaningful responses to a limited class of "How?" questions, as sketched below.
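A minimal sketch of such an explanation component, assuming a toy conceptual hierarchy (the network, weights, and tracing strategy below are hypothetical illustrations):

    # Because the relations are coded explicitly, an explanation component
    # can trace the weighted links that produced an activation and answer
    # a simple "How?" question in terms of named concepts.

    LINKS = {  # (source, target): connection weight
        ("canary", "bird"): 0.9,
        ("bird", "animal"): 0.8,
        ("canary", "sings"): 0.7,
    }

    def activation(source, target, links=LINKS, path=()):
        """Return (strength, path) of the strongest activation chain."""
        if (source, target) in links:
            return links[(source, target)], path + ((source, target),)
        best = (0.0, ())
        for (a, b), w in links.items():
            if a == source:
                strength, chain = activation(b, target, links, path + ((a, b),))
                if w * strength > best[0]:
                    best = (w * strength, chain)
        return best

    def explain_how(source, target):
        strength, chain = activation(source, target)
        if not chain:
            return f"No activation path from {source} to {target}."
        steps = ", ".join(f"{a} -> {b}" for a, b in chain)
        return f"'{source}' activates '{target}' (strength {strength:.2f}) via: {steps}."

    print(explain_how("canary", "animal"))

The point of the example is that the explicit conceptual hierarchy, not the numeric weights alone, is what makes a meaningful trace possible; a fully distributed network would have no named links to report.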

4.4. FITTING INTO THE USER ENVIRONMENT

Traditional wisdom has been that the design of an expert system requires iterative interaction with the knowledge engineer until the knowledge is satisfactorily encoded into the system. To fit an expert system into a workflow, however, the interface designer must also consider contextual issues associated with end-user acceptance. User-centered design, which involves end-users early in the design process, has been recognized as a way to improve the situation. In this case, the interface designer needs to collaborate with end-users (who may not themselves be domain experts).

The ultimate criterion of success for interactive expert systems is that they will be used, and used to effect, by individuals other than the system developers. Many developers are still not involving users in an optimal way. New approaches have been suggested to better bring the user into the expert system development process, and these approaches incorporate both ethnographic analysis and formal user testing (Berry, 1994).

Expert systems have a range of uses. For example, an expert system can be used to obtain a "second opinion," in much the same way as consulting a knowledgeable colleague, or as a "what if" system to predict and test various scenarios, as the sketch below illustrates. The interface designer must be clear about the goals of both the system and the user.
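A minimal sketch of "what if" consultation, assuming a toy rule base (the rules and scenarios below are hypothetical): the same knowledge base is consulted repeatedly under alternative fact sets, so the user can compare outcomes before acting.

    # Consult one rule base against several hypothetical scenarios.

    RULES = [
        (lambda f: f["demand"] > 100 and f["stock"] < 50, "reorder immediately"),
        (lambda f: f["demand"] <= 100, "no action needed"),
    ]

    def consult(facts):
        """Return the advice of every rule whose condition holds."""
        return [advice for condition, advice in RULES if condition(facts)]

    # Two "what if" scenarios against the same knowledge base.
    for scenario in ({"demand": 120, "stock": 30}, {"demand": 80, "stock": 30}):
        print(scenario, "->", consult(scenario))

In second-opinion use the facts describe the real case; in "what if" use they describe hypothetical ones. The interface should make clear which mode the user is in, since the same rule base serves both.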

