6. RESEARCH ISSUES AND FUTURE TRENDS

From an implementation perspective, it is now accepted that significant knowledge, beyond that required for expert system operation, must be formalized for the purpose of ES explanation. An increasing amount of research is therefore being directed at what were earlier in this chapter termed the explanation acquisition and explanation validation stages of the explanation facility development process. Commercial ES applications show a parallel trend toward embedding explanatory information in the application, often in the form of "canned" generic explanations that apply to multiple situations. This is a suboptimal solution, because such explanations are rarely contextual and relevant enough to foster learning and problem solving. A potentially useful response is the development of computer-aided software engineering (CASE)-style workbenches that support the encoding of explanatory information during the knowledge acquisition phase of expert system development. Such tools may also help overcome the maintenance problem that affects explanation facilities.
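
The contrast between canned and contextual explanations can be made concrete with a small sketch. The following Python fragment is purely illustrative and is not drawn from this chapter: the Rule class, the credit-screening rule, and its threshold are hypothetical. It shows how explanatory text might be captured alongside each rule at knowledge-acquisition time, with a contextual template preferred over a canned fallback.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Rule:
    # A production rule with its explanatory text captured at knowledge-acquisition time.
    name: str
    condition: Callable[[Dict], bool]
    conclusion: str
    canned_why: str               # generic, situation-independent explanation
    context_template: str = ""    # optional template filled in with the facts that fired the rule

    def explain(self, facts: Dict) -> str:
        # Prefer a contextual explanation when the expert supplied a template;
        # otherwise fall back to the canned text.
        if self.context_template:
            return self.context_template.format(**facts)
        return self.canned_why

# Hypothetical credit-screening rule (for illustration only).
rule = Rule(
    name="reject_high_ratio",
    condition=lambda f: f["debt"] / f["income"] > 0.5,
    conclusion="Reject the application.",
    canned_why="Applications are rejected when the debt-to-income ratio is too high.",
    context_template=("Rejected because debt {debt} against income {income} "
                      "exceeds the 0.5 debt-to-income threshold."),
)

facts = {"debt": 60000, "income": 90000}
if rule.condition(facts):
    print(rule.conclusion)
    print(rule.explain(facts))

A CASE-style workbench of the kind discussed above could prompt the expert for both the canned text and the contextual template at the moment each rule is elicited, so that explanations are maintained together with the knowledge they explain.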

Learning theories are also becoming more prominent in the study of the design and use of ES explanations, because they provide a broader theoretical context for the role that explanations play in expert systems. While much is known about how human experts explain to other humans in order to foster learning, far less is known about how automated experts should explain in such situations. A related perspective is the "learning-working conflict": in many contexts, asking for and using ES explanations during problem solving involves a direct trade-off between long-term learning and immediate problem-solving efficiency.

Efforts are also underway to extend our understanding of ES explanations from the current, largely diagnostic, task environment to design or heuristic configuration tasks. Initial results suggest that both the demand for explanations and the nature of the explanation facilities required differ significantly from diagnostic settings. For example, explanations pertaining to "modeling notation," "sample applications," and "error correction" have been found to be necessary in an expert system that supports object-oriented data modeling tasks. Such findings may well challenge and change current conceptualizations of the types of explanations that expert systems ought to provide.
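
One way such an expert system might organize these task-specific explanations is as a registry keyed by explanation type. The sketch below is an assumption-laden illustration, not a description of any system cited in this chapter: only the three explanation-type names come from the paragraph above, while the ExplanationFacility class, the provider functions, and the "order-entry model" example are hypothetical.

from typing import Callable, Dict

class ExplanationFacility:
    # Registry of explanation providers, keyed by explanation type.
    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[Dict], str]] = {}

    def register(self, explanation_type: str, provider: Callable[[Dict], str]) -> None:
        self._providers[explanation_type] = provider

    def explain(self, explanation_type: str, context: Dict) -> str:
        provider = self._providers.get(explanation_type)
        if provider is None:
            return f"No explanation is available for '{explanation_type}'."
        return provider(context)

facility = ExplanationFacility()
facility.register("modeling notation",
                  lambda ctx: f"An {ctx['construct']} is drawn as a labelled rectangle.")
facility.register("sample applications",
                  lambda ctx: f"See the order-entry model for a worked example of an {ctx['construct']}.")
facility.register("error correction",
                  lambda ctx: f"This {ctx['construct']} is missing an identifying attribute.")

print(facility.explain("error correction", {"construct": "entity class"}))

The point of the sketch is that a design-support explanation facility must answer qualitatively different questions (how to notate, what a worked example looks like, what is wrong with the current model) rather than only the "why" and "how" questions typical of diagnostic systems.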

From an empirical perspective, there is also a need to develop a contingency theory for the use of expert system explanations. While it is recognized that ES explanations are not relevant to every expert system application, our understanding of which explanations suit which situations remains incomplete. Nonetheless, the growing number of studies showing that ES explanations are relevant and valuable to users calls for more research attention to ES explanation technology and development methodologies.

7. SUMMARY

This chapter has provided two vital perspectives on explanation facilities viewed as part of total expert system functionality. First, it considered the critical design issues in developing such explanation technology and suggested a specific development process for it. Second, it focused on the use of explanation facilities as a means of understanding the interface design features required for explanations. Both perspectives suggest that the expert systems community may do well to widen its definition of the "output," or value, that such systems provide to their users. Given the fragility of expertise and the difficulty of modeling and maintaining knowledge, there may well be a need to view the explanations provided as the primary output, rather than focusing solely on optimal system recommendations. Human experts, after all, are not always correct, but they can consistently provide thoughtful, relevant, and contextual explanations that foster learning.
