6. RESEARCH ISSUES AND FUTURE TRENDS

From an implementational perspective, it is now accepted that significant knowledge, beyond that required for ES operation, must be formalized for the purposes of ES explanation. An increasing amount of research is therefore being directed at what were termed earlier in this chapter the explanation acquisition and explanation validation stages of the explanation facility development process. In commercial ES applications as well, there is an increasing trend toward encapsulating explanatory information in the applications, often in the form of "canned" generic explanations that apply across multiple situations. This is a suboptimal solution, given the difficulty of implementing fully contextual and relevant explanations that foster learning and problem solving. A potentially useful response to this situation is the development of computer-aided software engineering-style workbenches that facilitate the encoding of explanatory information during the knowledge acquisition phase of expert system development. Such tools may also help overcome the "maintenance" problem that affects explanation facilities.

Learning theories are also becoming more prominent in the study of the design and use of ES explanations, because they provide a wider theoretical context for, and perspective on, the role of explanations in expert systems. While we know much about how human experts explain to other humans in order to foster learning, much remains to be learned about how automated experts should explain in such situations. A related perspective is what has been termed the "learning-working conflict": in many contexts, asking for and using ES explanations during problem solving involves a direct trade-off between long-term learning and immediate problem-solving efficiency.
There are also efforts underway to extend our current understanding of ES explanations from the largely diagnostic task environments studied to date to design or heuristic configuration tasks. Initial results suggest that both the demand for explanations and the nature of explanation facilities for these tasks differ significantly from diagnostic settings. For example, explanations pertaining to "modeling notation," "sample applications," and "error correction" have been found to be a necessary requirement for an expert system that supports object-oriented data modeling tasks. Such findings may well challenge and change our current conceptualizations of the types of explanations that expert systems ought to provide. From an empirical perspective, there is also a need to develop a contingency theory for the use of expert system explanations. While it is recognized that ES explanations are not relevant for all applications of expert systems, our knowledge and understanding of which explanations are relevant to which situations is still not fully developed. However, the increasing number of studies indicating that ES explanations are relevant and valuable to users certainly calls for more research attention to be directed to ES explanation technology and development methodologies.

7. SUMMARY

This chapter has provided two vital perspectives on explanation facilities viewed as part of total expert system functionality. First, it has considered the critical design issues involved in developing such explanation technology and has suggested a specific development process for it. Second, it has focused on the use of such explanation facilities as a means of understanding the interface design features for explanations. Both perspectives suggest that it may be prudent for the expert systems community to widen the definition of the "output," or value, that such systems provide to their users.
Given the "fragility" of expertise and the difficulty of modeling and maintaining knowledge, there may well be a need to view the explanations provided as the primary output, rather than focusing on specific optimal system recommendations. Human experts, after all, are not always correct, but they can consistently provide thoughtful, relevant, and contextual explanations that foster learning.