1.1. HISTORICAL OVERVIEW AND TERMINOLOGY

The rapid prototyping approach has been dominant in the area of expert systems for many years. Such systems employ "surface knowledge," consisting of "rules of thumb" that human experts commonly employ. The rules are highly specific to their particular domains, and are often expressed in the form of "if-then" production rules. As a consequence, the earliest techniques for V&V of expert systems were entirely empirical, i.e., adaptations of the Turing test.
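The "if-then" production rules described above can be sketched with a minimal forward-chaining interpreter; the rules and facts here are hypothetical illustrations, not taken from the text:

```python
# A minimal sketch of "if-then" production rules: each rule fires when
# all of its conditions are present in working memory, adding its
# conclusion -- the "surface knowledge" style described above.

def forward_chain(rules, facts):
    """Repeatedly fire rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical domain-specific "rules of thumb":
rules = [
    (("fever", "rash"), "suspect_measles"),
    (("suspect_measles",), "recommend_isolation"),
]
print(forward_chain(rules, {"fever", "rash"}))
# {'fever', 'rash', 'suspect_measles', 'recommend_isolation'}
```

Real expert-system shells add conflict resolution, variables, and retraction, but this captures the basic match-fire cycle.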

A common characteristic of the early V&V methods is that they were mainly product oriented, i.e., the focus was the interface, the inference engine, or the knowledge base itself. For the V&V of knowledge bases in particular, later work emphasized the verification of more formal properties, such as consistency and redundancy.
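The formal properties mentioned above can be illustrated with two simple checks over a rule base. This is a sketch under an assumed representation of rules as (conditions, conclusion) pairs and an assumed `negates` relation marking contradictory conclusions; real verifiers perform many more checks (circularity, unreachable rules, etc.):

```python
# Two formal checks on a rule base: redundancy (a rule subsumed by a
# more general rule with the same conclusion) and inconsistency
# (identical conditions yielding contradictory conclusions).

def find_redundant(rules):
    """Rule j is redundant if some rule i has a subset of j's
    conditions and the same conclusion (i fires whenever j would)."""
    redundant = set()
    for i, (ci, gi) in enumerate(rules):
        for j, (cj, gj) in enumerate(rules):
            if i != j and gi == gj and set(ci) <= set(cj) \
                    and (set(ci) != set(cj) or i < j):
                redundant.add(j)
    return sorted(redundant)

def find_contradictions(rules, negates):
    """Pairs of rules with identical conditions but conclusions
    declared contradictory by the `negates` relation (an assumption
    of this sketch)."""
    return [(i, j)
            for i, (ci, gi) in enumerate(rules)
            for j, (cj, gj) in enumerate(rules)
            if i < j and set(ci) == set(cj) and negates.get(gi) == gj]

rules = [
    (("a", "b"), "x"),
    (("a", "b", "c"), "x"),   # subsumed by the rule above -> redundant
    (("p",), "q"),
    (("p",), "not_q"),        # contradicts the rule above
]
print(find_redundant(rules))                       # [1]
print(find_contradictions(rules, {"q": "not_q"}))  # [(2, 3)]
```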

However, the systems constructed using the rapid prototyping approach have not in general had the expected success. This has led to more sophisticated methodologies for expert system construction. As a consequence, the focus of V&V has moved from the product to the validation of the process. In other words, the methods deal now with issues such as modeling and design.

An important step in the evolution of expert systems has been the identification of a level of discourse above the programming level, i.e., the "knowledge level," which is due to Newell. The adoption of a knowledge-level perspective focuses the analysis of expertise on issues such as identifying the abstract task features, what knowledge the task requires, and what kind of model the expert makes of the domain. The identification of the knowledge level in expert systems has facilitated the distinction between deep and surface knowledge. The distinction focuses not on the pattern of inference, but on the domain models which underlie the expertise. Deep knowledge makes explicit the models of the domain and the inference calculus that operates on these models; it includes a model of a particular world -- principles, axioms, laws -- that can be used to make inferences and deductions beyond those possible with rules alone. As a consequence, a large part of the work on V&V has been oriented toward the models underlying the domains and the transitions between them.

The deep knowledge movement in knowledge engineering (KE) can be compared with a similar movement in software engineering (SE). In the early days of conventional software development, assembly or low-level languages were used in order to obtain efficient execution. Only later were the advantages of high-level languages, both informal and formal, taken into consideration. It is on the abstract representations that the arguments for correctness, completeness, and consistency are considered. A similar development in techniques has been observed in KE through the use of domain models. KE is more and more seen as the incremental discovery and creation of a model for the domain of interest.

The symbol/knowledge level distinction has been approached from the perspective of V&V by Vermesan and Bench-Capon (1995). A further differentiation at the knowledge level is made, as the validity of a system depends on the validity of the underlying model, whether that model is incompletely defined (or non-existent), implicit, or explicit. The three levels are defined as follows:

  • Symbol level, which represents the executable representation of the knowledge
  • Knowledge level with implicit model, often in the head of the expert
  • Knowledge level with explicit model, initially independent of any implementation and not necessarily executable

This classification proved useful for surveying V&V of expert systems, as presented by Vermesan and Bench-Capon (1995), mainly because it answers the question: What is one verifying and validating against? At the symbol level, one mainly checks (i.e., one is not verifying and validating against something but rather is looking for internal coherence). At the knowledge level with implicit model, one mainly validates the system behavior against the human expert and/or other sources of knowledge. Finally, at the knowledge level with explicit model, one verifies the executable knowledge base against the model itself.
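Verification against an explicit model can be sketched in a very reduced form: here the "model" is only a declared vocabulary, and the check is that every term the executable rule base uses is declared in it. The vocabulary and rules are hypothetical; a real explicit model would also constrain relations and admissible inferences:

```python
# A small sketch of knowledge-level verification against an explicit
# model: flag every term used by the executable rule base that is not
# declared in the domain model (a stand-in for richer model checks).

model_vocabulary = {"fever", "rash", "suspect_measles",
                    "recommend_isolation"}

rules = [
    (("fever", "rash"), "suspect_measles"),
    (("suspect_measles",), "recommend_isolatoin"),  # typo: not in model
]

undeclared = sorted(
    term
    for conditions, conclusion in rules
    for term in (*conditions, conclusion)
    if term not in model_vocabulary
)
print(undeclared)  # ['recommend_isolatoin']
```

Even this trivial check catches a class of symbol-level defects (misspelled or undefined terms) by appeal to something above the executable representation.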

This raises the issue of defining the terminology in V&V; that is, what are the definitions for V&V? As verification and validation of expert systems is still a maturing field, a consensus among different definitions does not exist yet. SE defines validation as the process that ensures system compliance with software requirements, while verification ensures system compliance with the requirements established during the previous level of specification. Adapting these definitions to expert systems is not straightforward. Therefore, almost every application has developed and defined its own terminology, although to a great extent the definitions converge toward a common meaning:

  • Verification checks the well-defined properties of an expert system against its specification. Depending on the kind of properties, verification can focus on the knowledge base, the inference engine, or the user interface. Further distinctions can be made: verification of I/O behavior, verification of the path followed to achieve a given deduction, etc.
  • Validation checks whether an expert system corresponds to the system it is supposed to represent. Like verification, validation can focus on the same particular system aspects.

Verification and validation terminology is extended with the terms testing and evaluation:

  • Testing is the examination of the behavior of a program by executing the program on sample data sets.
  • Evaluation focuses on the accuracy of the system's embedded knowledge and advice. It helps to determine the system's attributes, such as usefulness, intelligibility, credibility of results, etc.
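Testing as defined above can be sketched as executing the system on sample data sets and comparing its output with the expert's expected verdicts. The rule engine and cases here are illustrative assumptions, not from the text:

```python
# A sketch of "testing": run the expert system on sample data sets
# and collect any outputs that disagree with the expected results
# supplied by the domain expert.

def run_system(facts):
    """Stand-in for the expert system under test: one hypothetical
    credit-approval rule."""
    return "approve" if {"income_ok", "no_defaults"} <= set(facts) \
        else "reject"

# Sample data sets paired with the expert's expected verdicts.
test_cases = [
    ({"income_ok", "no_defaults"}, "approve"),
    ({"income_ok"}, "reject"),
    (set(), "reject"),
]

failures = [(facts, expected, run_system(facts))
            for facts, expected in test_cases
            if run_system(facts) != expected]
print("failures:", failures)  # failures: []
```

An empty failure list raises confidence but, as with conventional testing, cannot establish correctness for inputs outside the sample sets.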

Relevant material on testing can be found in (Miller, 1990), while for the evaluation aspect, a relevant discussion can be found in (Liebowitz, 1986).

