

3. APPLICATION OF EXPERT SYSTEM VERIFICATION AND VALIDATION

3.1. COMPONENTS OF VERIFICATION AND VALIDATION

In the early days of expert systems, much of the work emphasized the functional requirements of the system, that is, the specification of its expected behavior. Functional requirements capture the nature of the interaction between the system and its environment -- they specify what the system is to do. They can be expressed in two different ways. The declarative approach seeks to describe what the system must do without any indication of how it is to do it. The procedural approach, on the other hand, describes what the system must do in terms of an outline design for accomplishing it.

However, good performance of the system may hide other faults that can create problems for the maintenance of the system or the provision of explanations. Nonfunctional requirements (also called constraints) should also be taken into consideration; they usually restrict the types of system solutions that can be considered, and they often specify what the system should not do. Examples of nonfunctional requirements include safety, security, performance, operating constraints, and costs. At the level of the knowledge base, these system-level requirements translate into nonfunctional requirements such as logical consistency of the knowledge, absence of redundancy, efficiency, and usefulness.

Both functional and nonfunctional requirements should be considered in V&V activities. Some of the major V&V components are listed in Table 2. Although they reflect some of the well-known features and characteristics of a software product as found in the ISO/IEC 9126 standard, the fundamental nature of expert system software requires slightly different components of the developing system to be considered.

3.2. METHODS AND TECHNIQUES

The earliest validation technique in AI was Turing's proposal for deciding whether a program could be considered "intelligent": the responses of the expert system, together with those from a human expert, are presented to an independent human expert. Although the technique received some criticism, the idea of blind testing remained central to the validation of the earliest expert systems.
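
To make the blind-testing idea concrete, the following is a minimal sketch (not from the original chapter) that scores an expert system's conclusions against a human expert's conclusions on the same set of test cases; the case data and the 97% acceptance threshold are illustrative assumptions, echoing the testability criterion in Table 2.

    # Hypothetical sketch: score an expert system's conclusions against a
    # human expert's conclusions on the same test cases. The case data and
    # the 0.97 threshold are illustrative assumptions, not from the chapter.

    def agreement_rate(system_conclusions, expert_conclusions):
        """Fraction of test cases on which the system and the expert agree."""
        matches = sum(1 for s, e in zip(system_conclusions, expert_conclusions)
                      if s == e)
        return matches / len(system_conclusions)

    if __name__ == "__main__":
        system = ["flu", "cold", "allergy", "flu", "cold"]
        expert = ["flu", "cold", "cold", "flu", "cold"]
        rate = agreement_rate(system, expert)
        print(f"agreement: {rate:.0%}")               # 80%
        print("meets 97% criterion:", rate >= 0.97)   # False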

In addition to testing, other techniques have been used by expert system developers to analyze anomalies that are indicative of errors in the construction of such systems and that can lead to faulty behavior at run-time. The most common anomalies include inconsistency, redundancy, subsumption, circularity, unreachable goals, and unfireable rules.
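
As an illustration (a sketch under assumed representations, not part of the original chapter), the checks below flag redundancy, subsumption, and circularity in a toy rule base where each rule is a set of condition atoms plus a single conclusion atom; the rule contents and helper names are invented for the example.

    # A minimal sketch, assuming each rule is (frozenset of condition atoms,
    # conclusion atom). Rule contents are invented for illustration.
    from itertools import combinations

    rules = [
        (frozenset({"fever", "cough"}), "flu"),
        (frozenset({"fever", "cough"}), "flu"),       # duplicate: redundancy
        (frozenset({"fever"}), "infection"),
        (frozenset({"fever", "rash"}), "infection"),  # subsumed by the rule above
        (frozenset({"infection"}), "fever"),          # closes a cycle
    ]

    def redundant_pairs(rules):
        """Pairs of rules with identical conditions and conclusion."""
        return [(i, j) for (i, a), (j, b) in combinations(enumerate(rules), 2)
                if a == b]

    def subsumption_pairs(rules):
        """Pairs with the same conclusion where one condition set strictly
        contains the other (the weaker rule subsumes the stronger one)."""
        return [(i, j)
                for (i, (ci, gi)), (j, (cj, gj)) in combinations(enumerate(rules), 2)
                if gi == gj and (ci < cj or cj < ci)]

    def has_circularity(rules):
        """True if some atom can be derived, directly or indirectly, from itself."""
        edges = {(c, goal) for conds, goal in rules for c in conds}

        def reaches(src, dst, seen=frozenset()):
            return any(b == dst or (b not in seen and reaches(b, dst, seen | {b}))
                       for a, b in edges if a == src)

        return any(reaches(n, n) for n in {a for a, _ in edges})

    print("redundant:", redundant_pairs(rules))      # [(0, 1)]
    print("subsumption:", subsumption_pairs(rules))  # [(2, 3)]
    print("circular:", has_circularity(rules))       # True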

Currently, the dominant techniques for V&V activities cover a wide range. For the purpose of this chapter they are clustered into two main groups, nonmethod-specific and method-specific techniques, with the focus on the latter.

3.2.1. Nonmethod-Specific Techniques

These techniques usually involve human analysis of the product, relying on individuals to use their experience to find errors. Such analyses are error prone, as they do not rely on the semantics of the product, and in general they are not automated. The most common nonmethod-specific techniques are reviews, walkthroughs, and inspections. These are general examinations of programs, and all seek to identify defects and discrepancies of the software against specifications, plans, and standards. In a review, the product is scrutinized in whatever way makes most sense: a piece of text can be taken page by page; a piece of code, procedure by procedure; a design, diagram by diagram or block by block. Informal reviews are conducted on an as-needed basis, while formal reviews are conducted at the end of each life-cycle phase, with the acquirer of the software formally involved. Inspections attempt to detect and identify defects, while walkthroughs, in addition, consider possible solutions. Inspections and walkthroughs are performed by a group composed of peers from software quality assurance, development, and testing. Formal inspections are significantly more effective than walkthroughs, as they are performed by teams led by a moderator who is formally trained in the inspection process.


TABLE 2
Verification and Validation Components
 
Characteristic: Description

Competency: This deals with the quality of the knowledge in a system relative to human skills. It can be assessed by comparing the source with other sources of expertise.
Completeness: The completeness of a system with respect to the requirement specification is a measure of the portion of the specification implemented in the system. Applied to an expert system, this involves ensuring that all the knowledge is referenced and that there is no attempt to access non-existent knowledge.
Consistency: This means the requirement specification or expert system is free of internal contradiction. For example, a KB that contains two rules specifying opposite conclusions from the same condition is not internally consistent.
Correctness: The knowledge within a knowledge base should be 100% correct. However, different human experts may have different opinions on the correctness of a knowledge base.
Testability: The system should be designed in such a way as to permit a testing plan to be carried out. For example, if the requirement specification for an expert system states that "the system should perform at the level of an expert," such a specification would be difficult to test. Restated as "the system should arrive at the same conclusion as the human expert on 97% of a set of test cases," the requirement becomes easier to test.
Relevance: This criterion determines whether the system contains extraneous information with respect to the requirement specification. For example, the relevance criterion is violated when the expert system can solve problems or has features that are not specified in the requirement specification.
Usability: One can have a perfectly working system, but if it does not meet the demands of its users, it will not be used.
Reliability: Reliability determines how often the system fails to arrive at the correct solution to a problem. An expert system has high reliability if it consistently arrives at the correct solution for a large proportion of the problems given to it.

Inspection of an expert system aims at detecting semantically incorrect knowledge in the KB. This activity is usually performed manually by a human expert who has expertise in the application domain. The expert can be the same expert who provided the knowledge for the KB, but could also be an expert independent of those involved in the ES development. There are a limited number of errors that human experts can detect "by eye," i.e., those errors that can be found within the same piece of knowledge. Errors that come from the interaction of several KB components are more difficult to detect.
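
A simple automated check can complement manual inspection for this kind of cross-rule error. The sketch below (illustrative assumptions: conclusions are atom/truth-value pairs, and the rule contents are invented) flags pairs of rules that fire on identical conditions but assert opposite conclusions, the consistency violation described in Table 2.

    # A minimal sketch, assuming conclusions are (atom, truth_value) pairs;
    # the rule contents below are invented for illustration.
    from itertools import combinations

    rules = [
        (frozenset({"fever", "cough"}), ("flu", True)),
        (frozenset({"fever", "cough"}), ("flu", False)),  # contradicts rule 0
        (frozenset({"sneezing"}), ("allergy", True)),
    ]

    def contradictory_pairs(rules):
        """Pairs of rules with identical conditions and opposite conclusions."""
        return [(i, j)
                for (i, (ci, (ai, vi))), (j, (cj, (aj, vj)))
                in combinations(enumerate(rules), 2)
                if ci == cj and ai == aj and vi != vj]

    print("contradictory rule pairs:", contradictory_pairs(rules))  # [(0, 1)]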

Although important, nonmethod-specific techniques are generally not enough to assure that the software being developed will satisfy functional and other requirements, and that each step in the process of building the software will yield the right product.

