3. APPLICATION OF EXPERT SYSTEM VERIFICATION AND VALIDATION

3.1. COMPONENTS OF VERIFICATION AND VALIDATION

In the early days of expert systems, much of the work emphasized the functional requirements of the system, i.e., the specification of its expected behavior. Functional requirements capture the nature of the interaction between the system and its environment -- they specify what the system is to do. They can be expressed in two different ways. The declarative approach describes what the system must do without any indication of how it is to do it. The procedural approach, on the other hand, describes what the system must do in terms of an outline design for accomplishing it. However, good performance of the system may hide other faults that can create problems for maintaining the system or providing explanations.

Nonfunctional requirements (also called constraints) should also be taken into consideration; they usually restrict the types of system solutions that may be considered, and they often specify what the system should not do. Examples of nonfunctional requirements include safety, security, performance, operating constraints, and costs. At the level of the knowledge base, these system-level requirements translate into nonfunctional requirements such as logical consistency of the knowledge, redundancy, efficiency, and usefulness. Both functional and nonfunctional requirements should be considered in V&V activities. Some of the major V&V components are listed in Table 2. Although they reflect some of the well-known features and characteristics of a software product as found in the ISO/IEC 9126 standard, the fundamental nature of expert system software requires slightly different components of the developing system to be considered.

3.2. METHODS AND TECHNIQUES

The earliest validation technique in AI was Turing's proposal on how to decide whether a program could be considered "intelligent": the responses of the expert system, together with those of a human expert, are presented to an independent human expert. Although the technique received some criticism, the idea of blind testing remained central in the validation of the earliest expert systems. In addition to testing, other techniques have been used by expert system developers to analyze anomalies that are indicative of errors in the construction of such systems and that can lead to faulty behavior at run time. The most common anomalies include: inconsistency, redundancy, subsumption, circularity, unreachable goals, and unfireable rules. Currently, the dominant techniques for V&V activities cover a wide range. For the purposes of this chapter, they are clustered into two main groups, nonmethod-specific and method-specific techniques, with the focus on the latter.

3.2.1. Nonmethod-Specific Techniques

These techniques usually involve human analysis of the product, relying on individuals to use their experience to find errors. Such analyses are error prone, as they do not rely on the semantics of the product, and in general they are not automated. The most common nonmethod-specific techniques are reviews, walkthroughs, and inspections. These activities are general examinations of programs, and all seek to identify defects and discrepancies of the software against specifications, plans, and standards. At a review, the product is scrutinized in whatever way makes most sense: a piece of text can be taken page by page; a piece of code, procedure by procedure; designs, diagram by diagram; diagrams, block by block. Informal reviews are conducted on an as-needed basis, while formal reviews are conducted at the end of each life-cycle phase, with the acquirer of the software formally involved.
Inspections attempt to detect and identify defects, while walkthroughs, in addition, consider possible solutions. Inspections and walkthroughs are performed by a group of peers from software quality assurance, development, and test. Formal inspections are significantly more effective than walkthroughs, as they are performed by teams led by a moderator who is formally trained in the inspection process.
Inspection of an expert system aims to detect semantically incorrect knowledge in the KB. This activity is usually performed manually by a human expert with expertise in the application domain. The expert can be the same one who provided the knowledge for the KB, but could also be independent of those involved in the ES development. There is a limited class of errors that human experts can detect "by eye," i.e., errors that lie within a single piece of knowledge. Errors that arise from the interaction of several KB components are more difficult to detect. Although important, nonmethod-specific techniques alone are generally not enough to assure that the software being developed will satisfy functional and other requirements, and that each step in the process of building the software will yield the right product.
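Unlike the manual techniques above, the rule-base anomalies mentioned earlier (redundancy, subsumption, circularity, unfireable rules) depend only on the structure of the knowledge base and so lend themselves to automated detection. A minimal sketch in Python over an invented five-rule propositional KB -- rule names, fact names, and helper functions are illustrative assumptions, not taken from this chapter:

```python
# Toy propositional rule base: name -> (set of premise atoms, conclusion atom).
# The rules are contrived so that each anomaly checker finds something.
RULES = {
    "r1": ({"fever", "cough"}, "flu"),
    "r2": ({"fever"}, "flu"),        # subsumes r1: fewer premises, same conclusion
    "r3": ({"flu"}, "rest"),
    "r4": ({"rest"}, "flu"),         # r3 + r4 form a circularity
    "r5": ({"lab_result"}, "x"),     # unfireable: premise is never derivable
}

OBSERVABLE = {"fever", "cough"}      # facts the user can assert directly

def subsumed_pairs(rules):
    """(a, b) pairs where a's premises are a strict subset of b's and the
    conclusions match -- rule b is subsumed (redundant)."""
    return [(a, b)
            for a, (pa, ca) in rules.items()
            for b, (pb, cb) in rules.items()
            if a != b and ca == cb and pa < pb]

def circular_rules(rules):
    """Rules whose conclusion can, via other rules, re-derive a premise."""
    edges = {}                       # atom -> atoms concluded from it
    for prem, concl in rules.values():
        for p in prem:
            edges.setdefault(p, set()).add(concl)
    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in edges.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen
    return [name for name, (prem, concl) in rules.items()
            if prem & reachable(concl)]

def unfireable_rules(rules, observable):
    """Rules with a premise that forward chaining can never establish."""
    known = set(observable)
    changed = True
    while changed:                   # naive fixpoint over derivable atoms
        changed = False
        for prem, concl in rules.values():
            if prem <= known and concl not in known:
                known.add(concl)
                changed = True
    return [name for name, (prem, _) in rules.items() if not prem <= known]

print(subsumed_pairs(RULES))                 # [('r2', 'r1')]
print(circular_rules(RULES))                 # ['r3', 'r4']
print(unfireable_rules(RULES, OBSERVABLE))   # ['r5']
```

Checkers like these are the automated counterpart of the method-specific techniques discussed next; real tools must also handle variables, negation, and certainty factors, which this propositional sketch deliberately omits.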