

4.1.2. Validation

Validation is the process of ensuring that an expert system satisfies the requirements of the clients. Since the requirements of the expert system are expressed informally, it is not possible to automate validation completely. However, software tools can be used to facilitate some phases of the validation process.

Practical validation usually consists of constructing a set of test problems, obtaining the responses of the expert system to these problems, and comparing the results with expected responses. Although the idea is simple, validating a large expert system can be a formidable task.
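
To make the comparison step concrete, the following sketch shows one way such a test harness might look in Python. The ExpertSystem object, its consult method, and the TestCase fields are hypothetical stand-ins for whatever interface a particular shell actually provides.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        facts: dict      # input facts presented to the expert system
        expected: str    # response specified in advance by the domain expert

    def validate(expert_system, test_cases):
        """Run each test problem and compare the response with the expected one."""
        failures = []
        for case in test_cases:
            actual = expert_system.consult(case.facts)  # hypothetical interface
            if actual != case.expected:
                failures.append((case, actual))
        passed = len(test_cases) - len(failures)
        print(f"{passed} of {len(test_cases)} test problems gave the expected response")
        return failures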

The expectations for early expert systems were straightforward: users thought that the expert system should behave like a human expert. For example, the performance of an expert system designed to diagnose disease should be comparable to that of an experienced doctor performing diagnosis. But even this simple criterion is fraught with difficulties. Should the expert system behave like a particular doctor or like a panel of doctors? Doctors occasionally make mistakes and are sometimes subjected to malpractice suits; if an expert system makes a mistake, who is liable? These issues, however, are outside the scope of this chapter. For the purpose of validation, we need to know two things: how the problem set is designed, and how the results provided by the expert system in response to the problems are evaluated.

The easiest approach to validation is to ask the expert or experts whose knowledge is embodied in the expert system. The disadvantage of this approach is that the experts tend to test exactly the cases they put into the system initially, and they will not adequately test scenarios that did not occur to them.

A more difficult and expensive approach to validation is to recruit an independent group of experts to design tests and specify acceptable responses. This approach is not always feasible: in some areas of expertise, there may be competing "schools" that disagree about what the correct test results should be. And if the expert system is proprietary, the clients may not want to reveal its capabilities to experts who are not employed by the company.

It is sometimes possible to derive test problems from previous experience. Consider, for example, an expert system used by a bank to assess loan requests. The bank could use data obtained from previous loans and repayment records to construct the test cases for the expert system and then compare the predictions of the expert system to what actually happened.
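
A minimal sketch of this idea follows, assuming a hypothetical record format in which each historical loan carries its application data and a repaid flag taken from the bank's repayment files.

    def validate_against_history(expert_system, loan_records):
        """Score the system's predictions against actual repayment outcomes."""
        agree = 0
        for record in loan_records:
            prediction = expert_system.consult(record["application"])  # 'approve' or 'reject'
            outcome = "approve" if record["repaid"] else "reject"
            if prediction == outcome:
                agree += 1
        return agree / len(loan_records)  # fraction of past cases predicted correctly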

4.2. MODULARIZATION

As mentioned in Section 1, the development of expert system technology has followed the development of software engineering technology. During the 1970s, for example, software engineers recognized the importance of modular program structure and added features to programming languages to support modularity. They introduced the principle of "high cohesion and low coupling" as an aid to module design: an individual module is "cohesive" if it performs a simple, well-defined task; a complete system is "weakly coupled" if the connections between modules are minimized. During the 1980s, expert system designers followed the lead of software engineers and began to build modular expert systems.

There are several reasons for preferring a modular system to a monolithic system. A modular system is easier to understand because modules can be understood individually. A modular system is easier to test because modules can be tested independently; module testing also reduces the time complexity of testing discussed in Section 4.1.1. Correction and maintenance are both simplified if the expert system is modularized, because the scope of many changes is restricted to a small number of modules or even to a single module.

A module in an expert system is a group of rules (Philip, 1993). Since a rule has several "inputs" (facts that must be asserted in order for the rule to fire) and a single "output" (the rule fires, asserting its conclusion), a rule is analogous to a function or procedure rather than to a single statement in a software system. Thus, the number of rules in a module is arbitrary: even a single rule may constitute a module.
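
The analogy can be made explicit with a small sketch. The Rule class and the sample diagnostic rules below are illustrative, not part of any particular shell.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        inputs: list   # facts that must all be asserted for the rule to fire
        output: str    # conclusion asserted when the rule fires

        def fire(self, facts):
            """Return the conclusion if every input clause is satisfied."""
            return self.output if all(f in facts for f in self.inputs) else None

    # A module is simply a group of rules covering one subdomain,
    # here a (hypothetical) medical-diagnosis subdomain.
    diagnosis_module = [
        Rule("R1", ["fever", "cough"], "possible-flu"),
        Rule("R2", ["possible-flu", "high-wbc"], "order-blood-test"),
    ]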

An expert system module is cohesive if it is responsible for a well-defined subdomain of the knowledge embodied in the expert system. An individual rule is cohesive if each of its clauses is part of a single piece of knowledge.

Attempts to simplify a modular rule base can lead to loss of cohesion. Suppose that the rule base contains two rules, R1 and R2, that are cohesive but have a number of clauses in common. Creating a third rule, R3, containing the common clauses may reduce the total size of the rule base. But the new rule base may be harder to understand, test, and maintain if the new rule R3 is not cohesive (Philip, 1993). Consequently, the designer of a modular rule base should focus on high cohesion and low coupling rather than on the number and complexity of individual rules.
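
The sketch below illustrates this trade-off, reusing the Rule class from the previous example. The automotive rules and the clause names are invented for illustration; the point is that the intermediate fact asserted by R3 corresponds to no named concept in the domain, which is exactly why R3 is not cohesive.

    # (Uses the Rule class defined in the earlier sketch.)
    # Before factoring: R1 and R2 each test the shared clauses directly.
    r1 = Rule("R1", ["engine-cranks", "no-spark", "battery-ok"], "check-ignition-coil")
    r2 = Rule("R2", ["engine-cranks", "no-spark", "fuel-ok"], "check-spark-plugs")

    # After factoring: R3 asserts an intermediate fact standing for the
    # shared clauses. The rule base is smaller, but "cond-17" names no
    # single piece of domain knowledge, so R3 is not cohesive.
    r3  = Rule("R3", ["engine-cranks", "no-spark"], "cond-17")
    r1f = Rule("R1'", ["cond-17", "battery-ok"], "check-ignition-coil")
    r2f = Rule("R2'", ["cond-17", "fuel-ok"], "check-spark-plugs")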


