

3.4.4. CONKRET

CONKRET (CONtrol Knowledge REfinement Tool) is a tool for refining control knowledge. It checks the functionality of the metarules responsible for the dynamic generation of goals and strategies in an expert system. Implicit control is not treated; explicit control is represented by metarules. It is assumed that, before CONKRET is executed, the KB has already been checked and contains no structural anomalies. The goal is not merely to reach a correct solution: with CONKRET the user tries to improve the way that solution is found. In other words, it improves the problem-solving strategies so that the whole search space need not be explored.

The tool deals with expert systems represented in VETA, a metalanguage developed in VALID. A strategy in VETA is defined as a sequence of goals that the inference engine should pursue to reach a solution. Each metarule has an associated certainty factor (CF), which the inference engine uses to select the metarule with the highest certainty when more than one is fireable. Every metarule belongs to a set of rules. The actions allowed as conclusions of metarules are the following:

  • Actions on rules: inhibit a set of rules from firing
  • Actions on goals: create, add, remove, or reorder goals from the current strategy
  • Problem-solving termination
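The CF-based selection among fireable metarules can be sketched as follows. This is an illustrative reconstruction in Python, not VETA or CONKRET code; the names Metarule and select_metarule, and the example conditions, are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Metarule:
    name: str
    cf: float                     # certainty factor of the metarule
    condition: callable           # predicate over the current facts
    actions: list = field(default_factory=list)  # goal/rule actions

def select_metarule(metarules, facts):
    """Among the fireable metarules, pick the one with the highest CF."""
    fireable = [m for m in metarules if m.condition(facts)]
    return max(fireable, key=lambda m: m.cf) if fireable else None

rules = [
    Metarule("M1", 0.6, lambda f: "fever" in f),
    Metarule("M2", 0.9, lambda f: "fever" in f and "rash" in f),
]
chosen = select_metarule(rules, {"fever", "rash"})
print(chosen.name)  # → M2 (both fire, M2 has the higher CF)
```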

Metarules are responsible for strategy generation during the problem-solving process. They may sometimes work improperly, making the problem-solving strategy deficient for a particular case. For example, if a metarule M is fired but the goals it provides are eventually found to be irrelevant to the solution, then firing M wastes resources on achieving unnecessary goals. Another type of deficiency arises while solving a case when goals that should be considered are not. CONKRET does not automatically update the KB; it only suggests to the user which repairing actions best fit the given inputs. It works in the following way:

  • Three types of deficiencies are detected: extra goal, omitted goal, and misplaced goal.
  • The cause of every deficiency is identified by executing a particular algorithm for each type of deficiency.
  • The most probable causes of the failure are selected.
  • Suggestions are offered to the user to repair the deficiencies.

The deficiencies are detected by presenting the following information as input to CONKRET: a case (observable data of an expert system), a trace (information about the KBS execution of the case), and a goal standard (the correct sequence of goals that the KBS inference mechanism should follow to solve the case efficiently). A separate tool has to be used to obtain the goal standard for a given case and produce a trace based on optimality criteria such as simplicity, path focus, etc.
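As a rough sketch of deficiency detection (CONKRET runs a separate, more elaborate algorithm per deficiency type), the three deficiency types can be read off a comparison between the goals in the trace and the goal standard. The function classify_deficiencies and the goal names are hypothetical:

```python
def classify_deficiencies(trace_goals, goal_standard):
    """Compare the goals pursued in a trace with the goal standard."""
    # Extra goal: pursued in the trace but absent from the standard.
    extra = [g for g in trace_goals if g not in goal_standard]
    # Omitted goal: in the standard but never pursued in the trace.
    omitted = [g for g in goal_standard if g not in trace_goals]
    # Misplaced goal: pursued, but not in the standard's relative order.
    common = [g for g in trace_goals if g in goal_standard]
    expected = [g for g in goal_standard if g in trace_goals]
    misplaced = [g for g, e in zip(common, expected) if g != e]
    return extra, omitted, misplaced

# Trace pursued "d" unnecessarily, skipped "b", and swapped "a" and "c".
print(classify_deficiencies(["c", "a", "d"], ["a", "b", "c"]))
# → (['d'], ['b'], ['c', 'a'])
```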

3.4.5. IN-DEPTH

IN-DEPTH II is an incremental verifier that can verify just a part of the KB, or consider only some specific verification issues. The incremental verification process is formulated in the following way: let KB0 be a verified knowledge base to which a change operator Θ is applied on an object obj, generating a new knowledge base KB1, so that:

KB1 = KB0 + Θ(obj)

where Θ ∈ {ADD, MODIFY, REMOVE} and obj ∈ {rule, module, metarule}.
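Treating the KB as a mapping from object names to objects (an assumption made for illustration; apply_change is not part of IN-DEPTH II), the change operator can be sketched as:

```python
def apply_change(kb0, theta, name, obj=None):
    """Return KB1 = KB0 + Θ(obj), with Θ ∈ {ADD, MODIFY, REMOVE}."""
    assert theta in {"ADD", "MODIFY", "REMOVE"}
    kb1 = dict(kb0)                  # KB0 itself is left untouched
    if theta in ("ADD", "MODIFY"):
        kb1[name] = obj              # insert or replace the object
    else:                            # REMOVE
        del kb1[name]
    return kb1

kb0 = {"r1": "if a then b"}          # a verified knowledge base
kb1 = apply_change(kb0, "ADD", "m", "module m")
print(sorted(kb1))  # → ['m', 'r1']
```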

Since complexity is a fundamental problem, the question is what kinds of tests should be performed, and on which elements of KB1, to verify the new knowledge base with minimum effort. The verification is composed of two steps:

  1. If KB1 contains new objects that do not appear in KB0, these objects should be verified.
  2. Changes in KB1 can affect the verification results on objects in KB1 ∩ KB0 (objects already verified in KB0). The verification tests, on those objects, whose results obtained in KB0 do not hold in KB1 should be repeated.

Consider the following example: KB1 = KB0 + ADD(m) where m is a module; the first step consists of verifying the new module, i.e., testing that it does not contain inconsistencies. The second step consists of determining how m affects the results of verification tests performed on KB0, and to what extent these results are maintained in KB1. For example, if m contains a new way of deducing a fact f existing in KB0, all verification tests involving f should be repeated since they are incomplete in KB1.

The verification method is based on computing extended labels for KB objects and testing that certain relations hold among these labels. In particular, KBinv is the set of objects in KB1 ∩ KB0 whose extended labels do not change from KB0 to KB1, i.e., for which the results of verification tests in KB0 are still valid in KB1. KBinv is determined by building a directed dependency graph G representing all dependencies in KBmax, where:

KBmax = KB1, if Θ ∈ {ADD, MODIFY}

KBmax = KB0, if Θ = REMOVE

G is formed by (N, E), where N is a set of nodes and E is a set of directed edges. Each fact, rule, module, or metarule of KBmax is represented by a different node in N, labeled by the object it represents. There is an edge from node ni to node nj when the object labeling ni depends on the object labeling nj (three kinds of dependency are distinguished). The transitive closure C of G is then computed: C is a directed graph with an edge (v, w) whenever there is a directed path from v to w in G. The nodes that cannot be reached from the nodes corresponding to the modified objects represent the objects forming the set KBinv.
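Computing KBinv then reduces to a reachability test on G: a breadth-first search from the modified objects finds exactly the nodes reachable in the transitive closure, so the closure itself need not be materialized. A minimal Python sketch, with the graph as an adjacency dict (an edge u → v meaning the object u depends on the object v) and all object names invented for the example:

```python
from collections import deque

def kb_inv(graph, modified):
    """Nodes of G unreachable from the modified objects form KBinv."""
    reached, frontier = set(modified), deque(modified)
    while frontier:                      # breadth-first search
        u = frontier.popleft()
        for v in graph.get(u, []):
            if v not in reached:
                reached.add(v)
                frontier.append(v)
    return set(graph) - reached

# Module m depends on rule r1, which deduces fact f; r2 and f2 are unrelated.
g = {"m": ["r1"], "r1": ["f"], "r2": ["f2"], "f": [], "f2": []}
# Adding module m affects r1 and f; only r2 and f2 keep their old labels.
print(sorted(kb_inv(g, {"m"})))  # → ['f2', 'r2']
```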

Using the method described above, IN-DEPTH II works in two steps:

  1. It identifies the new objects added to the KB and verifies them.
  2. It verifies again the objects in the KB that do not appear in the set KBinv.

3.4.6. Conclusions

As for current work, some of the verification systems described are in use as research prototypes, while others supplement existing commercial tools. Few commercial tools include verification facilities, and those that do exist are largely restricted to simple checks. Newer, more sophisticated verification systems continue to appear, although there is no getting away from the fact that full verification of anomalies in rule bases based on first-order logic is intractable. However, object orientation might solve some of the existing or potential problems inherent in the verification and validation of pure rule-based systems.

