

3.3. PROBABILISTIC REASONING

Uncertainty, manifested as partial or incomplete knowledge about the world, is another problem with the classical logic approach to intelligent system design. We associate with the sentences stored in the knowledge base their evidence: the degree of belief that the system has about the facts of the world described by those sentences. The main tool for dealing with degrees of belief is probability theory, which assigns to sentences numbers between 0 and 1, interpreted as the degree of belief that the system has in each sentence. Probability theory has only recently been adopted in AI as a principled method of representing uncertainty, largely because of the intractability of the most general existing techniques for computing conditional probabilities. An important key to recent improvements has been the exploitation of explicit dependence and independence assumptions, largely established by research begun in the mid-1980s, which led to the development of belief networks, a technique for efficiently computing conditional and joint probability distributions (Pearl).

A belief network is a directed acyclic graph in which: (1) the nodes represent random variables, (2) an arrow between two nodes represents the influence of one node over the other, and (3) each node has a conditional probability table that measures the effects its parents have on it. The joint probability distribution can be calculated from the information in the network. Algorithms have been developed to incrementally build a belief network over n random variables x1, ..., xn from which the joint probability distribution P(x1, ..., xn) can be computed. This computation requires selecting an ordering of the random variables that guarantees the conditional independence property between a given node and its parents and, furthermore, requires that the network be constructed so that the axioms of probability are not violated.
Once a network has been constructed, it can be used to make inferences of various types, for example, those found in diagnostic and model-based systems and many other intelligent systems applications.
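The factorization described above can be made concrete. The sketch below builds a hypothetical three-node network (Cloudy influences Rain, Rain influences WetGrass; the structure and numbers are illustrative, not from the text) and computes joint probabilities as the product of each node's conditional probability given its parents:

```python
# Hypothetical belief network: Cloudy -> Rain -> WetGrass.
# Each table stores P(node = True | parent value); the joint factorizes as
# P(c, r, w) = P(c) * P(r | c) * P(w | r).

P_cloudy = 0.5
P_rain_given_cloudy = {True: 0.8, False: 0.2}   # P(rain=True | cloudy)
P_wet_given_rain = {True: 0.9, False: 0.1}      # P(wet=True | rain)

def prob(p_true, value):
    # Probability that a Boolean variable takes the given value.
    return p_true if value else 1.0 - p_true

def joint(c, r, w):
    # Chain-rule product over the network's conditional tables.
    return (prob(P_cloudy, c)
            * prob(P_rain_given_cloudy[c], r)
            * prob(P_wet_given_rain[r], w))

# The joint distribution must sum to 1 over all eight assignments,
# so the axioms of probability are respected by construction.
total = sum(joint(c, r, w)
            for c in (True, False)
            for r in (True, False)
            for w in (True, False))
```

With eight entries instead of an explicit 2^3 joint table, the savings are trivial here, but they grow rapidly as independence assumptions prune parent sets in larger networks.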

The Dempster-Shafer theory was designed to deal with the distinction between uncertainty and ignorance (Dempster; Shafer). In this approach, formulae are assigned an interval [Belief, Plausibility] in which the degree of certainty must lie. Belief measures the strength of the evidence in favor of a set of formulae. Plausibility measures the extent to which the evidence in favor of a formula leaves room for belief in its negation. Thus, the belief-plausibility interval measures not only our level of belief in a formula, but also the amount of information we have.
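The belief-plausibility interval can be computed directly from a mass function. A minimal sketch, assuming a hypothetical two-element frame of discernment {a, b} with some mass committed to {a} and the rest left on the whole frame (representing ignorance):

```python
# Hypothetical mass function: 0.6 committed to {a}, 0.4 uncommitted
# (assigned to the whole frame {a, b}), i.e., ignorance.
masses = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}

def belief(hypothesis):
    # Bel(H): total mass of all focal sets contained in H.
    return sum(m for s, m in masses.items() if s <= hypothesis)

def plausibility(hypothesis):
    # Pl(H): total mass of all focal sets that intersect H.
    return sum(m for s, m in masses.items() if s & hypothesis)

a = frozenset({"a"})
interval = (belief(a), plausibility(a))
```

Here the interval for {a} is [0.6, 1.0]: the width 0.4 is uncommitted mass (ignorance) rather than evidence for the negation, which is exactly the distinction the theory is designed to capture.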

Fuzzy set theory was developed to determine how well an object satisfies a vague description by allowing degrees of set membership (Zadeh). Fuzzy logic determines the degree of truth of a sentence as a function of the degrees of truth of its components. It has the problem that it is inconsistent when rendered as propositional or first-order logic, as the truth of P ∨ ¬P does not necessarily evaluate to 1. However, fuzzy logic has been very successful in commercial applications, particularly in control systems of home electronic appliances. There are arguments that the success of these applications stems from the fact that the knowledge bases are small, inferences are simple, and tunable parameters exist to improve system performance, and that the fuzzy logic implementation itself is incidental to their success. Scaling fuzzy logic to more complex problems will result in the same problems faced by probabilistic techniques and certainty factors (Elkan).
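The standard fuzzy connectives make the truth-functional point above easy to see. A minimal sketch using the usual min/max/complement operators (the membership degree 0.7 is an illustrative value):

```python
# Standard fuzzy connectives: complement, min-AND, max-OR.
def f_not(p):
    return 1.0 - p

def f_and(p, q):
    return min(p, q)

def f_or(p, q):
    return max(p, q)

tall = 0.7  # illustrative degree to which "x is tall" holds

# Classically, P or not-P is a tautology (truth value 1).
# Under fuzzy semantics it evaluates to max(0.7, 0.3) = 0.7,
# which is the inconsistency with classical logic noted above.
excluded_middle = f_or(tall, f_not(tall))
```

The degree of truth of a compound sentence is computed purely from the degrees of its parts, which is what makes fuzzy inference cheap, and also what breaks classical tautologies.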

Certainty factors were a popular technique for uncertain reasoning in expert systems during the 1970s and 1980s, invented for use in the medical expert system MYCIN. Numeric values between -1 and +1 were assigned to rules and input data, and ad hoc heuristics were used to calculate the certainty factor of a fact inferred by one or more rules. It was assumed that the rules had little interaction, so that an independence assumption could be made between them. But as rule interaction increased, incorrect degrees of belief were computed that overcounted evidence. Certainty factors have been shown to be more or less equivalent to a kind of probabilistic reasoning, and they have largely been supplanted in modern knowledge-based systems development by belief networks, Dempster-Shafer theory, and fuzzy logic.
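A sketch of the MYCIN-style rule for combining two certainty factors bearing on the same fact illustrates both the heuristic and the overcounting problem (the CF values are illustrative):

```python
def combine(cf1, cf2):
    # MYCIN-style parallel combination of two certainty factors,
    # each in [-1, +1], for the same conclusion.
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules each supporting a fact with CF 0.6 combine to
# 0.6 + 0.6 * (1 - 0.6) = 0.84. The formula treats the rules as
# independent; if they actually share evidence, belief is overcounted.
combined = combine(0.6, 0.6)
```

The combination is commutative and keeps results in [-1, +1], but it has no mechanism for expressing dependence between rules, which is the failure mode noted above.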

3.4. INDUCTION

Induction is used to generalize knowledge from particular observations. We start with a set of formulae representing facts about the world. We distinguish a subset Δ of these facts as knowledge to be generalized and treat the rest as background theory Γ. We require that the background theory Γ does not by itself imply the knowledge base Δ.

We define a formula Φ to be an inductive conclusion if and only if the following conditions hold:

  1. The conclusion is consistent with the background theory and knowledge base: Γ ∪ Δ ∪ {Φ} is satisfiable.

  2. The conclusion, together with the background theory, explains the data: Γ ∪ {Φ} ⊨ Δ.

Inductive reasoning is not necessarily sound: although an inductive conclusion must be consistent with the formulae in the background theory and knowledge base, it need not be a logical consequence of them (though not every inductive conclusion is unsound). Thus, inductive inference is a type of nonmonotonic reasoning.
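The two conditions can be checked by brute-force model enumeration in a toy propositional setting. A minimal sketch, assuming the hypothetical theory Γ = {p}, data Δ = {q}, and candidate conclusion Φ = (p → q):

```python
from itertools import product

# Toy propositional illustration (hypothetical choices):
# background theory Gamma = {p}, observed data Delta = {q},
# candidate inductive conclusion Phi = (p -> q).

def gamma(p, q):
    return p

def delta(p, q):
    return q

def phi(p, q):
    return (not p) or q   # material implication p -> q

models = list(product([True, False], repeat=2))

# Precondition: Gamma alone does not imply Delta
# (otherwise there would be nothing to induce).
gamma_implies_delta = all(delta(p, q) for p, q in models if gamma(p, q))

# Condition 1: Gamma, Delta, and Phi are jointly satisfiable.
consistent = any(gamma(p, q) and delta(p, q) and phi(p, q)
                 for p, q in models)

# Condition 2: Gamma together with Phi entails Delta.
explains = all(delta(p, q) for p, q in models if gamma(p, q) and phi(p, q))
```

Here the precondition fails to hold for Γ alone (the model where p is true and q is false satisfies Γ but not Δ), while both conditions hold once Φ is added, so Φ qualifies as an inductive conclusion. Unsoundness shows up in the same example: Φ is not a logical consequence of Γ and Δ.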

Concept formation is a common type of inductive inference. The knowledge base asserts a common property of some observations and denies that property to others, and the inductive conclusion is a universally quantified formula that summarizes the conditions under which an observation has that property. The work of Winston (Winston), Mitchell (Mitchell), Michalski (Michalski), Buchanan (Buchanan), and Lenat (Lenat) has contributed to the development of concept formation as a type of knowledge acquisition and machine learning for expert systems.
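A small sketch of concept formation in the spirit of specific-to-general hypothesis search (the attribute vectors and wildcard representation are illustrative assumptions, not drawn from the text):

```python
# Hypothetical observations as (color, shape) attribute vectors.
# Positives have the target property; negatives deny it.
positives = [("red", "round"), ("red", "square")]
negatives = [("blue", "round")]

def generalize(hypothesis, example):
    # Minimally generalize: keep matching attribute values,
    # replace mismatches with the wildcard "?".
    return tuple(hv if hv == ev else "?"
                 for hv, ev in zip(hypothesis, example))

def covers(hypothesis, example):
    # A hypothesis covers an example if every non-wildcard matches.
    return all(hv in ("?", ev) for hv, ev in zip(hypothesis, example))

# Start from the first positive and generalize over the rest.
hypothesis = positives[0]
for ex in positives[1:]:
    hypothesis = generalize(hypothesis, ex)

# hypothesis is ("red", "?"): roughly, "every red object has the
# property" -- a universally quantified summary of the positives.
consistent_with_negatives = not any(covers(hypothesis, n)
                                    for n in negatives)
```

The resulting hypothesis covers all positives while excluding the negative, which is the consistency-plus-explanation pattern required of an inductive conclusion in the preceding section.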

