

A special kind of method, autoepistemic logic, was developed by Stalnaker and Moore for creating different closed worlds for different agents, based on their individual views and beliefs. The intriguing problem is an agent's belief about the beliefs of another agent. A fitting metaphor for the autoepistemic procedure is a game with many distorting mirrors, each reflecting the deformed image of the previous one, while some operation is performed on the real object as it appears in the last mirror!
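The idea can be illustrated with a minimal sketch, here in Python; the agent names and the believes() helper are hypothetical and not part of the text. Each agent's closed world is a set of believed propositions, and a belief about another agent's belief is stored as a nested item, the "mirror image" of that agent's world as seen by the believer.

    # Minimal illustrative sketch (assumed names): per-agent closed worlds with
    # a nested belief, i.e., agent A's possibly distorted image of agent B's world.
    beliefs = {
        "A": {"engine_faulty", ("B", "engine_ok")},  # A believes B believes the engine is OK
        "B": {"engine_ok"},
    }

    def believes(agent, proposition):
        # Closed world per agent: whatever is not in the set is taken as not believed.
        return proposition in beliefs.get(agent, set())

    print(believes("A", ("B", "engine_ok")))  # True: A's image of B's belief
    print(believes("B", "engine_faulty"))     # False within B's own closed world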

The real difficulty is, as in real life, the possible and unforeseeable conflict among the agents and their respective worlds. This is where knowledge representation joins the disciplines of conflict resolution: decision support, social choice, voting and polling, and the theories and methods of uncertainty and games. This relation between uncertainty calculations and decision was mentioned before. A decision support system which makes or advises a choice, i.e., a preference among possible conflict resolutions, should be an integral knowledge component of the expert system. Which conflict resolution procedure should be used, and what the previously definable preferences are, are obvious prerequisite knowledge constituents, as was exemplified by the different possible strategies of medical diagnosis and therapy. An important result of Arrow was the theoretical proof that no finally rational choice exists in the presence of several rational preferences. This fact is experienced in our everyday life, and it suggests applying a sophisticated decision support system for weighing the pros and cons of alternatives, while the final decision remains mostly a human responsibility. Knowledge representation should reflect this; the simplest solution in any dubious situation is a return to a man-machine dialog with the user or with the domain expert.
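Arrow's result can be felt in the classic three-voter example: pairwise majority voting over three transitive individual rankings yields a cyclic group preference, so no final rational collective choice emerges. The following Python sketch is purely illustrative; the preference profiles are hypothetical.

    from itertools import combinations

    # Three rational (transitive) individual rankings over alternatives a, b, c.
    voters = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]

    def majority_prefers(x, y):
        # True if a majority of voters rank x above y.
        wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
        return wins > len(voters) / 2

    for x, y in combinations("abc", 2):
        winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
        print(f"majority prefers {winner} over {loser}")
    # Prints a over b, c over a, b over c: a cycle, so the group has no rational ordering.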

The third way of creating coherence in knowledge is the creation of new frames and new structures, which is mostly a task of human learning and ingenuity, but can also be supported by machine learning.

The first-order logic structures behind the various systems permit not only the use of these systems for knowledge acquisition in a guided dialog between domain expert and knowledge engineer, but can also serve as a background for learning, provided information can be fed into the system in a suitable way. In the learning mode of operation, not only are the previously defined slots filled in with new information in a prescribed way, but a search can also be initiated for new structures, such as new relations for a relational database.
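A minimal sketch of these two modes of the learning operation, with a hypothetical patient frame and attribute names not taken from the text: predefined slots are filled from incoming facts, while unforeseen attributes are collected as candidates for new relations in the underlying schema.

    # Sketch with assumed slot and attribute names.
    patient_frame = {"name": None, "age": None, "diagnosis": None}  # predefined slots

    incoming_facts = {"name": "J. Doe", "age": 54, "diagnosis": "angina",
                      "smoker": True}  # "smoker" was not foreseen in the frame

    new_relation_candidates = {}
    for attribute, value in incoming_facts.items():
        if attribute in patient_frame:
            patient_frame[attribute] = value            # prescribed slot filling
        else:
            new_relation_candidates[attribute] = value  # proposal for a new relation

    print(patient_frame)
    print("candidate new relations:", new_relation_candidates)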

The open world is fundamentally represented in natural language; this is the reason language is still, and will remain forever, a subject of investigation in human relations. The bridge between natural language and computer representation is therefore a basic problem and, as far as it can be approximated, a basic tool for knowledge representation. This is the real blackboard! Syntactic parsing and semantic analysis belong to these tools, all achievements of general computational linguistics. They are, therefore, ingredients of every more sophisticated knowledge representation system.

Case-based reasoning can use all these methods of representation. The essential problem is the same as that of human expertise: recalling certain similar cases, and some only seemingly similar ones, the expert must decide whether a similar decision is applicable or not. The practice of Anglo-Saxon precedent-based Common Law, or that of a medical consultant, are the best examples. The cases are represented in frames and scripts; the distances to other schemes are mostly defined by fuzzy-like estimations, Bayesian types of conditional probabilities, or distance measures used in clustering problems such as the Nearest Neighbor methods. The representation of the singular cases and case prototypes, such as a given diagnostic pattern or a textbook description of a malady, is the simpler part of the task; the invention of the pertinent similarity measure and the definition of its relation to viewpoints (very special modalities) is the real human challenge.
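The retrieval step can be sketched as a weighted nearest-neighbor search over frame-like case records; the features, weights, and diagnoses below are hypothetical, and the weighting stands in for the human-invented similarity measure.

    import math

    # Stored precedents as frame-like records (hypothetical values).
    cases = [
        {"fever": 39.5, "cough": 1, "age": 60, "diagnosis": "pneumonia"},
        {"fever": 37.2, "cough": 1, "age": 25, "diagnosis": "common cold"},
    ]
    weights = {"fever": 1.0, "cough": 2.0, "age": 0.05}  # the human-chosen viewpoint

    def distance(case, query):
        # Weighted Euclidean distance as a stand-in similarity measure.
        return math.sqrt(sum(weights[f] * (case[f] - query[f]) ** 2 for f in weights))

    query = {"fever": 39.0, "cough": 1, "age": 58}
    best = min(cases, key=lambda case: distance(case, query))
    print("most similar precedent suggests:", best["diagnosis"])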

3.5. CONNECTIVITY, PATTERNS, AND SCHEMES

The most important representation devices of the connectionist-pattern view are neural nets. The evaluation of pattern-type representations, i.e., the conceptual clustering of net images, pattern representations in data files, etc., is a further representation task. Three major methods are available, each in many variations.

The first is statistical clustering; the most popular procedure is the Nearest Neighbor algorithm, which collects the points of net and data representations that lie closer to one or another density center.
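As a sketch of this first method, with hypothetical two-dimensional points and density centers, each point is simply assigned to the nearer center:

    import math

    centers = [(0.0, 0.0), (5.0, 5.0)]                  # assumed density centers
    points = [(0.5, 1.0), (4.8, 5.2), (1.2, 0.3), (6.0, 4.5)]

    def nearest_center(point):
        return min(range(len(centers)), key=lambda i: math.dist(point, centers[i]))

    clusters = {0: [], 1: []}
    for point in points:
        clusters[nearest_center(point)].append(point)   # collect points around each center
    print(clusters)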

The second group of methods proceeds by minimizing the defining characteristics, analogous to a minimal-word explanation of an event. This works with the geometrical metaphor cited earlier: the data points form a multidimensional object in a multidimensional space. The eigendirections, the main axes of these information objects, represent the main definitional characteristics, and the lengths of the axes represent the relevance of those characteristics. An evaluation reduces the number of these axes, i.e., of the definitional characteristics, as far as they still define the pattern well enough to separate it from others.
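A sketch of this second method, assuming NumPy and random data in place of a real information object: the eigendirections of the covariance matrix are the main axes, the eigenvalues their relevance, and the least relevant axes are dropped.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 5))       # 200 points in a 5-dimensional space
    data[:, 3] = 0.9 * data[:, 0]          # one nearly redundant characteristic

    cov = np.cov(data, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]  # most relevant axes first

    keep = 3                               # reduced set of defining characteristics
    reduced = data @ eigenvectors[:, order[:keep]]
    print("relevance of kept axes:", np.round(eigenvalues[order[:keep]], 2))
    print("reduced representation shape:", reduced.shape)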

The third group treats the coded information patterns in the sense of information and coding theory. Differences in information content (entropy) and the code distances between code representations, e.g., Hamming distances, are the features that define similarities and separations, i.e., the representations of conceptual definitions.
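A sketch with hypothetical binary codes: entropy measures the information content of a pattern, and the Hamming distance measures the separation between two coded patterns.

    from math import log2

    def entropy(bits):
        # Entropy of a binary string, in bits per symbol.
        p1 = bits.count("1") / len(bits)
        if p1 in (0.0, 1.0):
            return 0.0
        return -(p1 * log2(p1) + (1 - p1) * log2(1 - p1))

    def hamming(a, b):
        # Number of positions in which two equal-length codes differ.
        return sum(x != y for x, y in zip(a, b))

    code_a, code_b = "1011010", "1001110"
    print("entropy of code A:", round(entropy(code_a), 3))
    print("Hamming distance between A and B:", hamming(code_a, code_b))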

3.6. REPRESENTATIONAL RELATIONS OF FURTHER INTELLIGENT ACTIONS

Representation is a device for the creation and use of intelligent systems such as expert systems. This means that the representation form cannot be separated from the preceding and subsequent steps of application, i.e., from knowledge acquisition, inference, and the explanation of actions. As was noted earlier, the choice of knowledge representation forms and methods depends on these applications, and that is the reason advanced shells and toolkits offer various methods.

A typical representation problem for further processing is the application of genetic algorithms. The individual components whose composition is to be varied in search of a feasible solution must be selected with some knowledge of the specific problem, e.g., the parts of an engine to be assembled in a feasible and most efficient way. The next part of the required knowledge is the permitted extent of variation, and finally the fitness function that evaluates the successive generations of variants.
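The three knowledge ingredients can be seen in a minimal genetic-algorithm sketch: a hypothetical bit-string encoding of the components, a mutation rate as the permitted extent of variation, and a toy fitness function evaluating each generation. None of these specifics come from the text.

    import random

    random.seed(1)
    TARGET = [1, 1, 1, 1, 1, 1, 1, 1]        # stands in for the most efficient assembly

    def fitness(individual):                 # evaluates a variant
        return sum(gene == goal for gene, goal in zip(individual, TARGET))

    def mutate(individual, rate=0.1):        # permitted extent of variation
        return [1 - gene if random.random() < rate else gene for gene in individual]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for _ in range(30):                      # successive variational generations
        population.sort(key=fitness, reverse=True)
        parents = population[:10]            # selection by fitness
        population = parents + [mutate(random.choice(parents)) for _ in range(10)]

    best = max(population, key=fitness)
    print("best variant:", best, "fitness:", fitness(best))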

