

The nonparametric representation is indirect in the sense that any input datum is expressed via a collection of receptive fields (RBFs). The way in which these fields are matched with the data is not unique; it includes possibility, necessity, and compatibility computations. The nonparametric representation keeps the size of the input layer of the network under control, as this size depends directly upon the number of receptive fields partitioning the input space.
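As a rough illustration of how possibility and necessity values can be computed for a set-valued datum against a collection of receptive fields, the sketch below uses a few one-dimensional Gaussian fields; the modal values, the spread, and the discretization grid are assumptions introduced only for this example.

```python
import numpy as np

# Hypothetical one-dimensional Gaussian receptive fields A_i(x) = exp(-(x - m_i)^2 / sigma^2)
modal_values = np.array([1.0, 2.5, 4.0])   # assumed modal values m_i
sigma = 0.7                                # assumed spread

def receptive_fields(x):
    """Membership degrees of a numeric value x in each receptive field."""
    return np.exp(-((x - modal_values) ** 2) / sigma ** 2)

def possibility_necessity(lo, hi, n=200):
    """Possibility and necessity of the interval datum X = [lo, hi] with respect
    to each receptive field A_i, evaluated on a discretization grid:
      Poss(X, A_i) = sup_x min(X(x), A_i(x))    -> max of A_i over the interval
      Nec(X, A_i)  = inf_x max(1 - X(x), A_i(x)) -> min of A_i over the interval
    (X is the characteristic function of the interval)."""
    grid = np.linspace(lo, hi, n)
    vals = np.stack([receptive_fields(x) for x in grid])   # shape (n, number of fields)
    return vals.max(axis=0), vals.min(axis=0)

poss, nec = possibility_necessity(2.0, 3.0)
print("possibility:", np.round(poss, 3))
print("necessity:  ", np.round(nec, 3))
```

The gap between the two vectors reflects the uncertainty carried by the datum; for a pointwise (numeric) datum the possibility and necessity values coincide.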

Some illustrative simulation studies concern a classification of nonnumeric patterns situated in a two-dimensional feature space. Our objective is to analyze the behavior of the neural classifier for data (patterns) of different levels of granularity, as well as to investigate various topologies of the preprocessing layer and analyze their ability to cope with uncertainty. The nonlinear classification boundary is given as a sine wave, x2 = 5 sin(x1). Both x1 and x2 are distributed in the [0, 5] interval. The patterns are assigned to class ω1 if x2 < 5 sin(x1) or to the second class if x2 > 5 sin(x1). The elements of the partition space are defined as Gaussian-like membership functions (fuzzy relations) of the form
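(the expression itself does not survive in this copy; a plausible Gaussian-like form, written here only as a hedged reconstruction with an assumed common spread σ, is

$$A_i(x_1, x_2) = \exp\!\left(-\frac{(x_1 - m_{i1})^2 + (x_2 - m_{i2})^2}{\sigma^2}\right), \qquad i = 1, 2, \ldots, 9$$

)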

with the series of modal values

m1 = (-2.5, -3.0)
m2 = (-4.5, 0.0)
m3 = (-2.5, 3.0)
m4 = (-1.5, -2.0)
m5 = (-1.5, -3.0)
m6 = (1.5, 3.0)
m7 = (0.45, -3.5)
m8 = (2.4, -3.5)
m9 = (3.9, 2.5)
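This setup can be sketched compactly in code; the spread of the Gaussian-like membership functions is an assumed value (it is not given above), and the function names are introduced only for this illustration.

```python
import numpy as np

# Modal values of the nine two-dimensional receptive fields (from the text)
M = np.array([[-2.5, -3.0], [-4.5, 0.0], [-2.5, 3.0],
              [-1.5, -2.0], [-1.5, -3.0], [ 1.5, 3.0],
              [ 0.45, -3.5], [ 2.4, -3.5], [ 3.9, 2.5]])
SIGMA = 1.5   # assumed spread of the Gaussian-like membership functions

def fields(x):
    """Membership of a numeric pattern x = (x1, x2) in each of the nine fields."""
    return np.exp(-np.sum((x - M) ** 2, axis=1) / SIGMA ** 2)

def boundary(x1):
    """Nonlinear classification boundary x2 = 5 sin(x1)."""
    return 5.0 * np.sin(x1)

def numeric_class(x):
    """Class of a pointwise (numeric) pattern: 1 for omega_1, 2 for omega_2."""
    return 1 if x[1] < boundary(x[0]) else 2
```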

The nonnumeric aspect of the patterns is admitted in the form of sets, namely squares of width “2d”. This parameter (d) is referred to as the granularity of the data. The higher the value of “d”, the lower the granularity of the patterns. The class membership of such nonnumeric patterns is defined by computing the portions of the square located on the corresponding side of the classification boundary. More specifically, the class membership in ω1 is computed as

while the class membership in ω2 is governed by the expression

where X denotes a characteristic function of the binary relation. In the series of experiments, we consider the same training set as far as the centers of the patterns are concerned. The granularity of the patterns is modified. An example of the nonnumeric patterns is shown in Fig. 6.5.
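Reading these class memberships as fractions of the square's area lying on either side of the boundary (a hedged reconstruction consistent with the description above, not necessarily the original notation), the two expressions would read

$$u_{\omega_1} = \frac{1}{(2d)^2}\iint_{\text{square}} X(x_1, x_2)\,dx_1\,dx_2, \qquad u_{\omega_2} = \frac{1}{(2d)^2}\iint_{\text{square}} \bigl(1 - X(x_1, x_2)\bigr)\,dx_1\,dx_2$$

with X the characteristic function of the relation x2 < 5 sin(x1). Continuing the sketch above (reusing boundary()), a grid approximation of these area fractions could be:

```python
def class_memberships(center, d, n=100):
    """Fractions of the square of width 2d (centered at 'center') lying on each
    side of the boundary -- a grid approximation of the area ratios."""
    xs = np.linspace(center[0] - d, center[0] + d, n)
    ys = np.linspace(center[1] - d, center[1] + d, n)
    X1, X2 = np.meshgrid(xs, ys)
    in_omega1 = X2 < boundary(X1)
    u1 = float(in_omega1.mean())
    return u1, 1.0 - u1
```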


Figure 6.5  Nonnumeric data used in the experiments

Two architectures of feedforward neural networks with a single hidden layer are studied; the difference between them arises in the preprocessing layer:

  the input layer with the possibility and necessity measures (possibility - necessity decoding)
  the input layer determining possibility values (possibility decoding)

Depending on the decoding, the input layer generates either 9 or 18 signals that are fed into the hidden layer. Similarly, the hidden layer, depending on the preprocessing layer, comprises 9 or 18 neurons.
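Continuing the sketch above (reusing fields() and numpy), a minimal version of the two preprocessing options could look as follows; the square pattern is treated as a crisp set and the grid resolution is an assumption.

```python
def poss_nec(center, d, n=40):
    """Possibility and necessity of the square pattern of width 2d (centered at
    'center') with respect to each of the nine receptive fields, on a grid."""
    xs = np.linspace(center[0] - d, center[0] + d, n)
    ys = np.linspace(center[1] - d, center[1] + d, n)
    vals = np.array([fields(np.array([x1, x2])) for x1 in xs for x2 in ys])
    return vals.max(axis=0), vals.min(axis=0)

def preprocess(center, d, with_necessity=True):
    """Input layer: 18 signals (possibility - necessity decoding) or 9 (possibility only)."""
    poss, nec = poss_nec(center, d)
    return np.concatenate([poss, nec]) if with_necessity else poss
```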

The average difference between the possibility and necessity values, computed in the context of the above Gaussian membership functions for patterns of different granularity, is an increasing function of “d”, Fig. 6.6. As “d” goes up, the gap between the corresponding possibility and necessity values increases.
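A small continuation of the sketches above reproduces this qualitative behavior (the random sampling of pattern centers is an assumption): the average possibility - necessity gap grows with “d”.

```python
def average_gap(d, n_patterns=50, seed=0):
    """Average difference between possibility and necessity values over randomly
    placed square patterns of half-width d."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5, 5, size=(n_patterns, 2))
    gaps = [np.mean(p - n) for p, n in (poss_nec(c, d) for c in centers)]
    return float(np.mean(gaps))

for d in (0.1, 0.4, 1.0):
    print(d, round(average_gap(d), 3))
```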


Figure 6.6  Differences between possibility and necessity values regarded as a function of “d”

The training of the networks with these two forms of the preprocessing layers is carried out for the data with d = 0.4. The performance of learning is monitored by the standard sum of squared errors (performance index) between the target class membership values and those produced by the network, Fig. 6.7.
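A minimal training sketch in the spirit of this experiment, continuing the code above: a single-hidden-layer network of sigmoid neurons trained by plain gradient descent on the sum of squared errors. The hidden size, learning rate, number of epochs, and the random generation of pattern centers are all assumptions, not the settings of the original study.

```python
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(d=0.4, n_patterns=100, hidden=18, epochs=200, lr=0.1, seed=1):
    """Gradient-descent training monitored by the sum-of-squared-errors index Q."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5, 5, size=(n_patterns, 2))
    X = np.array([preprocess(c, d) for c in centers])         # 18 input signals per pattern
    T = np.array([class_memberships(c, d) for c in centers])  # target class memberships
    W1 = rng.normal(scale=0.3, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.3, size=(hidden, 2))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                 # hidden layer
        Y = sigmoid(H @ W2)                 # output class memberships
        Q = np.sum((T - Y) ** 2)            # performance index
        dY = 2.0 * (Y - T) * Y * (1.0 - Y)
        dH = (dY @ W2.T) * H * (1.0 - H)
        W2 -= lr * H.T @ dY / len(X)
        W1 -= lr * X.T @ dH / len(X)
    return W1, W2, Q
```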


Figure 6.7  Performance index in successive learning epochs

The obtained connections between the input and hidden layers are summarized in Fig. 6.8.


Figure 6.8  Connections of the neural network (input - hidden layer): (i) possibility coding, (ii) possibility and necessity coding

We visualize the results of training the network in terms of the membership values, Figs. 6.9 and 6.10.


Figure 6.9  Class membership values for the neural network with possibility -necessity coding


Figure 6.10  Class membership values for the neural network with the possibility coding in the input layer

The testing of the neural network is carried out for the same training data; these patterns are the same in terms of their modal values but now exhibit a variable granularity level (d). A minimal evaluation sketch following this scheme is given after the observations below. Based on the results in Figs. 6.11 and 6.12, several observations are worth making:

  the lowest performance index occurs for the same “d” as characterizing the data set used for the training purposes
  if “d” is lower, then Q increases as the network is not capable of handling data of increased precision
the values of “d” higher than those used originally during the training lead to a poorer performance of the network due to its smoothing (averaging) behavior.
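The evaluation over varying granularity can be mimicked with the pieces above (again a hedged sketch with assumed settings): train at d = 0.4, keep the same pattern centers, and recompute Q for other values of “d”.

```python
def test_Q(W1, W2, centers, d):
    """Performance index Q of the trained network on patterns of granularity d."""
    X = np.array([preprocess(c, d) for c in centers])
    T = np.array([class_memberships(c, d) for c in centers])
    Y = sigmoid(sigmoid(X @ W1) @ W2)
    return float(np.sum((T - Y) ** 2))

rng = np.random.default_rng(1)
centers = rng.uniform(-5, 5, size=(100, 2))   # the same centers drawn inside train(seed=1)
W1, W2, _ = train(d=0.4, seed=1)
for d in (0.1, 0.2, 0.4, 0.8, 1.2):
    print(d, round(test_Q(W1, W2, centers, d), 3))
```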


Figure 6.11  Performance index Q for several values of “d”

The performance of the network with the possibility encoding is noticeably weaker than that of the previous one with the possibility - necessity encoding. This illustrates the importance of the preprocessing layer.


Figure 6.12  Performance index Q for several values of “d”

6.6. Neural calibration of membership functions

The linguistic terms play an instrumental role in encoding both numerical and nonnumerical information, a step that takes place prior to its further processing. It is obvious that linguistic terms (fuzzy sets) are not universal. When speaking about comfortable speed, we confine ourselves to a certain context and interpret this term accordingly. When the context changes, so does the meaning of the term. Nevertheless, the order of the terms forming the frame of cognition is retained. For instance, in two frames of cognition defined over different contexts, the order of the basic terms is preserved no matter how much the meaning attached to the terms tends to vary. The membership functions of the corresponding elements of the two frames could be very distinct, though. As illustrated in Fig. 6.13, the same notion of comfortable speed in one frame is more specific than its linguistic counterpart existing in the other.
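As a toy illustration of this context dependence (all numbers are assumptions, not read off Fig. 6.13), the sketch below defines “comfortable speed” in two hypothetical contexts; the narrower spread makes the first realization more specific than the second.

```python
import numpy as np

def comfortable_speed(v, modal, spread):
    """Gaussian-like membership of speed v (km/h) in 'comfortable speed' for a given context."""
    return np.exp(-((v - modal) ** 2) / spread ** 2)

v = np.linspace(0, 160, 9)
city    = comfortable_speed(v, modal=50.0,  spread=10.0)   # assumed: narrow, more specific
highway = comfortable_speed(v, modal=110.0, spread=25.0)   # assumed: broad, less specific
print(np.round(city, 2))
print(np.round(highway, 2))
```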


Figure 6.13  A frame of cognition of comfortable speed and its realization



Copyright © CRC Press LLC
