Some example characteristics of the referential neurons are portrayed in Fig. 7.13.


Figure 7.13  Characteristics of referential neurons: (i) equality neuron, (ii) difference neuron, (iii) inclusion neuron, (iv) dominance neuron; s-norm: probabilistic sum, t-norm: product, implication induced by the product; w = [0.05 0.05], r = [0.4 0.5]

To model complex situations, the referential neurons can be combined into a small neural network. An example is the tolerance neuron, which consists of DOMINANCE and INCLUSION neurons placed in the hidden layer and a single AND neuron in the output layer (Fig. 7.14).


Figure 7.14  Tolerance neuron

The above neuron generates a tolerance region as shown in Fig. 7.15.


Figure 7.15  2D and 3D characteristics of a tolerance neuron; AND neuron: min operator, INCL and DOM neurons: a → b = min(1, b/a), a, b ∈ [0, 1]; w_ij = 0.05, v_i = 0.0; reference points: INCL neuron: r = [0.8 0.9], DOM neuron: g = [0.5 0.4]
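As a rough numerical sketch of how the tolerance neuron of Figs. 7.14 and 7.15 computes its output, the snippet below assumes the weighted neuron form y = min_i(w_i s z_i) with the probabilistic sum as the s-norm (carried over from Fig. 7.13; the captions do not fix this aggregation, so it is our assumption), together with the Goguen implication quoted in the caption:

```python
import numpy as np

def goguen(a, b):
    """Goguen implication a -> b = min(1, b/a), with 0 -> b = 1."""
    return np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))

def prob_sum(a, b):
    """Probabilistic sum s-norm: a s b = a + b - ab."""
    return a + b - a * b

def and_neuron(z, w):
    """AND neuron: min_i (w_i s z_i); small weights let inputs pass through."""
    return float(np.min(prob_sum(w, z)))

def tolerance_neuron(x, r, g, w=0.05, v=0.0):
    """INCL and DOM neurons in the hidden layer, a single AND neuron on top."""
    incl = and_neuron(goguen(x, r), w)  # close to 1 when x <= r coordinatewise
    dom = and_neuron(goguen(g, x), w)   # close to 1 when x >= g coordinatewise
    return and_neuron(np.array([incl, dom]), v)

x_inside = np.array([0.6, 0.6])
x_outside = np.array([0.95, 0.95])
print(tolerance_neuron(x_inside, r=np.array([0.8, 0.9]), g=np.array([0.5, 0.4])))   # 1.0
print(tolerance_neuron(x_outside, r=np.array([0.8, 0.9]), g=np.array([0.5, 0.4])))  # ~0.85
```

Inputs falling inside the box spanned by g and r are mapped to 1, and the output decays as the input leaves this region, which is exactly the tolerance region of Fig. 7.15.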

The ordinal sum of t-norms can be used in another model of a fuzzy neuron with more diversified processing occurring at the synaptic level. Assume that the number of t-norms as well as their types have been fixed. We leave, however, the ranges of the individual t-norms variable and regard them as adjustable parameters of the neuron. In a concise form this reads as

x t y = a_k + (b_k − a_k) t_k((x − a_k)/(b_k − a_k), (y − a_k)/(b_k − a_k)) if x, y ∈ [a_k, b_k], and x t y = min(x, y) otherwise,

where a and b are the vectors of the ranges of the contributing t-norms.
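To make the role of the adjustable ranges concrete, the sketch below codes the standard ordinal-sum construction directly (the function names and the choice of contributing t-norms are ours, purely for illustration); the endpoints a_k and b_k are exactly the parameters left free above:

```python
def ordinal_sum(x, y, tnorms, a, b):
    """Ordinal sum of t-norms: inside the k-th interval [a_k, b_k] the
    rescaled t-norm t_k applies; everywhere else the result is min(x, y)."""
    for t, ak, bk in zip(tnorms, a, b):
        if ak <= x <= bk and ak <= y <= bk and bk > ak:
            span = bk - ak
            return ak + span * t((x - ak) / span, (y - ak) / span)
    return min(x, y)

product = lambda u, v: u * v
lukasiewicz = lambda u, v: max(0.0, u + v - 1.0)

# two contributing t-norms with adjustable ranges a = [0.0, 0.5], b = [0.4, 1.0]
print(ordinal_sum(0.2, 0.3, [product, lukasiewicz], a=[0.0, 0.5], b=[0.4, 1.0]))  # 0.15
print(ordinal_sum(0.2, 0.8, [product, lukasiewicz], a=[0.0, 0.5], b=[0.4, 1.0]))  # 0.2
```

Learning then amounts to adjusting the vectors a and b while the t-norms themselves stay fixed.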

7.4. Learning in fuzzy neural networks

The issue of learning embraces a number of essential components. First, there is the question of the level of supervision available during learning. Second, there is the question of whether we deal with structural or parametric changes of the network, or with both. We commence with these issues and then focus on a number of case studies that illustrate several learning procedures.

7.4.1. Learning policies for parametric learning in fuzzy neural networks

The learning of a fuzzy neural network embraces a number of diverse scenarios. It usually depends heavily on the initial information about the problem that can be accommodated directly in the network. For instance, in many situations it is obvious in advance that some connections will be weak or even nonexistent. This allows us to build an initial configuration of the network that is far sparser than a fully connected one. Such initial knowledge tangibly enhances the learning procedure, eliminating the need to modify all the connections of the network and thus sparing us from learning from scratch. On the other hand, if the initial domain knowledge about the problem (network) is not sufficient, then a fully connected structure yielding higher values of its entropy function (Machado and Rocha, 1990; Rocha, 1992) is strongly recommended.

In many cases the role of the individual layers is also obvious, so that one can project the behavior of the network (and evaluate its learning capabilities) in this respect. The following two general strategies of learning are worth pursuing:

  successive reductions. One starts with a large, possibly excessive neural network (containing many elements in the hidden layer), analyzes the results of learning and, if possible, reduces the size of the network. These reductions are carried out as long as they do not drastically affect the quality of learning (by slowing it down significantly and/or elevating the values of the minimized performance index). The main advantage of this strategy lies in fast learning, achieved thanks to the “underconstrained” nature of the successive networks. A certain shortcoming is that the network constructed in this way can be fairly “overdistributed”.
  successive expansions. The starting point in this strategy is a small neural network which is afterwards expanded successively based on the values of the obtained performance index. Values of the index that are too high may suggest further expansions; a schematic growth loop is sketched below. The network derived in this way can be quite compact. Nevertheless, under some circumstances the total computational overhead (many unsuccessfully extended structures of the neural network) may not be acceptable and could make this approach computationally quite costly.
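In schematic Python, the expansion strategy reduces to a simple growth loop; build, train, and evaluate are hypothetical placeholders for the chosen architecture, learning scheme, and (minimized) performance index Q:

```python
def successive_expansion(build, train, evaluate, data,
                         max_hidden=20, min_gain=0.05):
    """Grow the hidden layer one neuron at a time; stop as soon as the
    improvement of the performance index no longer justifies the growth."""
    size = 1
    network = train(build(hidden=size), data)
    best_q = evaluate(network, data)
    while size < max_hidden:
        candidate = train(build(hidden=size + 1), data)
        q = evaluate(candidate, data)
        if best_q - q < min_gain:  # too little improvement: stop expanding
            break
        network, best_q, size = candidate, q, size + 1
    return network
```

The successive-reduction strategy is the mirror image: start with the hidden layer deliberately oversized and shrink it while the performance index stays acceptable.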

7.4.2. Performance index

One can easily envision a number of different ways in which the similarity (distance) between the outputs of the neural network and the target fuzzy sets can be expressed. While the Euclidean distance is common, we confine ourselves to the equality index. There are two reasons behind studying this option. First, the equality index retains a clear, set-oriented logical interpretation that could be of particular interest in the setting of fuzzy neural networks. Secondly, with a suitable selection of the underlying t-norms, it may assure some robustness features.

Let x and y denote two membership values, x, y ∈ [0, 1]. Let us recall that the equality index computes a level of similarity between these two elements in the following way

x ≡ y = min(x → y, y → x),

where → denotes a fuzzy implication.
Owing to the robustness aspects we are interested in, it is instructive to concentrate on the Lukasiewicz implication of the form

a → b = min(1, 1 − a + b), a, b ∈ [0, 1]
After simple calculations we derive the following expression

x ≡ y = 1 − x + y if x ≥ y, and x ≡ y = 1 − y + x if x < y
The equality index becomes a piecewise linear function of its arguments. Evidently,

x ≡ y = 1 if and only if x = y
The equality index can be rewritten in a slightly different manner by observing that both 1 − x + y and 1 − y + x can be combined into a single expression 1 − |x − y|, which is nothing but the complement of the Hamming distance between x and y. Thus

x ≡ y = 1 − |x − y|
It is also visible that the Lukasiewicz-based equality index, being rooted in the L1 metric, promotes significant robustness in the ensuing fuzzy neural network: deviations between outputs and targets are penalized linearly rather than quadratically, so isolated outliers carry less weight than under the Euclidean distance.
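To see this robustness in a concrete form, the sketch below (our own illustration; averaging across outputs is one possible aggregation, not prescribed by the text) uses the Lukasiewicz-based equality index as a performance index and contrasts it with the squared Euclidean distance on a single corrupted membership grade:

```python
import numpy as np

def equality_index(x, y):
    """Lukasiewicz-based equality index: x equals y to degree 1 - |x - y|."""
    return 1.0 - np.abs(x - y)

def performance_index(outputs, targets):
    """Average equality index over all output membership grades."""
    return float(np.mean(equality_index(outputs, targets)))

target = np.array([0.9, 0.8, 0.7, 0.6])
clean = np.array([0.88, 0.81, 0.69, 0.62])
noisy = np.array([0.88, 0.81, 0.69, 0.10])  # one outlying grade

print(performance_index(clean, target))      # ~0.985
print(performance_index(noisy, target))      # ~0.865, degrades linearly
print(float(np.sum((noisy - target) ** 2)))  # the L2 loss is dominated by the outlier
```

Because the index penalizes deviations linearly, a single outlier shifts the overall score far less than it would shift a quadratic criterion.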

