7.1.3.1. Representing inhibitory information

As the coding range commonly encountered in fuzzy sets is the unit interval, the inhibitory effect to be conveyed by some variables can be achieved by including their complements, x̄i = 1 − xi, instead of the direct variables themselves. Hence, the higher the value of xi, the lower the activation level associated with it. Thus the original input space [0,1]ⁿ is augmented to [0,1]²ⁿ and the neurons are now described as follows:

OR neuron: y = OR(z; w) = S (wi t zi), i = 1, 2, …, 2n

AND neuron: y = AND(z; w) = T (wi s zi), i = 1, 2, …, 2n

where z = [x1 x2 … xn x̄1 x̄2 … x̄n] is the augmented input vector, t denotes a t-norm, s its dual s-norm, and S and T stand for the s-norm and t-norm aggregation taken over all 2n terms.

The reader familiar with two-valued digital systems and their design will easily recognize that the OR neuron acts as a generalized maxterm (Schneeweiss, 1989) over the xi and their complements, whereas the AND neuron can be viewed as a generalization of the minterms (product terms) encountered in digital circuits. Symbolically, a complemented variable (input) is denoted by a small dot, as visualized in Fig. 7.4 (i).


Figure 7.4  Representing inhibitory and excitatory information in logic neurons
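To make the constructs concrete, here is a minimal Python sketch (our illustration, not taken from the text) of the two neurons with the triangular norms set up as the product and probabilistic sum, the same choice used later in Figs. 7.6 and 7.7; the weight values are purely illustrative.

from functools import reduce

def t_norm(a, b):
    return a * b                     # product t-norm

def s_norm(a, b):
    return a + b - a * b             # probabilistic sum s-norm

def or_neuron(x, w):
    # y = S_i (wi t xi): s-norm aggregation of t-norm'ed input-weight pairs
    return reduce(s_norm, (t_norm(xi, wi) for xi, wi in zip(x, w)), 0.0)

def and_neuron(x, w):
    # y = T_i (wi s xi): t-norm aggregation of s-norm'ed input-weight pairs
    return reduce(t_norm, (s_norm(xi, wi) for xi, wi in zip(x, w)), 1.0)

def augment(x):
    # append complements 1 - xi so that a high xi can act inhibitorily
    return list(x) + [1.0 - xi for xi in x]

x = [0.9, 0.3]                       # two direct inputs
w = [0.7, 0.8, 0.2, 0.1]             # 2n connections for the augmented input
print(or_neuron(augment(x), w))      # OR neuron over [x1, x2, 1-x1, 1-x2]
print(and_neuron(augment(x), w))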

There is also another way of representing inhibitory information in the neuron (or the network). Instead of combining all the inputs (both inhibitory and excitatory) in a single neuron, we split these inputs and expand the architecture by adding a few extra neurons. The concept is illustrated in Fig. 7.4 (ii). The inhibitory inputs are first aggregated in the second neuron, and the outputs of the two neurons are then combined AND-wise by a third neuron situated in the outer layer. Consider the case in which all the inhibitory inputs are equal to 1; then the output z2 equals 0. Assuming that the connections of the AND neuron are equal to zero, this draws the value of the output (y) down to zero, as sketched below.
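The arrangement can be coded in a few lines of Python (our illustration, not the book's code; the weights are hypothetical, and we assume, consistently with the worked example above, that both neurons of the inner layer are OR neurons):

from functools import reduce

def s_norm(a, b): return a + b - a * b   # probabilistic sum
def t_norm(a, b): return a * b           # product

def or_neuron(x, w):
    return reduce(s_norm, (t_norm(xi, wi) for xi, wi in zip(x, w)), 0.0)

def and_neuron(x, w):
    return reduce(t_norm, (s_norm(xi, wi) for xi, wi in zip(x, w)), 1.0)

def split_network(x_exc, x_inh, w_exc, w_inh, w_out):
    z1 = or_neuron(x_exc, w_exc)                       # excitatory inputs
    z2 = or_neuron([1.0 - xi for xi in x_inh], w_inh)  # complemented inhibitory inputs
    return and_neuron([z1, z2], w_out)                 # AND-wise combination (output y)

# All inhibitory inputs equal to 1 give z2 = 0; with zero connections of the
# AND neuron (0 s z = z under the probabilistic sum), y is drawn down to 0.
print(split_network([0.8, 0.6], [1.0, 1.0], [0.9, 0.7], [0.9, 0.7], [0.0, 0.0]))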

7.1.3.2. Computational enhancements of the neurons

Despite the well-defined semantics of these neurons, the main concern one may raise about these constructs lies on the numerical side. Once the connections (weights) are set (after learning), each neuron realizes an into (rather than onto) mapping between the unit hypercubes; that is, the values of the output y for all possible inputs cover a subset of the unit interval but not necessarily the entire [0,1]. More specifically, for the OR neuron the values of y are confined to [0, S(w1, w2, …, wn)], whereas the accessible range of the output values of the AND neuron is limited to [T(w1, w2, …, wn), 1], as the quick numerical check below confirms.
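A brute-force scan over a grid of inputs (our sketch, with illustrative weights) exhibits these bounds for the product/probabilistic-sum pair:

from functools import reduce
from itertools import product as cartesian

def s_norm(a, b): return a + b - a * b   # probabilistic sum
def t_norm(a, b): return a * b           # product

def or_neuron(x, w):
    return reduce(s_norm, (t_norm(xi, wi) for xi, wi in zip(x, w)), 0.0)

def and_neuron(x, w):
    return reduce(t_norm, (s_norm(xi, wi) for xi, wi in zip(x, w)), 1.0)

w = [0.7, 0.8]
grid = [i / 20 for i in range(21)]       # sampled inputs covering [0, 1]
or_vals = [or_neuron(x, w) for x in cartesian(grid, repeat=2)]
and_vals = [and_neuron(x, w) for x in cartesian(grid, repeat=2)]
print(min(or_vals), max(or_vals))    # 0.0 and S(w1, w2) = 0.7 + 0.8 - 0.56 = 0.94
print(min(and_vals), max(and_vals))  # T(w1, w2) = 0.56 and 1.0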

This shortcoming can be alleviated by augmenting the neuron with a nonlinear element placed in series with the logical component, Fig. 7.5.


Figure 7.5  Fuzzy neuron with a nonlinear processing element

The neurons obtained in this manner are formalized accordingly:

y = Ψ(OR(x; w)) and y = Ψ(AND(x; w)),

where Ψ: [0,1] → [0,1] is a nonlinear monotonic mapping. In general, we can even introduce mappings whose monotonicity is restricted to some regions of the unit interval. A useful two-parametric family of sigmoidal nonlinearities is specified in the form

Ψ(u) = 1 / (1 + exp(−σ(u − m))),

u, m ∈ [0,1], σ ∈ R.

By adjusting the parameters of the function (that is, m and σ), various forms of the nonlinear characteristics of the element can easily be obtained. In particular, the sign of σ determines whether the characteristics of the resulting neuron are increasing or decreasing, while the second parameter (m) shifts the entire characteristics along the unit interval.
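In Python, the nonlinearity and its series composition with the logical part of the neuron may be sketched as follows (our illustration; the parameter values mirror the setting of Fig. 7.6):

import math
from functools import reduce

def s_norm(a, b): return a + b - a * b   # probabilistic sum
def t_norm(a, b): return a * b           # product

def psi(u, m=0.6, sigma=10.0):
    # two-parametric sigmoid: increasing for sigma > 0, decreasing for sigma < 0
    return 1.0 / (1.0 + math.exp(-sigma * (u - m)))

def or_neuron_nl(x, w, m=0.6, sigma=10.0):
    # logical part followed in series by the nonlinear element psi
    u = reduce(s_norm, (t_norm(xi, wi) for xi, wi in zip(x, w)), 0.0)
    return psi(u, m, sigma)

# w1 = 0.7, w2 = 0.8, m = 0.6, sigma = 10 -- the setting of Fig. 7.6
print(or_neuron_nl([0.2, 0.3], [0.7, 0.8]))
print(or_neuron_nl([0.9, 0.9], [0.7, 0.8]))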

The incorporation of this nonlinearity changes the numerical characteristics of the neuron; however, its essential logical behavior is preserved. Refer to Figs. 7.6 and 7.7, which summarize some of the static input-output relationships encountered there (with the triangular norms set up as the product and probabilistic sum).


Figure 7.6  Nonlinear characteristics of the OR neuron with w1 = 0.7, w2 = 0.8; m = 0.6; σ = 10


Figure 7.7  Nonlinear characteristics of the AND neuron with w1 = 0.1, w2 = 0.05; m = 0.6; σ = 10

It is also of interest to note that the nonlinearities of the logic neurons can be treated as efficient models of linguistic modifiers (Yager, 1983). Since its very inception, the issue of linguistic modifiers (hedges) in fuzzy computations has been studied in various contexts (IEEE Trans. on Neural Networks, 1992; Kacprzyk, 1983, 1985, 1986). The calculus of these objects has also been developed in different ways, cf. Yager (1983). In the setting of fuzzy neurocomputations, the nonlinear element used in the discussed neurons can be viewed as a linguistic modifier defined over [0,1]. Depending upon the specific value of the parameter σ, we obtain modifiers of the type “at least” (realized as a monotonically increasing function, σ > 0) or “at most” (a monotonically decreasing function, σ < 0). A necessary calibration of the modifier based on the provided data is accomplished through the learning of the neuron, as clarified before; this pertains in particular to the parameters of the nonlinearity (m and σ). One can model other linguistic modifiers in the same way; Fig. 7.8 includes some additional examples of such modifiers.


Figure 7.8  Examples of linguistic modifiers
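A few evaluations (our sketch; m = 0.5 is an illustrative threshold) make the two modifier types tangible:

import math

def psi(u, m, sigma):
    # sigma > 0: "at least m" (increasing); sigma < 0: "at most m" (decreasing)
    return 1.0 / (1.0 + math.exp(-sigma * (u - m)))

for u in (0.2, 0.5, 0.8):
    print(u, round(psi(u, 0.5, 10.0), 3), round(psi(u, 0.5, -10.0), 3))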

In a nutshell, the use of the modifier produces a “weightless” neuron in which all the connections are equal, wi = w, i = 1, 2, …, n. Hence the flexibility of the neuron resides to a significant extent within the parameters of the modifier, as the brief sketch below illustrates.
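A weightless neuron can thus be written as follows (our illustration; the single connection value w and the modifier parameters are hypothetical):

import math
from functools import reduce

def s_norm(a, b): return a + b - a * b   # probabilistic sum
def t_norm(a, b): return a * b           # product

def weightless_or_neuron(x, w, m, sigma):
    # all connections share the single value w; the adaptable behavior
    # now resides in the modifier parameters m and sigma
    u = reduce(s_norm, (t_norm(xi, w) for xi in x), 0.0)
    return 1.0 / (1.0 + math.exp(-sigma * (u - m)))

print(weightless_or_neuron([0.4, 0.9, 0.2], w=0.5, m=0.6, sigma=8.0))

Under this reading, learning adjusts the single connection w together with m and σ rather than n individual weights.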

