

The resulting connections are given below: the first matrix summarizes the connections between the inputs and the hidden layer; the second one concerns the connections of the OR neuron in the output layer.

Even without any pruning one can conclude that the neural network fully complies with the logical expression of the XOR problem.
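As a sketch of this logical structure, the following fragment wires up AND and OR neurons with min/max t- and s-norms and hand-set {0, 1} connections; the connection values are illustrative stand-ins for the learned matrices, which are not reproduced in this excerpt.

```python
def and_neuron(x, w):
    # AND neuron: t-norm over (x_i s w_i); with min/max norms this is
    # min_i max(x_i, w_i), so a connection w_i = 1 eliminates input i
    return min(max(xi, wi) for xi, wi in zip(x, w))

def or_neuron(x, w):
    # OR neuron: s-norm over (x_i t w_i); with min/max norms this is
    # max_i min(x_i, w_i), so a connection w_i = 0 eliminates input i
    return max(min(xi, wi) for xi, wi in zip(x, w))

def xor_network(x1, x2):
    # inputs augmented by their complements: (x1, x2, 1 - x1, 1 - x2)
    x = [x1, x2, 1.0 - x1, 1.0 - x2]
    h1 = and_neuron(x, [0, 1, 1, 0])   # realizes x1 AND not(x2)
    h2 = and_neuron(x, [1, 0, 0, 1])   # realizes not(x1) AND x2
    return or_neuron([h1, h2], [1, 1])

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor_network(x1, x2))   # reproduces the XOR truth table
```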

B. The data are given below; they involve two Boolean functions of three arguments (variables).

The learning was carried out for several dimensions of the hidden layer (h); h took the values 2, 3, and 4. The learning rate was 0.05. The results are displayed in Figs. 7.22 and 7.23.

The learning either led to successful results (a zero performance index) or got stuck at some level of the performance index owing to an insufficient topology of the network (too small a hidden layer); the latter happens for h = 2. The training was successful for both h = 3 and h = 4.


Figure 7.22  Performance index in successive learning epochs for h = 2 and h = 3


Figure 7.23  Performance index in successive learning epochs for h = 4
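Such a training run can be sketched as follows, under explicit assumptions: differentiable product/probabilistic-sum norms stand in for min/max so that gradient descent applies, the gradient of the performance index is estimated numerically for brevity, and the three-variable target function is illustrative rather than the exercise's own (unreproduced) data.

```python
import random

def ps(a, b):
    # probabilistic sum, the s-norm assumed throughout this sketch
    return a + b - a * b

def and_neuron(x, w):
    # AND neuron: product t-norm over (x_i s w_i); w_i = 1 switches input i off
    y = 1.0
    for xi, wi in zip(x, w):
        y *= ps(xi, wi)
    return y

def or_neuron(x, w):
    # OR neuron: probabilistic sum over (x_i t w_i); w_i = 0 switches input i off
    y = 0.0
    for xi, wi in zip(x, w):
        y = ps(y, xi * wi)
    return y

def forward(x, w, h):
    # hidden layer of h AND neurons followed by a single OR output neuron
    n = len(x)
    hidden = [and_neuron(x, w[k * n:(k + 1) * n]) for k in range(h)]
    return or_neuron(hidden, w[h * n:])

def Q(w, data, h):
    # performance index: sum of squared errors over the training set
    return sum((forward(x, w, h) - t) ** 2 for x, t in data)

def train(data, h, lr=0.05, epochs=300, eps=1e-4):
    n = len(data[0][0])
    w = [random.random() for _ in range(h * n + h)]
    for _ in range(epochs):
        grad = []
        for i in range(len(w)):
            # central-difference estimate of dQ/dw_i
            w[i] += eps
            q_plus = Q(w, data, h)
            w[i] -= 2 * eps
            q_minus = Q(w, data, h)
            w[i] += eps
            grad.append((q_plus - q_minus) / (2 * eps))
        # gradient step, keeping every connection inside the unit interval
        w = [min(1.0, max(0.0, wi - lr * g)) for wi, g in zip(w, grad)]
    return w

# inputs augmented with complements; the target is an illustrative Boolean
# function, not the data of the exercise
data = [((a, b, c, 1 - a, 1 - b, 1 - c), float((a and not b) or (b and c)))
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]
w = train(data, h=3)
print(Q(w, data, 3))   # performance index after training
```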

The results (connections of the network) are given below:

for h = 2:

hidden - input layer (the successive columns are denoted by x1, x2, and x3; the remaining columns indicate the complements of these three variables)

output - hidden layer

h = 3:

h = 4:

All the networks obtained have been subjected to some slight pruning with the threshold level set to 0.5 (for both the AND and OR neurons). For comparison, we mapped the original learning data onto two Karnaugh maps, Fig. 7.24.


Figure 7.24  Karnaugh maps (K-maps) of the training data
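A short sketch of this pruning step, assuming the threshold simply rounds each connection to {0, 1}; the connection values used are illustrative.

```python
def prune(w, lam=0.5):
    # round every connection to {0, 1} at the threshold lam
    return [1.0 if wi > lam else 0.0 for wi in w]

def product_term(w_and, names):
    # literals retained by a pruned AND neuron: w_i = 0 keeps input i,
    # while w_i = 1 (the neutral element of the AND neuron) eliminates it;
    # for an OR neuron the dual reading applies (w_i = 1 keeps, 0 eliminates)
    return [name for wi, name in zip(w_and, names) if wi == 0.0]

names = ["x1", "x2", "x3", "~x1", "~x2", "~x3"]
print(product_term(prune([0.9, 0.1, 0.7, 0.8, 0.95, 0.2]), names))
# -> ['x2', '~x3'], i.e. the product term x2 AND not(x3)
```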

For h = 2 we derive the following Boolean expression

which, when mapped onto the K-maps, does not coincide with the original Boolean functions, Fig. 7.25. In fact, the network has introduced some additional entries, visible when these two maps are compared with the K-maps of the original functions.


Figure 7.25  K-map derived from the fuzzy neural network for h = 2

For h = 3 we obtain

The fuzzy neural network “discovers” the same product terms as those used for building the original functions; see Fig. 7.26.


Figure 7.26  K-map derived from the fuzzy neural network for h = 3

The network with h = 4 produces the following results

The network is excessively large, which is fully reflected in the second expression, whose last product term does not make any sense (it automatically cancels out). An interesting phenomenon occurs for the first function: it becomes more compact by taking advantage of a reduced product term. In fact, these terms can be obtained by “eyeballing” the Karnaugh maps of the relationships. Interestingly enough, the network was capable of minimizing these logical expressions as a byproduct of the minimization of the performance index.


Figure 7.27  K-map derived from the fuzzy neural network for h = 4

C. We now consider three Boolean functions

and minimize them with the use of fuzzy neurocomputing. The elements on these lists summarize the minterms forming the Boolean functions; for instance, the second minterm (2) is the product not(A)·B·not(C). First, we propose a suitable architecture of the network: the format of the Boolean functions immediately stipulates a fuzzy neural network with a single hidden layer of AND neurons followed by an output layer of three OR neurons. Second, we develop a training set which arises directly from the form of the above functions; its construction is sketched below.
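The training set itself is not reproduced in this excerpt. As a sketch, assuming the usual decimal minterm indexing over (A, B, C), it can be assembled as follows; the three minterm lists used here are hypothetical placeholders.

```python
from itertools import product

# hypothetical minterm lists; the actual lists of the three functions are
# not reproduced in this excerpt
minterms = {"f1": [1, 2, 5], "f2": [0, 3, 6], "f3": [2, 4, 7]}

training_set = []
for a, b, c in product((0, 1), repeat=3):
    idx = 4 * a + 2 * b + c                 # decimal index of minterm (A, B, C)
    x = (a, b, c, 1 - a, 1 - b, 1 - c)      # inputs augmented with complements
    t = tuple(float(idx in m) for m in minterms.values())
    training_set.append((x, t))             # 8 patterns, 3 targets each

for x, t in training_set:
    print(x, "->", t)
```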

The minimal number of nodes in this layer leading to a zero value of the performance index (here a standard MSE criterion) equals 7; this is the minimal number of minterms necessary to implement the functions under consideration. The connections are initialized to small random numbers from the unit interval. The training is carried out with the learning rate set to 0.35. Zero error was achieved after around 100 learning epochs, see Fig. 7.28.


Figure 7.28  Performance index in successive learning epochs

The resulting parameters are equal to

input - hidden layer

hidden - output layer

7.5.3. FNN as a model of approximate reasoning

There is no doubt that approximate reasoning is one of the most prolific areas of applied research in fuzzy sets. Essentially, we are concerned with the development of sound models of reasoning in the presence of approximate premises and rules formed by linguistic data. In its simplest version, the generalized modus ponens reads as follows.
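In standard notation the scheme is

    premise:       X is A
    rule:          if X is A then Y is B
    ------------------------------------
    conclusion:    Y is B

with the if-then rule captured by the fuzzy relation R (the scheme is given here in its standard textbook statement).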

In brief, this states that A implies B; A is referred to as the antecedent while B forms the conclusion. The fuzzy relation (R) describes a collection of inference constraints and, in fact, stems from the originally available experimental results of reasoning of the type “if condition then conclusion”. To keep the entire discussion concise, we assume that all fuzzy sets of premises as well as conclusions are defined in finite universes of discourse, card(A) = n, card(B) = m. Then the membership function of the conclusion B is computed as
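    B(y_j) = min_{i=1,…,n} [ A(x_i) φ R(x_i, y_j) ]

where φ stands for an implication (inclusion) operator; this min-of-implications form is an assumption consistent with the inclusion neurons discussed below, and it holds for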

j=1, 2, …, m. The implementation of the above relationship yields an array of specialized processing elements, Fig. 7.29.

Observe that each column in this array forms a single n-input inclusion neuron. To enhance the structure we may admit some extra adjustable connections of the neuron. The proposed enhancement gives rise to the expression

and reads as

Thus the learning occurs at the parametric level through changes of the values of the connections (wj) and the point of reference (rj).
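The exact enhanced expression is not reproduced in this excerpt. A minimal sketch, assuming min/max norms, a Łukasiewicz implication, and per-input connections guarding each implication term, is given below.

```python
def implication(a, b):
    # Lukasiewicz implication, one common choice of the implication operator
    return min(1.0, 1.0 - a + b)

def inclusion_neuron(a, r, w):
    # y = AND_i [ w_i s (a_i -> r_i) ] with min/max norms: w_i = 1 switches
    # input i off (its term becomes 1, neutral for min), w_i = 0 keeps it
    return min(max(wi, implication(ai, ri))
               for ai, ri, wi in zip(a, r, w))

A = [0.8, 0.7, 0.6]   # membership values of the premise (illustrative)
r = [0.5, 0.5, 0.9]   # adjustable reference points
w = [0.0, 0.2, 1.0]   # adjustable connections
print(inclusion_neuron(A, r, w))   # -> 0.7
```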


Figure 7.29  Array of processing units implementing approximate reasoning; highlighted is a single implication (inclusion) neuron

