

where Ai is the fuzzy set associated with the i-th approximation knot. More specifically, we require that Ai(mi) = 1. Naturally, Ai(x) is a decreasing function of the distance between x and the i-th knot. Li represents a linear approximation of the function centered around the i-th approximation knot,

where again

and

The overall optimization task can then be formulated accordingly

  for a fixed number of knots and a predefined form of the fuzzy sets in the condition parts of the rules, determine the knots such that the global approximation error attains its minimum
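In symbols, the task presumably reads as follows (a hedged reconstruction based on the surrounding text, using the error measure Q and the aggregation of the rules introduced below):

```latex
\min_{m_1,\dots,m_m} Q, \qquad
Q = \int_{X} \Bigl( f(x) - \sum_{i=1}^{m} A_i(x)\,L_i(x) \Bigr)^{2}\,dx
```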

where the summarization (aggregation) of the rules is carried out in the standard form

As stated, this problem is an ideal candidate for GA optimization - we search for the distribution of knots that minimizes Q.
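To make the scheme concrete, here is a minimal Python sketch of the one-dimensional version, assuming triangular membership functions with 1/2 overlap (as used in the examples below), local linear models built from f and its derivative, and a deliberately simplified GA (elitist truncation selection, arithmetic crossover, Gaussian mutation). All function names and the simplifications are illustrative assumptions, not the authors' code:

```python
import random

def tri(x, left, mode, right):
    """Triangular membership function with modal value `mode`."""
    if x <= left or x >= right:
        return 0.0
    if x <= mode:
        return (x - left) / (mode - left)
    return (right - x) / (right - mode)

def approximate(f, df, knots, x):
    """Local linear models L_i weighted by normalized memberships A_i(x)."""
    pts = [0.0] + list(knots) + [1.0]
    num = den = 0.0
    for i, m in enumerate(knots):
        a = tri(x, pts[i], m, pts[i + 2])
        num += a * (f(m) + df(m) * (x - m))   # local model L_i around knot m
        den += a
    return num / den if den > 0.0 else f(x)  # fall back outside all supports

def Q(f, df, knots, n=200):
    """Integral of the squared approximation error over [0, 1] (midpoint rule)."""
    h = 1.0 / n
    return h * sum((f(x) - approximate(f, df, knots, x)) ** 2
                   for x in (h * (k + 0.5) for k in range(n)))

def ga_knots(f, df, m=3, pop_size=40, generations=60):
    """Search for the distribution of knots minimizing Q."""
    pop = [sorted(random.random() for _ in range(m)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda k: Q(f, df, k))
        parents = pop[:pop_size // 2]         # elitist truncation selection
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            child = [0.3 * x + 0.7 * y for x, y in zip(a, b)]  # alpha = 0.3
            if random.random() < 0.15:        # mutation rate from the text
                j = random.randrange(m)
                child[j] = min(1.0, max(0.0, child[j] + random.gauss(0, 0.05)))
            pop.append(sorted(child))
    return min(pop, key=lambda k: Q(f, df, k))
```

Note that selection and the crossover rate are handled more crudely here than in the experiments reported below; only the population size, mutation rate, and crossover parameter follow the text.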

The problem easily generalizes to n-dimensional relationships. Here the function of interest is defined as

Its linear approximation carried out around x0 reads as
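A first-order (Taylor) expansion of f around x0 is presumably what is intended here; in hedged form:

```latex
L(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{T}\,(\mathbf{x} - \mathbf{x}_0)
```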

For the m-knot approximation we obtain

with

and

which is a straightforward generalization of the one-dimensional case. The local approximation fields (Li) are viewed as fuzzy relations instead of fuzzy sets. They could also come equipped with an extra width parameter controlling their spread (defining a region of relevance of the local approximation supported by the given rule). The optimization then concerns both the approximation knots and the spreads of the condition parts of the rules - all these parameters need to be coded as part of the chromosome. Confining ourselves to so-called hyperellipsoidal fuzzy relations defined by Gaussian membership functions
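One plausible form of such a Gaussian hyperellipsoidal membership function (the modal vector mi and the spread matrix Σi being the parameters coded in the chromosome) is:

```latex
A_i(\mathbf{x}) = \exp\!\bigl( -(\mathbf{x} - \mathbf{m}_i)^{T}\, \Sigma_i^{-1}\, (\mathbf{x} - \mathbf{m}_i) \bigr)
```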

the GA optimization problem comes in the form

where

Here M is a large positive constant while

and

We discuss three illustrative examples of the proposed approximation method. In all cases we assumed the same collection of GA parameters. The size of the population is 40. The mutation rate and crossover rate are equal to 0.15 and 0.5, respectively. Furthermore, the GA exploits floating-point encoding with the crossover parameter α set to 0.3. The fitness function is defined as 5 − Q, with Q being the integral of the approximation error. In all examples the fuzzy sets have triangular membership functions with 1/2 overlap between any two adjacent linguistic terms. This type of frame of cognition is completely characterized by the location of the modal values - the approximation knots.
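The floating-point crossover with parameter α mentioned above can be sketched as a standard arithmetic crossover (the function name is illustrative):

```python
def arithmetic_crossover(parent1, parent2, alpha=0.3):
    """Blend two real-coded chromosomes gene by gene with weight alpha."""
    child1 = [alpha * a + (1 - alpha) * b for a, b in zip(parent1, parent2)]
    child2 = [alpha * b + (1 - alpha) * a for a, b in zip(parent1, parent2)]
    return child1, child2
```

With α = 0.3 each child lies on the segment between its parents, biased toward one of them; α = 0.5 would reduce this to simple averaging.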

Example 3. The function under discussion is a sine wave

defined in [0, 1]. The number of approximation knots is 3 (m = 3). The performance of the approximation is summarized in terms of the average fitness and the fitness of the best individual, Fig. 8.14.


Figure 8.14  Fitness values throughout the GA run

The results produced by the optimized rule-based approximation are given in Fig. 8.15; here we illustrate the outcome produced by the best individual across all populations and by the best individual encountered in the first population.


Figure 8.15  Performance of approximation for different individuals

The GA optimized results (modal values of the membership functions) equal 0.161, 0.464, and 0.835. The same experiment was repeated for 5 linguistic terms, Fig. 8.16. As expected, the approximation becomes better. Moreover, the results significantly improved after the GA optimization.


Figure 8.16  Fitness in successive GA populations


Figure 8.17  Results of rule-based approximation for m = 5

Example 4. Consider the function

We carry out an approximation with m = 3. As before, the results are summarized in terms of the fitness function, Fig. 8.18. Similarly, the resulting approximation is provided in Fig. 8.19. The optimal individual (obtained in the third population) is (0.199920, 0.449683, 0.715302).


Figure 8.18  Fitness function in successive populations


Figure 8.19  Original function f(x) and its rule-based approximation

Additionally, Fig. 8.20 illustrates the distribution of error across the universe of discourse for the best overall individual and the best individual in the initial population.


Figure 8.20  Approximation squared error for the best solution and the initial solution (as found in the initial population)

Example 5. Here we are concerned with a piecewise linear function, Fig. 8.21. Even though the form of the function is not complicated, the approximation is not that easy. The best individual in the starting population of the GA performs very poorly in comparison with the best individual overall, Fig. 8.22. The optimized approximation knots are found to be 0.214104, 0.321791, and 0.791274.


Figure 8.21  Fitness function in successive populations of GA


Figure 8.22  Piecewise linear function and its approximations

8.8. Genetic optimization of neural networks

Evolutionary optimization of neural networks has been an area of ongoing research, with a number of alternatives for encoding, optimization, and decoding, as well as several types of genetic operations (Bäck, 1993; Gruau, 1994; Honavar and Uhr, 1993). GAs address both structural and parametric optimization of the networks. Structural optimization deals with the topology of the network, including the number of layers, interconnections, form of feedback, types of neurons, etc. Parametric optimization deals only with modifications of the parameters of the network.

8.8.1. Parametric optimization of neural networks

The goal of this hybrid neurogenetic endeavor is to resolve some of the learning shortcomings plaguing gradient-based learning (including standard backpropagation), such as local minima and nondifferentiable performance indexes. GAs alleviate these shortcomings to a great extent, yet they should be used in a hybrid version that still relies on classic optimization techniques.
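One way such a hybrid can be organized is sketched below on a toy problem: a GA performs the coarse global search, and plain gradient descent then refines the best chromosome. The function names, toy loss, and all settings are illustrative assumptions:

```python
import random

def hybrid_minimize(loss, grad, dim, pop_size=30, ga_gens=40,
                    gd_steps=200, lr=0.05):
    """GA for coarse global search, then gradient descent for fine-tuning."""
    # --- GA phase: real-coded chromosomes, truncation selection ---
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(ga_gens):
        pop.sort(key=loss)
        parents = pop[:pop_size // 2]
        pop = parents[:]
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            pop.append([0.5 * (x + y) + random.gauss(0, 0.1)
                        for x, y in zip(a, b)])   # blend + small mutation
    best = min(pop, key=loss)
    # --- gradient phase: local refinement of the GA's best individual ---
    w = best[:]
    for _ in range(gd_steps):
        g = grad(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w
```

The GA phase needs only loss evaluations (so the performance index may be nondifferentiable there), while the final polishing step exploits the gradient where it exists.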

A straightforward encoding method is to collect all connections into a long string of binary (binary encoding) or decimal (real coding) numbers (Miller et al., 1989; Valenzuela-Rendon, 1991), Fig. 8.23.


Figure 8.23  Organization of the connections of the network; the string starts from the input layer (bias connections set to 0)
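The flattening in Fig. 8.23 can be sketched as follows, assuming the weights of a layered network are kept as per-layer matrices and concatenated, layer by layer from the input layer, into one chromosome (helper names are illustrative):

```python
def encode(weights):
    """Flatten a list of per-layer weight matrices into one long string."""
    return [w for layer in weights for row in layer for w in row]

def decode(chromosome, shapes):
    """Rebuild the per-layer (rows x cols) matrices from the flat string."""
    weights, pos = [], 0
    for rows, cols in shapes:
        weights.append([chromosome[pos + r * cols: pos + (r + 1) * cols]
                        for r in range(rows)])
        pos += rows * cols
    return weights
```

A GA then operates on the flat string with the usual crossover and mutation, while fitness evaluation decodes it back into matrices to run the network.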



Copyright © CRC Press LLC
