The approximation is governed by a family of rules of the form

$$\text{if } x \text{ is } A_i, \text{ then } y = L_i(x), \qquad i = 1, 2, \ldots, m,$$

where $A_i$ is the fuzzy set associated with the $i$-th approximation knot $m_i$. More specifically, we require that $A_i(m_i) = 1$. Naturally, $A_i(x)$ is a decreasing function of the distance between $x$ and the $i$-th knot. $L_i$ represents a linear approximation of the function centered around the $i$-th approximation knot,

$$L_i(x) = f(m_i) + f'(m_i)(x - m_i).$$

The overall optimization task can then be formulated as the minimization of the integrated squared approximation error,

$$Q = \int \bigl(f(x) - F(x)\bigr)^2\,dx,$$
where the summarization (aggregation) of the rules is carried out in the standard form

$$F(x) = \sum_{i=1}^{m} A_i(x)\,L_i(x).$$

As stated, this problem is an ideal candidate for GA optimization: we search for the distribution of knots that minimizes $Q$.

The problem easily generalizes to $n$-dimensional relationships. Here the function of interest is defined as $f: \mathbf{R}^n \to \mathbf{R}$. Its linear approximation completed around $\mathbf{x}_0$ reads as

$$f(\mathbf{x}) \approx f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0)^{T}(\mathbf{x} - \mathbf{x}_0).$$

For the $m$-knot approximation we obtain

$$F(\mathbf{x}) = \sum_{i=1}^{m} A_i(\mathbf{x})\,L_i(\mathbf{x}), \qquad L_i(\mathbf{x}) = f(\mathbf{m}_i) + \nabla f(\mathbf{m}_i)^{T}(\mathbf{x} - \mathbf{m}_i),$$

which is a straightforward generalization of the one-dimensional case. The local approximation fields ($L_i$) are now viewed as fuzzy relations instead of fuzzy sets. They could also come equipped with an extra width parameter controlling their spread (defining a region of relevancy of the local approximation supported by the given rule). Then the optimization concerns both the approximation knots and the spreads of the condition parts of the rules; all these parameters need to be coded as part of the chromosome. Confining ourselves to so-called hyperellipsoidal fuzzy relations defined by Gaussian membership functions,

$$A_i(\mathbf{x}) = \exp\bigl(-(\mathbf{x} - \mathbf{m}_i)^{T}\Sigma_i^{-1}(\mathbf{x} - \mathbf{m}_i)\bigr),$$

the GA optimization problem comes in the form of a constrained minimization of $Q$, handled through a penalty term in which $M$ is a large positive constant.

We discuss three illustrative examples of the proposed approximation method. In all cases we assumed the same collection of GA parameters: the size of the population is 40; the mutation rate and crossover rate are equal to 0.15 and 0.5, respectively; and the GA exploits floating-point encoding with the crossover parameter $\alpha$ set to 0.3. The fitness function is defined as $5 - Q$, with $Q$ being the integral of the approximation error. In all examples the fuzzy sets have triangular membership functions with 1/2 overlap between any two adjacent linguistic terms. This type of frame of cognition is completely characterized by the location of the modal values, that is, the approximation knots.
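Before turning to the examples, a minimal Python sketch may help fix the one-dimensional construction; it is an illustration under stated assumptions, not the book's code. The triangular frame with 1/2 overlap, the first-order local models $L_i$, the weighted-sum aggregation, and the error integral $Q$ follow the formulas above, while the sine target sin(πx) and the evenly spaced placeholder knots are assumptions.

```python
import numpy as np

def memberships(x, knots):
    """Triangular fuzzy sets A_i with modal values at the (sorted) knots and
    1/2 overlap between adjacent terms; the frame of cognition is fully
    characterized by the knot locations, as noted in the text."""
    m = len(knots)
    A = np.zeros(m)
    for i, c in enumerate(knots):
        left = knots[i - 1] if i > 0 else c
        right = knots[i + 1] if i < m - 1 else c
        if x <= c:
            A[i] = 1.0 if c == left else max(0.0, (x - left) / (c - left))
        else:
            A[i] = 1.0 if c == right else max(0.0, (right - x) / (right - c))
    return A

def F(x, knots, f, df):
    """Aggregated output F(x) = sum_i A_i(x) * L_i(x), with
    L_i(x) = f(m_i) + f'(m_i) * (x - m_i)."""
    A = memberships(x, knots)
    L = np.array([f(c) + df(c) * (x - c) for c in knots])
    return float(A @ L)

def Q(knots, f, df, xs):
    """Integral of the squared approximation error over the domain,
    evaluated by a plain Riemann sum on the grid xs."""
    dx = xs[1] - xs[0]
    return sum((f(x) - F(x, knots, f, df)) ** 2 for x in xs) * dx

# Illustrative target: a sine wave on [0, 1] (the book's exact sine wave is
# not reproduced here) and three evenly spaced placeholder knots.
f = lambda x: np.sin(np.pi * x)
df = lambda x: np.pi * np.cos(np.pi * x)
xs = np.linspace(0.0, 1.0, 201)
print(Q(np.array([0.25, 0.50, 0.75]), f, df, xs))   # fitness = 5 - Q
```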
Example 3. The function under discussion is a sine wave defined in [0, 1]. The number of approximation knots is 3 ($m = 3$). The performance of the approximation is summarized in terms of the average fitness and the fitness of the best individual, Fig. 8.14. The results produced by the optimized rule-based approximation are given in Fig. 8.15; here we illustrate the outcome produced by the best individual across all populations and by the best individual encountered in the first population.
The GA-optimized results (modal values of the membership functions) equal 0.161, 0.464, and 0.835. The same experiment was repeated for 5 linguistic terms, Fig. 8.16. As expected, the approximation becomes better with more terms; moreover, the results improved significantly after the GA optimization.
Example 4. We carry out an approximation of the next function with m = 3. As before, the results are summarized in terms of the fitness function, Fig. 8.18, and the resulting approximation is provided in Fig. 8.19. The optimal individual (obtained in the third population) is (0.199920, 0.449683, 0.715302).
Additionally, Fig. 8.20 illustrates the distribution of error across the universe of discourse for the overall best individual and for the best individual in the initial population.
Example 5. Here we are concerned with a piecewise linear function, Fig. 8.21. Even though the form of the function is not complicated, the approximation is not that easy. The best individual in the starting population of the GA performs very poorly in comparison with the overall best individual, Fig. 8.22. The optimized approximation knots are found to be 0.214104, 0.321791, and 0.791274.
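For concreteness, the GA configuration shared by the three examples (population of 40, mutation rate 0.15, crossover rate 0.5, floating-point coding with α = 0.3, fitness 5 − Q) might be realized along the following lines. This sketch reuses Q, f, df, and xs from the previous one; the roulette-wheel selection and uniform mutation are illustrative choices, since the text does not detail these operators here, and it assumes Q stays below 5 so that the fitness remains positive.

```python
rng = np.random.default_rng(0)
POP, GENS, M_KNOTS = 40, 50, 3          # population size as in the text
P_MUT, P_CX, ALPHA = 0.15, 0.5, 0.3     # mutation, crossover, alpha

def fitness(ind):
    return 5.0 - Q(np.sort(ind), f, df, xs)   # keep the knots ordered

pop = rng.uniform(0.0, 1.0, size=(POP, M_KNOTS))
for _ in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    # roulette-wheel selection (assumes all fitness values are positive)
    pop = pop[rng.choice(POP, size=POP, p=fit / fit.sum())].copy()
    # arithmetic (floating-point) crossover: offspring are the convex
    # combinations alpha*p1 + (1 - alpha)*p2 and alpha*p2 + (1 - alpha)*p1
    for i in range(0, POP - 1, 2):
        if rng.random() < P_CX:
            p1, p2 = pop[i].copy(), pop[i + 1].copy()
            pop[i] = ALPHA * p1 + (1 - ALPHA) * p2
            pop[i + 1] = ALPHA * p2 + (1 - ALPHA) * p1
    # uniform mutation: replace a gene with a random point of [0, 1]
    mask = rng.random(pop.shape) < P_MUT
    pop[mask] = rng.uniform(0.0, 1.0, size=int(mask.sum()))

best = max(pop, key=fitness)
print(np.sort(best), fitness(best))
```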
8.8. Genetic optimization of neural networks

The evolutionary optimization of neural networks has been an area of ongoing research, with a number of alternatives for encoding, optimization, and decoding, as well as several types of genetic operations (Back, 1993; Gruau, 1994; Honavar and Uhr, 1993). GAs address both structural and parametric optimization of the networks. Structural optimization concerns the topology of the network, including the number of layers, the interconnections, the form of feedback, the types of neurons, etc. Parametric optimization deals only with modifications of the parameters of the network.

8.8.1. Parametric optimization of neural networks

The goal of this hybrid neurogenetic endeavor is to resolve some of the shortcomings plaguing gradient-based learning (including standard backpropagation), such as local minima and nondifferentiable performance indexes. GAs alleviate these shortcomings to a great extent, yet they should be used in a hybrid version that still relies on classic optimization techniques. A straightforward encoding method is to collect all connections into a long string of binary (binary encoding) or decimal (real coding) numbers (Miller et al., 1989; Valenzuela-Rendon, 1991), Fig. 8.23.
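As a minimal illustration of this straightforward encoding (Fig. 8.23), the sketch below flattens the connection weights of a small feedforward network into one real-coded chromosome and decodes it back for evaluation. The 2-3-1 architecture and the squared-error fitness are assumptions made for the example; since the GA uses no gradients, the performance index does not have to be differentiable.

```python
import numpy as np

# Hypothetical 2-3-1 feedforward network: W1, b1, W2, b2.
SHAPES = [(3, 2), (3,), (1, 3), (1,)]
N_GENES = sum(int(np.prod(s)) for s in SHAPES)

def decode(chromosome):
    """Cut the flat real-coded string back into weight matrices/vectors."""
    params, pos = [], 0
    for shape in SHAPES:
        size = int(np.prod(shape))
        params.append(chromosome[pos:pos + size].reshape(shape))
        pos += size
    return params

def forward(params, x):
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ x + b1) + b2

def fitness(chromosome, X, y):
    """Negative sum of squared errors; any (even nondifferentiable)
    performance index would do, since the GA needs no gradients."""
    params = decode(chromosome)
    return -sum(float((forward(params, x)[0] - t) ** 2) for x, t in zip(X, y))

# Toy usage: XOR-like data and one random chromosome.
X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
y = [0.0, 1.0, 1.0, 0.0]
chromosome = np.random.default_rng(1).normal(size=N_GENES)
print(fitness(chromosome, X, y))
```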