

From this point on, we can proceed with detailed optimization tools, including standard gradient-based techniques. Here fuzzy sets of connections serve as guarding zones for the fine-grained optimization method navigating the search space. This reduces the extra effort of selecting suitable learning parameters (learning rate, momentum rate, etc.); refer to Fig. 8.26. The gradient-based learning starts with some representatives of the fuzzy sets of the connections; those could be close to the modal values of the corresponding linguistic terms. Denote the minimized performance index by Q; the overall level of the resulting activation of the linguistic terms is denoted by λ. Both Q and λ are functions of the connections conn, namely Q(conn(iter)) and λ(conn(iter)). As usual, the learning follows the standard update procedure

  conn(iter + 1) = conn(iter) − β∇Q(conn(iter))

with β being the learning rate. Its value, however, is not that critical, as the method is guarded by the satisfaction level of the linguistic terms of the connections

  λ = A_1(conn_1) AND A_2(conn_2) AND … AND A_p(conn_p)

where "p" denotes the number of connections in the network and A_i stands for the fuzzy set (linguistic term) associated with the i-th connection. The membership values are combined AND-wise (say, via the minimum operation). The gradient-based search is narrowed down to the regions where λ does not fall below a certain threshold level, Fig. 8.26. The exploitation of the regions with higher values of λ becomes more intense than that of the peripheral zones characterized by lower values of λ.


Figure 8.26  Gradient-based search of the connection space guarded by the satisfaction level (λ) of the fuzzy connections
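The guarded search is easy to sketch in code. The following Python fragment is our own minimal illustration, not code from the text: the names triangular, satisfaction, and guarded_descent, the toy quadratic performance index Q, and all parameter values are assumptions; λ is computed as the minimum of triangular memberships, in line with the AND-wise combination described above.

  import numpy as np

  def triangular(x, a, m, b):
      # Membership of x in a triangular fuzzy set with support [a, b] and mode m.
      if x <= a or x >= b:
          return 0.0
      return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

  def satisfaction(conn, fuzzy_sets):
      # lambda(conn): AND-wise (minimum) combination of the memberships A_i(conn_i).
      return min(triangular(c, *fs) for c, fs in zip(conn, fuzzy_sets))

  def guarded_descent(grad_Q, conn, fuzzy_sets, beta=0.05, threshold=0.3, iters=200):
      # Standard update conn <- conn - beta * grad Q(conn); a step is rejected
      # whenever it would push lambda below the threshold (the guarding zone),
      # in which case the step size is halved instead.
      for _ in range(iters):
          candidate = conn - beta * grad_Q(conn)
          if satisfaction(candidate, fuzzy_sets) >= threshold:
              conn = candidate
          else:
              beta *= 0.5
      return conn

  # Two connections described by the terms "about 1" and "about -0.5";
  # the search starts at the modal values of these linguistic terms.
  fuzzy_sets = [(0.0, 1.0, 2.0), (-1.5, -0.5, 0.5)]
  grad_Q = lambda w: 2.0 * (w - np.array([1.2, -0.4]))  # gradient of a toy quadratic Q
  print(guarded_descent(grad_Q, np.array([1.0, -0.5]), fuzzy_sets))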

The development of a suitable topology of a neural network is far more challenging than the parametric adjustment of an existing structure. Here gradient-based methods are of no help. The evolution of a network architecture centers on the way in which the topology becomes encoded. One simple possibility is to represent the topology of the network via a so-called connectivity matrix (Miller et al., 1989). The matrix summarizes the way in which the neurons are connected (in the simplest version, no values of the connections are provided).

The connectivity matrix, scanned row after row, produces a string of bits that is used in further optimization; refer to Fig. 8.27.


Figure 8.27  Neural network and its connectivity matrix
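By way of a small illustration (a sketch with assumed helper names encode and decode and an assumed matrix orientation, as Fig. 8.27 itself is not reproduced here), the row-by-row scanning and its inverse can be written in Python as:

  def encode(matrix):
      # Scan the connectivity matrix row after row into a genotype bit string.
      return ''.join(str(bit) for row in matrix for bit in row)

  def decode(bits, n):
      # Rebuild the n-by-n connectivity matrix (the phenotype) from the string.
      return [[int(bits[i * n + j]) for j in range(n)] for i in range(n)]

  # A four-neuron feedforward net: entry [i][j] = 1 if neuron i receives a
  # connection from neuron j (no connection values in this simplest version).
  conn_matrix = [
      [0, 0, 0, 0],
      [0, 0, 0, 0],
      [1, 1, 0, 0],
      [1, 1, 1, 0],
  ]
  genotype = encode(conn_matrix)  # '0000000011001110'
  assert decode(genotype, 4) == conn_matrix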

In general, we identify a number of useful postulates that any genetic representation needs to satisfy, cf. Balakrishnan and Honavar (1995), who elaborate on the properties of mappings between the space of genotypes and the space of phenotypes, Fig. 8.28. Below we list the most essential of them:


Figure 8.28  Mapping between a space of genotypes and phenotypes

completeness: The genetic representation is complete if every neural network in the solution set can be constructed (in principle, at least).
closure: The genetic representation is closed if every genotype in the genotype space decodes to an acceptable (meaningful) phenotype in the phenotype space.
compactness: Out of different genotypes decoding to the same phenotype, we prefer a compact one, namely the one whose storage requirements are minimal.
scalability: The genotype should be scalable both in terms of the change in the size of the required phenotype and in terms of the pertinent decoding time.

The same authors also elaborate on some other features of the genetic representation such as ontogenetic plasticity, modularity, redundancy, and complexity.

8.9. Genetic optimization of rule-based systems

Rule-based systems are interesting constructs from the point of view of evolutionary optimization. Being modular by nature, they can be mapped onto the GA search space in many different ways. In fact, this mapping is a fundamental issue one has to address very carefully in order to achieve good performance of the optimization scheme. We primarily elaborate on several main trends rather than going into implementation details.

Quite obviously, the rules are encoded as binary strings. A rule

  if input1 is A and input2 is B then output is C

produces the string

  001 100 010

where A, B, and C are represented (encoded) by some binary equivalents; here we have assumed the assignment A = 001, B = 100, C = 010. We can also encode their numeric values. Assume that all fuzzy sets (A, B, C) are given by triangular membership functions. The real-number string

  0.7 1 1.5 30 40 60 -5 0 5

describes the bounds (lower and upper) of the fuzzy sets along with their modal values; reading each consecutive triple as (lower bound, modal value, upper bound), we have A = (0.7, 1, 1.5), B = (30, 40, 60), and C = (−5, 0, 5).
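Both encodings are straightforward to express in code. The Python fragment below mirrors the two strings above; the dictionary codes and the variable names are our own illustrative choices:

  # Symbolic encoding: each linguistic term mapped to a fixed bit pattern.
  codes = {'A': '001', 'B': '100', 'C': '010'}
  rule_bits = codes['A'] + codes['B'] + codes['C']  # '001100010'

  # Parametric encoding: each triangular fuzzy set as a triple
  # (lower bound, modal value, upper bound), concatenated into one real string.
  A = (0.7, 1.0, 1.5)     # input1
  B = (30.0, 40.0, 60.0)  # input2
  C = (-5.0, 0.0, 5.0)    # output
  rule_reals = [*A, *B, *C]  # [0.7, 1.0, 1.5, 30.0, 40.0, 60.0, -5.0, 0.0, 5.0]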

Rule protocols can be encoded in an analogous manner. There are, however, two generic approaches to the way in which the population of strings is formed and what those strings really represent.

Pittsburgh approach (Smith, 1980): In this approach, an individual chromosome consists of a family of rules that forms an entire protocol, Fig. 8.29. Subsequently, a population is a family of such protocols. The fitness function is computed for a protocol rather than for a single rule. Because credit is assigned at the protocol level, a significant improvement of any single rule can be difficult to achieve.


Figure 8.29  Pittsburgh approach to rule-based system optimization

The Michigan approach (Holland and Reitman, 1978) uses a single set of rules. Each chromosome in the population represents a single rule, so the entire population represents the rule base. The production system utilizing this set of rules senses the environment and generates actions. Each rule can be evaluated with respect to its ability to achieve a given goal, and genetic operators are applied to the rules on the basis of their strength.
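At the level of data structures, the contrast between the two approaches can be sketched as follows (a purely illustrative Python fragment; the second rule string and the strength values are assumptions):

  # Pittsburgh: one chromosome is an entire rule protocol; the population
  # is a family of such protocols, and fitness is assigned to a protocol.
  pittsburgh_individual = [
      '001100010',  # rule 1: if input1 is A and input2 is B then output is C
      '010001100',  # rule 2: another encoded rule of the protocol
  ]
  pittsburgh_population = [pittsburgh_individual]  # plus further protocols

  # Michigan: one chromosome is a single rule; the population as a whole
  # is the rule base, and each rule carries its own strength.
  michigan_population = [
      {'rule': '001100010', 'strength': 0.8},
      {'rule': '010001100', 'strength': 0.3},
  ]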

8.10. Conclusions

Undoubtedly, the CI technology leads to coherent and multifaceted design platforms. The contributing areas of neurocomputing, fuzzy sets, and evolutionary computing interact vigorously, each taking over during specific design phases. Moreover, it should be stressed that they are geared toward processing information of a particular granularity (for instance, the global character of optimization supported by evolutionary mechanisms, followed by fine-grain gradient-like optimization techniques). We have strongly emphasized this phenomenon of synergy via a number of illustrative examples, including the design of rule-based systems and neural networks.



