Chapter 12
1. Introduction
2. Background
2.1. Initialization
2.2. Evaluation
2.3. Selection
2.4. Search
2.4.1. Crossover
2.4.2. Mutation
2.5. Termination
2.6. Variations
3. Techniques and Applications
4. Research Issues
5. Future Trends and Summary
References
Genetic algorithms solve problems using the Darwinian concept of evolution. They were first described by John Holland in the 1970s (Holland, 1975). We can apply genetic algorithms, or GAs, when we are not quite sure how to solve a problem that has a very large search space. GAs are best suited to nonlinear problems for which no mathematical or heuristic approach has already been developed.
In many fields, and especially in artificial intelligence, practitioners often combine techniques in order to take advantage of the best characteristics of each. Such systems are known as hybrid systems, and there are many examples of them. Typical implementations combine techniques including:
The artificial intelligence field is not replete with examples that combine genetic algorithms with expert system technology. In this chapter we describe several applications that have successfully combined the two approaches, and also discuss in general terms how the two technologies can be used together to create a successful system.
GAs have been used in a variety of real-world applications. They have been implemented in the areas of:
There exist various approaches to problem solving that follow some type of biological paradigm. In computer science, and especially artificial intelligence, a family of problem-solving techniques has grown and become popular. These techniques emulate Darwinian evolution by following an "only the strongest survive" strategy. Genetic algorithms, along with several other strategies, fall into this category. The tree in Figure 1 illustrates the relationships among them (Schwefel, 1994).
Although the terminology is similar, the evolutionary computation techniques differ in distinct ways. Evolutionary Computing, also known as Evolutionary Algorithms, is a term that encompasses the entire range of problem-solving systems that use evolution as the key implementation mechanism. GAs use chromosomes, or genotypes, of fixed length that encode the genetic constitution of an organism. Another term frequently encountered when discussing GAs is phenotype, which refers to the physical appearance of an organism. Each chromosome represents a solution that is tested for fitness, with better solutions passing their genes to subsequent generations. Genetic programming is similar to GAs but differs in that the genotypes are executable code. Genetic programming is normally done in LISP to take advantage of its symbolic nature, but there are implementations that use pointers in C/C++. Evolutionary Strategies (ES) and Evolutionary Programming (EP) are very similar. ES depend on mutation and recombination and use real-valued parameters. EP uses only mutation to evolve a population; it emphasizes the link between parent and offspring. EP also follows a top-down approach, as opposed to the bottom-up approach of GAs. Successful evolution using EP depends on the expressed behavior of an object, whereas in GAs it depends on the individual genes. Classifier systems generally start with no prior knowledge and learn a program by induction from a randomly generated population.
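To make the genotype/phenotype distinction concrete, here is a minimal sketch (in Python, purely for illustration) of a fixed-length bit-string genotype decoded into a real-valued phenotype. The 16-bit length and the parameter range are assumptions chosen for the example, not values from the chapter.

```python
# Illustrative only: a fixed-length bit-string genotype decoded to a phenotype.
# The 16-bit length and the [-5.0, 5.0] range are arbitrary assumptions.

import random

GENOTYPE_LENGTH = 16          # every chromosome has the same fixed length
LOWER, UPPER = -5.0, 5.0      # range of the real-valued phenotype

def random_genotype():
    """Create a random chromosome: a list of 0/1 genes."""
    return [random.randint(0, 1) for _ in range(GENOTYPE_LENGTH)]

def decode(genotype):
    """Map the bit string (genotype) to the value it represents (phenotype)."""
    as_int = int("".join(map(str, genotype)), 2)
    max_int = 2 ** GENOTYPE_LENGTH - 1
    return LOWER + (UPPER - LOWER) * as_int / max_int

g = random_genotype()
print(g, "->", decode(g))
```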
GAs, along with the other aforementioned paradigms, solve problems by emulating evolutionary concepts. A population of possible genotypes (the genetic constitution of an organism) is created, evaluated, recombined, and mutated to generate new and different genotypes. The best are kept as a basis for evolving better genotypes. The general steps in applying GAs, described in the sections that follow, are:

1. Initialization
2. Evaluation
3. Selection
4. Search (crossover and mutation)
5. Termination
FIGURE 1 Evolutionary computing.
We can solve a problem using genetic algorithms by first generating a population of possible genotypes. We then apply a fitness function that assigns a fitness value to each genotype. Computing fitness is usually the most time-consuming portion of the algorithm, since the fitness is calculated for each individual in the population. The population is then recombined, with the number of each genotype increasing in proportion to its fitness. In other words, if one genotype is twice as fit as another, roughly twice as many copies of it will appear in the new population. Next, the best genotypes, paired off as parents, are combined in a manner that hopefully passes along the best genes of each to the new child genotype. The fitness function is again applied, and the cycle continues until a certain number of trials has been reached or a measured fitness reaches some goal. The biological parallel is obvious: parent genotypes combine to pass their best genes to their children, which hopefully retain the best characteristics of their parents. Mutation, in which a small percentage of genes are switched to help generate different genotypes, can also be used. The following sections describe these steps in detail.
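As a rough sketch of this cycle, and not the chapter's own implementation, the following Python fragment evolves fixed-length bit-string genotypes using fitness-proportional selection, single-point crossover, and a small mutation rate. The fitness function (counting 1-bits) and all parameter values are illustrative assumptions.

```python
# A minimal GA sketch of the loop described above.  The fitness function
# (number of 1-bits), population size, rates, and termination criteria are
# illustrative assumptions, not values from the chapter.

import random

GENES, POP_SIZE, GENERATIONS = 20, 30, 100
CROSSOVER_RATE, MUTATION_RATE = 0.7, 0.01

def fitness(genotype):
    # Evaluation: usually the most time-consuming step; trivial in this toy example.
    return sum(genotype)

def select(population):
    # Selection: fitness-proportional ("roulette wheel") choice of one parent.
    weights = [fitness(g) + 1 for g in population]   # +1 avoids an all-zero wheel
    return random.choices(population, weights=weights, k=1)[0]

def crossover(mom, dad):
    # Search, part 1: single-point crossover passes genes from both parents.
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, GENES - 1)
        return mom[:point] + dad[point:]
    return mom[:]

def mutate(genotype):
    # Search, part 2: switch a small percentage of genes.
    return [1 - gene if random.random() < MUTATION_RATE else gene
            for gene in genotype]

# Initialization: a random population of fixed-length chromosomes.
population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]

# Termination: a fixed number of generations or a fitness goal, whichever comes first.
for generation in range(GENERATIONS):
    if max(fitness(g) for g in population) == GENES:
        break
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best genotype:", best, "fitness:", fitness(best))
```

Note that the fitness goal used for termination here (a genotype of all 1s) simply mirrors the toy fitness function; in practice both the fitness measure and the termination test come from the problem being solved.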