Let us now consider another type of search method in which the differentiability requirement can be dropped. In probabilistic search, the simplest stochastic method is known as Pure Random Search (Brooks, 1958). The algorithm generates a sequence of independent, identically distributed points in the search domain D and keeps track of the best point found so far. The sequence of steps comprises the following:

set Max = -∞
i = 0
repeat

  generate a new point x in D
  if f(x) > Max then set Max = f(x) and set solution = x
  increment i, i = i + 1

until termination condition is satisfied

After termination of the loop, the solution is returned along with the corresponding value of the function. Some convergence properties can be revealed by assuming continuity of the optimized function.
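For illustration, the loop above translates directly into code. The following is a minimal Python sketch, assuming a maximization problem over a box-shaped domain D and a fixed iteration budget; the function names, the bounds, and the example objective are illustrative choices rather than part of the original algorithm statement.

import random

def pure_random_search(f, bounds, n_iter=10_000):
    """Pure Random Search: draw points uniformly from the box 'bounds'
    and keep the best one found so far (maximization)."""
    best_x, best_val = None, float("-inf")       # set Max = -infinity
    for _ in range(n_iter):
        # generate a new point x uniformly at random in the domain D
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)
        if val > best_val:                       # keep the best point found so far
            best_x, best_val = x, val
    return best_x, best_val

# usage: maximize f(x) = -(x1^2 + x2^2) over D = [-5, 5] x [-5, 5]
f = lambda x: -(x[0] ** 2 + x[1] ** 2)
x_star, f_star = pure_random_search(f, [(-5.0, 5.0), (-5.0, 5.0)])
print(x_star, f_star)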

Let B denote a subset of the search domain D of nonzero volume, say, a neighborhood of the global maximum. The probability that after “n” iterations at least one of the generated points is located in the set B is computed as

  P(x_i ∈ B for some i ≤ n) = 1 − P(x_1 ∉ B, x_2 ∉ B, …, x_n ∉ B)

As each individual event x_i ∈ B is characterized by the probability

  P(x_i ∈ B) = ∫_B p(x) dx

(observe that we have confined ourselves to the uniform probability density function, p(x) = 1/vol(D) over D), we get

  P(x_i ∈ B for some i ≤ n) = 1 − (1 − ∫_B p(x) dx)^n

(note that x_1, x_2, …, x_n have been treated as independent events).

Owing to the continuity of the optimized function, the integral

  ∫_B p(x) dx

assumes a nonzero value.

This, in turn, implies that the expression

  (1 − ∫_B p(x) dx)^n

approaches zero once the number of iterations tends to infinity. In the limit, we get

  lim_{n→∞} P(x_i ∈ B for some i ≤ n) = 1

meaning that we can reach the search region B with probability 1. The probabilistic search algorithm thus offers an asymptotic probabilistic guarantee, yet the method is not very efficient. In particular, the expected number of iterations (n) needed to reach B increases exponentially with the dimension of the problem. In comparison with the gradient-based approach, probabilistic methods apply to a broader class of nondifferentiable optimization problems, yet their efficiency (including convergence speed) is reduced.
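To make the dimensionality effect concrete, suppose (purely for illustration) that the target set B is a hypercube of side ε inside the unit cube [0, 1]^d. Under the uniform density a single draw hits B with probability ε^d, so the expected number of iterations until the first hit is ε^(−d), which grows exponentially with d. The short sketch below tabulates this quantity:

# Expected number of Pure Random Search iterations needed to hit a target
# hypercube of side eps inside the unit cube [0, 1]^d (illustrative only).
def expected_iterations(eps: float, d: int) -> float:
    p_hit = eps ** d       # probability that a single uniform draw lands in B
    return 1.0 / p_hit     # geometric distribution: expected number of trials = 1/p

for d in (1, 2, 5, 10):
    print(d, expected_iterations(0.1, d))
# d = 1  -> 10 iterations on average
# d = 10 -> 10**10 iterations on average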

The method shown above is the simplest variant of an entire family of random search algorithms. It uses minimal information about the optimized function and does not require any extra search hints. Existing improvements exploit additional knowledge about the optimized function to force an improvement of the solution in each iteration. For more details, the reader may consult Boender and Romeijn (1995).

5.3. Genetic algorithms - fundamentals and a basic algorithm

In this section we discuss a generic version of the genetic algorithm (GA). Let us first emphasize the fundamental difference between the genetic approach and the methods discussed in the previous sections. Most importantly, the GA hinges on a population of potential solutions and, as such, exploits the mechanism of natural selection well known from evolution (survival of the fittest). We start with an initial population of “N” elements of the search space, determine the survival suitability of its individuals, and evolve the population so as to retain the individuals with the highest values of the fitness function. By proceeding with this form of evolution and moving from one population to the next, we end up with the individuals best able to survive. To emulate the paradigms of natural selection and support adaptation, we allow the individual solutions to recombine and mutate. In particular, by performing crossover we generate new individuals (offspring). To maintain diversity we admit mutation, which alters the current content of the strings.

Fundamentally, before all these genetic manipulations can be carried out, one has to transform the original search space into an equivalent representation space, a so-called GA search space.


Figure 5.2  From search space to GA search space
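As a simple illustration of this transformation, consider encoding a real variable from an interval [a, b] as a fixed-length binary string. The helper functions below are a minimal sketch of one common choice of representation; the interval, the string length, and the function names are illustrative assumptions, not taken from the text.

# Map a real variable x in [a, b] onto an m-bit binary genotype and back.
def encode(x: float, a: float, b: float, m: int) -> str:
    k = round((x - a) / (b - a) * (2 ** m - 1))    # integer code in [0, 2^m - 1]
    return format(k, f"0{m}b")                     # fixed-length bit string

def decode(bits: str, a: float, b: float) -> float:
    m = len(bits)
    k = int(bits, 2)
    return a + k * (b - a) / (2 ** m - 1)          # phenotype back in [a, b]

# usage: a point of the search space [-5, 5] and its 10-bit genotype
genotype = encode(2.5, a=-5.0, b=5.0, m=10)
print(genotype, decode(genotype, a=-5.0, b=5.0))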

The GA operates on a space of genotypes (chromosomes) - representations of the corresponding elements of the search space; the latter are usually referred to as phenotypes. The GA search space is composed of strings of symbols. In the simplest case, the symbols used in the genotypes originate from the two-element alphabet {0, 1}. The GA philosophy is straightforward and can be summarized in a very succinct way. Denote by P(t) the family of elements (strings) of the GA search space forming the population at evolution step “t”. The GA exploits a single loop through a number of evolving populations (Michalewicz, 1992; Goldberg, 1989):

   begin
        iteration = 0
        initiate population P(iteration)
        evaluate population P(iteration)
        while (not termination criterion) do
        begin
             iteration = iteration + 1
             select P(iteration) from P(iteration-1)
             alter P(iteration)
             evaluate P(iteration)
        end
   end

The evolution process as summarized by the above pseudocode is straightforward and self-explanatory. Starting with an initial population of strings, we evaluate each of its elements by a certain fitness function. The fitness function describes how well a given string performs in the setting of the given optimization task. The ensuing selection process is guided by the values of the fitness function. In a nutshell, all strings with high values of the fitness function have high chances of survival. Those with low fitness are then gradually eliminated.
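The sketch below renders this generic loop in Python for fixed-length binary strings. The fitness function, the population size, the crossover and mutation details, and the default selection rule are illustrative placeholders rather than prescriptions from the text; a fitness-proportionate (roulette-wheel) selection rule, sketched after Fig. 5.3 below, can be plugged in via the select argument.

import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, p_mut=0.01,
                      n_generations=100, select=None):
    """Generic GA loop over fixed-length binary strings (maximization)."""
    # placeholder selection: uniform random choice, ignoring fitness;
    # a fitness-proportionate (roulette-wheel) rule can be plugged in instead
    select = select or (lambda pop, fits: random.choice(pop))
    # initiate population P(0) with random bit strings
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(n_generations):
        fitnesses = [fitness(ind) for ind in population]   # evaluate P(t)
        new_population = []
        while len(new_population) < pop_size:
            # selection: pick two parents according to the selection rule
            p1 = select(population, fitnesses)
            p2 = select(population, fitnesses)
            # crossover: single cut point, child takes head of p1 and tail of p2
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]
            # mutation: flip each bit with a small probability
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_population.append(child)
        population = new_population                        # P(t) -> P(t+1)
    return max(population, key=fitness)

# usage: maximize the number of ones in the string (the "one-max" toy problem)
best = genetic_algorithm(fitness=sum)
print(best, sum(best))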

The standard roulette wheel mechanism serves as a simple selection algorithm. Let us first normalize all fitness values so that they sum to 1. These normalized values are then viewed as probabilities

  p_i = f(s_i) / (f(s_1) + f(s_2) + … + f(s_N)),   i = 1, 2, …, N

where f(s_i) denotes the fitness of the i-th string. The sum of fitness values in the denominator,

  F = f(s_1) + f(s_2) + … + f(s_N),

characterizes the total fitness of the population. We construct a roulette wheel whose sectors are sized to reflect the probabilities of the strings, Fig. 5.3. Let us now spin the wheel N times and select the strings. The strings with low fitness are selected quite rarely. Those with high fitness values (represented by broad sectors of the spinning wheel) are picked more often and appear more frequently in the next population.


Figure 5.3  Roulette wheel as a selection mechanism
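A direct rendering of this roulette-wheel mechanism is sketched below. It spins over the total fitness F, which is equivalent to selecting with the normalized probabilities p_i; the function name and its interface (compatible with the loop sketched earlier) are illustrative choices.

import random

def roulette_select(population, fitnesses):
    """Pick one string with probability proportional to its fitness
    (assumes nonnegative fitness values)."""
    total = sum(fitnesses)                    # total fitness F of the population
    if total == 0:
        return random.choice(population)      # degenerate case: all fitnesses zero
    spin = random.uniform(0, total)           # spin the wheel once
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit                        # sector width proportional to p_i = f(s_i)/F
        if spin <= running:
            return individual
    return population[-1]                     # guard against floating-point round-off

# plugs into the loop sketched earlier:
# best = genetic_algorithm(fitness=sum, select=roulette_select)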

