Let us now consider another type of search method for which the differentiability requirement can be dropped. Among probabilistic techniques, the simplest stochastic method is known as Pure Random Search (Brooks, 1958). The algorithm generates a sequence of independent, identically distributed points in the search domain D and keeps track of the best point found so far. The sequence of steps comprises the following loop: draw a point x uniformly from D, evaluate the function at x, and retain x if it improves on the best value obtained so far; repeat until the iteration budget is exhausted.
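The loop above can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation; the function name `pure_random_search`, the box-shaped domain, and the example objective are assumptions made here for the sake of the example.

```python
import random

def pure_random_search(f, bounds, n_iter=10_000, seed=0):
    """Pure Random Search (sketch): sample points uniformly over the
    box given by `bounds` and keep the best point found so far."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        # draw a point uniformly from the domain D
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:          # keep track of the best point so far
            best_x, best_f = x, fx
    return best_x, best_f

# Example: a nondifferentiable objective on [-2, 2] x [-2, 2]
# (made up here to illustrate that no gradients are required)
f = lambda x: abs(x[0] - 1) + abs(x[1] + 0.5)
x_star, f_star = pure_random_search(f, [(-2, 2), (-2, 2)])
```

Note that the method only ever reads function values, which is why differentiability plays no role.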
After termination of the loop, the best point is returned along with the corresponding value of the function. Some convergence properties can be revealed by assuming continuity of the optimized function. Let B ⊂ D denote a target set, say the set of points whose function values lie within ε of the global minimum. Since we have confined ourselves to the uniform probability density function, the probability that a single point lands in B equals

   p = μ(B) / μ(D),

where μ(·) denotes the volume (measure) of the set. As x1, x2, ..., xn are independent, the probability that none of the first n points falls in B reads

   P(x1 ∉ B, x2 ∉ B, ..., xn ∉ B) = (1 - μ(B)/μ(D))^n.

Owing to the continuity of the optimized function, the set B has nonzero volume, so μ(B)/μ(D) > 0. This, in turn, implies that the expression above approaches zero as the number of iterations tends to infinity. In the limit we get

   lim_{n→∞} P(at least one xi ∈ B) = 1,

meaning that we reach the target region with probability 1. Pure Random Search thus offers a probabilistic asymptotic guarantee, yet the method is not very efficient. In particular, the expected number of iterations n increases exponentially with the dimension of the problem. In comparison with the gradient-based approach, probabilistic methods apply to a broader class of nondifferentiable optimization problems, yet their efficiency (including convergence speed) is reduced. The method shown above is the simplest variant of an entire family of random search algorithms. It uses minimal information about the optimized function and does not require any extra search hints. Existing improvements exploit additional facts about the optimized function that force an improvement of the solution in each iteration. For more details the reader may consult Boender and Romeijn (1995).

5.3. Genetic algorithms - fundamentals and a basic algorithm

In this section we discuss a generic version of the genetic algorithm (GA). Let us first emphasize the fundamental difference between the genetic approach and the methods discussed in the previous sections.
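The geometric decay of the miss probability is easy to verify numerically. In this quick illustration the relative volume p = μ(B)/μ(D) = 0.01 is an arbitrary assumption chosen for the example:

```python
# Probability of having hit the target set B at least once after n
# independent uniform draws: 1 - (1 - p)**n, which tends to 1.
p = 0.01                                   # assumed relative volume of B
hit_prob = {n: 1 - (1 - p) ** n for n in (10, 100, 1000, 10000)}
for n, q in hit_prob.items():
    print(f"n = {n:6d}: P(hit B) = {q:.6f}")
```

For small p the hit probability grows slowly, which hints at why the expected number of iterations blows up when the dimension (and hence 1/p) grows.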
Most importantly, the GA hinges on a population of potential solutions and, as such, exploits the mechanisms of natural selection well known from evolution (survival of the fittest). We start with an initial population of N elements in the search space, determine the survival suitability of its individuals, and evolve the population to retain the individuals with the highest values of the fitness function. When proceeding with this form of evolution and moving from one population to another, we end up with the individuals with the highest ability to survive. To emulate the paradigms of natural selection and support adaptation, we allow the individual solutions to recombine and mutate. In particular, by performing crossover we generate new individuals (offspring). To maintain diversity we admit mutation, which alters the current content of the strings. Fundamentally, before all these genetic manipulations can be carried out, one has to transform the original search space into an equivalent representation space, a so-called GA search space.
The GA operates on a space of genotypes (chromosomes) - representatives of the corresponding elements in the search space; the latter are usually referred to as phenotypes. The GA search space is composed of strings of symbols. In the simplest case the symbols used in the genotypes originate from a two-element alphabet {0, 1}. The GA philosophy is straightforward and can be summarized in a very succinct way. Denote by P(t) the population at iteration t; the algorithm then reads:

   begin
      iteration t = 0
      initiate population P(0)
      evaluate P(0)
      while termination condition not met do
         t = t + 1
         select P(t) from P(t - 1)
         recombine P(t)   {crossover and mutation}
         evaluate P(t)
      end while
   end

The evolution process as summarized by the above pseudocode is straightforward and self-explanatory. Starting with an initial population of strings, we evaluate each of its elements by a certain fitness function. The fitness function describes how well a given string performs in the setting of the given optimization task. The ensuing selection process is guided by the values of the fitness function. In a nutshell, strings with high values of the fitness function have high chances of survival, while those with low fitness are gradually eliminated. The standard roulette-wheel mechanism serves as a simple selection algorithm. Let us first normalize the fitness values so that they sum to 1; these normalized values are then viewed as selection probabilities,

   p_i = fit(s_i) / Σ_{j=1}^{N} fit(s_j),   i = 1, 2, ..., N.

The sum of fitness values in the denominator characterizes the total fitness of the population.
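Roulette-wheel selection can be sketched as follows. The function name, the sample population of binary strings, and their fitness values are all made up here for illustration; this is a minimal sketch that assumes nonnegative fitness values.

```python
import random

def roulette_wheel(population, fitness, seed=0):
    """Roulette-wheel selection (sketch): draw a new population of the
    same size, picking each individual with probability proportional
    to its fitness."""
    rng = random.Random(seed)
    total = sum(fitness)                   # total fitness of the population
    probs = [f / total for f in fitness]   # normalized so they sum to 1
    # sample N individuals with replacement according to probs
    return rng.choices(population, weights=probs, k=len(population))

# Hypothetical population of binary genotypes with made-up fitness values
pop = ["0010", "1100", "0111", "1111"]
fit = [1.0, 2.0, 3.0, 6.0]
new_pop = roulette_wheel(pop, fit)         # fitter strings dominate
```

Because selection is done with replacement, a highly fit string may appear several times in the new population, which is exactly how the fittest individuals come to dominate over successive generations.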
Copyright © CRC Press LLC