In contrast with the previous methods, population-based algorithms are characterized by working with a set of solutions (called population) in each iteration. Here we introduce some representative population-based metaheuristics:

• Evolutionary Algorithms (EA) are based on the postulates of evolutionary theory; that is, they are inspired by the processes of biological evolution: selection, recombination, mutation, and reproduction [14]. This family of techniques implements an iterative and stochastic process that operates on a population of individuals (solutions).

EAs start by generating the population randomly or by means of some heuristic seeding procedure. The global structure of the process consists of three main phases (emulating biological evolution): selection, reproduction, and replacement, which are repeated until a stopping criterion is met. In the first phase, individuals of the population are chosen to be recombined during the reproduction phase; usually, they are chosen according to their fitness. To increase diversity, a mutation operator is then applied to the newly generated individuals. Finally, a new population is created from the current one and/or the best individuals generated, giving way to the next generation of the algorithm. Different algorithms have been proposed based on this general scheme. They can be classified into three main categories that were developed independently: Evolutionary Programming (EP), proposed by Fogel [15]; Genetic Algorithms (GA), introduced by Holland [16]; and Evolution Strategies (ES), developed by Rechenberg [17].
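The selection–reproduction–replacement loop described above can be sketched in a few lines. The following is a minimal, illustrative Python implementation, not the exact algorithm used in this work; the binary tournament selection, uniform crossover, random-reset mutation, and all parameter values are assumptions chosen for brevity:

```python
import random

def evolutionary_algorithm(fitness, n_genes, pop_size=30, generations=100,
                           mutation_rate=0.1):
    """Minimal generational EA minimizing `fitness` over real vectors in [0, 1]."""
    # Initial population: random real vectors (random seeding).
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            # Selection: binary tournament (the fitter of two random individuals).
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            # Recombination: uniform crossover.
            child = [random.choice(pair) for pair in zip(p1, p2)]
            # Mutation: reset each gene with a small probability, for diversity.
            child = [random.random() if random.random() < mutation_rate else g
                     for g in child]
            offspring.append(child)
        # Replacement: keep the best individuals among parents + offspring.
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return min(pop, key=fitness)

# Example: minimize the sphere function f(x) = sum of x_i squared.
best = evolutionary_algorithm(lambda x: sum(g * g for g in x), n_genes=5)
```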

• Estimation of Distribution Algorithms (EDAs) [18] behave similarly to the EAs presented above, and many authors even consider EDAs a special kind of EA. Like EAs, EDAs operate on a population of candidate solutions but, unlike them, do not use recombination and mutation to generate new solutions; instead, new solutions are sampled from an explicit probability distribution learned from the population. Probabilistic graphical models are commonly used to represent these probability distributions efficiently. Some authors [19, 20, 21] propose using Bayesian networks to represent the distributions in discrete domains, while Gaussian networks are most often employed for continuous domains [22].
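As an illustration of this sampling-based mechanism, the sketch below implements a univariate continuous EDA in the spirit of UMDA, modeling each variable with an independent Gaussian. This is a deliberate simplification of the Bayesian/Gaussian-network models cited above; the function name and parameter values are illustrative assumptions:

```python
import random
import statistics

def umda_continuous(fitness, dim, pop_size=50, elite_frac=0.3, generations=60):
    """Univariate continuous EDA: each variable is modeled by an independent
    Gaussian that is re-estimated from the elite of every sampled population."""
    mu = [0.0] * dim
    sigma = [1.0] * dim
    best = None
    for _ in range(generations):
        # Sample a new population from the current probabilistic model.
        pop = [[random.gauss(mu[i], sigma[i]) for i in range(dim)]
               for _ in range(pop_size)]
        pop.sort(key=fitness)
        if best is None or fitness(pop[0]) < fitness(best):
            best = pop[0]
        # Re-estimate the model from the elite (best) fraction of the sample.
        elite = pop[:max(2, int(elite_frac * pop_size))]
        for i in range(dim):
            values = [ind[i] for ind in elite]
            mu[i] = statistics.mean(values)
            sigma[i] = max(statistics.stdev(values), 1e-3)  # avoid collapse
    return best

# Example: minimize the sphere function in three dimensions.
best = umda_continuous(lambda x: sum(g * g for g in x), dim=3)
```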

• Scatter Search (SS) [23] is a metaheuristic whose principles were presented in [24]. The main contribution of this algorithm is the idea of maintaining a relatively small set of tentative solutions (called the reference set, or RefSet), characterized by both the quality and the diversity of its solutions (they are distant from one another in the search space). This set is divided into subsets of solutions, to which recombination and improvement operations are applied.

Scatter Search consists of five main component processes:

1. A Diversification Generation Method to generate a collection of diverse trial solutions.

2. An Improvement Method to transform a trial solution into one or more enhanced trial solutions.

3. A Reference Set Update Method to build and maintain a reference set consisting of the b best solutions found, organized to provide efficient access by other parts of the solution procedure.

4. A Subset Generation Method to operate on the reference set, to produce a subset of its solutions as a basis for creating combined solutions.

5. A Solution Combination Method to transform a given subset of solutions produced by the Subset Generation Method into one or more combined solutions.
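The five component processes above can be mapped onto code almost one-to-one. The sketch below is a toy illustration over real vectors, not the reference SS design: in particular, a full Reference Set Update balances quality and diversity, whereas this simplified version keeps only the b best solutions.

```python
import random

def scatter_search(fitness, dim, ref_size=10, iters=50):
    """Toy Scatter Search minimizing `fitness`; each inner function maps to
    one of the five component methods."""
    def diversify(n):               # 1. Diversification Generation Method
        return [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]

    def improve(sol):               # 2. Improvement Method (one local trial)
        trial = [g + random.gauss(0, 0.1) for g in sol]
        return trial if fitness(trial) < fitness(sol) else sol

    def update_refset(candidates):  # 3. Reference Set Update (b best only)
        return sorted(candidates, key=fitness)[:ref_size]

    def subsets(refset):            # 4. Subset Generation Method (all pairs)
        return [(a, b) for i, a in enumerate(refset) for b in refset[i + 1:]]

    def combine(a, b):              # 5. Solution Combination (random convex mix)
        w = random.random()
        return [w * x + (1 - w) * y for x, y in zip(a, b)]

    refset = update_refset([improve(s) for s in diversify(5 * ref_size)])
    for _ in range(iters):
        combined = [improve(combine(a, b)) for a, b in subsets(refset)]
        refset = update_refset(refset + combined)
    return refset[0]

# Example: minimize the sphere function in two dimensions.
best = scatter_search(lambda x: sum(g * g for g in x), dim=2)
```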

Finally, note that Scatter Search is currently receiving considerable attention from the research community [25].

• Ant Colony Optimization (ACO) [26] is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. It is based on the behavior of ants seeking a path between their colony and a food source. The original idea has since been generalized to a wider class of numerical problems, and several algorithms have emerged, drawing on various aspects of ant behavior. The behavior of the ants is as follows. Initially, ants explore the area near the nest at random. As soon as an ant finds food, it returns to the nest, depositing a chemical called pheromone along the way and thus creating a pheromone trail. When other ants find such a trail, they are likely to stop travelling at random and instead follow it, returning along it and reinforcing it if they eventually find food. This indirect communication through pheromone trails enables the colony to find the shortest path between the nest and the food. ACO solves optimization problems by simulating this behavior. The technique is based on two main steps: Construct Ant Solutions and Update Pheromones (the pheromone matrix).
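The two steps can be illustrated with a minimal Ant System for a small TSP instance. This is a bare-bones sketch with hypothetical parameter values; the ACO family includes many refinements (heuristic exponents, elitist updates, local search) not shown here:

```python
import random

def ant_colony_tsp(dist, n_ants=20, iters=100, rho=0.5, q=1.0):
    """Minimal Ant System for the TSP: construct tours, then update pheromones."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone matrix

    def construct_tour():
        tour = [random.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            # Transition probability proportional to pheromone x (1 / distance).
            weights = [tau[i][j] * (1.0 / dist[i][j]) for j in cand]
            tour.append(random.choices(cand, weights=weights)[0])
        return tour

    def tour_length(t):
        return sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))

    best = None
    for _ in range(iters):
        tours = [construct_tour() for _ in range(n_ants)]   # Construct Ant Solutions
        for i in range(n):                                  # Update Pheromones:
            for j in range(n):
                tau[i][j] *= (1 - rho)                      # evaporation...
        for t in tours:
            deposit = q / tour_length(t)                    # ...then deposit
            for k in range(n):
                i, j = t[k], t[(k + 1) % n]
                tau[i][j] += deposit
                tau[j][i] += deposit
        iteration_best = min(tours, key=tour_length)
        if best is None or tour_length(iteration_best) < tour_length(best):
            best = iteration_best
    return best, tour_length(best)

# Example: four cities on the corners of a unit square (optimal tour length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 or 1e-9 for b in pts]
        for a in pts]
tour, length = ant_colony_tsp(dist)
```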

• Particle Swarm Optimization (PSO) [27] is a population-based metaheuristic inspired by the social behavior of organisms such as flocking birds and schooling fish. It is based on the factors that influence the decisions of an agent that is part of a set of similar agents. In PSO, each candidate solution to the problem is called a particle, and the population of particles is called the swarm. During the iterative process, each particle adjusts its position (state) according to its own experience and to the experience of neighboring particles, making use of the best position encountered by itself and by its neighbors.
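This position-adjustment rule can be written down directly. Below is a minimal global-best PSO sketch; the inertia weight and acceleration coefficients are common textbook values, not taken from this work:

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO minimizing `fitness` over real vectors."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # each particle's own best position
    gbest = min(pbest, key=fitness)[:]      # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

# Example: minimize the sphere function in three dimensions.
best = pso(lambda x: sum(v * v for v in x), dim=3)
```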

• Differential Evolution (DE) [28] is a non-deterministic method based on a population of individuals that are real-valued vectors representing solutions in the search space. It was developed to optimize the real (floating-point) parameters of a real-valued function. New individuals are generated by applying differential mutation and crossover operators to randomly selected individuals. In differential mutation, the weighted difference of two randomly chosen parents is added to a third individual, also chosen at random. After mutation, a recombination operator is applied to each individual (the target) to generate an intermediate individual (the trial). Finally, taking the fitness values into account, a selection operator decides whether each trial individual is accepted into the new generation.
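The mutation–crossover–selection cycle just described corresponds to the classic DE/rand/1/bin scheme, sketched below; the values of F and CR are common defaults, assumed here rather than taken from this work:

```python
import random

def differential_evolution(fitness, dim, pop_size=30, iters=200, F=0.8, CR=0.9):
    """Classic DE/rand/1/bin minimizing a real-valued function."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i, target in enumerate(pop):
            # Differential mutation: add the weighted difference of two random
            # parents to a third one, all distinct from the target.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Binomial crossover between target and mutant yields the trial;
            # index j_rand guarantees at least one mutant component survives.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == j_rand)
                     else target[d] for d in range(dim)]
            # Selection: the trial replaces the target only if it is as good.
            if fitness(trial) <= fitness(target):
                pop[i] = trial
    return min(pop, key=fitness)

# Example: minimize the sphere function in three dimensions.
best = differential_evolution(lambda x: sum(v * v for v in x), dim=3)
```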

3.3 Algorithms used in this Work

In this section, we describe the five metaheuristic algorithms used in this study. Specifically, they are Simulated Annealing (SA), Genetic Algorithm (GA), Evolution Strategy (ES), Particle Swarm Optimization (PSO), and Differential Evolution (DE).

We selected these five algorithms because they are able to work on continuous (real-valued) search spaces. These techniques were also chosen with the aim of experimenting with different population structures as well as different reproduction mechanisms.

3.3.1 Simulated Annealing (SA)

Simulated Annealing (SA) is a generic probabilistic metaheuristic algorithm. It is used for global optimization problems, i.e., for locating a good approximation to the global optimum of a given function in a large search space. Simulated Annealing was one of the first metaheuristics with an explicit strategy to escape from local minima. The basic idea is to allow moves to solutions that are worse than the current one; the probability of performing such a move decreases during the search process (see Algorithm 1). The operation of a typical SA is summarized in Figure 3.2.

The whole process starts by generating an initial solution s (randomly or using some heuristic) and initializing the temperature parameter T. The algorithm works iteratively, keeping a single tentative solution s at any time. In every iteration, a new solution s' ∈ N(s) is generated from the previous one (Line 5) and either replaces it or not depending on an acceptance criterion (Lines 6-10). The acceptance criterion depends on the fitness values f(s) and f(s') and on the temperature T: s' replaces s as the current solution if f(s') < f(s) (s' has better quality). In the case of f(s') ≥ f(s), s' replaces it with probability prob (Equation 3.6), which depends on the difference between their quality values (f(s') − f(s)) and on T (Line 9). This acceptance criterion provides the means of escaping from local optima.

The temperature T is updated (generally decreased) during the search. Thus, at the beginning the probability of accepting new solutions is high, and it gradually decreases over the iterations, following a cooling schedule (Line 11); by the end of the run, only solutions that improve the current one (s) are accepted. This process is modeled on annealing in metallurgy, a technique involving the heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local optimum of the internal energy) and wander randomly through states of higher energy, while the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

The algorithm results from the combination of two different processes: random walk and iterative improvement. In the first phase, exploration of the search space is random and erratic, producing moves to worse solutions. As the search progresses, however, the erratic component gradually decreases and the search converges to a (local) optimum. The acceptance of solutions that do not improve the current one is controlled by two factors: the difference between the fitness values and the temperature (see Equation 3.6). First, for a fixed temperature, the higher the difference f(s') − f(s), the lower the probability of moving from s to s'. Moreover, the higher the temperature T, the higher the probability of accepting s' as the new solution.

The selection of a suitable cooling strategy is crucial for the behavior of the algorithm. The cooling strategy Q defines the temperature for each moment k: Tk+1 = Q(Tk, k). The most widely used is Tk+1 = α · Tk, where α ∈ (0, 1). The cooling strategy and the initial temperature must be adapted to the problem, because the cost of escaping from a local optimum depends on the structure of the search space. There are many variants of SA along these lines, such as Fast SA (FSA), Very Fast SA (VFSA), and Adaptive SA (ASA) [29]. Ingber [30] proposed an empirical approach that starts with an initial random sampling of the search space, from which T0 is calculated using the average and standard deviation of the objective function values.
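The acceptance rule and the geometric cooling strategy can be combined into a short sketch. This is a minimal illustration for minimization; the values of t0, alpha, the step counts, and the neighbour move are assumptions, not the settings used in this work:

```python
import math
import random

def simulated_annealing(f, neighbour, s0, t0=10.0, alpha=0.95,
                        steps_per_t=50, t_min=1e-3):
    """SA for minimization with geometric cooling T_{k+1} = alpha * T_k."""
    s, t = s0, t0
    best = s
    while t > t_min:
        for _ in range(steps_per_t):
            s_new = neighbour(s)
            delta = f(s_new) - f(s)
            # Accept improvements always; accept worse moves with
            # probability exp(-delta / T), which shrinks as T decreases.
            if delta < 0 or random.random() < math.exp(-delta / t):
                s = s_new
                if f(s) < f(best):
                    best = s
        t *= alpha  # cooling schedule
    return best

# Example: minimize (x - 2)^2 with a Gaussian neighbourhood move.
best = simulated_annealing(
    f=lambda x: (x - 2.0) ** 2,
    neighbour=lambda x: x + random.gauss(0, 0.5),
    s0=random.uniform(-10, 10),
)
```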

SA has been successfully applied to many optimization problems, such as the Quadratic Assignment Problem (QAP) [31] and the Job Shop Scheduling problem (JSS) [32]. In the field of wireless mobile networks, it has been used as an efficient method for the handoff mechanism in cellular networks [33], for optimizing MANET routing protocols [34], and for clustering nodes [35].

**Source:** Essay UK - http://www.essay.uk.com/free-essays/information-technology/population-based-metaheurstics.php

This Information Technology essay was submitted to us by a student in order to help you with your studies.

This page has approximately ** words.**

If you use part of this page in your own work, you need to provide a citation, as follows:

Essay UK, *Population-Based Metaheurstics*. Available from:
<https://www.essay.uk.com/free-essays/information-technology/population-based-metaheurstics.php> [06-06-20].

If you are the original author of this content and no longer wish to have it published on our website then please click on the link below to request removal: