Overview of evolutionary algorithms.
Evolutionary algorithms are optimisation techniques based on the principles of natural selection and genetic inheritance in evolutionary biology. In evolutionary algorithms, candidate solutions are represented as individuals, and these individuals are evolved through genetic operations (e.g. crossover and mutation) to search for the optimal solution. An overview of evolutionary algorithms is given below.
1. individual representation: a representation is defined for the solutions of the problem to be optimised. Binary, real-valued and permutation representations are common, and the appropriate representation is chosen according to the characteristics of the problem.
2. fitness function: a fitness function is defined to evaluate individuals. It quantifies how good a candidate solution is and serves as the objective function of the optimisation.
3. generation of the initial population: a random initial population is generated. These initial individuals act as candidate solutions to the problem and provide the starting point for evolution.
4. genetic operations: genetic operations (crossover, mutation) are applied to the population to generate new individuals. Crossover combines parts of the genetic information of two parents to produce offspring, while mutation randomly changes part of an individual's genetic information.
5. selection: individuals are selected on the basis of their fitness to form the next generation. In general, individuals with higher fitness are more likely to be selected, so good solutions are passed on to the next generation.
6. convergence check: genetic operations and selection are repeated until the convergence criteria are met. Convergence criteria are typically defined in terms of solution quality and the number of generations.
Evolutionary algorithms inherit properties of natural evolutionary processes, such as maintaining diversity, avoiding local optima and balancing exploration and convergence. This makes the approach effective for complex problems such as non-linear, multi-variable and multi-modal optimisation problems, and allows for a wide range of applications. A minimal loop skeleton illustrating the steps above is shown below.
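As a minimal sketch (separate from the full GA implementation shown later), the six steps above can be written as a short Python loop; the objective function, the mutation-only variation scheme and all parameter values here are arbitrary illustrative choices.

import numpy as np

# A toy fitness function to maximise (the global optimum is at the origin)
def fitness(x):
    return -np.sum(x**2)

rng = np.random.default_rng(0)
dim, pop_size, num_generations = 2, 20, 100

# steps 1-3: real-valued representation, fitness function, random initial population
population = rng.uniform(-5.0, 5.0, (pop_size, dim))

for _ in range(num_generations):
    # step 4: genetic operation (here only a simple Gaussian mutation)
    offspring = population + rng.normal(0.0, 0.1, population.shape)
    # step 5: selection (keep the best pop_size individuals from parents and offspring)
    combined = np.vstack([population, offspring])
    scores = np.array([fitness(ind) for ind in combined])
    population = combined[np.argsort(scores)[-pop_size:]]
    # step 6: convergence check (here simply a fixed number of generations)

best = population[np.argmax([fitness(ind) for ind in population])]
print("best:", best, "fitness:", fitness(best))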
Algorithms related to evolutionary algorithms.
Evolutionary Algorithms (EAs) include several major algorithms. Typical examples are the Genetic Algorithm (GA), Evolution Strategies (ES), Genetic Programming (GP) and Differential Evolution (DE). Each of these algorithms is described below.
1. Genetic Algorithm (GA): individuals are represented as genes (chromosomes) and evolved through genetic operations (crossover, mutation), and the next generation is formed by selecting individuals according to their fitness. GA is widely used for search and optimisation problems.
2. Evolution Strategies (ES): these evolve individuals through statistical variation, focusing on continuous perturbation of individuals rather than discrete genetic operations. Individuals are usually represented as real-valued vectors together with strategy parameters such as the mean and variance of the mutation, making ES well suited to high-dimensional continuous optimisation problems.
3. Genetic Programming (GP): individuals are represented as functions or program structures (typically trees) and evolved by genetic operations that change the structure of the program. GP is applied to problems such as function approximation, symbolic regression and machine learning.
4. Differential Evolution (DE): an evolutionary algorithm specialised for continuous optimisation problems, in which individuals are represented as real-valued vectors and new candidates are generated from difference vectors between existing individuals. DE is an effective approach for problems such as parameter optimisation and constrained optimisation; a small sketch of the DE update is given after this list.
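The following is a minimal sketch of the classic DE/rand/1/bin update (difference-vector mutation, binomial crossover and greedy selection); the objective function and the control parameters F and CR are illustrative assumptions, not recommended settings.

import numpy as np

def sphere(x):
    return np.sum(x**2)  # toy objective to be minimised

rng = np.random.default_rng(0)
dim, pop_size, num_generations = 2, 20, 200
F, CR = 0.8, 0.9  # illustrative control parameters

population = rng.uniform(-5.0, 5.0, (pop_size, dim))
scores = np.array([sphere(ind) for ind in population])

for _ in range(num_generations):
    for i in range(pop_size):
        # mutation: build a mutant from three distinct individuals and a scaled difference vector
        candidates = [j for j in range(pop_size) if j != i]
        a, b, c = population[rng.choice(candidates, 3, replace=False)]
        mutant = a + F * (b - c)
        # binomial crossover between the current individual and the mutant
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True  # guarantee at least one component from the mutant
        trial = np.where(mask, mutant, population[i])
        # greedy selection: the trial vector replaces the parent only if it is better
        trial_score = sphere(trial)
        if trial_score < scores[i]:
            population[i], scores[i] = trial, trial_score

print("best:", population[np.argmin(scores)], "score:", scores.min())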
Application of evolutionary algorithms.
Evolutionary algorithms have been widely applied in various domains. The following are examples of applications of evolutionary algorithms.
1. optimisation problems: evolutionary algorithms are applied to a wide range of optimisation problems, for example hyper-parameter optimisation in machine learning, combinatorial optimisation problems (e.g. the travelling salesman problem and the knapsack problem; a small encoding sketch follows this list) and constraint optimisation problems.
2. machine learning: evolutionary algorithms have been applied to various aspects of machine learning, for example symbolic regression using genetic programming, feature selection and dimensionality reduction using genetic algorithms, and training neural networks using evolution strategies.
3. robotics: evolutionary algorithms are applied to robot control and design. Examples include the optimisation of robot movement patterns and gait control using evolutionary strategies, and the optimisation of robot structure and design using genetic algorithms.
4. design and optimisation: evolutionary algorithms are used to design and optimise products and systems. Examples include optimising the design of cars and aircraft, the design of buildings and structures, and the design of electronic and integrated circuits.
5. game development: evolutionary algorithms have also been applied to the development of games and the creation of AI. For example, genetic algorithms are used to optimise the behaviour patterns and strategies of game characters, and evolutionary strategies are used to optimise the learning and behaviour of AI.
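As an example of the binary representation mentioned under combinatorial optimisation, a knapsack problem can be encoded with one bit per item; the weights, values, capacity and simple penalty scheme below are made-up illustrative choices.

import numpy as np

# Hypothetical knapsack instance: the weights, values and capacity are made-up numbers
weights = np.array([12, 7, 11, 8, 9])
values = np.array([24, 13, 23, 15, 16])
capacity = 26

def knapsack_fitness(bits):
    # bits is a 0/1 vector: bits[i] == 1 means item i is packed
    total_weight = np.dot(bits, weights)
    total_value = np.dot(bits, values)
    # infeasible selections receive zero fitness (one simple penalty scheme among many)
    return total_value if total_weight <= capacity else 0

# Example: evaluate a random binary individual
rng = np.random.default_rng(0)
individual = rng.integers(0, 2, size=len(weights))
print(individual, knapsack_fitness(individual))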
Evolutionary algorithms are flexible and versatile, and can be used to search for and optimise solutions to a wide variety of problems.
Examples of evolutionary algorithm implementations.
The following is a simple example of a Genetic Algorithm (GA) implementation in Python. In this example, a genetic algorithm searches for the minimum of the two-dimensional Rosenbrock function; the fitness function is its negation, so fitness is maximised and the optimum lies at (1, 1).
import numpy as np

# Definition of the fitness function (the negated two-dimensional Rosenbrock function)
def fitness_function(x):
    return -(1 - x[0])**2 - 100 * (x[1] - x[0]**2)**2

# Implementation of the genetic algorithm
def genetic_algorithm(population_size, num_generations, mutation_rate, crossover_rate, bounds):
    # Generation of the initial population
    population = np.random.uniform(bounds[0], bounds[1], (population_size, bounds.shape[1]))
    for _ in range(num_generations):
        # Calculation of the fitness of each individual
        fitness = np.array([fitness_function(individual) for individual in population])
        # Selection (roulette wheel); fitness values are shifted to be positive
        # because selection probabilities must be non-negative
        weights = fitness - fitness.min() + 1e-9
        probabilities = weights / weights.sum()
        parents = population[np.random.choice(population_size, size=population_size, replace=True, p=probabilities)]
        # Crossover (single-point, applied to consecutive pairs of parents)
        for i in range(0, population_size - 1, 2):
            if np.random.rand() < crossover_rate:
                crossover_point = np.random.randint(1, bounds.shape[1])
                tail_i = parents[i, crossover_point:].copy()
                parents[i, crossover_point:] = parents[i + 1, crossover_point:]
                parents[i + 1, crossover_point:] = tail_i
        # Mutation (reset one randomly chosen gene within its bounds)
        for i in range(population_size):
            if np.random.rand() < mutation_rate:
                mutation_gene = np.random.randint(0, bounds.shape[1])
                parents[i, mutation_gene] = np.random.uniform(bounds[0, mutation_gene], bounds[1, mutation_gene])
        population = parents
    # Return the best individual found
    best_individual = population[np.argmax([fitness_function(individual) for individual in population])]
    return best_individual, fitness_function(best_individual)

# Main function
if __name__ == "__main__":
    # Hyperparameter settings
    population_size = 100
    num_generations = 100
    mutation_rate = 0.1
    crossover_rate = 0.8
    bounds = np.array([[-5, -5], [5, 5]])  # Search range (lower and upper bounds per dimension)

    # Run the genetic algorithm
    best_solution, best_fitness = genetic_algorithm(population_size, num_generations, mutation_rate, crossover_rate, bounds)

    # Display the results
    print("Best Solution:", best_solution)
    print("Best Fitness:", best_fitness)
Challenges and countermeasures for evolutionary algorithms.
Evolutionary algorithms are powerful optimisation methods, but several challenges exist. The following are some of these challenges and common countermeasures.
1. convergence to local optima: evolutionary algorithms can explore a diverse solution space, but may still converge prematurely to local optima.
maintaining diversity: to maintain diversity, the probabilities of the genetic operations can be adjusted, or search strategies (e.g. combining with local search, or expanding the search based on fitness) can be employed to encourage escape from local optima; a sketch of one such countermeasure is given after this list.
2. setting up the fitness function: designing the fitness function and setting appropriate parameters can be difficult.
designing the fitness function: design a fitness function appropriate to the nature and purpose of the problem, taking care that it accurately reflects the quality of a solution.
parameter tuning: conduct experiments and evaluations to tune the parameters of the fitness function and the genetic operations in order to find suitable settings.
3. high computational cost: evolutionary algorithms are computationally expensive and can be time-consuming for large problems.
parallelisation: parallelising an evolutionary algorithm across multiple computational resources improves computational speed.
use of approximation methods: for large problems, evolutionary algorithms can be combined with approximation methods or other meta-heuristics to reduce the computational cost.
4. choice of solution representation: an appropriate representation of the solution must be chosen depending on the nature of the problem.
selection of an appropriate representation: choose the genotype, the form of the representation and the method of encoding the genes according to the nature of the problem.
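As a sketch of one possible diversity-preserving countermeasure, the roulette-wheel selection used in the implementation above could be replaced by tournament selection with elitism; the tournament size and the number of elites below are illustrative assumptions.

import numpy as np

def tournament_selection_with_elitism(population, fitness, tournament_size=3, num_elites=2):
    # Elitism: carry the best individuals over unchanged so good solutions are never lost
    rng = np.random.default_rng()
    pop_size = len(population)
    elite_indices = np.argsort(fitness)[-num_elites:]
    selected = [population[i].copy() for i in elite_indices]
    # Tournaments: each remaining slot is won by the fittest of a few randomly drawn
    # individuals, which keeps selection pressure moderate and helps preserve diversity
    while len(selected) < pop_size:
        contenders = rng.integers(0, pop_size, size=tournament_size)
        winner = contenders[np.argmax(fitness[contenders])]
        selected.append(population[winner].copy())
    return np.array(selected)

# Example usage with a random population and random fitness values
rng = np.random.default_rng(0)
population = rng.uniform(-5.0, 5.0, (10, 2))
fitness = rng.random(10)
print(tournament_selection_with_elitism(population, fitness).shape)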
Reference information and reference books
Reference books include "The Mathematics of Meta-heuristics" and "Metaheuristics and Applications".