Overview of meta-heuristics and reference books


Overview

Meta-heuristics are algorithms used to solve optimization problems. An optimization problem is one in which the objective is to maximize or minimize an objective function; meta-heuristics use probabilistic methods to search the solution space for candidate solutions and approach the optimal one. Meta-heuristics include algorithms such as genetic algorithms, particle swarm optimization, and simulated annealing.

The advantages and disadvantages of meta-heuristics are listed below.

[Pros]
  • High versatility: Meta-heuristics can be applied to a wide range of optimization problems and are therefore used in many fields.
  • Less prone to local optima: Because meta-heuristics search probabilistically, they are less likely to get stuck in local optima and have a better chance of finding a globally optimal solution.
  • Easy to set up: Meta-heuristics typically require a few numerical parameters, but they are easy to set up, are relatively insensitive to initial values, and are therefore highly flexible.
[Cons]
  • No guarantee of solution quality: Because meta-heuristics rely on probabilistic search, they cannot guarantee the quality of the solution they return. Evaluating solution quality is therefore difficult, and results may vary between runs.
  • Large computational cost: Meta-heuristics often perform random search over the search space, which tends to be computationally expensive. For problems in which a single evaluation takes a long time, the search time may make meta-heuristics impractical.
  • Difficulty in tuning parameters: Meta-heuristics usually require tuning of numerical parameters; selecting appropriate values is not easy and can be time-consuming, and poorly chosen parameters reduce search efficiency.

Meta-heuristics organically combine empirical methods (heuristics) for solving difficult optimization problems, and they have recently become popular among practitioners as a framework for tackling practical problems with relative ease. For practical problems, a robust solution method can be designed in a relatively short time, given some programming skill, an eye for selecting the right meta-heuristic, and a knack for design.

Mathematical Metaheuristics

The following are reading notes from "Mathematical Metaheuristics."

About Meta-heuristics

This book attempts not merely to list ideas for meta-heuristics, but to explain mathematically why those ideas work. It also covers the fusion of meta-heuristics with a branch of optimization called mathematical programming. Beyond describing the general algorithms, it teaches how to design efficient meta-heuristics from scratch by applying them to various concrete applications.

Chapter 1. Introduction

1.1 What are meta-heuristics?

Meta-heuristics are heuristics for combinatorial optimization problems that do not depend on any specific problem. In recent years the definition has been broadened to refer to problem-independent heuristics in general, so some interpretations include not only algorithms for combinatorial optimization problems but also algorithms for continuous optimization problems.

Usually, a "solution method" for a problem applies only to that problem. However, if the goal is relaxed to getting as close to the answer as possible, as in approximation algorithms, there are methods that work for many problems, such as local search and greedy methods. A meta-heuristic is a basic algorithmic framework that is not tied to a specific problem but is designed to be generic: it is a heuristic defined by its method alone, independent of any particular problem, and can therefore be applied to any problem. This is useful for NP-hard problems, for which no algorithm is believed to exist that finds the optimal solution in polynomial time. In general, however, meta-heuristics tend to be less accurate on average than problem-specific heuristics, because a general-purpose search must proceed without prior knowledge of the problem and is therefore at a disadvantage compared with methods that exploit such knowledge.

1.2 What is an optimization problem?

An optimization problem is the problem of analyzing the state in which the value of a real-valued or integer-valued function defined on a specific set is minimized (or maximized) [1]. Such problems are also collectively called mathematical programming problems or mathematical programs [1]. Optimization problems are among the fundamental problems that arise in a wide variety of fields, including the natural sciences, engineering, and the social sciences, and their history dates back to the variational problems of the 18th century [2]. Many problems related to the mathematical analysis of real-world phenomena and abstract theories fall into the general category of optimization problems. Optimization problems in physics and computer vision are sometimes called energy minimization problems, by regarding the function in question as representing the energy of the modeled system.

1.3 Basic Strategies of Metaheuristics

1.3.1 Use of Nearest Neighbor
1.3.2 Use of the construction method
1.3.3 Use of Components
1.3.4 Retention of Multiple Solutions
1.3.5 Use of Randomness
1.3.6 Problem Transformation
1.3.7 Using Search History

Chapter 2 Typical Meta-heuristics

2.1 Local Search Methods

Local search is one of the simplest algorithmic frameworks for approximation algorithms. In a broad sense, the term is used generically for algorithms with the framework described below; in a narrow sense, it refers to hill climbing. Many of today's meta-heuristics build on this framework.

The term "local search method" is mainly used for search algorithms, while the term "iterative method" is used in the field of numerical analysis. The difference between the two is that iterative methods often assume that the continuity of the target function and first-order differential equations are known, and often require optimal solutions, whereas local search methods mainly aim to find the best possible approximate solution even for discrete functions or when the content of the function itself is unknown. Even when the discrete function or the content of the function itself is unknown.

2.2 Other local search methods

2.3 Iterated Local Search Method

When local search is applied repeatedly, the strategy of restarting from a solution close to the locally optimal solution already obtained, rather than starting again from a fresh initial solution, is called iterated local search.
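
A minimal sketch of this idea in Python might look as follows; the toy objective, the step-halving descent used as the local search, and the perturbation are illustrative assumptions.

```python
import math
import random

def iterated_local_search(x0, objective, local_search, perturb, n_iter=30):
    """Instead of restarting from scratch, perturb the best local optimum
    found so far and run local search again from that point."""
    best = local_search(x0)
    best_val = objective(best)
    for _ in range(n_iter):
        candidate = local_search(perturb(best))
        val = objective(candidate)
        if val < best_val:                 # keep only strict improvements
            best, best_val = candidate, val
    return best, best_val

# Example with a rugged 1-D objective and a simple step-halving descent.
if __name__ == "__main__":
    obj = lambda x: (x - 2) ** 2 + 3 * math.sin(5 * x)
    def descent(x, step=1.0):
        while step > 1e-4:
            if obj(x + step) < obj(x):
                x += step
            elif obj(x - step) < obj(x):
                x -= step
            else:
                step /= 2                  # no improving move: refine the step
        return x
    perturb = lambda x: x + random.uniform(-2, 2)
    print(iterated_local_search(random.uniform(-10, 10), obj, descent, perturb))
```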

2.4 Simulated Annealing Method

Simulated annealing (SA) is a general-purpose randomized algorithm for global optimization problems. It finds a good approximation to the global optimum of a given function in a vast search space. It was introduced by S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi in 1983 [1] and independently by V. Černý in 1985 [2].

Its name comes from annealing in metallurgy, the process of heating a metal and then cooling it gradually to grow its crystals and reduce defects. The heat frees the atoms from their initial positions (local minima of the internal energy) and lets them wander to higher-energy states; by cooling slowly, the atoms are more likely to settle into states with lower internal energy than the initial one.

At each iteration, the SA algorithm samples a random neighbor of the current solution and accepts it with a probability that depends on the value of the objective function and on a global parameter T (the temperature). Following the analogy with the physical process, T is gradually decreased: while T is large the solution changes boldly, and as T approaches zero the search converges. Because uphill moves are readily accepted early on, the algorithm can escape the local minima that trap hill climbing.
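
Below is a minimal simulated-annealing sketch in Python with geometric cooling; the objective, neighborhood, and cooling parameters are illustrative assumptions rather than the book's implementation.

```python
import math
import random

def simulated_annealing(initial, objective, neighbor,
                        t_start=10.0, t_end=1e-3, alpha=0.95, steps_per_t=100):
    """Minimize `objective`; worse moves are accepted with probability
    exp(-delta / T), where T is lowered geometrically."""
    current, current_val = initial, objective(initial)
    best, best_val = current, current_val
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = neighbor(current)
            delta = objective(candidate) - current_val
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, current_val = candidate, current_val + delta
                if current_val < best_val:
                    best, best_val = current, current_val
        t *= alpha                         # cooling schedule
    return best, best_val

# Example: minimize a multimodal 1-D function over the reals.
if __name__ == "__main__":
    obj = lambda x: x * x + 10 * math.sin(3 * x)
    nbr = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(random.uniform(-10, 10), obj, nbr))
```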

2.5 Tabu Search Method

Tabu search is a meta-heuristic, regarded as a generalization of local search that incorporates ideas from artificial intelligence. Other meta-heuristics include genetic algorithms and methods that mimic specific natural phenomena, such as simulated annealing.

The method examines several neighboring states, moves to the best of them, and records the move in a queue called the tabu list. By forbidding the moves on the tabu list, the search avoids cycling and keeps progressing toward the optimum. The key point is that a move not on the tabu list is taken even if it makes the solution worse, which prevents the search from stagnating at a local optimum.
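
The following is a minimal tabu-search sketch in Python for solutions represented as 0/1 vectors, where a move is a single bit flip and the tabu list stores recently flipped positions; the example objective is a hypothetical toy function, and no aspiration criterion is implemented.

```python
import random
from collections import deque

def tabu_search(initial, objective, max_iter=200, tenure=7):
    """Tabu search on a 0/1 vector: the move is a bit flip, and recently
    flipped positions are kept in a fixed-length tabu list."""
    current = list(initial)
    best, best_val = list(current), objective(current)
    tabu = deque(maxlen=tenure)            # positions that may not be flipped
    for _ in range(max_iter):
        best_move, best_move_val = None, float("inf")
        for i in range(len(current)):
            if i in tabu:
                continue                   # skip tabu moves
            current[i] ^= 1                # tentatively flip bit i
            val = objective(current)
            current[i] ^= 1                # undo the flip
            if val < best_move_val:
                best_move, best_move_val = i, val
        if best_move is None:
            break
        current[best_move] ^= 1            # accept the best non-tabu move, even if worse
        tabu.append(best_move)
        if best_move_val < best_val:
            best, best_val = list(current), best_move_val
    return best, best_val

# Example: minimize the Hamming distance to a hidden target pattern.
if __name__ == "__main__":
    target = [random.randint(0, 1) for _ in range(20)]
    obj = lambda x: sum(a != b for a, b in zip(x, target))
    start = [random.randint(0, 1) for _ in range(20)]
    print(tabu_search(start, obj))
```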

2.6 Guided Local Search Method

2.7 Large Neighborhood Search Method

The larger the neighborhood, the better the quality of the solutions obtained, but the computation time also increases. The key is to search large neighborhoods efficiently by some technique; methods that do so are called large neighborhood search.
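
A minimal destroy-and-repair sketch in Python illustrates the idea; the two-way number-partitioning example, the destroy size, and the greedy repair are illustrative assumptions, not the book's implementation.

```python
import random

def large_neighborhood_search(initial, objective, destroy, repair, n_iter=500):
    """Destroy part of the incumbent, repair it, and keep the result
    if it improves the objective."""
    best, best_val = initial, objective(initial)
    for _ in range(n_iter):
        candidate = repair(*destroy(best))
        val = objective(candidate)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val

# Example: two-way number partitioning (minimize the difference of the two sums).
if __name__ == "__main__":
    random.seed(0)
    items = [random.randint(1, 100) for _ in range(40)]
    obj = lambda assign: abs(sum(v for v, s in zip(items, assign) if s == 0)
                             - sum(v for v, s in zip(items, assign) if s == 1))

    def destroy(assign, k=6):              # free k randomly chosen items
        freed = random.sample(range(len(assign)), k)
        partial = list(assign)
        for i in freed:
            partial[i] = None
        return partial, freed

    def repair(partial, freed):            # put each freed item on the lighter side
        sums = [sum(v for v, s in zip(items, partial) if s == side) for side in (0, 1)]
        for i in sorted(freed, key=lambda i: -items[i]):
            side = 0 if sums[0] <= sums[1] else 1
            partial[i] = side
            sums[side] += items[i]
        return partial

    start = [random.randint(0, 1) for _ in items]
    print(large_neighborhood_search(start, obj, destroy, repair))
```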

2.8 Search space smoothing and alternating smoothing

2.9 Component optimization method

The component optimization method defines the neighborhood in a meta-heuristic as a mathematical programming problem, as follows. Let r be the number of elements of the subproblem SP_r, and choose a subset J ⊆ I such that |J| = r. With the values x_i^fix (i ∈ I \ J) of the remaining variables fixed, the subproblem SP_r restricted to the variables in J can be formulated as a smaller mathematical programming problem.

If r is sufficiently small, the subproblem SP_r can be solved efficiently by dynamic programming or branch and bound.
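
As an illustration of this idea, a strategy often referred to as fix-and-optimize, the sketch below repeatedly frees r variables of a 0-1 knapsack problem, fixes the rest, and solves the small subproblem exactly by enumeration (in place of dynamic programming or branch and bound); the instance and parameters are hypothetical, and this is not the book's formulation.

```python
import itertools
import random

def fix_and_optimize(values, weights, capacity, x, r=8, n_rounds=50):
    """Repeatedly free r variables, fix the rest, and solve the small
    0-1 knapsack subproblem exactly by enumeration (2^r candidates)."""
    n = len(values)
    best_val = sum(v for v, xi in zip(values, x) if xi)
    for _ in range(n_rounds):
        free = random.sample(range(n), r)                  # the subset J, |J| = r
        fixed_w = sum(weights[i] for i in range(n) if x[i] and i not in free)
        fixed_v = sum(values[i] for i in range(n) if x[i] and i not in free)
        residual = capacity - fixed_w
        best_sub, best_sub_val = None, -1
        for bits in itertools.product([0, 1], repeat=r):   # exact solve of SP_r
            w = sum(weights[i] for i, b in zip(free, bits) if b)
            if w <= residual:
                v = sum(values[i] for i, b in zip(free, bits) if b)
                if v > best_sub_val:
                    best_sub, best_sub_val = bits, v
        if fixed_v + best_sub_val > best_val:              # accept only improvements
            for i, b in zip(free, best_sub):
                x[i] = b
            best_val = fixed_v + best_sub_val
    return x, best_val

if __name__ == "__main__":
    random.seed(1)
    n = 60
    values = [random.randint(1, 50) for _ in range(n)]
    weights = [random.randint(1, 50) for _ in range(n)]
    capacity = sum(weights) // 3
    x = [0] * n                                            # start from the empty knapsack
    print(fix_and_optimize(values, weights, capacity, x)[1])
```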

2.10 Multilevel method

Multilevel models are statistical methods already used by many researchers in the social sciences. In that sense they are no longer a "new statistical method" but one that has gained full citizenship: multilevel models appear in many papers, and reviewers sometimes request their use during peer review. The need to learn multilevel models is growing year by year.

2.11 Greedy Random Adaptive Search

Local search is applied to a large number of randomly generated initial solutions, and the best of the resulting solutions is taken as the approximate optimum.
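
The following is a minimal multi-start sketch in Python matching this description (the greedy randomized construction that gives GRASP its full name is omitted here); the toy objective and descent routine are illustrative assumptions.

```python
import random

def multi_start_local_search(random_solution, local_search, objective, n_starts=100):
    """Apply local search to many random initial solutions and
    return the best local optimum found."""
    best, best_val = None, float("inf")
    for _ in range(n_starts):
        candidate = local_search(random_solution())
        val = objective(candidate)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val

# Example: integer descent on a rugged toy objective.
if __name__ == "__main__":
    obj = lambda x: (x % 17 - 8) ** 2 + abs(x - 30)
    def descend(x):
        while min(obj(x - 1), obj(x + 1)) < obj(x):        # move to the better neighbor
            x = x - 1 if obj(x - 1) <= obj(x + 1) else x + 1
        return x
    print(multi_start_local_search(lambda: random.randint(-200, 200), descend, obj))
```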

2.12 Ant Colony Optimization Method

In the real world, ants initially wander randomly, and when they find food they return to their colony laying a pheromone trail. Other ants that find the trail stop wandering randomly and follow it, reinforcing it if they also find food. Over time, however, the pheromone begins to evaporate and the trail loses its attractive power. The longer the path, the more time the pheromone has to evaporate; on a short path the round trip takes less time, the trail is reinforced faster than it evaporates, and the pheromone concentration stays high. Thus, when one ant finds a good (i.e., short) path from the colony to a food source, other ants are more likely to follow it, and positive feedback eventually leads all the ants onto a single path. The idea of the ant colony optimization algorithm is to mimic this behavior with "simulated ants" walking on a graph that represents the problem to be solved. Ant colony optimization was first used to produce approximate solutions to the traveling salesman problem. It is more effective than simulated annealing or genetic algorithms when the graph changes dynamically: because the algorithm runs continuously, it can adapt to changes in real time, which makes network routing and urban transportation systems potential applications.
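
The following is a compact ant-colony sketch in Python for a small TSP instance: pheromone evaporates every iteration and is reinforced along each tour in proportion to 1/length. The parameter values and the random instance are illustrative assumptions, not a tuned implementation.

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """Minimal ACO for the TSP: ants build tours guided by pheromone (tau)
    and inverse distance; pheromone evaporates and is then reinforced."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                    # pheromone on each edge
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r = random.uniform(0, sum(w for _, w in weights))
                for j, w in weights:                       # roulette-wheel selection
                    r -= w
                    if r <= 0:
                        nxt = j
                        break
                else:
                    nxt = weights[-1][0]                   # numerical safety net
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append((tour, tour_length(tour)))
        for i in range(n):                                 # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:                         # reinforcement
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
            if length < best_len:
                best_tour, best_len = tour, length
    return best_tour, best_len

# Example on random points in the plane.
if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(15)]
    dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 or 1e-9 for b in pts]
            for a in pts]
    print(ant_colony_tsp(dist))
```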

2.13 Genetic Algorithms

Genetic algorithms prepare multiple "individuals," which are genetic representations of candidate solutions, and search for solutions by repeatedly applying operations such as crossover and mutation while preferentially selecting individuals with high fitness. The fitness is given by a fitness function. The advantage of this method is that it can be applied even when nothing is known about the differentiability or unimodality of the evaluation function; the requirements are only that the evaluation function induces a total order and that the search space has a topology. Depending on how the genes are encoded, the method can be applied to a variety of problems, including combinatorial optimization and NP-hard problems.
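
A minimal bit-string genetic algorithm in Python is sketched below, using tournament selection, one-point crossover, and bit-flip mutation; the knapsack fitness function with its penalty term is a hypothetical example, not the book's code.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=60, n_gen=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal bit-string GA maximizing `fitness`: tournament selection,
    one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)[:]

    def tournament(k=3):
        return max(random.sample(pop, k), key=fitness)

    for _ in range(n_gen):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:           # one-point crossover
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                for i in range(n_bits):                    # bit-flip mutation
                    if random.random() < mutation_rate:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best[:]
    return best, fitness(best)

# Example: a small 0-1 knapsack, with infeasible solutions penalized.
if __name__ == "__main__":
    random.seed(0)
    values = [random.randint(1, 30) for _ in range(30)]
    weights = [random.randint(1, 30) for _ in range(30)]
    cap = sum(weights) // 2
    def fit(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        v = sum(vi for vi, xi in zip(values, x) if xi)
        return v if w <= cap else v - 10 * (w - cap)       # penalty for overweight
    print(genetic_algorithm(fit, 30)[1])
```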

2.14 Scatter Search Method

Chapter 3 Fusion of Mathematical Programming and Metaheuristics

3.1 Branch-and-Bound Method

The branch-and-bound (B&B) method is a general-purpose algorithm for finding optimal solutions to various optimization problems (especially discrete and combinatorial ones). It consists of a branching operation and a bounding operation: candidate solutions are enumerated systematically, and groups of candidates that cannot contain the optimum are discarded wholesale using upper and lower bounds on the quantity being optimized.
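
A small branch-and-bound sketch in Python for the 0-1 knapsack problem illustrates the two operations, using the greedy fractional relaxation as the upper bound; the instance in the example is hypothetical.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """0-1 knapsack by branch and bound: branch on whether to take each item,
    prune with the fractional (LP) relaxation as an upper bound."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)                           # best value density first
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n = len(v)
    best = 0

    def upper_bound(k, cap):
        """Greedy fractional fill of the remaining items k..n-1."""
        bound = 0.0
        for i in range(k, n):
            if w[i] <= cap:
                cap -= w[i]
                bound += v[i]
            else:
                return bound + v[i] * cap / w[i]           # take a fraction of item i
        return bound

    def branch(k, cap, value):
        nonlocal best
        if value > best:
            best = value
        if k == n or value + upper_bound(k, cap) <= best:
            return                                         # bounding: prune this subtree
        if w[k] <= cap:
            branch(k + 1, cap - w[k], value + v[k])        # branching: take item k ...
        branch(k + 1, cap, value)                          # ... or leave it out

    branch(0, capacity, 0)
    return best

if __name__ == "__main__":
    print(knapsack_branch_and_bound([60, 100, 120, 70], [10, 20, 30, 15], 50))
```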

3.2 Why Fusion?

3.3 Variable Fixation Method

3.4 Truncated Branch-and-Bound and Diving Methods

3.5 Relaxation Fixation Method

3.6 Capacity scaling method

3.7 MIP Neighborhood Local Search Method

3.8 Local branching method

3.9 MIP merging method

Chapter 4 Applications

4.1 Graph partitioning problem

4.1.1 Problem definition and formulation
4.1.2 Neighborhood
4.1.3 Speed-up with auxiliary memory
4.1.4 Penalty function method
4.1.5 Pseudo-Executable Search Method
4.1.6 Pseudo-annealing method
4.1.7 Tabu Search
4.1.8 Python implementation

4.2 Maximum Stable Set Problem

4.2.1 Problem Definition and Formulation
4.2.2 Neighborhood
4.2.3 Tabu Search Method
4.2.4 Flat Search
4.2.5 Implementation in Python

4.3 Graph coloring problem

4.3.1 Problem Definition and Formulation
4.3.2 Neighborhood
4.3.3 Speed-up with Auxiliary Memory
4.3.4 Tabu Search Method
4.3.5 Fusion of Genetic Algorithm and Tabu Search Method
4.3.6 Implementation in Python

4.4 Traveling Salesman Problem

4.4.1 Problem Definition and Formulation
4.4.2 Nearest Neighbor
4.4.3 Local search methods
4.4.4 First Improvement and Best Improvement
4.4.5 Python Implementation

4.5 Quadratic Assignment Problem

4.5.1 Problem Definition and Formulation
4.5.2 Neighborhood
4.5.3 Computing the Difference of Objective Functions
4.5.4 Tabu Search Method
4.5.5 Implementation in Python

4.6 Multi-Restricted Knapsack Problem

4.6.1 Problem Definition and Formulation
4.6.2 Tabu Search Method
4.6.3 Solution Method Using Linear Programming Relaxation
4.6.4 Implementation in Python

4.7 Number Partitioning Problem

4.7.1 Problem Definition and Formulation
4.7.2 Difference Method
4.7.3 Number of divisions is greater than or equal to 3
4.7.4 Python Implementation

Appendix Python Explanation

Metaheuristics and Applications

The next section describes "Metaheuristics and Applications."

This book is a systematic compilation of the algorithms and applications of meta-heuristics, a new class of optimization methods. The algorithms part covers a wide range of state-of-the-art methods such as the Memetic Algorithm, Artificial Immune System, Ant Colony Optimization Method, and Particle Swarm Optimization. The applications part covers scheduling problems, delivery planning problems, applications to structural optimal design, and more. The book can be used as a handbook and is a must-have for students, engineers, and researchers involved in systems engineering and information technology.

Chapter 1 Evolutionary Optimization Methods

1-1 Genetic Algorithms

1.1.1 Overview
1.1.2 Algorithms
1.1.3 Individual Representation
1.1.4 Crossover Rule
1.1.5 Mutation Rule
1.1.6 Selection Rule
1.1.7 Other designs
1.1.8 Numerical Examples
1.1.9 Algorithm Improvements

1-2 Memetic Algorithm

1.2.1 Introduction
1.2.2 Meme and Memetic Algorithm
1.2.3 Lamarck Rule
1.2.4 Basic Structure of the Algorithm
1.2.5 Considerations in Implementing the Algorithm
1.2.6 Future Research Issues

1-3 Immune Optimization Algorithm

1.3.1 Artificial Immune System
1.3.2 Immune Optimization System
1.3.3 Computational examples

Chapter 2 Multi-Agent Optimization Methods

2-1 Market-Oriented Programming (MOP) and Optimization

2.1.1 Market-Oriented Programming as an Optimization Algorithm Incorporating Sociality
2.1.2 Walrasian Virtual Market
2.1.3 Market Oriented Programming
2.1.4 Agent Formulation
2.1.5 Characterization of Pareto-optimal solution derivation
2.1.6 Characterization of the Solution by Computer Experiments
2.1.7 Conclusion

2-2 Ant Colony Optimization Method

2.2.1 Stigmergy and optimization of social insects
2.2.2 Ant colony optimization method
2.2.3 Numerical experiments
2.2.4 Summary

2-3 Particle Swarm Optimization

2.3.1 Overview of Particle Swarm Optimization
2.3.2 Algorithm and Features of Particle Swarm Optimization
2.3.3 Definition of Swarm Activity and Numerical Stability Analysis of PSO
2.3.4 Analytical and Numerical-Experimental Stability Analysis of PSO
2.3.5 Typical methods and numerical examples of PSO
2.3.6 Summary

Chapter 3 Optimization Methods Based on Mechanics Models

3-1 Nonlinear Dynamical Systems and Optimization

3.1.1 Introduction
3.1.2 Steepest-Descent Gradient Systems
3.1.3 Gradient system with inertia term

3-2 Nonlinear Dynamical Systems and Constrained Optimization on Bounded Domains

3.2.1 Introduction
3.2.2 Main solution methods for constrained optimization problems using relaxation techniques
3.2.3 Dynamical system models that strictly reflect constraints and their properties

3-3 Discretized Gradient Dynamical Systems and Global Optimization by Parameter Adjustment Methods

3.3.1 Introduction
3.3.2 Stability and Optimization of Continuous-Time Gradient Dynamical Models
3.3.3 Discretization of Gradient Dynamics Models and Stability of Fixed Points
3.3.4 Global Optimal Solution Search Method by Parameter Adjustment for Discrete Gradient Dynamical Systems

3-4 Stability Theory-Based Optimization of Nonlinear Systems

3.4.1 Preface
3.4.2 Problem Formulation and Overview of Search Methods
3.4.3 Type-1 Unstable Equilibrium Point Search Method
3.4.4 Example Calculations with Benchmark Functions
3.4.5 Conclusion

Chapter 4 Simulated Annealing, Tabu Search, and Tunneling Algorithms

4-1 Simulated Annealing

4.1.1 Introduction
4.1.2 Combinatorial Optimization Problem
4.1.3 LS and Neighborhood Structure
4.1.4 Simulated Annealing
4.1.5 Gradual Convergence
4.1.6 Cooling schedules
4.1.7 Theoretical considerations for cooling schedules
4.1.8 Specific operations
4.1.9 Numerical experiments
4.1.10 Conclusion

4-2 Tabu Search

4.2.1 Preface
4.2.2 Tabu search
4.2.3 Methods for improving TS
4.2.4 TS Research Trends and Tips in Operation
4.2.5 Afterword

4-3 Tunneling Algorithm and Its Applications

4.3.1 Tunneling Algorithm
4.3.2 Dynamic Tunneling Algorithm
4.3.3 Generalized Random Tunneling Algorithm

Chapter 5: Analysis and Evaluation of Hybrid and Heuristic Methods

5-1 Thermodynamic Algorithms

5.1.1 Introduction
5.1.2 Free energy minimization principle
5.1.3 Properties of TDGA
5.1.4 Detecting and Avoiding Convergence of TDGA
5.1.5 Conclusion

5-2 Evolutionary Multi-Objective Optimization

5.2.1 Introduction
5.2.2 Multi-objective optimization
5.2.3 Traditional Multi-Objective Optimization Methods
5.2.4 EMO Approach
5.2.5 EMO Algorithm
5.2.6 Research Issues in EMO

5-3 Distributed Genetic Algorithms

5.3.1 Introduction
5.3.2 Parallelization of GAs
5.3.3 Study of Distributed Effects of Distributed Genetic Algorithms
5.3.4 Loosely Distributed GAs
5.3.5 Conclusion

5-4 Analysis and Evaluation of Heuristic Methods

5.4.1 Introduction
5.4.2 Experimental Analysis
5.4.3 Invitation to Theoretical Experiments
5.4.4 Conclusion

Chapter 6 Applications to the Scheduling Problem

6-1 Application to the Nurse Scheduling Problem

6.1.1 Nurse Scheduling Problem
6.1.2 Modeling the NSP
6.1.3 GA-based solution method
6.1.4 Numerical experiments
6.1.5 Summary

6-2 Application to Complex Production Scheduling Problems

6.2.1 Description of the Die Problem
6.2.2 Characteristics of the Die Problem and Design Guidelines for Its Solution
6.2.3 How to Construct a GA for the Die Problem
6.2.4 Example Calculations for the Die Problem
6.2.5 Necessity of Rescheduling
6.2.6 Description of the Rescheduling Problem
6.2.7 Characteristics of the Rescheduling Problem and Design Guidelines for Its Solution
6.2.8 How to Construct a GA for the Rescheduling Problem
6.2.9 Computational Examples for the Rescheduling Problem
6.2.10 Summary

6-3 Delivery Planning System with Tabu Search

6.3.1 Introduction
6.3.2 Delivery Planning Problem
6.3.3 Delivery Planning Method
6.3.4 Delivery Planning System
6.3.5 Application Examples
6.3.6 Postscript

6-4 Application to Delivery Planning (Guided Local Search Method)

6.4.1 The Problem of Delivery Planning
6.4.2 Formulation of delivery planning
6.4.3 Guided Local Search Method
6.4.4 Application of GLS to Delivery Planning
6.4.5 Simulation Example
6.4.6 Conclusion

Chapter 7 Applications to Energy Systems

7-1 Application to Voltage Reactive Power Control of Power Systems

7.1.1 Preface
7.1.2 Formulation of the Voltage Reactive Power Distribution Problem
7.1.3 Particle Swarm Optimization (PSO)
7.1.4 Application of PSO to the Reactive Power Allocation Problem
7.1.5 Application Examples
7.1.6 Summary

7-2 Application to Power System Stability Control

7.2.1 Preface
7.2.2 Design of Nonlinear Stabilizing Control Systems for Power Systems
7.2.3 Summary

7-3 Applications to Reactor Core Design and Core Control

7.3.1 Introduction
7.3.2 Core Design and Core Control of Boiling Water Reactor
7.3.3 Formulation of an Integrated Core Design and Core Control Optimization Problem
7.3.4 Two-stage genetic algorithm for the integrated core design and control optimization problem
7.3.5 Application to a Real Plant

7-4 Application to Optimal Operation of Cogeneration Systems

7.4.1 Preface
7.4.2 Formulation of the Optimal Operation of a Cogeneration System
7.4.3 Application of PSO
7.4.4 Application examples
7.4.5 Conclusion

7-5 Application to Design of Energy Supply System Equipment Configuration

7.5.1 Introduction
7.5.2 Formulation of the equipment configuration optimization problem
7.5.3 Solving the equipment configuration optimization problem
7.5.4 Application Examples

Chapter 8 Applications to Design and Control

8-1 Application to Robot Behavior Learning

8.1.1 Optimization and Learning of Robot Behavior
8.1.2 Overview of Intelligent Synthetic Motion Control Method
8.1.3 Applicative Optimization and Behavioral Evolution of Large-scale Behaviors
8.1.4 Application to Soccer Behavior Learning
8.1.5 Summary and Future

8-2 Application to Structural Optimal Design

8.2.1 History of Structural Optimal Design
8.2.2 Summary of Structural Optimal Design Problems - Classification by Design Variables
8.2.3 Characteristics of Structural Design Problems - Modeling Error
8.2.4 Manufacturing Constraints
8.2.5 Application of Heuristic Methods
8.2.6 Optimal Design Using Approximation Models
8.2.7 Optimal Design of Complex Domains
8.2.8 Conclusion

Appendix - Overview of Typical Example Problems and Benchmark Problems

1 Nonlinear Optimization Problems

1.1 Formulation and Solution Definition of Nonlinear Optimization Problems
1.2 Representative Benchmark Problems for Nonlinear Optimization Problems

2 Traveling Salesman Problem

2.1 Introduction
2.2 Formulation of the TSP
2.3 Applications
2.4 Algorithm Overview
2.5 Benchmark Problem
2.6 Performance of the TSP Algorithm
2.7 Conclusion

3 Scheduling Problem

3.1 Flow Shop Problem
3.2 Job Shop Problem
