Adaptive PSO (Self-Adaptive Particle Swarm Optimisation) Overview
Adaptive PSO (self-adaptive particle swarm optimisation) is a variant of the particle swarm optimisation algorithm described in ‘Overview and implementation of particle swarm optimisation’ that aims to improve search performance by dynamically adjusting the algorithm’s parameters during the run.
PSO is a meta-heuristic algorithm that searches for a solution using the following procedure.
- A group of particles (population agents) move around in the search space to find the best solution.
- Each particle has a position (a candidate solution) and a velocity, which are updated according to the following rules:
  - each particle is drawn towards the best position it has found so far (pbest);
  - each particle is drawn towards the best position found by the whole swarm (gbest).
- Update formulas:
\[
v_{i}(t+1) = \omega v_{i}(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest - x_i(t))
\]
\[
x_{i}(t+1) = x_i(t) + v_{i}(t+1)
\]
  - \( \omega \): inertia coefficient
  - \( c_1, c_2 \): learning coefficients (weights of the pull towards pbest and gbest, respectively)
  - \( r_1, r_2 \): uniform random numbers in [0, 1]
In normal PSO, the parameters (in particular the inertia coefficients \(\omega\) and the learning coefficients \(c_1, c_2\)) are fixed or pre-set, which poses the following challenges.
- Search capability (diversity) may be insufficient in the early stages.
- High risk of convergence to local solutions in the later stages.
In Adaptive PSO, the parameters are adjusted self-adaptively during the search:
- Dynamic adjustment of the inertia coefficient \(\omega\): adopt a large value (extensive search) in the early stages and a smaller value (convergence) in the later stages.
Example: linear decay
\[
\omega = \omega_{\text{max}} - \frac{\omega_{\text{max}} - \omega_{\text{min}}}{\text{max iterations}} \times \text{current iteration}
\]
- Adaptive control of the learning coefficients \(c_1, c_2\): emphasise each particle’s own best position (\(c_1\)) during exploration and the swarm’s best position (\(c_2\)) in the later phase; Gaussian or probabilistic schemes can also be used to modify the coefficients dynamically.
- Adaptive velocity clamping: the range to which particle velocities are constrained is adjusted to prevent excessive oscillation.
Typical self-adaptive strategies include:
- Performance-based adaptation: parameters are changed based on particle progress (improvement of the objective function).
- Population-diversity-based adaptation: parameters are adjusted to restore exploration when the variance of the particle positions shrinks.
- Mixed strategies: combine multiple adaptation criteria to balance exploration and convergence.
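The population-diversity strategy can be sketched in code. The helper below is a hypothetical illustration (the threshold and the update factors are illustrative, not from the source): it raises the inertia coefficient when the swarm’s spread around its centroid collapses, and lets it decay slowly otherwise.

```python
import numpy as np

def adapt_inertia(positions, w, w_min=0.4, w_max=0.9, threshold=0.1):
    """Diversity-based inertia adaptation (illustrative thresholds).

    positions: array of shape (num_particles, dimensions).
    When the mean distance of particles from the swarm centroid falls
    below `threshold`, increase omega to restore exploration; otherwise
    let it decay slowly towards w_min.
    """
    centroid = positions.mean(axis=0)
    diversity = np.linalg.norm(positions - centroid, axis=1).mean()
    if diversity < threshold:
        w = min(w * 1.05, w_max)  # diversity collapsing: explore more
    else:
        w = max(w * 0.99, w_min)  # diversity healthy: keep converging
    return w
```

Such a rule would be called once per iteration, in place of (or combined with) the fixed linear-decay schedule.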
Advantages of Adaptive PSO include:
- Enhanced search capability in the early stages and faster convergence around the optimum solution.
- Easier escape from local solutions in high-dimensional and complex multimodal problems.
- Reduced need for users to fine-tune parameters in advance.
By dynamically adjusting its parameters while retaining the simplicity of basic PSO, Adaptive PSO is a more flexible and efficient algorithm.
Implementation example
Below is an example Python implementation of Adaptive PSO (self-adaptive particle swarm optimisation). The code changes the inertia coefficient \(\omega\) adaptively with linear decay, and also adapts the learning coefficients \(c_1\) (weight of the individual best) and \(c_2\) (weight of the global best).
Example implementation in Python
```python
import numpy as np

# Objective function (e.g. Rosenbrock function)
def objective_function(x):
    return sum(100.0 * (x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0)

# The Adaptive PSO class
class AdaptivePSO:
    def __init__(self, num_particles, dimensions, bounds, max_iter,
                 w_max=0.9, w_min=0.4, c1_init=2.5, c2_init=0.5):
        self.num_particles = num_particles
        self.dimensions = dimensions
        self.bounds = bounds
        self.max_iter = max_iter
        self.w_max = w_max
        self.w_min = w_min
        self.c1_init = c1_init
        self.c2_init = c2_init
        self.init_particles()

    def init_particles(self):
        self.positions = np.random.uniform(self.bounds[0], self.bounds[1],
                                           (self.num_particles, self.dimensions))
        self.velocities = np.random.uniform(-1, 1, (self.num_particles, self.dimensions))
        self.personal_best_positions = np.copy(self.positions)
        self.personal_best_scores = np.array([objective_function(p)
                                              for p in self.personal_best_positions])
        self.global_best_position = self.personal_best_positions[np.argmin(self.personal_best_scores)].copy()
        self.global_best_score = min(self.personal_best_scores)

    def optimize(self):
        for t in range(self.max_iter):
            # Adaptation of the inertia coefficient (linear decay)
            w = self.w_max - (self.w_max - self.w_min) * (t / self.max_iter)
            # Adaptation of the learning coefficients
            c1 = self.c1_init - (self.c1_init - 1.5) * (t / self.max_iter)
            c2 = self.c2_init + (2.5 - self.c2_init) * (t / self.max_iter)
            for i in range(self.num_particles):
                # Velocity update
                r1, r2 = np.random.rand(self.dimensions), np.random.rand(self.dimensions)
                cognitive = c1 * r1 * (self.personal_best_positions[i] - self.positions[i])
                social = c2 * r2 * (self.global_best_position - self.positions[i])
                self.velocities[i] = w * self.velocities[i] + cognitive + social
                # Position update
                self.positions[i] += self.velocities[i]
                # Apply boundary constraints
                self.positions[i] = np.clip(self.positions[i], self.bounds[0], self.bounds[1])
                # Personal best update
                score = objective_function(self.positions[i])
                if score < self.personal_best_scores[i]:
                    self.personal_best_scores[i] = score
                    self.personal_best_positions[i] = self.positions[i]
            # Global best update (copy to avoid aliasing the personal-best row)
            min_idx = np.argmin(self.personal_best_scores)
            if self.personal_best_scores[min_idx] < self.global_best_score:
                self.global_best_score = self.personal_best_scores[min_idx]
                self.global_best_position = self.personal_best_positions[min_idx].copy()
            # Show progress
            print(f"Iteration {t+1}/{self.max_iter}, Best Score: {self.global_best_score:.4f}")
        return self.global_best_position, self.global_best_score

# Usage example
if __name__ == "__main__":
    # Problem description
    num_particles = 30
    dimensions = 5
    bounds = [-5, 5]
    max_iter = 100
    # Run the algorithm
    optimizer = AdaptivePSO(num_particles, dimensions, bounds, max_iter)
    best_position, best_score = optimizer.optimize()
    print("\nOptimal solution:", best_position)
    print("Optimal value:", best_score)
```
Key points of the code:
- Inertia coefficient \(\omega\): the linear decay \(\omega = \omega_{\text{max}} - \frac{(\omega_{\text{max}} - \omega_{\text{min}}) \cdot t}{t_{\text{max}}}\) keeps \(\omega\) high in the early stages and lowers it later, strengthening exploration early and convergence late.
- Adaptive adjustment of the learning coefficients: \(c_1\) is higher in the early stages and lower later (individual focus → convergence); \(c_2\) is lower in the early stages and higher later (exploration → refinement around the swarm best).
- Boundary constraints: particle positions are clipped so that they do not leave the search area.
Application note: this code minimises the Rosenbrock function (a standard optimisation test function). To optimise another objective, replace objective_function.
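As a quick check of these points, the three parameter schedules used in the optimize method can be evaluated in isolation. The helper below simply restates those update lines with the class defaults (w_max=0.9, w_min=0.4, c1_init=2.5, c2_init=0.5):

```python
def schedules(t, max_iter, w_max=0.9, w_min=0.4, c1_init=2.5, c2_init=0.5):
    """The same linear schedules as in AdaptivePSO.optimize."""
    w = w_max - (w_max - w_min) * (t / max_iter)
    c1 = c1_init - (c1_init - 1.5) * (t / max_iter)
    c2 = c2_init + (2.5 - c2_init) * (t / max_iter)
    return w, c1, c2

print(schedules(0, 100))    # start of run: exploratory settings
print(schedules(100, 100))  # end of run: convergent settings
```

At t = 0 the swarm explores (high \(\omega\), high \(c_1\), low \(c_2\)); by the final iteration the weights have shifted towards the global best (low \(\omega\), low \(c_1\), high \(c_2\)).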
Application examples
Adaptive PSO (self-adaptive particle swarm optimisation) is particularly effective when the optimisation problem is complex and multidimensional. Specific application examples are given below.
1. Hyperparameter optimisation of machine learning models
- Problem description: the performance of a machine learning model depends strongly on its hyperparameters (e.g. learning rate, number of hidden layers, activation function). Adaptive PSO is used to search for the best combination of these hyperparameters.
- Application example (neural network optimisation): optimise hyperparameters such as the learning rate, batch size, number of epochs, number of hidden layers and activation function. Adaptive PSO adjusts its own parameters dynamically during the search, exploring widely in the early stages and converging in the later stages to improve accuracy.
- Implementation overview:
- Use the model’s cross-validation score (e.g. accuracy or F1 score) as the objective function.
- Each particle’s position encodes a set of hyperparameters; the model is trained and evaluated with those values.
- The hyperparameter set with the best score is selected.
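A minimal sketch of this setup, with a synthetic stand-in for the real cross-validated score (the function names and the loss surface below are hypothetical): each particle position is decoded into a learning rate and a hidden-layer size, and the resulting “validation loss” is what Adaptive PSO would minimise.

```python
import numpy as np

# Decode a particle position into hyperparameters (hypothetical encoding):
# position[0] is a log10 learning rate, position[1] a hidden-unit count.
def decode(position):
    learning_rate = 10 ** position[0]       # e.g. position[0] in [-4, -1]
    hidden_units = int(round(position[1]))  # e.g. position[1] in [8, 128]
    return learning_rate, hidden_units

# Synthetic "validation loss" standing in for cross-validation; its minimum
# sits near lr=1e-3, units=64 (illustrative only, not a real model).
def validation_loss(position):
    lr, units = decode(position)
    return (np.log10(lr) + 3) ** 2 + ((units - 64) / 64) ** 2

print(validation_loss(np.array([-3.0, 64.0])))  # loss at the synthetic optimum
```

In a real application, validation_loss would train and cross-validate the model with the decoded hyperparameters and return the resulting score.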
2. Trajectory optimisation of complex robots
- Problem description: optimising a robot’s motion or trajectory may involve multiple objectives; for example, finding a trajectory that reaches the destination while avoiding obstacles.
- Application example (Robot movement path optimisation): Adaptive PSO is used to find a trajectory for a robot to reach its destination in the shortest possible time while avoiding obstacles. Objective functions include travel time, energy consumption and minimum distance from obstacles.
- Implementation overview:
- Particles are treated as parameters representing the position and speed of the robot and these are searched for.
- The objective function includes the time and energy spent on the robot’s movement path and the efficiency of obstacle avoidance.
- Adaptive PSO adaptively adjusts search and convergence to optimise the movement path.
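A minimal sketch of such an objective (the start, goal, obstacle and penalty weight below are illustrative assumptions): a particle encodes 2-D waypoints, and the cost combines path length with a penalty for entering a circular obstacle.

```python
import numpy as np

# Illustrative scenario: travel from START to GOAL past one circular obstacle.
START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 0.0])
OBSTACLE, RADIUS = np.array([5.0, 0.0]), 1.0

def path_cost(waypoints_flat):
    """Path length plus a heavy penalty for waypoints inside the obstacle."""
    pts = np.vstack([START, waypoints_flat.reshape(-1, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    clearance = np.linalg.norm(pts - OBSTACLE, axis=1)
    penalty = 100.0 * np.sum(np.maximum(0.0, RADIUS - clearance))
    return length + penalty

straight = np.array([5.0, 0.0])  # waypoint inside the obstacle
detour = np.array([5.0, 2.0])    # waypoint clearing the obstacle
print(path_cost(straight) > path_cost(detour))  # the detour is cheaper -> True
```

Adaptive PSO would minimise path_cost over the flattened waypoint vector; further terms (energy, travel time) could be added to the same function.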
3. Load distribution in power systems
- Problem description: If a power system has several power plants, each with different generation costs, the problem is to optimise how much load should be distributed to each plant in order to minimise the overall generation cost.
- Application example (optimising load distribution): optimise how to distribute load to multiple power plants using Adaptive PSO. The objective function can be total generation cost, overall system efficiency, stability, etc.
- Implementation overview:
- The particles represent the load that should be allocated to each power station.
- The objective function minimises the overall cost based on the cost function for each power plant.
- Adaptive PSO finds the optimal solution by dynamically adjusting the load allocation for each power station.
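A minimal sketch of the objective function for this problem, assuming quadratic generation-cost curves and a penalty for missing demand (all coefficients below are illustrative):

```python
import numpy as np

# Illustrative cost curves cost_i(p) = a_i + b_i*p + c_i*p^2 for three plants.
A = np.array([100.0, 120.0, 80.0])   # fixed cost per plant
B = np.array([2.0, 1.8, 2.2])        # linear cost coefficient
C = np.array([0.01, 0.015, 0.012])   # quadratic cost coefficient
DEMAND = 300.0                       # total load to be served

def dispatch_cost(p):
    """Total generation cost plus a penalty for supply/demand imbalance."""
    generation_cost = np.sum(A + B * p + C * p**2)
    imbalance_penalty = 1000.0 * abs(np.sum(p) - DEMAND)
    return generation_cost + imbalance_penalty

print(dispatch_cost(np.array([100.0, 100.0, 100.0])))  # balanced dispatch (≈ 1270.0)
```

Each particle holds one candidate dispatch vector p; Adaptive PSO would minimise dispatch_cost over it, with the penalty steering the swarm towards dispatches that meet demand.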
4. Structural optimisation of car design
- Problem description: the problem of optimising the material selection and structural arrangement of a car body in order to reduce weight and maintain strength in car body design.
- Application example (structural optimisation of car body): optimise the arrangement and material selection of each part of the car body using Adaptive PSO. The objective function considers strength, durability, cost and weight.
- Implementation overview:
- Particles represent parameters for the placement of each component and material properties.
- The objective function combines the strength, cost and weight of the vehicle body, each minimised or maximised as appropriate.
- Adaptive PSO allows the search to dynamically adjust parameters towards the optimal solution.
5. Portfolio optimisation in finance
- Problem description: the problem of constructing an optimal portfolio by combining financial assets with different risks and returns. The objective is to maximise returns while minimising risk.
- Application example (portfolio optimisation): use Adaptive PSO to determine the proportion of financial assets (e.g. stocks, bonds, commodities) to maximise returns while minimising risk. The objective function includes a measure that takes into account risk (variance) and return (expected value).
- Implementation overview:
- The particles represent the investment proportions of each asset.
- The objective function evaluates the expected return and risk (variance) of the portfolio and finds the optimal asset allocation.
- The Adaptive PSO searches for an asset allocation that optimally adjusts risk and return.
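A minimal mean-variance sketch of such an objective (the expected returns, covariance matrix and risk weight below are illustrative assumptions): particle positions are normalised into long-only weights, and the function trades portfolio variance off against expected return.

```python
import numpy as np

MU = np.array([0.08, 0.04, 0.06])    # illustrative expected returns
COV = np.array([[0.10, 0.01, 0.02],
                [0.01, 0.05, 0.01],
                [0.02, 0.01, 0.07]]) # illustrative return covariance
LAMBDA_RISK = 2.0                    # risk-aversion weight

def portfolio_objective(raw_weights):
    """Mean-variance objective to be minimised by the optimiser."""
    w = np.abs(raw_weights)
    w = w / np.sum(w)                # long-only, fully invested
    expected_return = MU @ w
    variance = w @ COV @ w
    return LAMBDA_RISK * variance - expected_return

print(portfolio_objective(np.ones(3)))  # equal-weight portfolio
```

Because the weights are normalised inside the objective, particles can move freely in the search space while every candidate is evaluated as a valid allocation.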
Adaptive PSO is a powerful tool for efficiently solving complex, multi-dimensional optimisation problems through dynamic parameter adjustment; as the examples above show, it applies to practical problems in machine learning, robotics, energy, manufacturing, finance and other fields.