Overview and Implementation of Particle Swarm Optimization

Overview of PSO (Particle Swarm Optimization)

Particle Swarm Optimization (PSO) is a type of evolutionary computation algorithm inspired by flocking behavior in nature, such as flocks of birds and schools of fish: many individuals form a swarm and cooperatively search for the optimal solution.

PSO is characterized by its ability to search a broader region of the search space than genetic algorithms, which tend to fall into local optima. It also tends to require less computation time than other evolutionary computation algorithms and can often find the optimal solution faster.

PSO has been widely used to solve machine learning and optimization problems, and numerous studies and practical examples have been reported.

The advantages and disadvantages of PSO are listed below.

[Pros]
  • Easy parameter tuning: PSO has few parameters and in many cases requires little tuning, making it easier to use than many other optimization algorithms.
  • Strong on nonlinear optimization: PSO copes well with multimodal, nonlinear optimization problems, so it is applicable to a wider range of problems than many other optimization algorithms.
  • Fast convergence: PSO converges toward the optimal solution efficiently by adjusting each individual's position and velocity, so it often converges faster than other optimization algorithms.
[Cons]
  • Prone to local optima: because individuals move stochastically, PSO can still fall into local optima. Restarting with different random initial values often mitigates this, but it limits how thoroughly the solution space is explored.
  • Unstable convergence: PSO can converge unstably when applied to large problems, so it may be less suitable for high-dimensional problems than other optimization algorithms.
  • Prone to premature convergence: as individuals approach a promising solution, the swarm's search area narrows, so the swarm can stagnate near that solution and stop exploring.

The PSO algorithm proceeds in the following steps:

  1. Initialize the population: randomly determine the initial position and initial velocity. Each individual is placed at a random position in the search space and has a random velocity.
  2. Compute the evaluation function for each individual: Using the position of each individual, compute the value of the evaluation function.
  3. Update each individual’s “best solution found so far”: each individual remembers the best solution it has found so far. This is called the personal best.
  4. Update the “best overall solution” of the population: the best solution found by the entire population is called the global best. It is updated whenever some individual’s personal best is better than the current global best.
  5. Update the velocity of each individual: each individual calculates a new velocity from its current velocity and an acceleration term. The acceleration is calculated by the following equation: \[ a_i = c_1 \times rand() \times (p_{best_i} - x_i) + c_2 \times rand() \times (g_{best} - x_i) \] where \(a_i\) is the acceleration of individual i, \(c_1\) and \(c_2\) are acceleration coefficients, \(rand()\) is a uniform random number in [0, 1], \(p_{best_i}\) is the personal best of individual i, \(x_i\) is its current position, and \(g_{best}\) is the global best of the whole population.
  6. Update the position of each individual: Each individual calculates a new position using its current position and new velocity.
  7. Check the termination condition: if the termination condition is met, the search ends; otherwise, return to step 2.

PSO searches for the optimal solution by repeating the above steps. As the positions and velocities of the individuals change, the population moves through the search space and the search proceeds closer to the optimal solution.
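As a concrete illustration of steps 1 to 7, the following is a minimal from-scratch sketch in Python. It uses only NumPy; the inertia weight w (a common extension of the basic velocity update), the coefficient values, the bounds, and the sphere objective are illustrative assumptions rather than part of any particular library.

import numpy as np

def pso(objective, dim, n_particles=30, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    # Minimal PSO following steps 1-7 above; w is an inertia weight,
    # a common extension of the basic velocity update.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Step 1: random initial positions and velocities
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))
    # Steps 2-4: evaluate, then record personal and global bests
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(n_iters):
        # Step 5: velocity update with cognitive (c1) and social (c2) terms
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Step 6: position update (clipped to the search bounds)
        x = np.clip(x + v, lo, hi)
        # Steps 2-4 again: re-evaluate and update personal/global bests
        val = np.apply_along_axis(objective, 1, x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        g = np.argmin(pbest_val)
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

# Usage: minimize the sphere function f(x) = sum(x_i^2)
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=2)
print(best_x, best_f)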

PSO can be applied to a wide variety of problems. Some specific applications are discussed below.

  • Pattern Recognition: PSO may be used to optimize the weights of a neural network. For example, PSO has been reported to be highly accurate in pattern recognition problems such as face recognition and speech recognition.
  • Robot Control: PSO is also applied to robot control problems. For example, PSO has been reported to provide high control accuracy in inverse kinematics and trajectory generation problems.
  • Combinatorial optimization problems: combinatorial optimization involves finding the optimal solution under given constraints. PSO has also been applied to such problems and has achieved high solution quality.
  • Parameter Optimization: PSO is also used to optimize the parameters of a function. For example, PSO may be used to optimize parameters in simulation and optimal control problems.
  • Neural Network Training: PSO is also used to train neural networks, for example to optimize the weights and biases of a multilayer perceptron (a minimal sketch follows this list).
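
As an illustration of the last item, the following is a minimal sketch of training a tiny multilayer perceptron with PSO instead of gradient descent. It reuses the pso() helper sketched earlier in this article; the 2-2-1 architecture, the XOR toy data, and the hyperparameter values are illustrative assumptions.

import numpy as np

# Toy dataset: XOR (illustrative)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# A 2-2-1 multilayer perceptron whose 9 parameters are flattened into
# one vector: W1 (2x2), b1 (2), w2 (2), b2 (1)
def mlp_loss(theta):
    W1 = theta[0:4].reshape(2, 2)
    b1 = theta[4:6]
    w2 = theta[6:8]
    b2 = theta[8]
    h = np.tanh(X @ W1 + b1)                    # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output
    return float(np.mean((out - y) ** 2))       # MSE as the fitness

# Each particle is one candidate weight vector; the fitness is the loss.
best_theta, best_loss = pso(mlp_loss, dim=9, n_particles=50, n_iters=300)
print(best_loss)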

In addition to these examples, PSO has been applied to a variety of other problems.

Some actual implementations using PSO are described below.

Implementation in Python

See “Python and Machine Learning” for more information on building a Python environment.

PSO implementation using pyswarms, a Python PSO library

pip install pyswarms
# Import modules
import numpy as np

# Import PySwarms
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
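
The snippet above only sets up the environment. A typical next step, following the pyswarms quickstart, is to run a global-best optimizer on one of the test functions bundled with the library (the hyperparameter values below are illustrative):

# Cognitive (c1), social (c2), and inertia (w) coefficients
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}

# Create a global-best optimizer with 10 particles in 2 dimensions
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)

# Minimize the sphere function bundled with pyswarms
cost, pos = optimizer.optimize(fx.sphere, iters=100)
print(cost, pos)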

There is also an original implementation; the code is available on GitHub.

PSO implementation in R

See “R Language and Machine Learning” for information on building an R environment.

The following examples are from a page on PSO implementation in R that carefully describes the theory with a tutorial. It is not a simple hello world; it provides sample code for various practical examples.

Example 1: Engineering: Optimizing Spring Weight

This problem is a replication of the problem in Hu et al. (2003), which is to minimize the weight of a tension/compression spring subject to constraints on minimum deflection, shear stress, surge frequency, outer diameter, and the design variables. The design variables are the mean coil diameter D, the wire diameter d, and the number of active coils N.
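
To make the problem concrete, the following is a minimal Python sketch of a penalty-method fitness for this benchmark (the original tutorial uses R). With a penalty method, the fitness is the spring weight plus a large penalty for violated constraints, so an unconstrained PSO can handle the constrained problem. The objective and the four constraint expressions follow the standard tension/compression spring formulation in the literature and should be verified against Hu et al. (2003); the variable bounds and penalty factor are illustrative assumptions.

import numpy as np
import pyswarms as ps

def spring_fitness(v):
    d, D, N = v  # wire diameter d, mean coil diameter D, active coils N
    weight = (N + 2) * D * d ** 2  # objective: minimize spring weight
    g = [  # standard benchmark constraints, each must satisfy g <= 0
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
            + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (D + d) / 1.5 - 1,
    ]
    penalty = 1e6 * sum(max(gi, 0.0) ** 2 for gi in g)  # quadratic penalty
    return weight + penalty

# pyswarms expects a function over the whole swarm (n_particles x 3)
def swarm_fitness(xs):
    return np.apply_along_axis(spring_fitness, 1, xs)

lb = np.array([0.05, 0.25, 2.0])   # illustrative lower bounds for (d, D, N)
ub = np.array([2.00, 1.30, 15.0])  # illustrative upper bounds
opt = ps.single.GlobalBestPSO(n_particles=40, dimensions=3,
                              options={'c1': 1.5, 'c2': 1.5, 'w': 0.7},
                              bounds=(lb, ub))
cost, pos = opt.optimize(swarm_fitness, iters=200)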

Example 2: Finance: Portfolio Optimization

This problem is replicated from Zhu et al. (2011), where the PSO algorithm is employed for portfolio selection and optimization in investment management.

The portfolio optimization problem involves managing a portfolio of assets so as to minimize a risk objective subject to constraints that guarantee a given level of return. One of the basic principles of financial investment is diversification: the investor spreads investments across different types of assets. Diversification minimizes the investor’s exposure to risk and maximizes portfolio returns.

The fitness function is the adjusted Sharpe ratio of the constrained portfolio. It combines information from the mean and variance of the assets, serves as a risk-adjusted measure of average return, and is often used to evaluate portfolio performance.

The Sharpe ratio helps explain whether a portfolio’s excess returns are the result of smart investment decisions or too much risk. A given portfolio or fund may enjoy higher returns than its peers, but it is a good investment only if those higher returns are not accompanied by excessive additional risk.

The higher the Sharpe ratio of a portfolio, the better its risk-adjusted performance. A negative Sharpe ratio as a result of the analysis means that the risk-free rate exceeds the portfolio’s return or that the portfolio’s return is expected to be negative.
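
To make this concrete, the following is a minimal Python sketch of a Sharpe-ratio fitness for a long-only portfolio, written so that PSO can minimize it (the ratio is negated because PSO minimizes). The Sharpe ratio itself is the standard \((R_p - R_f)/\sigma_p\); the asset data, the risk-free rate, and the weight normalization are illustrative assumptions, and the sketch reuses the pso() helper from earlier in this article.

import numpy as np

def sharpe_fitness(w, mu, cov, rf=0.01):
    # Negative Sharpe ratio of portfolio weights w (PSO minimizes this)
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)      # normalize to a long-only portfolio
    ret = w @ mu                   # expected portfolio return
    vol = np.sqrt(w @ cov @ w)     # portfolio standard deviation
    return -(ret - rf) / vol       # negate: higher Sharpe = lower fitness

# Illustrative data: expected returns and covariance of three assets
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.12, 0.03],
                [0.01, 0.03, 0.09]])

best_w, best_val = pso(lambda w: sharpe_fitness(w, mu, cov), dim=3,
                       bounds=(0.0, 1.0))
print(best_w / (np.abs(best_w).sum() + 1e-12), -best_val)  # weights, Sharpe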

Implementation in Clojure

See “Setting up a Clojure development environment with SublimeText4, VS code and LightTable” for more information on setting up a Clojure environment.

The following code is from the GitHub page of CAPSOS (Cellular Automata, Particle Swarm Optimization, Synth).

(ns capsos.pso-test
  (:use midje.sweet
        capsos.pso))


;; (defn update-particle-xy
;;   [gbest p]
;;   (let [pbest (:pbest p)
;;         nvelx (+ (* 2 (rand) (- (first pbest) (:x p)))
;;                  (* 2 (rand) (- (first gbest) (:x p))))
;;         nvely (+ (* 2 (rand) (- (second pbest) (:y p)))
;;                  (* 2 (rand) (- (second gbest) (:y p))))]
;;     (assoc p :x (+ nvelx (:x p)) :y (+ nvely (:y p)) :vel [nvelx nvely])))


(def ^:dynamic ^java.util.Random *rnd* (java.util.Random. 42)) 
;;(.nextDouble *rnd*)


(make-particle 1000 1000)
;; => {:x 740, :y 815, :pbest [740 815]}

(last (take 90000 (iterate (partial update-particle-xy [50 50]) {:x 740, :y 815, :pbest [740 815]})))

(update-particle-xy [50 50] {:x 740, :y 815, :pbest [740 815]})
;; => {:norm-vel [-0.36547344376116647 -0.930821766991594], :vel [-473.25580940929353 -1205.3319229434064], :y 814.0691782330084, :x 739.6345265562388, :pbest [740 815]}


;; => {:norm-vel [-0.4051418860917991 -0.9142538225974118], :vel [-633.3074826612985 -1429.138301368925], :y 649.0857461774026, :x 637.5948581139082, :pbest [1 1]}

;; (facts "Particle XY updating"
;;        (update-particle-xy [0 0] {:pbest [0 0] :x 1 :y 1})
;;        =>
;;        )
