Orthogonal learning particle swarm optimization

Zhi-Hui Zhan, Jun Zhang, Yun Li, Yu-Hui Shi

Research output: Contribution to journal › Article

449 Citations (Scopus)

Abstract

Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
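To make the idea in the abstract concrete, the sketch below illustrates how an orthogonal experimental design (OED) can combine a particle's personal best and its neighborhood (or global) best into a single exemplar vector, dimension by dimension. It is a minimal illustration written for this record, not the authors' reference code: the two-level orthogonal array construction, the factor analysis step, and the helper names (two_level_orthogonal_array, ol_exemplar, sphere) follow the general OED literature and are assumptions rather than the paper's exact formulation.

```python
"""Minimal sketch (assumption, not the authors' code) of OED-based exemplar
construction for orthogonal learning PSO: mix a particle's personal best P and
its neighborhood/global best G per dimension using a two-level orthogonal array,
then refine the mix with factor analysis."""
import math
import numpy as np


def two_level_orthogonal_array(num_factors: int) -> np.ndarray:
    """Return an L_M(2^(M-1)) two-level orthogonal array truncated to `num_factors` columns.

    Entry (a, b) is the parity of popcount(a & b) (Sylvester/Hadamard construction),
    giving balanced, pairwise-orthogonal columns with levels {0, 1}.
    """
    m = 1 << math.ceil(math.log2(num_factors + 1))  # smallest 2^u with m - 1 >= num_factors
    oa = np.array([[bin(a & b).count("1") % 2 for b in range(1, m)] for a in range(m)])
    return oa[:, :num_factors]


def ol_exemplar(pbest, guide, fitness):
    """Build an orthogonal-learning exemplar from `pbest` and `guide`.

    Level 0 of factor d means "take dimension d from pbest"; level 1 means
    "take it from guide".  `fitness` is minimized.
    """
    pbest, guide = np.asarray(pbest, float), np.asarray(guide, float)
    d = pbest.size
    oa = two_level_orthogonal_array(d)            # shape (M, d)
    trials = np.where(oa == 0, pbest, guide)      # M candidate combinations
    f = np.array([fitness(t) for t in trials])

    # Factor analysis: per dimension, keep the level with the better mean fitness.
    best_levels = np.empty(d, dtype=int)
    for j in range(d):
        mean0 = f[oa[:, j] == 0].mean()
        mean1 = f[oa[:, j] == 1].mean()
        best_levels[j] = 0 if mean0 <= mean1 else 1
    predicted = np.where(best_levels == 0, pbest, guide)

    # Exemplar = better of the best tested combination and the predicted one.
    best_trial = trials[np.argmin(f)]
    return predicted if fitness(predicted) < f.min() else best_trial


if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))      # toy minimization problem
    p = np.array([0.1, 2.0, -0.3, 1.5, 0.0])      # hypothetical personal best
    g = np.array([1.0, 0.2, 0.4, -0.1, 0.9])      # hypothetical neighborhood best
    e = ol_exemplar(p, g, sphere)
    print("exemplar:", e, "fitness:", sphere(e))
```

In a full OLPSO-style implementation, this exemplar would presumably replace the separate personal-best and neighborhood-best attraction terms in the velocity update and be reconstructed only occasionally (e.g., when it stops improving the particle); those scheduling details are not specified here and would follow the paper.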

Language: English
Pages: 832-847
Number of pages: 16
Journal: IEEE Transactions on Evolutionary Computation
Volume: 15
Issue number: 6
Early online date: 2 Sep 2010
DOI: 10.1109/TEVC.2010.2052054
Publication status: Published - 31 Dec 2011

Keywords

  • global optimization
  • orthogonal experimental design (OED)
  • orthogonal learning particle swarm optimization (OLPSO)
  • particle swarm optimization (PSO)
  • swarm intelligence

Cite this

Zhan, Zhi-Hui; Zhang, Jun; Li, Yun; Shi, Yu-Hui. / Orthogonal learning particle swarm optimization. In: IEEE Transactions on Evolutionary Computation. 2011; Vol. 15, No. 6. pp. 832-847.
@article{0eae423a78dd4778bae70cd1700b1e1f,
title = "Orthogonal learning particle swarm optimization",
abstract = "Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we proposes an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO as orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a much promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. This new learning strategy and the new algorithms are tested on a set of 16 benchmark functions, and are compared with other PSO algorithms and some state of the art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.",
keywords = "global optimization, orthogonal experimental design (OED), orthogonal learning particle swarm optimization (OLPSO), particle swarm optimization (PSO), swarm intelligence",
author = "Zhi-Hui Zhan and Jun Zhang and Yun Li and Yu-Hui Shi",
year = "2011",
month = "12",
day = "31",
doi = "10.1109/TEVC.2010.2052054",
language = "English",
volume = "15",
pages = "832--847",
journal = "IEEE Transactions on Evolutionary Computation",
issn = "1089-778X",
number = "6",

}

TY - JOUR

T1 - Orthogonal learning particle swarm optimization

AU - Zhan, Zhi-Hui

AU - Zhang, Jun

AU - Li, Yun

AU - Shi, Yu-Hui

PY - 2011/12/31

Y1 - 2011/12/31

N2 - Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.

AB - Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.

KW - global optimization

KW - orthogonal experimental design (OED)

KW - orthogonal learning particle swarm optimization (OLPSO)

KW - particle swarm optimization (PSO)

KW - swarm intelligence

UR - http://www.scopus.com/inward/record.url?scp=82455205925&partnerID=8YFLogxK

UR - https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4235

U2 - 10.1109/TEVC.2010.2052054

DO - 10.1109/TEVC.2010.2052054

M3 - Article

VL - 15

SP - 832

EP - 847

JO - IEEE Transactions on Evolutionary Computation

T2 - IEEE Transactions on Evolutionary Computation

JF - IEEE Transactions on Evolutionary Computation

SN - 1089-778X

IS - 6

ER -