allel evolutionary algorithms for MOPs.
2.1 Particle Swarm Optimisation for MOPs
The Particle Swarm Optimisation (PSO) method is a well-known optimization algorithm in the area of biologically inspired computing. The algorithm mimics the flying behaviour of flocks of birds, [3]. Each individual (i.e. particle) updates its location in the search space using information transmitted by other particles.
It can be used to optimise a general function F(x), where x is a possible solution vector in a multidimensional space. Here, a group of individuals (potential solutions of F(x), called boids) continuously update their positions to reach the required destination. The goal is to reach the global minimum using the data received from other particles.
In this work, the classical PSO is integrated with a weight factor for restricting the velocity, [5]. The formulas shown below are used to update the velocity and position of the nth particle:
V^n_{i+1} = w V^n_i + c_1 r_1 (P^n_L - x^n_i) + c_2 r_2 (P_g - x^n_i),   (1)

and

x^n_{i+1} = x^n_i + \alpha V^n_{i+1}.   (2)
Here, α is the time step; P^n_L is the local best vector for the nth particle, and P_g is the global best vector for all particles; w is the inertia weight factor; c_1 and c_2 are social factors; r_1 and r_2 are random numbers between 0 and 1.
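As an illustration, a minimal Python sketch of the update rules in Eqs. (1) and (2) for a whole swarm might look as follows (the array layout and parameter defaults are ours, not taken from the paper):

import numpy as np

def pso_step(x, v, p_local, p_global, w=0.4, c1=1.5, c2=1.5, alpha=1.0, rng=None):
    """One velocity/position update per Eqs. (1)-(2).

    x, v:     (n_particles, dim) current positions and velocities
    p_local:  (n_particles, dim) per-particle best positions (P^n_L)
    p_global: (dim,)             swarm-wide best position (P_g)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # r1, r2 drawn uniformly in [0, 1]
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_local - x) + c2 * r2 * (p_global - x)  # Eq. (1)
    x_new = x + alpha * v_new                                           # Eq. (2)
    return x_new, v_new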
Single-objective PSOs have been successfully extended to multi-objective optimization problems by: (i) adding an external archive for collecting non-dominated points (a minimal archive-update sketch is given after this list); (ii) adding a turbulence operator such as the highly disruptive polynomial mutation; (iii) replacing the global best particle with a set of leaders taken from an external archive; and (iv) managing the external archive using an update mechanism and limiting its size using crowding or clustering.
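As a rough illustration of the first ingredient above, a Pareto-dominance check and archive update could be sketched as follows; the function names are ours, and crowding-based truncation is only indicated by a simple size cap:

import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimisation)."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def update_archive(archive, candidate, max_size=100):
    """Insert a (solution, objectives) pair, keeping only non-dominated entries."""
    x_c, f_c = candidate
    if any(dominates(f, f_c) for _, f in archive):
        return archive  # candidate is dominated: discard it
    archive = [(x, f) for x, f in archive if not dominates(f_c, f)]
    archive.append((x_c, f_c))
    if len(archive) > max_size:  # a full implementation would truncate
        archive.pop(0)           # by crowding distance or clustering here
    return archive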
2.2 The SMPSO Algorithm
SMPSO, [6], stands for Speed-constrained Multiobjective PSO. It is a state-of-the-art algorithm in the area of evolutionary multiobjective optimization. Its main features are: the use of a constriction coefficient to control the particle's velocity; the use of velocity constriction; the use of polynomial mutation rather than uniform/non-uniform mutation; and different ranges for c1 and c2 (both in [1.5, 2.5]) with the inertia weight set to 0.1.
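A compact sketch of the two speed-control ingredients named above, following our reading of the SMPSO paper [6], is given below: a Clerc-Kennedy style constriction factor computed from c1 + c2, and clamping of each velocity component to half the range of its decision variable. The exact formulation in [6] may differ in detail; variable names are ours.

import numpy as np

def constriction_coefficient(c1, c2):
    """Constriction factor; only applied when c1 + c2 exceeds 4."""
    phi = c1 + c2
    if phi <= 4.0:
        return 1.0
    return 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))

def constrict_velocity(v, lower, upper):
    """Clamp each velocity component to half the range of its variable."""
    delta = (upper - lower) / 2.0
    return np.clip(v, -delta, delta)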
2.3 Parallelisation of Evolutionary Algorithms
A number of approaches, [7], [8], [9], [10], [11], [12], have been proposed and carried out in the past regarding how to parallelize evolutionary algorithms. The main approaches are the master-worker, island, diffusion and hybrid models. The master-worker model (also called global parallelization) simply decodes and evaluates the fitness function of each individual of the population on the slaves. In the island model, the main population is divided into sub-populations (or demes) which are placed on different nodes. Each node runs an EA on its own sub-population as usual. It has two variants: with or without migration (i.e. sending individuals to other nodes). A similar approach is called cross-pollination, as in [13].
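The master-worker model maps naturally onto a process pool. The sketch below (function names and the toy fitness are placeholders of ours) distributes fitness evaluation to worker processes while the master keeps the rest of the EA loop:

from multiprocessing import Pool
import numpy as np

def evaluate(individual):
    """Placeholder fitness: decode and evaluate one individual (here, sphere)."""
    x = np.asarray(individual)
    return float(np.sum(x * x))

def master_worker_evaluate(population, workers=4):
    """Master distributes evaluations to worker processes and gathers results."""
    with Pool(processes=workers) as pool:
        return pool.map(evaluate, population)

if __name__ == "__main__":
    pop = np.random.uniform(-5, 5, size=(20, 10))
    fitness = master_worker_evaluate(list(pop))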
In the previous two approaches the parallelization is at the level of the population. In the diffusion model, [10], (also called the pollination model, [14], fine-grained parallel evolutionary algorithms, or the neighborhood model) the parallelism is exploited at the level of the individuals. Here, each individual is placed on a processor and forms a neighborhood structure with a few individuals from its local environment.
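To illustrate the neighborhood structure used in the diffusion model, a toy sketch placing individuals on a 2-D toroidal grid and interacting only within the von Neumann neighborhood could look like this (the grid layout and neighborhood shape are our choices, not prescribed by the cited works):

def von_neumann_neighbours(i, j, rows, cols):
    """Indices of the four grid neighbours of cell (i, j) on a torus."""
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

# Each individual lives in one cell of a 4x5 grid and only performs
# selection and recombination with its local neighbours.
rows, cols = 4, 5
neighbourhood = {(i, j): von_neumann_neighbours(i, j, rows, cols)
                 for i in range(rows) for j in range(cols)}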
It is possible to mix two or more approaches in a hybrid parallel evolutionary algorithm, as in [15]. An interesting work in this area is that of [16] (a system called pMOHypEA), where it is possible to structure the population anywhere from coarse-grained island models to fine-grained diffusion models using the idea of hypergraphs.
Every two generations (as set in the experiments), the local subpopulations on the remote processors are gathered by the master processor, clustered into k subpopulations using the clustering centroids, and redistributed to the remote processors. The clustering is applied to the current Pareto front and has two modes: 1) clustering the search space, and 2) clustering the objective space. Also, zone constraints are added to every processor using the constrained dominance principle, [17], to limit the subpopulations to their specific partition. However, it is possible to mark individuals as invalid in case they are assigned to a centroid other than that of the subpopulation they belong to. From the results obtained by the experiments, it is clear that objective-space clustering outperforms search-space clustering. Additionally, when the zone constraints are deactivated the results improve, which means that each processor can explore and exploit the entire objective space and processors can overlap considerably. Also, the experiments use a total population size of 600 individuals, which might be needed for the clustering approach to work. No computational speedup results were reported; we think the process of gathering, clustering, and redistributing the results could hinder the computational performance significantly. More on parallelisation approaches can be found in [18], [19], [20].
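A hedged sketch of the gather/cluster/redistribute step described above, using k-means on the objective vectors of the gathered front, is given below. The library choice and function names are ours; [16] does not prescribe this particular implementation.

import numpy as np
from sklearn.cluster import KMeans

def redistribute_by_objective_clustering(solutions, objectives, k):
    """Cluster the gathered front in objective space and split it into k subpopulations.

    solutions:  (n, dim) array of decision vectors gathered from all processors
    objectives: (n, m)   array of corresponding objective vectors
    """
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(objectives)
    return [solutions[labels == c] for c in range(k)]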