
Fig. 6.3. Implementation of QGWO.

6.3 Quantum Grey Wolf Optimizer

A quantum-based search process by an agent (instead of the Newtonian random walk) leads to the discovery of effective optimal positions in the search space. This is possible because, with a certain probability, some search agents can appear far away from a given point.

Initially, Sun et al. [97, 98] introduced quantum-behaved Particle Swarm Optimization (QPSO) by imposing the rules of quantum mechanics, rather than classical Newtonian dynamics, on traditional PSO. They assumed that particles fly in a D-dimensional Hilbert space rather than Newtonian space, with a quantum potential field that ensures a bound state, thereby avoiding explosion and guaranteeing convergence. The superior results of QPSO compared to PSO [99] inspired us to develop a quantum version of the grey wolf optimizer to accelerate its search capability and convergence. Recently, a quantum-based framework called QI-BGWO has been applied in the binary domain to solve the unit commitment (UC) problem [100].

The mathematical formulation of the traditional GWO achieves the convergence of the algorithm, and it depends on each wolf's position, which is updated by the α, β and δ wolves according to the following final update equation:

⃗X(t+1) = ⃗X_av(t) − (1/3) · (⃗A_α·⃗D_α + ⃗A_β·⃗D_β + ⃗A_δ·⃗D_δ)   (6.7)

where ⃗X_av(t) = (⃗X_α + ⃗X_β + ⃗X_δ)/3 is the mean position of the three best wolves α, β and δ. This means the local attraction of a grey wolf depends on the average of the three best wolves as well as on their random individual positions. The local movement of each grey wolf is defined as

G_{m,t} = (g^1_{m,t}, g^2_{m,t}, …, g^D_{m,t}), where t denotes the iteration and D is the dimension (number of variables) of the problem, for the wolf population m = 1, 2, …, M. Hence, the position of the local attractor for the m-th wolf and d-th dimension at iteration t is written as

g^d_{m,t} = A_α · X^d_av(t) + (1 − C_α) · X^d_α(t) + (1 − C_β) · X^d_β(t) + (1 − C_δ) · X^d_δ(t)   (6.8)

where X^d_av(t) is the mean position of the three best wolves α, β and δ for the d-th dimension. The parameters A_α and C_k are defined as in the original GWO. The term (1 − C_k) in Eq. (6.8) is included to move the corresponding wolf in a random negative or positive direction according to the following criterion:

1 − C_k =  +ve, if rand < 0.5
           0,   if rand = 0.5
           −ve, if rand > 0.5   (6.9)
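As a concrete illustration, the classical update of Eq. (6.7) and the local attractor of Eqs. (6.8)-(6.9) can be sketched in Python. This is only a sketch: the function names, the NumPy generator and the per-dimension sampling of A and C (as in the original GWO, where C = 2·rand) are illustrative assumptions, not part of the text above.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative generator; any uniform source works

def gwo_update(X, X_alpha, X_beta, X_delta, a):
    """Classical GWO step, Eq. (6.7): X(t+1) = X_av - (1/3) * sum_k A_k * D_k."""
    X_av = (X_alpha + X_beta + X_delta) / 3.0       # mean of the three leaders
    new_X = X_av.copy()
    for X_lead in (X_alpha, X_beta, X_delta):
        A = 2.0 * a * rng.random(X.size) - a        # A in [-a, a], as in GWO
        C = 2.0 * rng.random(X.size)                # C in [0, 2]
        D = np.abs(C * X_lead - X)                  # distance to that leader
        new_X -= A * D / 3.0
    return new_X

def local_attractor(X_alpha, X_beta, X_delta, a):
    """Local attractor g of Eq. (6.8); since C_k = 2*rand, the factor (1 - C_k)
    is positive for rand < 0.5 and negative for rand > 0.5, i.e. Eq. (6.9)."""
    X_av = (X_alpha + X_beta + X_delta) / 3.0
    A_alpha = 2.0 * a * rng.random(X_av.size) - a
    g = A_alpha * X_av                              # A_alpha * X_av term
    for X_lead in (X_alpha, X_beta, X_delta):
        g += (1.0 - 2.0 * rng.random(X_av.size)) * X_lead  # (1 - C_k) * X_k
    return g
```

Note that with a = 0 all the A coefficients vanish, so Eq. (6.7) reduces to the pure mean position of the three leaders.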

When the grey wolves converge to their own local attractors, their current positions, the α, β and δ positions, and their average positions all converge to a single point, which leads to the global convergence of GWO. From the concept of quantum mechanics, a wave function Ψ(X, t) is needed to define the quantum state of each wolf (which depends on the wolf's position). Its squared amplitude |Ψ(X)|² represents the probability measure for the wolf's position appearing at X. The simplest delta potential well, centered at g, is considered for the movement of the wolves in a bound state within the search space. For a one-dimensional Hilbert space (D = 1), where the position of the wolf is denoted X and g_{m,t} is denoted g, the potential field is represented by [98]

V(X) = −γ · δ(X − g) = −γ · δ(Y)   (6.10)

where Y = X − g and γ is the depth of the potential well; the well is infinitely deep at the origin and zero elsewhere. The mechanism used to implement QGWO is shown in Fig. 6.3 for a D-dimensional Hilbert space, where each dimension of a wolf's position is bounded by a delta potential well and updated independently. Consider the wolves' positions, their local attractors, the characteristic length L of the delta potential well and the random variable R at iteration t, as in the update equation derived for QPSO in [98]. It is used to measure the d-th (1 ≤ d ≤ D) dimension of the m-th (1 ≤ m ≤ M) wolf's position at the (t+1)-th iteration as follows:

X^d_{m,t+1} = A_α · g^d_{m,t} ± (L^d_{m,t}/2) · ln(1/R^d_{m,t+1})   (6.11)


Fig. 6.4. Flowchart of Quantum behaved Grey Wolf Optimizer.

where R^d_{m,t+1} is a sequence of random numbers uniformly distributed on (0, 1), varying with t for each m and d. The value of L^d_{m,t} is defined as

L^d_{m,t} = 2 · a · |X^d_{m,t} − W^d_t|   (6.12)

where W_t = (W^1_t, W^2_t, …, W^D_t) is known as the mean position of the grey wolves at iteration t, that is,

W^d_t = (1/M) · Σ_{m=1}^{M} X^d_{m,t}   (6.13)

Hence, the final position update equation of QGWO is:

X^d_{m,t+1} = A_α · g^d_{m,t} ± a · |X^d_{m,t} − W^d_t| · ln(1/R^d_{m,t+1})   (6.14)

where

a = 0.5 − 1 × [t / MItr]   (6.15)

and MItr denotes the maximum number of iterations. Eq. (6.14) is considered the standard form of the QGWO algorithm. Here, the term g^d_{m,t} depends on the α, β and δ wolves and their mean position. The parameter A_α is used to exploit more in the direction of the α wolf, and its value depends on a, which can be adjusted to balance the local and global search of the algorithm. In the present work, a is varied linearly from 0.5 to −0.5 over the course of the iterations, taking positive values in the first half and negative values in the second half, as per Eq. (6.15). The flowchart of the proposed QGWO is shown in Fig. 6.4.
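A minimal sketch of one QGWO position update, combining Eqs. (6.13)-(6.15), assuming the local attractors g have already been computed via Eq. (6.8). The array shapes, the per-dimension sampling of A_α, and the coin-flip for the ± sign are assumptions of this sketch, not prescriptions from the derivation above.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative random source

def qgwo_step(X, g, t, max_iter):
    """One QGWO update, Eq. (6.14), for all M wolves and D dimensions.

    X : (M, D) positions at iteration t;  g : (M, D) local attractors (Eq. 6.8).
    """
    M, D = X.shape
    a = 0.5 - 1.0 * t / max_iter                   # Eq. (6.15): 0.5 -> -0.5
    A_alpha = 2.0 * a * rng.random((M, D)) - a     # exploitation toward the alpha wolf
    W = X.mean(axis=0)                             # Eq. (6.13): pack mean position
    R = 1.0 - rng.random((M, D))                   # uniform on (0, 1], keeps log finite
    sign = np.where(rng.random((M, D)) < 0.5, 1.0, -1.0)  # the +/- of Eq. (6.14)
    return A_alpha * g + sign * a * np.abs(X - W) * np.log(1.0 / R)
```

The characteristic length L^d_{m,t} of Eq. (6.12) appears here implicitly: a·|X − W| is exactly L/2 with the factor 2 cancelled against the division in Eq. (6.11).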

6.3.1 Simulation and comparative analysis of QGWO for benchmark function optimization

The performance of QGWO is analyzed and compared with the traditional GWO on 24 benchmark unimodal and multimodal functions, comprising fixed-dimension (2D) and multi-dimensional problems.

The information on these functions is given in Appendix B, together with their pictorial views. The convergence curve for each function is plotted with 30 wolves as the population size (M) and 100 iterations for both QGWO (in red) and GWO (in blue) in Fig. 6.5. For most of the functions, QGWO takes much less time to reach the global optimum (minimum fitness value) than the original GWO, as observed from the figures. This means the leaders in QGWO converge quickly and avoid local optima and stagnation in most cases, apart from the Egg Holder (F5) and Rosenbrock (F18) functions (fast convergence, but the value of the global minimum is high compared to GWO). The mean best fitness values for functions F1-F24 (average of α over 50 runs) obtained with PSO, GWO and the proposed QGWO are shown in Table 6.1. The best fitness value for each function is indicated in bold (obtained by QGWO); this shows the superiority of the proposed algorithm over PSO and GWO in reaching the least value (the global optimum for most functions) within the confined search space.
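Putting the pieces together, an end-to-end run of the update rules above on the Sphere function can be sketched as follows. The population size, search bounds, boundary clipping, and the independent resampling of A_α in Eqs. (6.8) and (6.14) are assumptions of this sketch rather than choices stated in the text.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative random source

def sphere(x):
    return float(np.sum(x * x))  # global minimum 0 at the origin

def qgwo(obj, dim=10, n_wolves=30, max_iter=100, lb=-100.0, ub=100.0):
    """Minimal QGWO loop following Eqs. (6.8), (6.9), (6.13)-(6.15)."""
    X = rng.uniform(lb, ub, (n_wolves, dim))
    fitness = np.array([obj(x) for x in X])
    best = fitness.min()
    for t in range(max_iter):
        order = np.argsort(fitness)
        Xa, Xb, Xd = X[order[0]], X[order[1]], X[order[2]]  # alpha, beta, delta
        a = 0.5 - 1.0 * t / max_iter                        # Eq. (6.15)
        X_av = (Xa + Xb + Xd) / 3.0
        W = X.mean(axis=0)                                  # Eq. (6.13)
        # Local attractor, Eq. (6.8); C_k = 2*rand makes (1 - C_k) follow Eq. (6.9)
        g = (2.0 * a * rng.random((n_wolves, dim)) - a) * X_av
        for X_lead in (Xa, Xb, Xd):
            g += (1.0 - 2.0 * rng.random((n_wolves, dim))) * X_lead
        # Quantum update, Eq. (6.14)
        A_alpha = 2.0 * a * rng.random((n_wolves, dim)) - a
        R = 1.0 - rng.random((n_wolves, dim))               # uniform on (0, 1]
        sign = np.where(rng.random((n_wolves, dim)) < 0.5, 1.0, -1.0)
        X = A_alpha * g + sign * a * np.abs(X - W) * np.log(1.0 / R)
        X = np.clip(X, lb, ub)                              # keep wolves in bounds
        fitness = np.array([obj(x) for x in X])
        best = min(best, fitness.min())
    return best
```

A call such as `qgwo(sphere, dim=10, n_wolves=30, max_iter=100)` mirrors the experimental setup described here (M = 30, 100 iterations); any of the benchmark functions from Appendix B could be substituted for `sphere`.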

The performance comparison of QGWO is carried out with PSO and the quantum versions of PSO given in Ref. [97] on a suite of seven benchmarks: Schaffer's function (F7), the Griewank

[Fig. 6.5 panels, fitness value vs. iteration (100 iterations), GWO vs. QGWO: (a) Bukin N.6, (b) Cross-in-Tray, (c) De Jong, (d) Drop Wave, (e) Egg Holder, (f) Goldstein-Price, (g) Schaffer N.1, (h) Schaffer N.2, (i) Three-Hump Camel, (j) Six-Hump Camel, (k) Ackley, (l) Alpine, (m) Griewank, (n) Hyper-Ellipsoid, (o) Powell Sum, (p) Quartic, (q) Rastrigin, (r) Rosenbrock, (s) Schwefel 2.20, (t) Schwefel 2.21, (u) Schwefel 2.22, (v) Schwefel 2.23, (w) Sphere, (x) Xin-She Yang N]

Fig. 6.5. Convergence of QGWO and its comparison with GWO on benchmark functions.

function (F13), the De Jong function with no noise (F'16), Rastrigin (F17), Rosenbrock (F18), a Rosenbrock variant (F18 with d = 2) and the Sphere function (F23). The average best fitness is

Table 6.1 Best mean fitness values over 100 trial runs of different algorithms.

Function   SPSO [101]   GWO [94]    QGWO
F1         0.01000      0.118       0.100
F2         -2.06261     -2.0626     -2.0626
F3         2.982        2.982       2.982
F4         -1           -1          -1
F5         -715.980     -9.60E+02   -9.34E+02
F6         2.999        3.0000      3.0000
F7         0            0           0
F8         0.010        0           0
F9         1.336E-47    2.10E-207   0
F10        -1.0316      -1.0316     -1.0316
F11        2.0133       1.00E-13    8.88E-16
F12        0.0012       1.067E-19   0
F13        4.141E-07    0           0
F14        1.223E-07    1.38E-29    0
F15        3.623E-29    3.25E-103   0
F16        0.0324       1.158E-51   0
F17        35.818       5.155       0
F18        25.109       27.106      28.77
F19        0.0059       1.915E-16   0
F20        2.83         1.75E-07    0
F21        0.0487       1.75E-15    0
F22        5.142E-21    4.173E-98   0
F23        2.12E-09     2.00E-29    0
F24        -1           -1          -1

calculated after 50 trial runs for each, taking population sizes of 20, 40 and 80 for 1000, 1500 and 2000 iterations, with dimensions 10, 20 and 30 (for the first five functions; 2 for the last two functions). The fitness values reported for each function in Table 6.2 indicate that the proposed QGWO outperforms the others for most of the functions. PSO and its variants perform better than GWO and QGWO on the Rosenbrock variant function. Compared to all earlier versions of quantum-inspired PSO algorithms and the original GWO, the results of the proposed QGWO do not deviate as the dimension increases.