Artificial bee colony algorithm
In the simulations, three feed-forward neural networks were used: a 2-2-1 network with six connection weights and no biases (six parameters, XOR6), a 2-2-1 network with six connection weights and three biases (nine parameters, XOR9), and a 2-3-1 network with nine connection weights and four biases (thirteen parameters, XOR13).
Table 1 reports the mean MSE over 30 runs of each configuration for ABC and for the standard Particle Swarm Optimization (PSO) algorithm (); each run was started from a random population with a different seed. The population size, SN, was set to 50 and the limit value to SN*n, where n is the dimension of the weight set.
Table 1: Mean MSE of 30 runs of ABC and standard PSO algorithms in the ANN training process

        XOR6      XOR9      XOR13
ABC     0.007051  0.006956  0.006079
PSO     0.097255  0.057676  0.014549
From the results presented in Table 1, it is clear that the basic ABC algorithm is more successful than the standard PSO at training neural networks on the XOR benchmark problem in all cases ().
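The fitness used in such training is the network's mean squared error over the four XOR patterns. A minimal sketch for the XOR9 configuration (a 2-2-1 network with sigmoid units; the parameter layout and names are illustrative assumptions, not the authors' implementation) is:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: (inputs, target output)
PATTERNS = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def xor9_mse(w):
    """MSE of a 2-2-1 sigmoid network; w holds 6 weights + 3 biases.

    Assumed layout (for illustration only):
    w[0:4] input->hidden weights, w[4:6] hidden->output weights,
    w[6:8] hidden biases,         w[8]   output bias.
    """
    err = 0.0
    for (x1, x2), t in PATTERNS:
        h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[6])
        h2 = sigmoid(w[2] * x1 + w[3] * x2 + w[7])
        y = sigmoid(w[4] * h1 + w[5] * h2 + w[8])
        err += (y - t) ** 2
    return err / len(PATTERNS)

# A hand-picked weight set (h1 acts as OR, h2 as AND, and the output
# computes OR AND NOT AND) drives the error close to zero:
good = [20, 20, 20, 20, 20, -20, -10, -30, -10]
```

In ABC training, `xor9_mse` would serve as the objective function minimized over the nine-dimensional weight vector, with each food source encoding one candidate weight set.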
A Constrained Optimization Problem: Welded Beam Design

The welded beam design is a real-life application problem. As illustrated in Fig. 2, the problem consists in dimensioning a welded steel beam and the welding length so as to minimize its cost subject to constraints on shear stress, \(\tau\ ,\) bending stress in the beam, \(\sigma\ ,\) buckling load on the bar, \(P_c\ ,\) end deflection of the beam, \(\delta\ ,\) and side constraints. There are four design variables\[ x_1, x_2, x_3, x_4\ ,\] which in structural engineering are commonly symbolized by the letters shown in Fig. 2 (\( h, l, t, b\)). Structural analysis of this beam leads to the following nonlinear objective function subject to five nonlinear and two linear inequality constraints as given below:
\[\tag{10} \min f(X) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14.0 + x_2) \]
subject to
\[
\begin{array}{l}
g_1(X):\ \tau(x) - \tau_{\max} \le 0 \\
g_2(X):\ \sigma(x) - \sigma_{\max} \le 0 \\
g_3(X):\ x_1 - x_4 \le 0 \\
g_4(X):\ 0.10471x_1^2 + 0.04811x_3 x_4 (14.0 + x_2) - 5.0 \le 0 \\
g_5(X):\ 0.125 - x_1 \le 0 \\
g_6(X):\ \delta(x) - \delta_{\max} \le 0 \\
g_7(X):\ P - P_c(x) \le 0
\end{array}
\]
The optimum solution lies on the boundary of the feasible region, and the ratio of the feasible region to the entire search space is quite small, which makes this a genuinely difficult problem for any optimization algorithm.
Generally, a constraint handling technique must be incorporated into optimization algorithms originally proposed for unconstrained problems. Therefore, to handle the constraints of this problem, the ABC algorithm employs Deb's rules in place of the greedy selection between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\) used in the version of ABC for unconstrained optimization (). Deb's method uses a tournament selection operator, in which two solutions are compared at a time according to the following criteria ():
1. Any feasible solution satisfying all constraints is preferred to any infeasible solution violating any of the constraints,
2. Among two feasible solutions, the one having the better fitness value is preferred,
3. Among two infeasible solutions, the one having the smaller constraint violation is preferred.
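These three criteria can be expressed as a single comparison routine. The sketch below assumes each candidate carries its objective value and the vector of its constraint values \(g_i(x)\); function and variable names are illustrative:

```python
def total_violation(g_values):
    """Sum of constraint violations; g_i(x) <= 0 means the constraint
    is satisfied, so only positive g values contribute."""
    return sum(max(0.0, g) for g in g_values)

def deb_better(f_a, g_a, f_b, g_b):
    """Return True if solution A beats solution B in the tournament
    under Deb's rules (minimization problem)."""
    va, vb = total_violation(g_a), total_violation(g_b)
    if va == 0.0 and vb > 0.0:   # rule 1: feasible beats infeasible
        return True
    if va > 0.0 and vb == 0.0:
        return False
    if va == 0.0 and vb == 0.0:  # rule 2: better objective wins
        return f_a < f_b
    return va < vb               # rule 3: smaller violation wins
```

In constrained ABC, a comparison of this form replaces the greedy selection between the mutant solution \(\vec{\upsilon_{m}}\) and the current solution \(\vec{x_{m}}\).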
Table 2 presents the values of the variables and the constraints for the optimum solution found by ABC. Table 3 shows the summary statistics obtained from 30 runs of the ABC algorithm as compared to those of the \((\mu+\lambda)\)-ES method (). The low mean and standard deviation values show the robustness of the ABC algorithm.
Table 2: Parameter and constraint values of the best solution obtained by the ABC algorithm

x1       x2        x3        x4       g1  g2      g3  g4        g5        g6        g7  f(x)
0.20573  3.470489  9.036624  0.20573  0   -2E-06  0   -3.43298  -0.08073  -0.23554  0   1.724852
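As a consistency check, the expressions given explicitly above (the objective of Eq. (10) and the closed-form constraints \(g_3\)–\(g_5\)) can be evaluated at the Table 2 solution; a minimal sketch (function names are illustrative):

```python
def welded_beam_objective(x1, x2, x3, x4):
    """Cost function of Eq. (10)."""
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def g3(x1, x2, x3, x4):
    return x1 - x4

def g4(x1, x2, x3, x4):
    return 0.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0

def g5(x1, x2, x3, x4):
    return 0.125 - x1

# Best ABC solution from Table 2
x = (0.20573, 3.470489, 9.036624, 0.20573)
print(welded_beam_objective(*x))  # close to the tabulated 1.724852
print(g4(*x))                     # close to the tabulated -3.43298
```

The remaining constraints \(g_1, g_2, g_6, g_7\) require the structural formulas for \(\tau\), \(\sigma\), \(\delta\) and \(P_c\), which are not reproduced here.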
Table 3: Statistics of the results obtained by the ABC and (\(\mu+\lambda\))-ES algorithms

                      Best      Mean      Std. Dev.  Evaluations
ABC                   1.724852  1.741913  3.1E-02    30000
(\(\mu+\lambda\))-ES  1.724852  1.777692  8.8E-02    30000
Other Approaches Inspired from Honeybee Foraging Behaviour

Approaches for Combinatorial Optimization