\[\tag{6} \upsilon_{mi} = x_{mi} + \phi_{mi} \left( x_{mi} - x_{ki} \right)\]
where \(\vec{x_k}\) is a randomly selected food source, \(i\) is a randomly chosen parameter index and \(\phi_{mi}\) is a random number within the range \([-a,a]\ .\) After producing the new food source \(\vec {\upsilon_m}\ ,\) its fitness is calculated and a greedy selection is applied between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\ .\)
The fitness value of the solution \(fit_m(\vec{x_m})\) may be calculated for minimization problems using the following formula:
\[\tag{7} fit_m(\vec{x_m}) = \begin{cases} \dfrac{1}{1 + f_m(\vec{x_m})} & {\rm if}~~f_m(\vec{x_m}) \ge 0 \\[4pt] 1 + \left| f_m(\vec{x_m}) \right| & {\rm if}~~f_m(\vec{x_m}) < 0 \end{cases}\]
where \(f_m(\vec{x_m})\) is the objective function value of solution \(\vec{x_m}\ .\)
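As a concrete illustration, the employed bee step of equations (6) and (7) might be sketched in Python as follows. This is a minimal sketch, not reference code: the function names, the NumPy dependency and the greedy-selection helper are our own assumptions.

```python
import numpy as np

def neighbourhood_source(foods, m, a=1.0, rng=np.random.default_rng()):
    """Produce a candidate v_m from x_m (equation (6)): perturb one randomly
    chosen parameter i towards/away from a randomly chosen partner x_k."""
    SN, D = foods.shape
    k = rng.choice([j for j in range(SN) if j != m])  # random partner, k != m
    i = rng.integers(D)                               # random parameter index
    phi = rng.uniform(-a, a)                          # phi_mi in [-a, a]
    v = foods[m].copy()
    v[i] = foods[m, i] + phi * (foods[m, i] - foods[k, i])
    return v

def fitness(f_value):
    """Map an objective value to a fitness for minimization (equation (7))."""
    return 1.0 / (1.0 + f_value) if f_value >= 0 else 1.0 + abs(f_value)

def greedy_select(x, v, f):
    """Greedy selection: keep whichever of x and v has the higher fitness."""
    return v if fitness(f(v)) > fitness(f(x)) else x
```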
Unemployed bees fall into two groups: onlooker bees and scouts. Employed bees share their food source information with onlooker bees waiting in the hive, and the onlooker bees then probabilistically choose their food sources depending on this information. In ABC, an onlooker bee chooses a food source depending on probability values calculated from the fitness values provided by the employed bees. For this purpose, a fitness-based selection technique can be used, such as the roulette wheel selection method.
The probability value \(p_m\) with which \(\vec{x_m}\) is chosen by an onlooker bee can be calculated using the expression given in equation (8):
\[\tag{8} p_m = \frac{{fit_m(\vec{x_m}) }}{{\sum\limits_{m = 1}^{SN} {fit_m (\vec{x_m})} }} \ .\]
After a food source \(\vec{x_m}\) for an onlooker bee is probabilistically chosen, a neighbourhood source \(\vec{\upsilon_m}\) is determined by using equation (6), and its fitness value is computed. As in the employed bees phase, a greedy selection is applied between \(\vec{\upsilon_{m}}\) and \(\vec{x_{m}}\ .\) Hence, more onlookers are recruited to richer sources and positive feedback behaviour emerges.
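Equation (8) and the roulette wheel selection used by the onlookers might look as follows in the same sketch style (again an illustrative assumption rather than reference code):

```python
import numpy as np

def selection_probabilities(fitnesses):
    """Equation (8): each source's probability is its share of total fitness."""
    fitnesses = np.asarray(fitnesses, dtype=float)
    return fitnesses / fitnesses.sum()

def roulette_wheel(fitnesses, rng=np.random.default_rng()):
    """Pick a food source index m with probability p_m."""
    p = selection_probabilities(fitnesses)
    return int(rng.choice(len(p), p=p))
```

Each onlooker would call roulette_wheel and then apply the same neighbourhood search and greedy selection as an employed bee.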
The unemployed bees who choose their food sources randomly are called scouts. Employed bees whose solutions cannot be improved within a predetermined number of trials, a parameter specified by the user of the ABC algorithm and called the "limit" or "abandonment criterion" herein, become scouts and their solutions are abandoned. The converted scouts then begin to search for new solutions at random. For instance, if solution \(\vec{x_m}\) has been abandoned, the scout that was the employed bee of \(\vec{x_m}\) discovers a new solution by sampling the search space randomly, as in the initial population generation. Hence those sources which are initially poor, or have been made poor by exploitation, are abandoned, and negative feedback behaviour arises to balance the positive feedback.
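The scout behaviour thus amounts to random reinitialization within the search bounds; a minimal sketch (the function name and bound handling are our own assumptions) could be:

```python
import numpy as np

def scout_if_exhausted(food, trials, limit, lower, upper,
                       rng=np.random.default_rng()):
    """If the source has not improved within `limit` trials, abandon it and
    let the scout draw a new random solution within the bounds."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    if trials >= limit:
        food = lower + rng.uniform(size=lower.size) * (upper - lower)
        trials = 0
    return food, trials
```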
In summary, the ABC algorithm,
1) is inspired by the foraging behaviour of honeybees,
2) is a global optimization algorithm,
3) was initially proposed for numerical optimization,
4) can also be used for combinatorial optimization problems,
5) can be used for both unconstrained and constrained optimization problems,
6) employs only three control parameters (population size, maximum cycle number and limit) that are to be predetermined by the user, as in the sketch following this list,
7) is quite simple, flexible and robust.
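Putting the three phases together, a compact, self-contained sketch of the whole algorithm might read as follows. It minimizes a sphere test function and exposes exactly the three control parameters named above; everything else (NumPy, the one-scout-per-cycle policy, the function names) is our own illustrative choice, not the authors' reference implementation.

```python
import numpy as np

def abc_minimize(f, lower, upper, SN=20, max_cycles=200, limit=50, seed=0):
    """Minimal ABC sketch: population size SN, maximum cycle number,
    and the abandonment limit are the three user-set control parameters."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    D = lower.size
    foods = lower + rng.uniform(size=(SN, D)) * (upper - lower)
    values = np.array([f(x) for x in foods])
    trials = np.zeros(SN, dtype=int)

    def fit(v):  # equation (7)
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)

    def try_improve(m):  # equation (6) followed by greedy selection
        k = rng.choice([j for j in range(SN) if j != m])
        i = rng.integers(D)
        cand = foods[m].copy()
        cand[i] += rng.uniform(-1, 1) * (foods[m, i] - foods[k, i])
        cand_val = f(cand)
        if fit(cand_val) > fit(values[m]):
            foods[m], values[m], trials[m] = cand, cand_val, 0
        else:
            trials[m] += 1

    for _ in range(max_cycles):
        for m in range(SN):                       # employed bees phase
            try_improve(m)
        p = np.array([fit(v) for v in values])
        p /= p.sum()                              # equation (8)
        for _ in range(SN):                       # onlooker bees phase
            try_improve(int(rng.choice(SN, p=p)))
        worst = int(np.argmax(trials))            # scout phase
        if trials[worst] >= limit:
            foods[worst] = lower + rng.uniform(size=D) * (upper - lower)
            values[worst] = f(foods[worst])
            trials[worst] = 0

    best = int(np.argmin(values))
    return foods[best], values[best]

# Example: minimize a 5-dimensional sphere function.
best_x, best_f = abc_minimize(lambda x: float(np.sum(x ** 2)),
                              lower=[-5] * 5, upper=[5] * 5)
```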
Applications of ABC

An Unconstrained Optimization Problem: Neural Network Training for the XOR Problem

Training an artificial neural network is an optimization task, since the training process searches for the optimal set of weights of the network. The goal is to optimize the network weights by minimizing an objective function such as the mean squared error (MSE) given by equation (9):
\[\tag{9} E(\vec{w}(t)) = \frac{1}{n}\sum\limits_{j = 1}^n {\sum\limits_{k = 1}^K {(d_k - o_k )^2 } } \ ,\]
where \(E(\vec{w}(t)) \) is the error at the \(t\)th iteration; \(\vec{w}(t) \) is the vector of the connection weights at the \(t\)th iteration; \(d_k\) is the desired value of the \(k\)th output node; \(o_k\) is the actual value of the \(k\)th output node; \(K\) is the number of output nodes; and \(n\) is the number of training patterns.
Neural networks have been successfully applied to problems in pattern classification, function approximation, optimization, pattern matching and associative memories.
Exclusive-OR (XOR) is a difficult classification problem that maps two binary inputs to a single binary output as (0 0; 0 1; 1 0; 1 1) → (0; 1; 1; 0). Because the classes are not linearly separable, this classical benchmark is a hard task for neural networks as well.
Figure 1: An example of a neural network structure for the XOR problem
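To make equation (9) concrete for this benchmark, the following sketch computes the MSE of a small 2-2-1 sigmoid network over the four XOR patterns. The layer sizes match a structure like that of Figure 1 only by assumption, and the packing of the 9 weights into a single vector is our own illustrative convention. A function such as xor_mse would serve as the objective \(f_m\) that ABC minimizes, with each food source being a 9-dimensional weight vector.

```python
import numpy as np

# The four XOR patterns: two binary inputs, one binary target output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)

def xor_mse(w):
    """Equation (9): mean squared error of a 2-2-1 sigmoid network whose
    9 weights (biases included) are packed into vector w (assumed layout)."""
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # hidden activations
    O = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))  # network outputs o_k
    return float(np.mean((D - O) ** 2))
```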