Department of Applied Mathematics, Dalian University of Technology, 116024 Dalian, China
Academic Editor: M. De la Sen
Copyright © 2012 Wei Wu and Atlas Khan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Self-organizing map (SOM) neural networks have been widely applied in information sciences. In particular, Su and Zhao proposed in 2009 an SOM-based optimization (SOMO) algorithm that finds, through a competitive learning process, a winning neuron that stands for the minimum of an objective function. In this paper, we generalize the SOMO algorithm to a so-called SOMO-m algorithm with m winning neurons. Numerical experiments show that, for m>1, the SOMO-m algorithm converges faster than the original SOMO algorithm when used to find the minimum of a function. More importantly, the SOMO-m algorithm with m≥2 can find two or more minima simultaneously in a single learning iteration process, whereas the original SOMO algorithm has to fulfil the same task much less efficiently by restarting the learning process two or more times.
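The abstract does not spell out the learning rules, so the following is only a minimal illustrative sketch, under the assumption that each neuron carries a weight vector interpreted as a candidate solution and that the m winning neurons are those with the smallest objective values; the function names (objective, find_winners) and the sphere test function are hypothetical choices for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical objective: the 2-D sphere function, used only for illustration.
def objective(x):
    return float(np.sum(x ** 2))

def find_winners(weights, m=2):
    """Return indices of the m neurons whose weight vectors yield the smallest
    objective values (the 'winning' neurons in the SOMO-m sense assumed here)."""
    scores = np.array([objective(w) for w in weights])
    return np.argsort(scores)[:m]

# Illustrative usage: a small population of neurons with random weight vectors.
rng = np.random.default_rng(0)
weights = rng.uniform(-5.0, 5.0, size=(20, 2))   # 20 neurons in a 2-D search space
winners = find_winners(weights, m=2)
print("winning neurons:", winners,
      "objective values:", [objective(weights[i]) for i in winners])
```

With m=1 this selection reduces to the single winner of the original SOMO algorithm; with m≥2 the selected neurons can, in principle, track several minima within one learning iteration process, which is the behaviour the abstract highlights.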