Improved Nelder-Mead Optimization Method in Learning Phase of Artificial Neural Network

  • Mustafa Adnan Merdan
  • Hasan Erdinc Kocer
  • Mohammed Hussein Ibrahim
Keywords: Artificial neural network, Training algorithm, Nelder-Mead optimization algorithm

Abstract

Finding the optimum weight values of an artificial neural network (ANN) is a difficult optimization problem. In this study, the Nelder-Mead optimization method [17] is improved and used to determine the optimal weight values. The results of the proposed improved Nelder-Mead method are compared with those of the standard Nelder-Mead method when each is used as the ANN learning algorithm. The benchmark data sets are taken from the UCI machine learning repository. According to the experimental results, the proposed method achieves better results in terms of both speed and performance.
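The idea described above, treating the network's weights as a flat parameter vector and letting a derivative-free simplex search minimize the training loss, can be sketched as follows. This is a minimal illustration, not the authors' improved method: it uses SciPy's standard Nelder-Mead implementation, a toy XOR data set (the paper uses UCI benchmarks), and an assumed single-hidden-layer architecture.

```python
# Sketch: training a small feed-forward network with standard Nelder-Mead,
# i.e. minimizing the MSE over a flattened weight vector without gradients.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy XOR data set (the paper uses UCI benchmark sets instead).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_IN, N_HID = 2, 4  # assumed architecture: 2 inputs, 4 hidden units, 1 output

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(w):
    """Mean squared error -- the objective Nelder-Mead minimizes."""
    return np.mean((forward(w, X) - y) ** 2)

n_weights = N_IN * N_HID + N_HID + N_HID + 1
w0 = rng.standard_normal(n_weights) * 0.5          # random initial weights

res = minimize(mse, w0, method='Nelder-Mead',
               options={'maxiter': 20000, 'xatol': 1e-8, 'fatol': 1e-8})
print(f"initial loss: {mse(w0):.4f}, final loss: {res.fun:.4f}")
```

Because Nelder-Mead only compares function values, it needs no backpropagation; the trade-off, which motivates the paper's improvement, is that the plain simplex search can stagnate or converge slowly as the number of weights grows.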


References

Kecman, V. (2001). Learning and soft computing: support vector machines, neural networks, and fuzzy logic models. MIT Press.

Alhamdoosh, M., & Wang, D. (2014). Fast decorrelated neural network ensembles with random weights. Information Sciences, 264, 104-117.

Slowik, A. (2011). Application of an adaptive differential evolution algorithm with multiple trial vectors to artificial neural network training. IEEE Transactions on Industrial Electronics, 58(8), 3160-3167.

Mohaghegi, S., del Valle, Y., Venayagamoorthy, G. K., & Harley, R. G. (2005, June). A comparison of PSO and backpropagation for training RBF neural networks for identification of a power system with STATCOM. In Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE (pp. 381-384).

Montana, D. J., & Davis, L. (1989, August). Training Feedforward Neural Networks Using Genetic Algorithms. In IJCAI (Vol. 89, pp. 762-767).

Malinak, P., & Jaksa, R. (2007, September). Simultaneous gradient and evolutionary neural network weights adaptation methods. In Evolutionary Computation, 2007. CEC 2007. IEEE Congress on (pp. 2665-2671). IEEE.

Kennedy, J. (2011). Particle swarm optimization. In Encyclopedia of machine learning (pp. 760-766). Springer US.

Carvalho, M., & Ludermir, T. B. (2007, September). Particle swarm optimization of neural network architectures and weights. In Hybrid Intelligent Systems, 2007. HIS 2007. 7th International Conference on (pp. 336-339). IEEE.

Dorigo, M., Birattari, M., & Stutzle, T. (2006). Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4), 28-39.

Hu, L., et al. (2016). Optimization of neural network by genetic algorithm for flowrate determination in multipath ultrasonic gas flowmeter. IEEE Sensors Journal, 16(5), 1158-1167.

Kapanova, K. G., Dimov, I., & Sellier, J. M. A genetic approach to automatic neural network architecture optimization. Neural Computing and Applications, 1-12.

Han, J., Pei, J., & Kamber, M. (2011). Data mining: concepts and techniques. Elsevier.

Schalkoff, R. J. (1997). Artificial neural networks (Vol. 1). New York: McGraw-Hill.

Kelley, C. T. (1999). Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM Journal on Optimization, 10(1), 43-55.

Lagarias, J. C., Reeds, J. A., Wright, M. H., & Wright, P. E. (1998). Convergence properties of the Nelder-Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1), 112-147.

Asuncion, A., & Newman, D. (2007). UCI machine learning repository.

Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. The Computer Journal, 7(4), 308-313.

Published
2018-12-27
How to Cite
[1]
M. A. Merdan, H. E. Kocer, and M. H. Ibrahim, “Improved Nelder-Mead Optimization Method in Learning Phase of Artificial Neural Network”, IJISAE, vol. 6, no. 4, pp. 271-274, Dec. 2018.
Section
Research Article