Department of Mathematics, Faculty of Science, Jazan University, P.O. Box 2097, Jazan, Saudi Arabia
Academic Editor: Ying U. Hu
Copyright © 2010 Abdelkrim El Mouatasim. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
A random perturbation of the generalized reduced gradient method for optimization under nonlinear differentiable constraints is proposed. Each iteration of this method proceeds in two phases. In the restoration phase, feasibility is restored by solving an auxiliary problem, generally a nonlinear system of equations. In the optimization phase, optimality is improved by minimizing the objective function on the subspace tangent to the constraints. In this paper, assumptions on the restoration phase and the optimization phase are stated that establish the global convergence of the algorithm. Numerical examples based on a mixture problem and an octagon problem are also given.
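As a rough illustration of the two-phase scheme summarized above, the following Python sketch alternates a restoration step (returning the iterate to the constraint manifold) with a perturbed optimization step (a descent step on the tangent subspace with an added random term). The objective f, the constraint h, the Gauss-Newton restoration, the fixed step size, and the decaying perturbation level sigma are all illustrative assumptions, not the paper's algorithm or its test problems.

```python
# Minimal sketch of a two-phase iteration with random perturbation for an
# equality-constrained problem  min f(x)  subject to  h(x) = 0.
# All problem data and parameters below are assumed for illustration only.

import numpy as np

def f(x):                       # example objective (assumption)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def h(x):                       # example nonlinear equality constraint (assumption)
    return np.array([x[0]**2 + x[1]**2 - 4.0])

def jac_h(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def restore(x, tol=1e-10, max_iter=50):
    """Restoration phase: Gauss-Newton corrections driving h(x) to 0."""
    for _ in range(max_iter):
        r = h(x)
        if np.linalg.norm(r) < tol:
            break
        J = jac_h(x)
        x = x - J.T @ np.linalg.solve(J @ J.T, r)   # minimum-norm correction
    return x

def optimize_step(x, step, rng, sigma):
    """Optimization phase: perturbed gradient projected onto the tangent subspace."""
    J = jac_h(x)
    g = grad_f(x) + sigma * rng.standard_normal(x.size)   # random perturbation
    # Orthogonal projector onto the null space (tangent subspace) of the Jacobian
    P = np.eye(x.size) - J.T @ np.linalg.solve(J @ J.T, J)
    return x - step * (P @ g)

def perturbed_two_phase(x0, iters=200, step=0.1, sigma0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = restore(np.asarray(x0, dtype=float))
    for k in range(iters):
        sigma = sigma0 / (1.0 + k)      # assumed decaying perturbation schedule
        x = optimize_step(x, step, rng, sigma)
        x = restore(x)                  # restore feasibility after each step
    return x

if __name__ == "__main__":
    x_star = perturbed_two_phase([2.0, 0.0])
    print("approximate solution:", x_star, "f =", f(x_star))
```

In this toy instance the perturbed iterates settle near the constrained minimizer on the circle; the decaying perturbation is the ingredient intended to help the iterates escape poor stationary points, in the spirit of the method studied in the paper.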