Journal of Applied Mathematics and Stochastic Analysis
Volume 2006 (2006), Article ID 72762, 23 pages
doi:10.1155/JAMSA/2006/72762

Approximation and optimality necessary conditions in relaxed stochastic control problems

Seïd Bahlali,1 Brahim Mezerdi,1 and Boualem Djehiche2

1Laboratory of Applied Mathematics, University of Biskra, P.O. Box 145, Biskra 07000, Algeria
2Department of Mathematics, Division of Mathematical Statistics, Royal Institute of Technology, Stockholm 100 44, Sweden

Received 28 April 2005; Accepted 5 March 2006

Copyright © 2006 Seïd Bahlali et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a control problem where the state variable is a solution of a stochastic differential equation (SDE) in which the control enters both the drift and the diffusion coefficient. We study the relaxed problem, for which admissible controls are measure-valued processes and the state variable is governed by an SDE driven by an orthogonal martingale measure. Under some mild conditions on the coefficients and pathwise uniqueness, we prove that every diffusion process associated to a relaxed control is a strong limit of a sequence of diffusion processes associated to strict controls. As a consequence, we show that the strict and the relaxed control problems have the same value function and that an optimal relaxed control exists. Moreover, we derive a maximum principle of the Pontryagin type, extending the well-known Peng stochastic maximum principle to the class of measure-valued controls.
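The setup described above can be sketched in standard notation (the symbols $b$, $\sigma$, $U$, $q_t$, and $M$ below are generic placeholders, not necessarily those used in the body of the paper). In the strict problem the state evolves as

\[
dx_t = b(t, x_t, u_t)\, dt + \sigma(t, x_t, u_t)\, dW_t,
\]

where $u_t$ takes values in the action space $U$ and $W$ is a Brownian motion. In the relaxed problem the control is a measure-valued process $q_t(da)$ on $U$, and the state equation is driven by an orthogonal martingale measure $M(da, dt)$ whose intensity is $q_t(da)\,dt$:

\[
dx_t = \int_U b(t, x_t, a)\, q_t(da)\, dt + \int_U \sigma(t, x_t, a)\, M(da, dt).
\]

A strict control $u_t$ is recovered as the special case $q_t(da) = \delta_{u_t}(da)$, which is how the approximation result relating the two problems should be read.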