A study of optimization problems involving stochastic systems with jumps
Optimization problems involving stochastic systems are often encountered in financial systems, network design and routing, supply-chain management, actuarial science, telecommunications, and statistical pattern recognition arising in electronic commerce and medical diagnosis. This thesis aims to develop computational methods for solving three optimization problems whose dynamical systems are described by three different classes of stochastic systems with jumps.

In Chapter 1, a brief review of optimization problems involving stochastic systems with jumps is given, followed by an introduction to the three optimization problems considered in this thesis. These three stochastic optimization problems are studied in detail in Chapters 2, 3 and 4, respectively, and a review of the relevant literature is presented in each of these chapters.

In Chapter 2, an optimization problem involving nonparametric regression with jump points is considered, and a two-stage method is proposed. In the first stage, we identify the rough locations of all possible jump points of the unknown regression function. In the second stage, we map the yet-to-be-determined jump points to pre-assigned fixed points, dividing the time domain into several sections. A spline function is then used to approximate the unknown regression function on each section. These approximation problems are formulated and solved as optimization problems. Carrying out the inverse time scaling transformation then yields an approximation to the nonparametric regression function with jump points. For illustration, several examples are solved using this method.
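The two-stage idea described above can be sketched as follows. This is a minimal illustration, not the thesis's actual estimator: the one-sided running-mean detector, the thresholds, and the names `detect_jumps` and `fit_piecewise` are assumptions, and the time scaling transformation is omitted (an interpolating spline stands in for the thesis's spline approximation).

```python
import numpy as np
from scipy.interpolate import CubicSpline

def detect_jumps(t, y, h=5, thresh=5.0):
    """Stage 1: flag candidate jump locations where the gap between
    one-sided local means exceeds thresh times a robust noise scale."""
    n = len(y)
    diffs = np.zeros(n)
    for i in range(h, n - h):
        diffs[i] = y[i:i + h].mean() - y[i - h:i].mean()
    sigma = np.median(np.abs(np.diff(y))) / 1.349  # robust noise estimate
    flagged = [i for i in range(h, n - h) if abs(diffs[i]) > thresh * sigma]
    # Merge neighbouring flags into a single jump index (the strongest one).
    merged = []
    for i in flagged:
        if merged and i - merged[-1] <= h:
            if abs(diffs[i]) > abs(diffs[merged[-1]]):
                merged[-1] = i
        else:
            merged.append(i)
    return merged

def fit_piecewise(t, y, jumps):
    """Stage 2: fit a cubic spline on each section between detected jumps."""
    bounds = [0] + list(jumps) + [len(y)]
    return [CubicSpline(t[a:b], y[a:b]) for a, b in zip(bounds[:-1], bounds[1:])]

# Example: a regression function with one jump at t = 0.5, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
y = 0.5 * t + 2.0 * (t > 0.5) + rng.normal(0.0, 0.05, t.size)
jumps = detect_jumps(t, y)
pieces = fit_piecewise(t, y, jumps)
```

The detector trades sensitivity for robustness through `h` and `thresh`; in practice these would be tuned to the noise level of the observed data.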
The results obtained are highly satisfactory.

In Chapter 3, an optimization problem involving nonparametric regression with jump curves is studied. A two-stage method is presented to construct an approximating surface with a jump location curve from a set of observed data corrupted by noise. In the first stage, we compute an estimate of the jump location curve in the surface. In the second stage, we shift the jump location curve onto a row or column of pixels. The shifted region is then divided into two disjoint subregions by the jump location pixels, and these subregions are expanded into two overlapping subregions, each of which includes the jump location pixels. We calculate artificial values at the newly added pixels from the observed data and then approximate the surface on each expanded subregion. The curve with minimal distance between the two fitted surfaces is chosen as the curve dividing the region. In this way, two non-overlapping tensor-product cubic spline surfaces are obtained. Finally, by carrying out the inverse space scaling transformation, the two fitted smooth surfaces in the original space are obtained. For illustration, a numerical example is solved using the proposed method.

In Chapter 4, a class of stochastic optimal parameter selection problems is considered, described by linear Ito stochastic differential equations with state jumps subject to probabilistic constraints on the state, where the jump times and jump heights are decision variables. We show that this constrained stochastic impulsive optimal parameter selection problem is equivalent to a deterministic impulsive optimal parameter selection problem subject to continuous state inequality constraints, in which the jump times and jump heights remain decision variables.
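The equivalence for the linear Ito case can be illustrated by moment dynamics. This is a hedged sketch under a Gaussian assumption and a scalar Wiener process; the thesis's actual transformation may differ. For

\[
dx(t) = A\,x(t)\,dt + B\,x(t)\,dW(t), \qquad x(\tau_i^+) = x(\tau_i^-) + h_i,
\]

the mean \(m(t) = \mathbb{E}[x(t)]\) and second moment \(P(t) = \mathbb{E}[x(t)x(t)^\top]\) satisfy the deterministic impulsive system

\[
\dot m = A m, \qquad m(\tau_i^+) = m(\tau_i^-) + h_i,
\]
\[
\dot P = A P + P A^\top + B P B^\top, \qquad
P(\tau_i^+) = P(\tau_i^-) + m(\tau_i^-) h_i^\top + h_i\, m(\tau_i^-)^\top + h_i h_i^\top,
\]

with the jump times \(\tau_i\) and heights \(h_i\) remaining as decision variables. A probabilistic state constraint such as \(\Pr\!\left(c^\top x(t) \le b\right) \ge \alpha\) then becomes, for Gaussian \(x(t)\), the continuous state inequality constraint \(c^\top m(t) + z_\alpha \sqrt{c^\top \Sigma(t)\, c} \le b\), where \(\Sigma = P - m m^\top\) and \(z_\alpha\) is the standard normal quantile.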
We then show that this constrained deterministic impulsive optimal parameter selection problem can be transformed into an equivalent one with fixed jump times. The continuous state inequality constraints are approximated by a sequence of canonical inequality constraints, leading to a sequence of approximate deterministic impulsive optimal parameter selection problems subject to canonical inequality constraints. For each of these approximate problems, we derive gradient formulas for the cost function and the constraint functions. On this basis, an efficient computational method is developed. For illustration, a numerical example is solved.

Finally, Chapter 5 contains some concluding remarks and suggestions for future studies.
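The approximation of a continuous state inequality constraint by canonical constraints is, in the optimal control literature, commonly realized by constraint transcription with local smoothing. The sketch below is a generic illustration of that idea with assumed smoothing parameters, not the thesis's exact construction.

```python
import numpy as np

def smoothed_violation(g_vals, dt, eps):
    """Approximate the integral of max(g(x(t)), 0) over [0, T] using the
    smooth blend phi_eps(g) = 0 for g < -eps, g for g > eps,
    and (g + eps)^2 / (4 eps) in between."""
    g = np.asarray(g_vals, dtype=float)
    phi = np.where(g < -eps, 0.0,
          np.where(g > eps, g, (g + eps) ** 2 / (4.0 * eps)))
    return float(phi.sum() * dt)

# The continuous constraint g(x(t)) <= 0 for all t in [0, T] is replaced by
# the canonical constraint  smoothed_violation(...) <= gamma,  with eps and
# gamma driven to zero over the sequence of approximate problems.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
eps = 0.01
g_feasible = -0.5 + 0.2 * np.sin(2 * np.pi * t)   # satisfies g <= 0 everywhere
g_violating = -0.1 + 0.2 * np.sin(2 * np.pi * t)  # violates g <= 0 on part of [0, 1]
v_ok = smoothed_violation(g_feasible, dt, eps)
v_bad = smoothed_violation(g_violating, dt, eps)
```

The smoothed functional is differentiable in the decision variables, which is what makes the gradient formulas for the cost and constraint functions usable by a standard nonlinear programming solver.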
Showing items related by title, author, creator and subject.
Chai, Qinqin (2013) In this thesis, we develop new computational methods for three classes of dynamic optimization problems: (i) a parameter identification problem for a general nonlinear time-delay system; (ii) an optimal control problem ...
Loxton, Ryan Christopher (2010) In this thesis, we develop numerical methods for solving five nonstandard optimal control problems. The main idea of each method is to reformulate the optimal control problem as, or approximate it by, a nonlinear programming ...
Li, Bin (2011) In this thesis, we consider several types of optimal control problems with constraints on the state and control variables. These problems have many engineering applications. Our aim is to develop efficient numerical methods ...