Stochastic control and applications

Organizer: Héctor Jasso, CINVESTAV

Numerical approximations for continuous-time average Markov decision processes
Tomás Prieto-Rumeau, Universidad Nacional de Educación a Distancia, Spain
tprieto@ccia.uned.es

We study a numerical approximation of the optimal long-run average reward of a continuous-time Markov decision process with Borel state and action spaces and bounded transition and reward rates. Our approach uses a suitable discretization of the state and action spaces to approximate the original control model. The approximation error for the optimal average reward is then bounded by a combination of terms related to the discretization of the state and action spaces, namely, the Wasserstein distance between an underlying probability measure and a measure with finite support, and the Hausdorff distance between the original and the discretized action sets. When approximating the underlying probability measure by its empirical measure, we obtain convergence in probability at an exponential rate. An application to a queueing system is presented.
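
As a rough illustration of the two error terms in this bound (a sketch, not the paper's algorithm), the following Python snippet estimates the Wasserstein distance between a distribution and its empirical approximation, and the Hausdorff distance between an action interval and a uniform grid; the normal distribution, the interval [0, 1], and all sample sizes are hypothetical choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# (i) Wasserstein term: W1 between mu = N(0, 1) and its empirical measure
# shrinks as the sample size n grows; a large auxiliary sample stands in
# for mu itself.
reference = rng.standard_normal(200_000)
for n in (10, 100, 1_000, 10_000):
    empirical = rng.standard_normal(n)
    print(f"n = {n:>6}:  W1 ~ {wasserstein_distance(empirical, reference):.4f}")

# (ii) Hausdorff term: for the action set A = [0, 1] and a uniform grid of
# mesh h, every action lies within h/2 of a grid point, so the Hausdorff
# distance between A and the grid is h/2.
h = 0.05
grid = np.linspace(0.0, 1.0, int(1.0 / h) + 1)
actions = rng.uniform(0.0, 1.0, 100_000)
print("max dist to grid ~", np.max(np.min(np.abs(actions[:, None] - grid), axis=1)))
```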

On the Modelling of Uncertain Impulse Control for Continuous Markov Processes
Richard Stockbridge, University of Wisconsin, USA
stockbri@uwm.edu

The literature contains complex constructions of models for impulse control. When the underlying strong Markov process has continuous paths, however, a simpler model can be developed that takes the single path space as its probability space and uses a single filtration with respect to which the intervention times must be stopping times. Moreover, this construction allows for uncertain impulse control, whereby the decision maker selects an impulse but the intervention may result in a different impulse occurring. An example of such an uncertain impulse arises in inventory management, where an order is placed but only some random fraction of the ordered amount is delivered. This talk describes the construction of the probability measure on the path space for an admissible intervention policy subject to an uncertain impulse mechanism. An added feature is that when the intervention policy results in deterministic distributions for each impulse, the paths between interventions are independent; moreover, if the same distribution is used for each impulse, the cycles following the initial cycle are identically distributed. This talk also identifies a class of impulse policies under which the resulting controlled process is Markov. The decision to use an (s,S) ordering policy in inventory management provides an example of this latter situation, so a benefit of the constructed model is that it admits classical renewal arguments.
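
A minimal simulation sketch in Python, assuming a drifted Brownian inventory and uniformly distributed delivered fractions (both illustrative assumptions, not the talk's path-space construction), of an (s, S) policy with an uncertain impulse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inventory follows a drifted Brownian motion between interventions; when it
# falls to s, the controller orders up to S, but only a random fraction of
# the order is delivered (the uncertain impulse). All parameters are assumed.
s, S = 1.0, 5.0
mu, sigma = -0.5, 0.3          # demand drift and volatility (assumed)
dt, T = 1e-3, 20.0

x, t = S, 0.0
interventions = []
while t < T:
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    t += dt
    if x <= s:                              # intervention time (a stopping time)
        order = S - x                       # intended impulse: order up to S
        fraction = rng.uniform(0.5, 1.0)    # random delivered fraction (assumed)
        x += fraction * order               # realized impulse differs from order
        interventions.append((round(t, 3), round(fraction, 3)))

print(f"{len(interventions)} interventions on [0, {T}]")
print("first few (time, delivered fraction):", interventions[:3])
```

In the continuous-time limit the inventory hits s exactly, so fixing the delivered fraction makes the post-order level, and hence the cycles after the first, identically distributed in this toy model, mirroring the renewal structure mentioned above.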

Switching Diffusions with Mean-Field Interactions
George Yin, Wayne State University, USA
gyin@wayne.edu

We study switching diffusions with mean-field interactions. The motivation stems from regime-switching control systems involving mean-field terms, in which the random switching is modeled as a continuous-time Markov chain. The talk is based on two papers. We first obtain a law of large numbers for the empirical measures. In contrast to the existing literature, the limit measure is not deterministic but random: it is characterized by the conditional distribution (given the history of the switching process) of the solution to a McKean-Vlasov stochastic differential equation with Markovian switching. We then establish stochastic maximum principles for switching diffusions with mean-field interactions. [Joint work with Son Luu Nguyen, Tuan Anh Hoang, and Dung Tien Nguyen.]
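
As a toy illustration (an assumption-laden sketch, not the papers' setting or proofs), the following Python particle system approximates a McKean-Vlasov SDE with Markovian switching: each particle's drift pulls it toward the empirical mean, with coefficients driven by a common two-state Markov chain, so the empirical measure tracks a conditional law given the switching path.

```python
import numpy as np

rng = np.random.default_rng(2)

# dX_t = a(alpha_t) (E[X_t | switching path] - X_t) dt + sigma(alpha_t) dW_t,
# with alpha a two-state Markov chain common to all particles; the empirical
# mean of N particles replaces the conditional expectation. Coefficients,
# rates, and step sizes are illustrative assumptions.
a = {0: 1.0, 1: 3.0}
sig = {0: 0.5, 1: 0.2}
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])      # generator of the switching chain

N, dt, steps = 2_000, 1e-3, 5_000
x = rng.standard_normal(N)
alpha = 0
for _ in range(steps):
    if rng.random() < -Q[alpha, alpha] * dt:   # Euler step for the chain
        alpha = 1 - alpha
    m = x.mean()                               # mean-field interaction term
    x += a[alpha] * (m - x) * dt + sig[alpha] * np.sqrt(dt) * rng.standard_normal(N)

print("final regime:", alpha, "| empirical mean:", round(x.mean(), 3),
      "| empirical std:", round(x.std(), 3))
```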

Long-term investment
Erick Treviño, IMATE-UNAM, Mexico

Portfolio allocation is at the core of stochastic finance and has been studied from different angles. In this talk we adopt the perspective of an institutional investor who must plan choices over a very long time horizon. As is usually done in the literature, this choice problem is approximated by an asymptotic criterion to be maximized. We will present conditions under which convex duality can be established and takes the form of a sensitivity-control-type problem for specific dynamics.
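
To make the asymptotic criterion concrete, here is a minimal Python sketch under a Black-Scholes toy model with constant investment fractions (an assumption for illustration only; the talk's dynamics and duality are more general): the long-run log-growth rate is maximized at the Merton/Kelly fraction.

```python
import numpy as np

# Under Black-Scholes dynamics (toy assumption), a constant fraction pi in
# the risky asset yields the asymptotic criterion
#     lim_{T->oo} (1/T) E[log V_T] = r + pi (mu - r) - 0.5 pi^2 sigma^2,
# maximized at pi* = (mu - r) / sigma^2. All parameter values are made up.
mu, sigma, r = 0.08, 0.20, 0.02

def growth_rate(pi):
    return r + pi * (mu - r) - 0.5 * pi**2 * sigma**2

pis = np.linspace(0.0, 3.0, 301)
best = pis[np.argmax(growth_rate(pis))]
print("grid maximizer:", round(best, 3), "| closed form:", (mu - r) / sigma**2)
```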