Optimal control of the two-state Markov process in discrete time



Abstract

Using the control problem of a two-state Markov process in discrete time as an example, we consider the basic stages in applying the theory of conditional Markov processes to the synthesis of optimal control algorithms for stochastic systems. It is assumed that the control changes the statistical properties of the states of the controlled plant. A numerical method for solving the problem and the results of solving a particular example are presented. The special features of the solution of this problem, compared with the well-known problem in continuous time, are discussed.
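The discrete-time structure described in the abstract lends itself to a backward (Bellman) recursion. The sketch below is a hypothetical, fully observed simplification: a two-state plant whose transition matrix depends on the control, solved over a finite horizon by dynamic programming. The transition matrices, stage costs, and horizon are illustrative assumptions, not values from the article, and the article's actual problem involves conditional (filtered) state probabilities rather than direct state observation.

```python
import numpy as np

# Hypothetical two-state plant: the control u in {0, 1} selects the
# transition matrix, i.e. the control changes the statistical
# properties of the plant's states (all numbers are illustrative).
P = {
    0: np.array([[0.9, 0.1],
                 [0.2, 0.8]]),   # transitions under control u = 0
    1: np.array([[0.6, 0.4],
                 [0.5, 0.5]]),   # transitions under control u = 1
}
stage_cost = {
    0: np.array([0.0, 1.0]),     # per-state cost under u = 0
    1: np.array([0.5, 0.6]),     # u = 1 costs more but steers better
}

T = 10                           # horizon length (assumed)
V = np.zeros(2)                  # terminal cost V_T(x) = 0
policy = np.zeros((T, 2), dtype=int)

# Backward induction:
#   V_t(x) = min_u [ c(x, u) + sum_y P_u(x, y) * V_{t+1}(y) ]
for t in reversed(range(T)):
    Q = np.stack([stage_cost[u] + P[u] @ V for u in (0, 1)])
    policy[t] = np.argmin(Q, axis=0)   # optimal control for each state
    V = np.min(Q, axis=0)

print("optimal first-stage controls per state:", policy[0])
print("value function at t = 0:", V)
```

In the continuous-time counterpart discussed in the article, the analogous recursion becomes a differential equation for the value function; the discrete-time formulation replaces it with the stagewise minimization above.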

About the authors

A. V. Bondarenko

State Research Institute of Aviation Systems (State Scientific Center of Russian Federation); Moscow Institute of Physics and Technology (State University)

Author for correspondence.
Email: mma1943@mail.ru
Russian Federation, Moscow, 125319; Dolgoprudnyi, Moscow oblast, 141700

M. A. Mironov

State Research Institute of Aviation Systems (State Scientific Center of Russian Federation)

Email: mma1943@mail.ru
Russian Federation, Moscow, 125319


Copyright (c) 2017 Pleiades Publishing, Ltd.
