“Finite State Approximations for Countable State Infinite Horizon Discounted Markov Decision Processes”
Authors: Sjur D. Flåm, Affiliation: University of Bergen
Reference: 1987, Vol 8, No 2, pp. 117-123.
Keywords: Markov decision processes, approximation, epiconvergence
Abstract: It is proved that the optimal policy of a Markov decision process with a truncated state space approximates the optimal policy of the original, untruncated process.
PDF (572 Kb) DOI: 10.4173/mic.1987.2.4
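The result concerns truncating a countable state space to a finite one and solving the truncated problem. As a purely illustrative sketch (not the paper's epiconvergence argument), the following Python snippet solves a hypothetical controlled queueing MDP by value iteration on truncated state spaces of increasing size; the model, costs, and parameters are invented for illustration.

```python
# Illustrative sketch only: value iteration on a truncated state space for a
# simple countable-state discounted MDP. The model (a controlled birth-death
# chain), its costs, and the truncation scheme are hypothetical choices.

import numpy as np

BETA = 0.9          # discount factor
ACTIONS = (0, 1)    # 0 = slow service, 1 = fast service (hypothetical)

def transition(n, a, N):
    """Truncated transition law on {0,...,N}: arrivals push the state up,
    service (stronger under action 1) pushes it down; mass that would leave
    {0,...,N} is kept at the boundary state N."""
    p_arr, p_dep = 0.4, 0.3 + 0.3 * a
    probs = {}
    up = min(n + 1, N)
    down = max(n - 1, 0)
    probs[up] = probs.get(up, 0.0) + p_arr * (1 - p_dep)
    probs[down] = probs.get(down, 0.0) + p_dep * (1 - p_arr)
    stay = 1.0 - p_arr * (1 - p_dep) - p_dep * (1 - p_arr)
    probs[n] = probs.get(n, 0.0) + stay
    return probs

def cost(n, a):
    # Holding cost plus a price for the fast action (hypothetical numbers).
    return n + 2.0 * a

def value_iteration(N, tol=1e-8):
    """Solve the truncated problem on {0,...,N}; return value and policy."""
    V = np.zeros(N + 1)
    while True:
        Q = np.empty((N + 1, len(ACTIONS)))
        for n in range(N + 1):
            for a in ACTIONS:
                ev = sum(p * V[m] for m, p in transition(n, a, N).items())
                Q[n, a] = cost(n, a) + BETA * ev
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

if __name__ == "__main__":
    # As the truncation level N grows, the policy on the low states stabilizes,
    # in line with the kind of convergence the paper establishes.
    for N in (10, 20, 40, 80):
        V, policy = value_iteration(N)
        print(f"N={N:3d}  policy on states 0..9: {policy[:10]}")
```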
DOI forward links to this article:
[1] Aditya Mahajan and Mehnaz Mannan (2014), doi:10.1007/s10479-014-1652-0
References:
[1] ATTOUCH, H., WETS, R.J.-B., (1983). Approximation and convergence in nonlinear optimization, Nonlinear Programming, 4, eds. O. Mangasarian et al., Academic Press, New York.
[2] ATTOUCH, H., WETS, R.J.-B., (1983). A convergence theory for saddle functions, Trans. Am. Math. Soc., 280, 1-49 doi:10.2307/1999600
[3] BERTSEKAS, D., (1976). Dynamic Programming and Stochastic Control, Academic Press, New York.
[4] BERTSEKAS, D., SHREVE, S., (1978). Stochastic Optimal Control: The Discrete Time Case, Academic Press, New York.
[5] DENARDO, E., (1982). Dynamic Programming, Prentice-Hall, New York.
[6] FLÅM, S.D., (1987). Approximating Some Convex Programs in terms of Borel Fields, to appear in Mathematical Programming Study, 31.
[7] FOX, B., (1971). Finite-state approximations to denumerable-state dynamic programs, J. Math. Anal. App., 34, 665-670 doi:10.1016/0022-247X(71)90106-5
[8] GIHMAN, I.I., SKOROHOD, A.V., (1979). Controlled Stochastic Processes, Springer-Verlag, New York.
[9] KALL, P., (1986). Approximation to optimization problems: an elementary review, Mathematics of Operations Research, 11, 9-18 doi:10.1287/moor.11.1.9
[10] KALL, P., (1987). Stochastic Programs with Recourse: An Upper Bound and the Related Moment Problem, Tech. Report, Institut für Operations Research der Universität Zürich.
[11] MACQUEEN, J., (1967). A test for suboptimal actions in Markovian decision problems, Operations Research, 15, 559-561 doi:10.1287/opre.15.3.559
[12] MENDELSSOHN, R., (1980). Improved bounds for aggregated linear programs, Operations Research, 28, 1450-1453 doi:10.1287/opre.28.6.1450
[13] NORMAN, J., WHITE, D., (1968). A method for approximating solutions to stochastic dynamic programming problems using expectations, Operations Research, 16, 296-306 doi:10.1287/opre.16.2.296
[14] ROSS, S., (1983). Introduction to Stochastic Dynamic Programming, Academic Press, New York.
[15] WHITE, D., (1980). Finite-state approximations for denumerable-state infinite-horizon discounted Markov decision processes, J. Math. Anal. App., 74, 292-295 doi:10.1016/0022-247X(80)90128-6
[16] WHITT, W., (1978, 1979). Approximations of dynamic programs, I and II, Mathematics of Operations Research, 3, 231-243 and 4, 179-185.
BibTeX:
@article{MIC-1987-2-4,
title={{Finite State Approximations for Countable State Infinite Horizon Discounted Markov Decision Processes}},
author={Flåm, Sjur D.},
journal={Modeling, Identification and Control},
volume={8},
number={2},
pages={117--123},
year={1987},
doi={10.4173/mic.1987.2.4},
publisher={Norwegian Society of Automatic Control}
}