Markov decision processes: discrete stochastic dynamic programming by Martin L. Puterman


An up-to-date, unified, and rigorous treatment of theoretical, computational, and applied research on Markov decision process models. Concentrates on infinite-horizon discrete-time models. Discusses arbitrary state spaces, finite-horizon and continuous-time discrete-state models. Also covers modified policy iteration, multichain models with average reward criterion, and sensitive optimality. Includes a wealth of figures which illustrate examples and an extensive bibliography.



Best mathematical statistics books

Controlled Markov Processes and Viscosity Solutions

This book is intended as an introduction to optimal stochastic control for continuous-time Markov processes and to the theory of viscosity solutions. The authors approach stochastic control problems by the method of dynamic programming.

Finite Markov Chains: With a New Appendix ''Generalization of a Fundamental Matrix''


Extra resources for Markov decision processes: discrete stochastic dynamic programming

Sample text

MODEL FORMULATION

The Role of Model Assumptions. The model in Sec. 2 provides a focus for discussing the role of assumptions about S and A_s. Suppose first that S is a Borel measurable subset of Euclidean space. Let π = (d_t) ∈ Π^MD and suppose d_t(s) = a. The expectation becomes ∫ v(u) p_t(u|s, a) du if the density p_t(u|s, a) exists; otherwise it may be represented by the Lebesgue–Stieltjes integral ∫ v(u) p_t(du|s, a). For the above expressions to be meaningful requires that v(·) p_t(·|s, a) be Lebesgue integrable, or that v(·) be Lebesgue–Stieltjes integrable with respect to p_t(du|s, a), for each s ∈ S and a ∈ A_s.
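In the discrete-state case the Lebesgue–Stieltjes integral above reduces to a finite sum over successor states. A minimal Python sketch, using invented toy values for v and for the transition probabilities at one fixed (s, a) pair:

```python
import numpy as np

# Toy data (not from the book): values v(u) of three successor states u,
# and the transition probabilities p(u | s, a) for a fixed state-action pair.
v = np.array([0.0, 1.0, 5.0])   # v(u) for each successor state u
p = np.array([0.2, 0.5, 0.3])   # p(u | s, a); must sum to 1

# Discrete analogue of the integral:  sum_u v(u) p(u | s, a)
expected_value = float(np.dot(p, v))
print(expected_value)  # 2.0
```

With a continuous state space the sum becomes the integral in the text, and the integrability conditions quoted above are what guarantee the expectation is well defined.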

The primary focus of this book will be models with discrete T. A particular continuous-time model (a semi-Markov decision process) will be discussed in Chapter 11.

State and Action Sets. At each decision epoch, the system occupies a state. If, at some decision epoch, the decision maker observes the system in state s ∈ S, he may choose action a from the set of allowable actions in state s, A_s. Let A = ∪_{s ∈ S} A_s (Fig. ). Note we assume that S and A_s do not vary with t. We expand on this point below.
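The state-dependent action sets above can be sketched directly in Python. This is a toy illustration, assuming the discrete-state setting; the state and action labels are invented:

```python
# Hypothetical two-state example of state-dependent action sets A_s.
S = {"low", "high"}                    # state set S
A_s = {                                # allowable actions A_s in each state s
    "low":  {"wait", "repair"},
    "high": {"wait"},
}

# A = union over s in S of A_s: every action available in some state.
A = set().union(*(A_s[s] for s in S))

# A deterministic Markovian decision rule d: S -> A with d(s) in A_s.
d = {"low": "repair", "high": "wait"}
assert all(d[s] in A_s[s] for s in S)  # each choice must be allowable
print(sorted(A))  # ['repair', 'wait']
```

Because S and A_s are assumed not to vary with t, one such dictionary suffices for every decision epoch of a stationary rule.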

Upon substitution of d_t, it becomes a function on S. For analyzing multi-period models we require this to be a measurable and integrable function on S. This necessitates imposing assumptions on r_t(s, ·), p_t(u|s, ·), and d_t(·). At a minimum we require d_t(s) to be a measurable function from S to A, which means that we must restrict the set of admissible decision rules to include only measurable functions. To identify measurable functions requires a topology on A. For discrete S, d* ∈ D, since D consisted of the set of all functions from S to A, but, as discussed above, we require that d* be measurable.

