Overview

This book is concerned with a class of discrete-time stochastic control processes known as controlled Markov processes (CMP's), also known as Markov decision processes or Markov dynamic programs. Starting in the mid-1950s with Richard Bellman, many contributions to CMP's have been made, and applications to engineering, statistics and operations research, among other areas, have also been developed. The purpose of this book is to present some recent developments on the theory of adaptive CMP's, i.e., CMP's that depend on unknown parameters. Thus at each decision time, the controller or decision-maker must estimate the true parameter values, and then adapt the control actions to the estimated values. We do not intend to describe all aspects of stochastic adaptive control; rather, the selection of material reflects our own research interests. The prerequisite for this book is a knowledge of real analysis and probability theory at the level of, say, Ash (1972) or Royden (1968), but no previous knowledge of control or decision processes is required. The presentation, on the other hand, is meant to be self-contained, in the sense that whenever a result from analysis or probability is used, it is usually stated in full and references are supplied for further discussion, if necessary. Several appendices are provided for this purpose. The material is divided into six chapters. Chapter 1 contains the basic definitions about the stochastic control problems we are interested in; a brief description of some applications is also provided.

Full Product Details

Author: Onesimo Hernandez-Lerma
Publisher: Springer-Verlag New York Inc.
Imprint: Springer-Verlag New York Inc.
Edition: Softcover reprint of the original 1st ed. 1989
Volume: 79
Dimensions: Width: 15.50 cm, Height: 0.90 cm, Length: 23.50 cm
Weight: 0.266 kg
ISBN 13: 9781461264545
ISBN 10: 1461264545
Pages: 148
Publication Date: 29 October 2012
Audience: Professional and scholarly, Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: Manufactured on demand

Table of Contents

1 Controlled Markov Processes
1.1 Introduction
1.2 Stochastic Control Problems
1.3 Examples
1.4 Further Comments
2 Discounted Reward Criterion
2.1 Introduction
2.2 Optimality Conditions
2.3 Asymptotic Discount Optimality
2.4 Approximation of MCM’s
2.5 Adaptive Control Models
2.6 Nonparametric Adaptive Control
2.7 Comments and References
3 Average Reward Criterion
3.1 Introduction
3.2 The Optimality Equation
3.3 Ergodicity Conditions
3.4 Value Iteration
3.5 Approximating Models
3.6 Nonstationary Value Iteration
3.7 Adaptive Control Models
3.8 Comments and References
4 Partially Observable Control Models
4.1 Introduction
4.2 PO-CM: Case of Known Parameters
4.3 Transformation into a CO Control Problem
4.4 Optimal I-Policies
4.5 PO-CM’s with Unknown Parameters
4.6 Comments and References
5 Parameter Estimation in MCM’s
5.1 Introduction
5.2 Contrast Functions
5.3 Minimum Contrast Estimators
5.4 Comments and References
6 Discretization Procedures
6.1 Introduction
6.2 Preliminaries
6.3 The Non-Adaptive Case
6.4 Adaptive Control Problems
6.5 Proofs
6.6 Comments and References
Appendix A. Contraction Operators
Appendix B. Probability Measures
Total Variation Norm
Weak Convergence
Appendix C. Stochastic Kernels
Appendix D. Multifunctions and Measurable Selectors
The Hausdorff Metric
Multifunctions
References
Author Index