
Markov Decision Processes With Their Applications [electronic resource] / by Qiying Hu, Wuyi Yue.

by Hu, Qiying [author.]; Yue, Wuyi [author.]; SpringerLink (Online service).
Material type: Book
Series: Advances in Mechanics and Mathematics: 14
Publisher: Boston, MA : Springer US, 2008.
Description: XVI, 298 p. online resource.
ISBN: 9780387369518.
Subject(s): Mathematics | Mathematical optimization | Operations research | Distribution (Probability theory) | Industrial engineering | Operations Research, Mathematical Programming | Probability Theory and Stochastic Processes | Calculus of Variations and Optimal Control; Optimization | Industrial and Production Engineering
DDC classification: 519.6
Online resources: Available online via SpringerLink.
Contents:
Discrete-Time Markov Decision Processes: Total Reward -- Discrete-Time Markov Decision Processes: Average Criterion -- Continuous-Time Markov Decision Processes -- Semi-Markov Decision Processes -- Markov Decision Processes in Semi-Markov Environments -- Optimal Control of Discrete Event Systems: I -- Optimal Control of Discrete Event Systems: II -- Optimal Replacement under Stochastic Environments -- Optimal Allocation in Sequential Online Auctions.
In: Springer eBooks.
Summary: Markov decision processes (MDPs), also called stochastic dynamic programming, were first studied in the 1960s. MDPs can be used to model and solve dynamic decision-making problems that are multi-period and occur in stochastic circumstances. There are three basic branches of MDPs: discrete-time MDPs, continuous-time MDPs, and semi-Markov decision processes. Starting from these three branches, many generalized MDP models have been applied to various practical problems. These models include partially observable MDPs, adaptive MDPs, MDPs in stochastic environments, and MDPs with multiple objectives, constraints, or imprecise parameters. Markov Decision Processes With Their Applications examines MDPs and their applications in the optimal control of discrete event systems (DESs), optimal replacement, and optimal allocation in sequential online auctions. The book presents four main topics used to study optimal control problems:
* a new methodology for MDPs with the discounted total reward criterion;
* the transformation of continuous-time MDPs and semi-Markov decision processes into a discrete-time MDP model, thereby simplifying the application of MDPs;
* MDPs in stochastic environments, which greatly extends the area where MDPs can be applied;
* applications of MDPs in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions.
This book is intended for researchers, mathematicians, advanced graduate students, and engineers interested in optimal control, operations research, communications, manufacturing, economics, and electronic commerce.
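As a quick illustration of the discounted total reward criterion mentioned in the summary, the following is a minimal sketch (not taken from the book) of value iteration for a discrete-time MDP; the two-state, two-action transition and reward data are made up purely for illustration.

    # Minimal sketch: value iteration for a finite discrete-time MDP under the
    # discounted total reward criterion. All numerical data below are hypothetical.
    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        """P[a] is the |S|x|S| transition matrix for action a; R[a] the reward vector."""
        V = np.zeros(P[0].shape[0])
        while True:
            # Bellman optimality update: V(s) = max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
            Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=0)  # optimal values and a stationary optimal policy
            V = V_new

    # Toy data: 2 states, 2 actions (hypothetical numbers).
    P = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # transitions under action 0
         np.array([[0.5, 0.5], [0.1, 0.9]])]   # transitions under action 1
    R = [np.array([1.0, 0.0]),                 # rewards under action 0
         np.array([0.5, 2.0])]                 # rewards under action 1

    values, policy = value_iteration(P, R)
    print(values, policy)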
Item type    Current location    Call number     Status       Date due    Barcode
                                 T57.6-57.97     Available
Long Loan    MAIN LIBRARY        QA402-402.37    Available

@ Jomo Kenyatta University Of Agriculture and Technology Library
