Publication: Finite-action approximation of Markov decision processes

Type

bookPart

Access

restrictedAccess

Publication Status

Published

Abstract

In this chapter, we study the finite-action approximation of optimal control policies for discrete-time Markov decision processes (MDPs) with Borel state and action spaces, under the discounted and average cost criteria. One main motivation for this problem comes from optimal information transmission in networked control systems. In many networked control applications, perfect transmission of the control actions to an actuator is infeasible when the controller and the actuator are connected by a communication channel of finite capacity. Hence, the controller's actions must be discretized (quantized) to enable reliable transmission. Although the problem of optimal information transmission from a plant/sensor to a controller has been studied extensively (see, e.g., [148] and references therein), much less is known about transmitting actions from a controller to an actuator. Such transmission schemes usually require a simple encoding/decoding rule, since the actuator typically lacks the controller's computational capability and cannot run complex algorithms. For this reason, time-invariant scalar quantization is a practically useful encoding method for controller-actuator communication.
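
As a rough, self-contained illustration of finite-action approximation (this is not the chapter's construction; the dynamics, cost, grids, and discount factor below are all assumed for the example), one can replace a continuous action interval with a uniform quantization grid and solve the resulting finite-action MDP by discounted value iteration:

```python
# Illustrative sketch: finite-action approximation of a simple
# discounted-cost MDP. All model choices (dynamics, cost, grids,
# discount factor) are hypothetical and only meant to show how a
# continuous action set is replaced by a finite quantized one.
import numpy as np

beta = 0.9                                    # discount factor (assumed)
x_grid = np.linspace(-2.0, 2.0, 81)           # discretized state space
u_grid = np.linspace(-1.0, 1.0, 9)            # finite (quantized) action set
noise = np.array([-0.1, 0.0, 0.1])            # finite-support disturbance
noise_p = np.array([0.25, 0.5, 0.25])

def nearest_idx(values, grid):
    """Map values onto the closest grid points (simple quantization)."""
    return np.abs(values[:, None] - grid[None, :]).argmin(axis=1)

# Precompute stage costs c(x, u) = x^2 + u^2 and transition indices
# for the assumed dynamics x' = 0.9*x + u + w, truncated to the grid.
nx, nu = len(x_grid), len(u_grid)
cost = x_grid[:, None] ** 2 + u_grid[None, :] ** 2
next_idx = np.empty((nx, nu, len(noise)), dtype=int)
for j, u in enumerate(u_grid):
    for k, w in enumerate(noise):
        x_next = np.clip(0.9 * x_grid + u + w, x_grid[0], x_grid[-1])
        next_idx[:, j, k] = nearest_idx(x_next, x_grid)

# Value iteration over the finite action set.
V = np.zeros(nx)
for _ in range(500):
    EV = (V[next_idx] * noise_p).sum(axis=2)  # E[V(x')] for each (x, u)
    Q = cost + beta * EV
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = u_grid[Q.argmin(axis=1)]             # quantized (finite-action) policy
print("approximate value at x = 0:", V[nearest_idx(np.array([0.0]), x_grid)[0]])
```

Refining the action grid tightens the approximation of the original continuous-action problem; this kind of quantized finite-action model is the object whose approximation quality the chapter studies.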

Date

2018

Publisher

Birkhäuser Basel
