
dc.contributor.author: Saldı, Naci
dc.date.accessioned: 2020-07-07T09:13:12Z
dc.date.available: 2020-07-07T09:13:12Z
dc.date.issued: 2019-07
dc.identifier.issn: 0018-9286 [en_US]
dc.identifier.uri: http://hdl.handle.net/10679/6720
dc.identifier.uri: https://ieeexplore.ieee.org/document/8600330
dc.description.abstract: In this paper, we consider the finite-state approximation of a discrete-time constrained Markov decision process (MDP) under the discounted and average cost criteria. Using the linear programming formulation of the constrained discounted cost problem, we prove the asymptotic convergence of the optimal value of the finite-state model to the optimal value of the original model. Under a further continuity condition on the transition probability, we also establish a method to compute approximately optimal policies. For the average cost, instead of using the finite-state linear programming approximation method, we use the original problem definition to establish the finite-state asymptotic approximation of the constrained problem and to compute approximately optimal policies. Under Lipschitz-type regularity conditions on the components of the MDP, we also obtain explicit rate-of-convergence bounds quantifying how the approximation improves as the size of the approximating finite state space increases. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.relation.ispartof: IEEE Transactions on Automatic Control
dc.rights: restrictedAccess
dc.title: Finite-state approximations to discounted and average cost constrained Markov decision processes [en_US]
dc.type: Article [en_US]
dc.peerreviewed: yes [en_US]
dc.publicationstatus: Published [en_US]
dc.contributor.department: Özyeğin University
dc.contributor.authorID: (ORCID 0000-0002-2677-7366 & YÖK ID 283091) Saldı, Naci
dc.contributor.ozuauthor: Saldı, Naci
dc.identifier.volume: 64 [en_US]
dc.identifier.issue: 7 [en_US]
dc.identifier.startpage: 2681 [en_US]
dc.identifier.endpage: 2696 [en_US]
dc.identifier.wos: WOS:000473489700003
dc.identifier.doi: 10.1109/TAC.2018.2890756 [en_US]
dc.subject.keywords: Constrained Markov decision processes (MDPs) [en_US]
dc.subject.keywords: Finite-state approximation [en_US]
dc.subject.keywords: Quantization [en_US]
dc.subject.keywords: Stochastic control [en_US]
dc.identifier.scopus: SCOPUS:2-s2.0-85068177189
dc.contributor.authorMale: 1
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Academic Staff
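The abstract describes quantizing a continuous state space into a finite grid and solving the resulting constrained discounted-cost MDP through its linear programming (occupation-measure) formulation. The sketch below is not the paper's code: it builds an illustrative toy model (state space [0, 1], deterministic dynamics, quadratic running cost, an action-dependent constraint cost with bound theta, all invented for illustration) and solves the quantized constrained LP with `scipy.optimize.linprog`.

```python
# Sketch only: finite-state quantization of a continuous-state constrained MDP,
# solved via the occupation-measure LP for the discounted criterion.
# All model choices (dynamics, costs, theta, beta) are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog


def solve_quantized_cmdp(n_states, beta=0.9, theta=2.0):
    # Quantize the state space [0, 1] into n_states grid points.
    grid = np.linspace(0.0, 1.0, n_states)
    actions = np.array([0.0, 0.5, 1.0])
    nA = len(actions)

    # Toy deterministic dynamics x' = (x + a) / 2, snapped to the nearest
    # grid point -- this snapping is the finite-state approximation step.
    P = np.zeros((n_states, nA, n_states))          # P[x, a, y]
    for i, x in enumerate(grid):
        for j, a in enumerate(actions):
            y = int(np.argmin(np.abs(grid - (x + a) / 2.0)))
            P[i, j, y] = 1.0

    # Running cost c(x, a) and constraint cost d(x, a), flattened so that
    # variable index k = i * nA + j corresponds to rho(x_i, a_j).
    c = ((grid[:, None] - 0.5) ** 2 + 0.0 * actions[None, :]).ravel()
    d = (0.0 * grid[:, None] + actions[None, :]).ravel()

    # Occupation-measure LP: minimize c . rho subject to
    #   sum_a rho(y, a) - beta * sum_{x,a} P(y|x,a) rho(x, a) = mu(y),
    #   d . rho <= theta,  rho >= 0,
    # with a uniform initial distribution mu. Summing the equality rows
    # shows the total mass of rho equals 1 / (1 - beta).
    mu = np.full(n_states, 1.0 / n_states)
    A_eq = np.zeros((n_states, n_states * nA))
    for y in range(n_states):
        for i in range(n_states):
            for j in range(nA):
                k = i * nA + j
                A_eq[y, k] = (1.0 if i == y else 0.0) - beta * P[i, j, y]

    return linprog(c, A_ub=d[None, :], b_ub=[theta],
                   A_eq=A_eq, b_eq=mu, bounds=(0, None), method="highs")


res = solve_quantized_cmdp(20)
```

Re-running with increasing `n_states` mimics the paper's asymptotic result: the optimal value of the finite-state model approaches that of the original model as the grid is refined.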


Files in this item

There are no files associated with this item.



