[BBC+14] T. Brázdil, V. Brožek, K. Chatterjee, V. Forejt and A. Kučera. Markov Decision Processes with Multiple Long-run Average Objectives. LMCS, 10(1), 2014. http://www.lmcs-online.org/ojs/viewarticle.php?id=1109&layout=abstract
Notes: This is a journal version of [BBC+11]. There is a related tool paper [BCFK15].
Abstract. We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e. randomized finite-memory strategies are not sufficient), whereas memoryless randomized strategies are sufficient for epsilon-approximation, for all epsilon>0. We further prove that the decision problems for both expectation and satisfaction objectives can be solved in polynomial time and the trade-off curve (Pareto curve) can be epsilon-approximated in time polynomial in the size of the MDP and 1/epsilon, and exponential in the number of limit-average functions, for all epsilon>0. Our analysis also reveals flaws in previous work for MDPs with multiple mean-payoff functions under the expectation objective, corrects the flaws, and allows us to obtain improved results.
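A minimal formal sketch of the two objectives described in the abstract, using assumed notation (the symbols lr_i, sigma and v are illustrative and not taken verbatim from the paper): for a run omega = s_0 a_0 s_1 a_1 ... of an MDP with reward functions r_1, ..., r_k,

% limit-average (mean-payoff) value of run \omega under the i-th reward function
\[ \mathrm{lr}_i(\omega) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} r_i(a_t) \]
% expectation objective: choose a strategy \sigma maximising the vector of expectations
\[ \bigl( \mathbb{E}^{\sigma}[\mathrm{lr}_1], \ldots, \mathbb{E}^{\sigma}[\mathrm{lr}_k] \bigr) \]
% satisfaction objective: given a threshold vector v = (v_1, ..., v_k), maximise the
% probability that every limit-average value meets its threshold
\[ \mathbb{P}^{\sigma}\bigl( \{\, \omega : \mathrm{lr}_i(\omega) \ge v_i \text{ for all } i \,\} \bigr) \]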
