[KKNP08c] M. Kattenbelt, M. Kwiatkowska, G. Norman and D. Parker. A Game-based Abstraction-Refinement Framework for Markov Decision Processes. Technical report RR-08-06, Oxford University Computing Laboratory. February 2008. [ps.gz] [pdf] [bib]
Downloads: ps.gz (313 KB) | pdf (417 KB) | bib
Notes: This report is superseded by the journal paper [KKNP10].
Abstract. In the field of model checking, abstraction refinement has proved to be an extremely successful methodology for combating the state-space explosion problem. However, little practical progress has been made in the setting of probabilistic verification. In this paper we present a novel abstraction-refinement framework for Markov decision processes (MDPs), which are widely used for modelling and verifying systems that exhibit both probabilistic and nondeterministic behaviour. Our framework comprises an abstraction approach based on stochastic two-player games, two refinement methods and an efficient algorithm for the abstraction-refinement loop. The key idea behind the abstraction approach is to maintain a separation between nondeterminism present in the original MDP and nondeterminism introduced during the abstraction process, each type being represented by a different player in the game. Crucially, this allows lower and upper bounds to be computed for the values of reachability properties of the MDP. These give a quantitative measure of the quality of the abstraction and form the basis of the corresponding refinement methods. We describe a prototype implementation of our framework and present experimental results demonstrating automatic generation of compact, yet precise, abstractions for a large selection of real-world case studies.
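The abstract's central idea, representing the two kinds of nondeterminism as separate players so that lower and upper bounds on reachability values can be computed, can be illustrated with a small interval value-iteration sketch over a stochastic two-player game. This is an illustrative reconstruction, not the paper's implementation: the game encoding, function names, and the convention that player 1 stands for abstraction-introduced nondeterminism and player 2 for the MDP's own nondeterminism are assumptions made for the example.

```python
def game_values(game, targets, eps=1e-9):
    """Compute lower/upper bounds on maximum reachability probability.

    game: state -> list of player-1 choices; each player-1 choice is a
    list of player-2 choices; each player-2 choice is a probability
    distribution given as a dict state -> probability.
    targets: set of target states.
    Returns (lower, upper) dicts mapping each state to its bound.
    """
    states = list(game)
    lo = {s: 1.0 if s in targets else 0.0 for s in states}
    hi = dict(lo)
    while True:
        delta = 0.0
        for s in states:
            if s in targets:
                continue
            # Lower bound: player 1 (abstraction nondeterminism) plays
            # against the objective, player 2 (MDP nondeterminism) for it.
            new_lo = min(max(sum(p * lo[t] for t, p in dist.items())
                             for dist in p2_choices)
                         for p2_choices in game[s])
            # Upper bound: both players cooperate toward the target.
            new_hi = max(max(sum(p * hi[t] for t, p in dist.items())
                             for dist in p2_choices)
                         for p2_choices in game[s])
            delta = max(delta, abs(new_lo - lo[s]), abs(new_hi - hi[s]))
            lo[s], hi[s] = new_lo, new_hi
        if delta < eps:
            return lo, hi

# Toy game: in s0 the abstraction lumps together two concrete behaviours,
# one reaching "goal" with probability 0.5, the other with 0.9.
game = {
    "s0": [[{"goal": 0.5, "sink": 0.5}], [{"goal": 0.9, "sink": 0.1}]],
    "goal": [[{"goal": 1.0}]],
    "sink": [[{"sink": 1.0}]],
}
lo, hi = game_values(game, {"goal"})
print(lo["s0"], hi["s0"])  # -> 0.5 0.9
```

The gap between the two bounds (here 0.9 − 0.5 = 0.4) is exactly the quantitative measure of abstraction quality mentioned in the abstract: a refinement step would split the abstract state `s0` to shrink that interval.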