Abstract.
Markov decision processes (MDPs) are often used for modelling distributed systems with probabilistic failure or randomisation. We consider the problem of model repair for MDPs, defined as follows: if the MDP fails to satisfy a property, we aim to find new values for the transition probabilities so that the property is guaranteed to hold, while at the same time the cost of repair is minimised. Because solving the MDP repair problem exactly is infeasible, in this paper we focus on approximate solution methods. We first formulate a region-based approach, which yields an interval in which the minimal repair cost is contained. As an alternative, we also consider sampling-based approaches, which are faster but unable to provide lower bounds on the repair cost. We have integrated both methods into the probabilistic model checker PRISM and demonstrated their usefulness in practice using a computer virus case study.