Adaptive Aggregation of Markov Chains: Quantitative Analysis of Chemical Reaction Networks

Authors

ABATE Alessandro, ČEŠKA Milan, BRIM Luboš, KWIATKOWSKA Marta

Year of publication 2015
Type Article in Proceedings
Conference Computer Aided Verification: 27th International Conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015, Proceedings
MU Faculty or unit

Faculty of Informatics

Citation
Web http://link.springer.com/chapter/10.1007%2F978-3-319-21690-4_12#
DOI http://dx.doi.org/10.1007/978-3-319-21690-4_12
Field Informatics
Keywords continuous-time Markov chains; parameter exploration; model checking
Description Quantitative analysis of Markov models typically proceeds through numerical methods or simulation-based evaluation. Since the state space of the models can often be large, exact or approximate state aggregation methods (such as lumping or bisimulation reduction) have been proposed to improve the scalability of the numerical schemes. However, none of the existing numerical techniques provides general, explicit bounds on the approximation error, a problem particularly relevant when the level of accuracy affects the soundness of verification results. We propose a novel numerical approach that combines the strengths of aggregation techniques (state-space reduction) with those of simulation-based approaches (automatic updates that adapt to the process dynamics). The key advantage of our scheme is that it provides rigorous precision guarantees under different measures. The new approach, which can be used in conjunction with time uniformisation techniques, is evaluated on two models of chemical reaction networks, a signalling pathway and a prokaryotic gene expression network: it demonstrates marked improvement in accuracy without performance degradation, particularly when compared to known state-space truncation techniques.
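Illustration (not from the publication): the description mentions that the approach can be combined with time uniformisation. The following minimal Python sketch shows only the standard uniformisation baseline for computing a CTMC's transient distribution, on top of which an adaptive aggregation scheme would cluster states on the fly; the three-state generator, rates, and function name are made-up toy values for illustration.

    # Minimal sketch of standard uniformisation for CTMC transient analysis.
    # This is NOT the authors' adaptive aggregation method, only the baseline
    # numerical scheme the description refers to. All model data are toy values.
    import numpy as np
    from scipy.stats import poisson

    def uniformised_transient(Q, p0, t, eps=1e-8):
        """Approximate p0 * exp(Q t) via uniformisation.

        Q   : CTMC generator matrix (rows sum to zero)
        p0  : initial probability distribution (row vector)
        t   : time horizon
        eps : probability mass allowed in the truncated Poisson tail
        """
        n = Q.shape[0]
        lam = max(-np.diag(Q))              # uniformisation rate
        P = np.eye(n) + Q / lam             # uniformised DTMC (stochastic matrix)
        # Truncate the Poisson(lam*t) series once at most eps mass remains.
        k_max = int(poisson.ppf(1.0 - eps, lam * t)) + 1
        weights = poisson.pmf(np.arange(k_max + 1), lam * t)
        pk = p0.copy()
        result = weights[0] * pk
        for k in range(1, k_max + 1):
            pk = pk @ P                     # distribution after k uniformised jumps
            result += weights[k] * pk
        return result

    # Toy 3-state chain standing in for a (tiny) chemical reaction network.
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  1.0, -1.0]])
    p0 = np.array([1.0, 0.0, 0.0])
    print(uniformised_transient(Q, p0, t=1.5))

The paper's contribution, per the description above, is to replace the exact state space in such a scheme with an adaptively re-aggregated one while keeping rigorous, explicit bounds on the resulting approximation error.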