Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints


Authors

KŘETÍNSKÝ Jan, PÉREZ Guillermo, RASKIN Jean-François

Year of publication 2018
Type Article in Proceedings
Conference 29th International Conference on Concurrency Theory (CONCUR 2018)
MU Faculty or unit

Faculty of Informatics

DOI http://dx.doi.org/10.4230/LIPIcs.CONCUR.2018.8
Keywords Learning; Mean-Payoff; Markov Decision Process; Omega-Regular Specification
Description We formalize the problem of maximizing the mean-payoff value with high probability while satisfying a parity objective in a Markov decision process (MDP) with an unknown probabilistic transition function and an unknown reward function. Assuming that the support of the transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved, depending on how much memory one is willing to use. (i) For all epsilon and gamma, we can construct an online-learning finite-memory strategy that almost-surely satisfies the parity objective and achieves an epsilon-optimal mean payoff with probability at least 1 - gamma. (ii) Alternatively, for all epsilon and gamma, there exists an online-learning infinite-memory strategy that satisfies the parity objective surely and achieves an epsilon-optimal mean payoff with probability at least 1 - gamma. We extend these results to MDPs consisting of more than one end component in a natural way. Finally, we show that the above guarantees are tight, i.e., there are MDPs for which stronger combinations of the guarantees cannot be ensured.
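The paper itself is theoretical, but the setting it formalizes, namely learning the unknown transitions and rewards from experience and then acting near-optimally for the mean payoff, can be illustrated concretely. The Python sketch below is purely hypothetical and is not the authors' algorithm: it simulates a toy single-end-component MDP, estimates the model from uniform exploration (mirroring the assumption that the support of the transition function is known), and then runs relative value iteration for the average-reward criterion on the learned model. It deliberately omits the parity objective and the formal epsilon/gamma guarantees that are the paper's actual contribution.

```python
import random
from collections import defaultdict

# Hypothetical toy MDP (2 states, 2 actions). The learner is NOT given
# TRUE_P / TRUE_R; it only observes sampled successors and rewards.
TRUE_P = {  # (state, action) -> {successor: probability}
    (0, 'a'): {0: 0.9, 1: 0.1},
    (0, 'b'): {0: 0.2, 1: 0.8},
    (1, 'a'): {0: 0.5, 1: 0.5},
    (1, 'b'): {1: 1.0},
}
TRUE_R = {(0, 'a'): 1.0, (0, 'b'): 0.0, (1, 'a'): 0.0, (1, 'b'): 2.0}
STATES, ACTIONS = (0, 1), ('a', 'b')

def step(s, a):
    """Sample a successor and reward from the (hidden) true model."""
    succ = TRUE_P[(s, a)]
    s2 = random.choices(list(succ), weights=list(succ.values()))[0]
    return s2, TRUE_R[(s, a)]

# Phase 1: explore with uniformly random actions and build empirical
# estimates of the transition probabilities and rewards.
counts = defaultdict(lambda: defaultdict(int))
est_R = {}
s = 0
for _ in range(20_000):
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    counts[(s, a)][s2] += 1
    est_R[(s, a)] = r  # rewards are deterministic in this toy model
    s = s2
est_P = {sa: {t: n / sum(c.values()) for t, n in c.items()}
         for sa, c in counts.items()}

# Phase 2: relative value iteration for the average-reward criterion on
# the LEARNED model, yielding a memoryless policy whose mean payoff is
# near-optimal when the estimates are accurate enough.
h = {s: 0.0 for s in STATES}
gain = 0.0
for _ in range(1_000):
    q = {s: max(est_R[(s, a)]
                + sum(p * h[t] for t, p in est_P[(s, a)].items())
                for a in ACTIONS)
         for s in STATES}
    gain = q[0]                      # reference state keeps h bounded
    h = {s: q[s] - gain for s in STATES}

policy = {s: max(ACTIONS,
                 key=lambda a: est_R[(s, a)]
                 + sum(p * h[t] for t, p in est_P[(s, a)].items()))
          for s in STATES}
print("learned policy:", policy, "estimated mean payoff:", round(gain, 3))
```

On this toy model the learned policy should pick action b in both states, for a mean payoff close to the true optimum of 2. The paper's results quantify, in the general setting, how such learned behavior can be made epsilon-optimal with probability at least 1 - gamma while never (almost-surely, or surely with infinite memory) violating the parity constraint.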