MLTL Benchmark Generation via Formula Progression
Date
2018-11-08
Authors
Li, Jianwen
Rozier, K.Y.
Publisher
Springer Nature Switzerland AG
Abstract
Safe cyber-physical system operation requires runtime verification (RV), yet the burgeoning collection of RV technologies remains comparatively untested due to a dearth of benchmarks with oracles enabling objective evaluation of their performance. Mission-time LTL (MLTL) adds integer temporal bounds to LTL to intuitively describe missions of such systems. An MLTL benchmark for runtime verification is a 3-tuple consisting of (1) an MLTL specification φ; (2) a set of finite input streams representing propositional system variables (call this computation π) over the alphabet of φ; (3) an oracle stream of ⟨v, t⟩ pairs, where verdict v is the result (true or false) for time t of evaluating whether π(t) ⊨ φ (computation π at time t satisfies formula φ). We introduce an algorithm for reliably generating MLTL benchmarks via formula progression. We prove its correctness, demonstrate that it executes efficiently, and show how to use it to generate a variety of useful patterns for the evaluation and comparative analysis of RV tools.
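To make the idea of an oracle built by formula progression concrete, here is a minimal Python sketch. It is illustrative only: the tuple encoding of formulas, the helper names, and the simplified end-of-trace handling are assumptions for this sketch, not the authors' implementation or the paper's full finite-trace MLTL semantics.

```python
# Hypothetical MLTL formula encoding as nested tuples:
#   ('ap', name)        -- atomic proposition
#   ('not', f)          -- negation
#   ('and', f, g), ('or', f, g)
#   ('G', lb, ub, f)    -- Globally over interval [lb, ub]
#   ('F', lb, ub, f)    -- Finally  over interval [lb, ub]
# Python booleans True/False stand for resolved verdicts.

def _conj(a, b):
    """Conjunction with constant folding."""
    if a is False or b is False:
        return False
    if a is True:
        return b
    if b is True:
        return a
    return ('and', a, b)

def _disj(a, b):
    """Disjunction with constant folding."""
    if a is True or b is True:
        return True
    if a is False:
        return b
    if b is False:
        return a
    return ('or', a, b)

def progress(f, state):
    """Rewrite formula f against one state (a set of true propositions),
    returning the obligation that must hold on the rest of the trace."""
    if isinstance(f, bool):
        return f
    op = f[0]
    if op == 'ap':
        return f[1] in state
    if op == 'not':
        g = progress(f[1], state)
        return (not g) if isinstance(g, bool) else ('not', g)
    if op == 'and':
        return _conj(progress(f[1], state), progress(f[2], state))
    if op == 'or':
        return _disj(progress(f[1], state), progress(f[2], state))
    if op == 'G':
        lb, ub, g = f[1], f[2], f[3]
        if lb > 0:                      # interval not yet reached: shift it
            return ('G', lb - 1, ub - 1, g)
        rest = ('G', 0, ub - 1, g) if ub > 0 else True
        return _conj(progress(g, state), rest)
    if op == 'F':
        lb, ub, g = f[1], f[2], f[3]
        if lb > 0:
            return ('F', lb - 1, ub - 1, g)
        rest = ('F', 0, ub - 1, g) if ub > 0 else False
        return _disj(progress(g, state), rest)
    raise ValueError(f"unknown operator: {op}")

def oracle(phi, trace):
    """Yield ⟨verdict, t⟩ pairs: for each time t, progress phi over the
    suffix trace[t:] until it resolves to a Boolean. Obligations still
    unresolved at end of trace are skipped in this simplified sketch."""
    for t in range(len(trace)):
        f = phi
        for s in trace[t:]:
            f = progress(f, s)
            if isinstance(f, bool):
                break
        if isinstance(f, bool):
            yield (f, t)
```

For example, for φ = G[0,2] p over the trace {p}, {p}, {p}, {} this yields (true, 0) and then (false, 1), (false, 2), (false, 3), since the suffix starting at each later time hits a state where p is false within the bound.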
Type
Presentation
Comments
This is a post-peer-review, pre-copyedit version of a proceedings paper published as Li, J., Rozier, K.Y. (2018). MLTL Benchmark Generation via Formula Progression. In: Colombo, C., Leucker, M. (eds) Runtime Verification. RV 2018. Lecture Notes in Computer Science, vol 11237. Springer, Cham.
The final authenticated version is available online at DOI: 10.1007/978-3-030-03769-7_25.
Copyright 2018 Springer Nature Switzerland AG.
Posted with permission.