An In-Depth Study of Runtime Verification Overheads during Software Testing
ACM SIGSOFT Distinguished Paper Award
Runtime verification (RV) monitors program executions against formal specifications (specs). Researchers have shown that RV during software testing amplifies the bug-finding ability of tests, and they have found hundreds of new bugs by using RV to monitor passing tests in open-source projects. However, RV's runtime overhead is widely seen as a hindrance to its broad adoption, especially during continuous integration. Yet there is no in-depth study of the prevalence, usefulness for bug finding, and components of these overheads during testing that would help researchers understand how to speed up RV.
We study RV overhead during testing, monitoring developer-written unit tests in 1,544 open-source projects against 160 specs of correct JDK API usage. We make four main findings. (1) RV overhead is below 12.48 seconds, which others considered acceptable, in 40.9% of projects, but up to 5,002.9x (or, 28.7 hours) in the other projects. (2) 99.87% of monitors that RV generates to dynamically check program traces are wasted; they can only find bugs that the other 0.13% find. (3) Contrary to conventional wisdom, RV overhead in most projects is dominated by instrumentation, not monitoring. (4) 36.74% of monitoring time is spent in test code or libraries.
As evidence that our study provides a new basis that future work can exploit, we perform two more experiments. First, we show that offline instrumentation (when possible) greatly reduces RV runtime overhead for single versions of many projects. Second, we show that simply amortizing high instrumentation costs across multiple program versions can outperform, by up to 4.53x, a recent evolution-aware RV technique that uses complex program analysis.
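For context, a typical spec of correct JDK API usage that such monitors check is that `Iterator.next()` should be preceded by a `hasNext()` call that returned true. The sketch below is a hand-written illustration of the kind of monitor an RV tool generates; it is not the paper's tooling (tools like JavaMOP weave such checks in via bytecode instrumentation, which is exactly the cost component the study measures), and the class and method names are made up for this example.

```java
import java.util.Iterator;

// Illustrative sketch (not the paper's tooling): a monitor for the
// "hasNext() before next()" Iterator spec. RV tools generate one such
// monitor per tracked object, which is why the paper can count how many
// of the generated monitors ever contribute to finding a bug.
class HasNextMonitor<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private boolean hasNextObserved = false; // spec state: was hasNext() called?
    private int violations = 0;              // spec violations seen so far

    HasNextMonitor(Iterator<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext() {
        hasNextObserved = true;
        return delegate.hasNext();
    }

    @Override
    public T next() {
        if (!hasNextObserved) {
            violations++; // next() without a preceding hasNext(): spec violated
        }
        hasNextObserved = false; // each next() consumes the observation
        return delegate.next();
    }

    int violations() {
        return violations;
    }
}
```

In a real RV setup the monitor is not a wrapper the developer writes; the instrumentation step rewrites client code (or the JDK classes) so that every `Iterator` event reaches a generated monitor like this one, which is why instrumentation itself can dominate the overhead.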
Wed 18 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
15:30 - 16:50 | Empirical Studies (Technical Papers) at EI 10 Fritz Paschke | Chair(s): Cristian Cadar (Imperial College London)

15:30 (20m, Talk) | Bugs in Pods: Understanding Bugs in Container Runtime Systems | Technical Papers | Jiongchi Yu (Singapore Management University), Xiaofei Xie (Singapore Management University), Cen Zhang (Nanyang Technological University), Sen Chen (Tianjin University), Yuekang Li (UNSW), Wenbo Shen (Zhejiang University) | DOI

15:50 (20m, Talk) | An Empirical Study on Kubernetes Operator Bugs | Technical Papers | Qingxin Xu (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Yu Gao (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Jun Wei (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences) | DOI

16:10 (20m, Talk) | Understanding Misconfigurations in ROS: An Empirical Study and Current Approaches | Technical Papers | Paulo Canelas (Carnegie Mellon University), Bradley Schmerl (Carnegie Mellon University), Alcides Fonseca (LASIGE; University of Lisbon), Christopher Steven Timperley (Carnegie Mellon University) | DOI, Pre-print, Media Attached

16:30 (20m, Talk) | An In-Depth Study of Runtime Verification Overheads during Software Testing (ACM SIGSOFT Distinguished Paper Award) | Technical Papers | DOI