Large Language Models for Equivalent Mutant Detection: How Far Are We?
ACM SIGSOFT Distinguished Paper Award
Mutation testing is vital for ensuring software quality. However, the presence of equivalent mutants is known to introduce redundant cost and bias issues, hindering the effectiveness of mutation testing in practical use. Although numerous equivalent mutant detection (EMD) techniques have been proposed, they exhibit limitations due to the scarcity of training data and challenges in generalizing to unseen mutants. Recently, large language models (LLMs) have been extensively adopted in various code-related tasks and have shown superior performance by more accurately capturing program semantics. Yet the performance of LLMs in equivalent mutant detection remains largely unclear. In this paper, we conduct an empirical study on 3,302 method-level Java mutant pairs to comprehensively investigate the effectiveness and efficiency of LLMs for equivalent mutant detection. Specifically, we assess the performance of LLMs compared to existing EMD techniques, examine the various strategies of LLMs, evaluate the orthogonality between EMD techniques, and measure the time overhead of training and inference. Our findings demonstrate that LLM-based techniques significantly outperform existing techniques (i.e., an average improvement of 35.69% in terms of F1-score), with the fine-tuned code embedding strategy being the most effective. Moreover, LLM-based techniques offer an excellent balance between cost (relatively low training and inference time) and effectiveness. Based on our findings, we further discuss the impact of model size and embedding quality, and provide several promising directions for future research. This work is the first to examine LLMs in equivalent mutant detection, affirming their effectiveness and efficiency.
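For readers unfamiliar with the problem the abstract describes, the following is a minimal illustrative sketch (a hypothetical example, not taken from the paper) of an equivalent mutant in Java: a mutation that changes the source code but not the program's behavior, so no test can ever "kill" it.

```java
// Hypothetical example of an equivalent mutant (not from the paper).
public class EquivalentMutantDemo {

    // Original method: index of the first negative element, or -1 if none.
    static int firstNegative(int[] xs) {
        for (int i = 0; i < xs.length; i++) {
            if (xs[i] < 0) {
                return i;
            }
        }
        return -1;
    }

    // Mutant: the loop guard `i < xs.length` was mutated to `i != xs.length`.
    // Because `i` starts at 0 and only increments by 1, both guards stop the
    // loop at exactly i == xs.length, so the mutant is behaviorally identical
    // to the original: an equivalent mutant that no test input can kill.
    static int firstNegativeMutant(int[] xs) {
        for (int i = 0; i != xs.length; i++) {
            if (xs[i] < 0) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[][] inputs = { {}, {1, 2, 3}, {1, -2, 3}, {-5} };
        for (int[] xs : inputs) {
            // Original and mutant agree on every input.
            assert firstNegative(xs) == firstNegativeMutant(xs);
        }
        System.out.println("mutant survived on all inputs");
    }
}
```

Detecting that the mutant above is equivalent requires reasoning about program semantics (the loop invariant on `i`), which is why the paper evaluates whether LLMs, rather than syntactic heuristics, can perform this classification.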
Wed 18 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 11:50 | Code Mutation and Reduction | Technical Papers at EI 10 Fritz Paschke | Chair(s): Andreas Zeller (CISPA Helmholtz Center for Information Security)
10:30 (20m) Talk | Large Language Models for Equivalent Mutant Detection: How Far Are We? (ACM SIGSOFT Distinguished Paper Award) | Technical Papers | Zhao Tian (Tianjin University), Honglin Shu (Kyushu University), Dong Wang (Tianjin University), Xuejie Cao (Tianjin University), Yasutaka Kamei (Kyushu University), Junjie Chen (Tianjin University) | DOI, Pre-print
10:50 (20m) Talk | An Empirical Examination of Fuzzer Mutator Performance | Technical Papers | James Kukucka (George Mason University), Luís Pina (University of Illinois at Chicago), Paul Ammann (George Mason University), Jonathan Bell (Northeastern University) | DOI
11:10 (20m) Talk | Equivalent Mutants in the Wild: Identifying and Efficiently Suppressing Equivalent Mutants for Java Programs | Technical Papers | Benjamin Kushigian, Samuel Kaufman, Ryan Featherman, Hannah Potter, Ardi Madadi, René Just (all University of Washington) | DOI
11:30 (20m) Talk | LPR: Large Language Models-Aided Program Reduction | Technical Papers | Mengxiao Zhang (University of Waterloo), Yongqiang Tian (Hong Kong University of Science and Technology), Zhenyang Xu (University of Waterloo), Yiwen Dong (University of Waterloo), Shin Hwei Tan (Concordia University), Chengnian Sun (University of Waterloo) | DOI