ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ECOOP 2024

The ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems.

ISSTA 2024 will feature two submission deadlines. You can choose to submit at either deadline; however, only papers submitted to the first deadline may receive a chance to submit a major revision of the initial submission to the second deadline. Papers submitted to the second deadline will be either accepted or rejected, i.e., there is no option for a major revision.

Accepted Papers

All of the papers below were accepted to the Technical Papers track.

  • AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
  • A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models
  • A Large-Scale Evaluation for Log Parsing Techniques: How Far Are We?
  • An Empirical Examination of Fuzzer Mutator Performance
  • An Empirical Study of Static Analysis Tools for Secure Code Review
  • An Empirical Study on Kubernetes Operator Bugs
  • An In-depth Study of Runtime Verification Overheads during Software Testing
  • API Misuse Detection via Probabilistic Graphical Model
  • Arfa: an Agile Regime-based Floating-point Optimization Approach for Rounding Errors
  • AsFuzzer: Differential Testing of Assemblers with Error-Driven Grammar Inference
  • Atlas: Automating Cross-Language Fuzzing on Android Closed-Source Libraries
  • AutoCodeRover: Autonomous Program Improvement
  • Automated Data Binding Vulnerability Detection for Java Web Frameworks via Nested Property Graph
  • Automated Deep Learning Optimization via DSL-Based Source Code Transformation
  • Automated Program Repair via Conversation: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT
  • Automating Zero-Shot Patch Porting for Hard Forks
  • Benchmarking Automated Program Repair: An Extensive Study on Both Real-World and Artificial Bugs
  • Better Not Together: Staged Solving for Context-Free Language Reachability
  • Beyond Pairwise Testing: Advancing 3-wise Combinatorial Interaction Testing for Highly Configurable Systems
  • BRAFAR: Bidirectional Refactoring, Alignment, Fault Localization, and Repair for Programming Assignments
  • Bridge and Hint: Extending Pre-trained Language Models for Long-Range Code
  • Bugs in Pods: Understanding Bugs in Container Runtime Systems
  • C2D2: Extracting Critical Changes for Real-World Bugs with Dependency-Sensitive Delta Debugging
  • Calico: Automated Knowledge Calibration and Diagnosis for Elevating AI Mastery in Code Tasks
  • Call Graph Soundness in Android Static Analysis
  • Can Graph Database Systems Correctly Handle Writing Operations? A Metamorphic Testing Approach with Graph-State Persistence Oracle
  • CEBin: A Cost-Effective Framework for Large-Scale Binary Code Similarity Detection
  • Certified Continual Learning for Neural Network Regression
  • Characterizing and Detecting Program Representation Faults of Static Analysis Frameworks via Two-Dimensional Testing
  • CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision
  • CoderUJB: An Executable and Unified Java Benchmark for Practical Programming Scenarios
  • CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature
  • Commit Artifact Preserving Build Prediction
  • CooTest: An Automated Testing Approach for V2X Communication Systems
  • CoSec: On-the-Fly Security Hardening of Code LLMs via Supervised Co-Decoding
  • CREF: An LLM-based Conversational Software Repair Framework for Programming Tutors
  • Dance of the ADS: Orchestrating Failures through Historically-Informed Scenario Fuzzing
  • DAppFL: Just-in-Time Fault Localization for Decentralized Applications in Web3
  • Datactive: Data Fault Localization for Object Detection Systems
  • DBStorm: Generating Various Effective Workloads for Testing Isolation Levels
  • DDGF: Dynamic Directed Greybox Fuzzing with Path Profiling
  • Decomposition of Deep Neural Networks into Modules via Mutation Analysis
  • Define-Use Guided Path Exploration for Better Forced Execution
  • DeFort: Automatic Detection and Analysis of Price Manipulation Attacks in DeFi Applications
  • DeLink: Source File Information Recovery in Binaries
  • Detecting Build Dependency Errors in Incremental Builds
  • DiaVio: LLM-Empowered Diagnosis of Safety Violations in ADS Simulation Testing
  • Distance-Aware Test Input Selection for Deep Neural Networks
  • DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
  • Domain Adaptation for Code Model-based Unit Test Case Generation
  • Efficient DNN-Powered Software with Fair Sparse Models
  • Empirical Study of Move Smart Contract Security: Introducing MoveScan for Enhanced Analysis
  • Enhancing Multi-Agent System Testing with Diversity-Guided Exploration and Adaptive Critical State Exploitation
  • Enhancing Robustness of Code Authorship Attribution through Expert Feature Knowledge
  • Enhancing ROS System Fuzzing through Callback Tracing
  • Equivalent Mutants in the Wild: Identifying and Efficiently Suppressing Equivalent Mutants for Java Programs
  • Evaluating the Effectiveness of Decompilers
  • Evaluating the Trustworthiness of Deep Neural Networks in Deployment - A Comparative Study (Replicability Study)
  • Exploration-Driven Reinforcement Learning for Avionic System Fault Detection (Experience Paper)
  • Face It Yourselves: An LLM-Based Two-Stage Strategy to Localize Configuration Errors via Logs
  • FastLog: An End-to-End Method to Efficiently Generate and Insert Logging Statements
  • FDI: Attack Neural Code Generation Systems through User Feedback Channel
  • Feedback-Directed Partial Execution
  • Feedback-Driven Automated Whole Bug Report Reproduction for Android Apps
  • Finding Cuts in Static Analysis Graphs to Debloat Software
  • Foliage: Nourishing Evolving Software by Characterizing and Clustering Field Bugs
  • Following the "Thread": Toward Finding Manipulatable Bottlenecks In Blockchain Clients
  • FortifyPatch: Towards Tamper-Resistant Live Patching in Linux-Based Hypervisor
  • FRIES: Fuzzing Rust Library Interactions via Efficient Ecosystem-Guided Target Generation
  • FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
  • FunRedisp: Reordering Function Dispatch in Smart Contract to Reduce Invocation Gas Fees
  • Fuzzing JavaScript Interpreters with Coverage-Guided Reinforcement Learning for LLM-based Mutation
  • Fuzzing MLIR Compiler Infrastructure via Operation Dependency Analysis
  • Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation
  • Guardian: A Runtime Framework for LLM-based UI Exploration
  • How Effective Are They? Exploring Large Language Model Based Fuzz Driver Generation
  • Identifying Smart Contract Security Issues in Code Snippets from Stack Overflow
  • Inconsistencies in TeX-produced Documents
  • Interoperability in Deep Learning: A User Survey and Failure Analysis of ONNX Model Converters
  • Interpretability based Neural Network Repair
  • Interprocedural Path Complexity Analysis
  • Isolation-Based Debugging for Neural Networks
  • Large Language Models can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-aware Prompts
  • Large Language Models for Equivalent Mutant Detection: How Far are We?
  • Learning to SAT-verifiably Check LTL Satisfiability via Differentiable Trace Checking
  • LENT-SSE: Leveraging Executed and Near Transactions for Speculative Symbolic Execution of Smart Contracts
  • LLM4Fin: Fully Automating LLM-Powered Test Case Generation for FinTech Software Acceptance Testing
  • Logos: Log Guided Fuzzing for Protocol Implementations
  • LPR: Large Language Models-Aided Program Reduction
  • Ma11y: A Mutation Framework for Web Accessibility Testing
  • Maltracker: A Fine-Grained NPM Malware Tracker Copiloted by LLM-Enhanced Dataset
  • MicroRes: Versatile Resilience Profiling in Microservices via Degradation Dissemination Indexing
  • Midas: Mining Profitable Exploits in On-Chain Smart Contracts via Feedback-Driven Fuzzing and Differential Analysis
  • Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
  • Multi-modal Learning for WebAssembly Reverse Engineering
  • NativeSummary: Summarizing Native Binary Code for Inter-language Static Analysis of Android Apps
  • NeuFair: Neural Network Fairness Repair with Dropout
  • Neurosymbolic Repair of Test Flakiness
  • One Size Does Not Fit All: Multi-Granularity Patch Generation for Better Automated Program Repair
  • One-to-One or One-to-Many? Suggesting Extract Class Refactoring Opportunities with Intra-class Dependency Hypergraph Neural Network
  • Oracle-guided Program Selection from Large Language Models
  • PatchFinder: A Two-Phase Approach to Security Patch Tracing for Disclosed Vulnerabilities in Open-Source Software
  • Policy Testing with MDPFuzz (Replicability Study)
  • Practitioners’ Expectations on Automated Test Generation
  • Precise Compositional Buffer Overflow Detection via Heap Disjointness
  • Preserving Reactiveness: Understanding and Improving the Debugging Practice of Blocking-call Bugs
  • Prospector: Boosting Directed Greybox Fuzzing for Large-scale Target Sets with Iterative Prioritization
  • Reproducing Timing-dependent GUI Flaky Tests in Android Apps via A Single Event Delay
  • Revisiting Test-Case Prioritization on Long-Running Test Suites
  • Scalable, Sound and Accurate Jump Table Analysis
  • SCALE: Constructing Symbolic Comment Trees for Software Vulnerability Detection
  • See the Forest, not Trees: Unveiling and Escaping the Pitfalls of Error-Triggering Inputs in Neural Network Testing
  • Segment-based Test Case Prioritization: a Multi-objective Approach
  • SelfPiCo: Self-Guided Partial Code Execution with LLMs
  • Semantic Constraint Inference for Web Form Test Generation
  • Silent Taint-Style Vulnerability Fixes Identification
  • Sleuth: A Switchable Dual-Mode Fuzzer to Investigate Bug Impacts Following a Single PoC
  • SQLess: Dialect-Agnostic SQL Query Simplification
  • Synthesis-based Enhancement for GUI Test Case Migration
  • Synthesis of Sound and Precise Storage Cost Bounds via Unsound Resource Analysis and Max-SMT
  • Synthesizing Boxes Preconditions for Deep Neural Networks
  • Tacoma: Enhanced Browser Fuzzing with Fine-Grained Semantic Alignment
  • TeDA: A Testing Framework for Data Usage Auditing in Deep Learning Model Development
  • Testing Gremlin-Based Graph Database Systems via Query Disassembling
  • Test Selection for Deep Neural Networks using Meta-Models with Uncertainty Metrics
  • ThinkRepair: Self-Directed Automated Program Repair
  • Total Recall? How Good Are Static Call Graphs Really?
  • Towards Automatic Oracle Prediction for AR testing: Assessing Virtual Object Placement Quality under Real-world Scenes
  • Towards More Complete Constraints for Deep Learning Library Testing via Complementary Set Guided Refinement
  • Towards Understanding the Bugs in Solidity Compiler
  • Toward the Automated Localization of Buggy Mobile App UIs from Bug Descriptions
  • Traceback: A Fault Localization Technique for Molecular Programs
  • Uncovering and Mitigating the Impact of Code Obfuscation on Dataset Annotation with Antivirus Engines
  • Understanding Misconfigurations in ROS: An Empirical Study and Current Approaches
  • Unimocg: Modular Call-Graph Algorithms for Consistent Handling of Language Features
  • UniTSyn: A Large-Scale Dataset Capable of Enhancing the Prowess of Large Language Models for Program Testing
  • UPBEAT: Test Input Checks of Q# Quantum Libraries
  • VioHawk: Detecting Traffic Violations of Autonomous Driving Systems through Criticality-guided Simulation Testing
  • VRDSynth: Synthesizing Programs for Multilingual Visually Rich Document Information Extraction
  • Wapplique: Testing WebAssembly Runtime via Execution Context-aware Bytecode Mutation
  • WASMaker: Differential Testing of WebAssembly Runtimes via Semantic-aware Binary Generation
  • When to Stop? Towards Efficient Code Generation in LLMs with Excess Token Prevention
  • Your "Notice" is Missing: Detecting and Fixing Violations of Modification Terms in Open Source Licenses during Forking

Call for Papers

ISSTA invites three kinds of submissions. Most submissions are expected to be “Research Papers”, but submissions that best fit the description of “Experience Papers” or “Replicability Studies” should be submitted as such.

Research Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, infrastructures of testing and analysis, or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Replicability Studies

ISSTA would like to encourage researchers to replicate results from previous papers. A replicability study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. Replicability studies are particularly encouraged to target techniques that were previously evaluated only on proprietary subject programs or inputs. A replicability study should clearly report on results that the authors were able to replicate as well as on aspects of the work that were not replicable. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause of any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors not to focus on a single paper/artifact only, but instead to perform a comparative experiment across multiple related approaches.

In particular, replicability studies should follow the ACM guidelines on replicability (different team, different experimental setup): The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently. This means that it is also insufficient to focus on reproducibility (i.e., different team, same experimental setup) alone. Replicability Studies will be evaluated according to the following standards:

  • Depth and breadth of experiments
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts

We expect replicability studies to clearly point out the artifacts the study is built on, and to submit those artifacts to the artifact evaluation. Artifacts evaluated positively will be eligible to obtain the prestigious Results Reproduced badge.

Two Submission Deadlines and Major Revisions

ISSTA 2024 features two submission deadlines. The instructions in this call apply to both deadlines. You can choose to submit at either deadline. Only papers submitted to the first deadline may receive a chance to submit a major revision of the initial submission to the second deadline. Papers submitted to the second deadline will be either accepted or rejected, i.e., there is no option for a major revision.

Papers that are rejected during the first round may not be resubmitted to the second round. Authors who try to bypass this rule (e.g., by changing the paper title without significantly changing paper content, or by making small changes to the paper content) will have their papers desk-rejected without further consideration. Papers rejected from the first or second submission round can, of course, be submitted to ISSTA 2025 without any restrictions.

Submission Guidelines

Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for ISSTA. By submitting an article to an ACM Publication, authors acknowledge that all co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of the paper, in addition to other potential penalties, as per ACM Publications Policy.

Research Papers, Experience Papers, and Replicability Studies should be at most 10 pages in length, with at most 2 additional pages for references only. The page limit is strict, i.e., papers that take more than 10 pages for anything apart from references (including any section, figure, text, or appendix) will be desk-rejected. Experience papers and replicability studies should clearly specify their category in the paper title upon submission, e.g., “XXX (Experience Paper)”. All authors should use the official “ACM Master article template”, which can be obtained from the ACM Proceedings Template pages. LaTeX users should use the “sigconf” option, as well as the “review” (to produce line numbers for easy reference by the reviewers) and “anonymous” (to omit author names) options. To that end, the following LaTeX code can be placed at the start of the LaTeX document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2024]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{16-20 September, 2024}{Vienna, Austria}
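
For orientation, a minimal document skeleton built around these two lines might look as follows; the title, author placeholder, section, and bibliography file name are illustrative only, not prescribed by the call:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2024]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{16-20 September, 2024}{Vienna, Austria}

\begin{document}
\title{Your Paper Title (Experience Paper)} % category suffix only if applicable
\author{Anonymous Author(s)} % real names may be kept; the "anonymous" option suppresses them in the PDF
\begin{abstract}
Abstract text.
\end{abstract}
\maketitle

\section{Introduction}
Body text.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % illustrative .bib file name
\end{document}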

Submit your papers via the HotCRP ISSTA 2024 submission website.

Each submission will be reviewed by at least three members of the program committee. Authors will have an opportunity to respond to reviews during a rebuttal period. Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, appropriate comparison to related work, and verifiability/transparency of the work. Some papers may have more than three reviews, as the PC chair may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. In that case, the authors will have a chance to read the additional reviews and respond to them during a further short response period. The program committee as a whole will make final decisions about which submissions to accept for presentation at the conference.

Double-blind Reviewing

ISSTA 2024 will conduct double-blind reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any citations to related work by themselves are written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.

Authors have the right to upload preprints on arXiv or similar sites, but they must avoid specifying that the paper was submitted to ISSTA.

Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Open Science Policy and “Data Availability” Section

ISSTA has adopted an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles, and encourage all contributing authors to disclose data to increase reproducibility and replicability.

Upon submission, authors are asked to make their code, data, etc. available to the program committee, or to comment on why this is not possible or desirable. Data must be shared in an anonymized way (e.g., no information on authors/affiliations in the code) via a site that reveals neither the authors’ nor the reviewers’ identities (e.g., not via Google Drive). At least one of the reviewers will check the provided data. While sharing the data is not mandatory for submission or acceptance, it will inform the program committee’s decision. Furthermore, we ask authors to provide a supporting statement on the data availability (or lack thereof) in their submitted papers in a section named “Data Availability” after the Conclusion section.
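
As a sketch only (the wording and the placeholder reference are illustrative, not mandated by the call), such a section might look like:

\section{Data Availability}
Our tool, benchmark subjects, and all experimental data are available in an
anonymized repository (URL omitted here as a placeholder); we plan to archive
the artifacts permanently upon acceptance.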

Publication Date

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Questions? Use the ISSTA Technical Papers contact form.