ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ECOOP 2024

The ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems.

ISSTA 2024 will feature two submission deadlines. You can choose to submit at either deadline, but only papers submitted to the first deadline may be invited to submit a major revision of the initial submission to the second deadline. Papers submitted to the second deadline will be either accepted or rejected, i.e., there is no option for a major revision.

Accepted Papers

All of the following papers were accepted to the Technical Papers track:

  • A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models
  • A Large-Scale Evaluation for Log Parsing Techniques: How Far Are We?
  • API Misuse Detection via Probabilistic Graphical Model
  • Atlas: Automating Cross-Language Fuzzing on Android Closed-Source Libraries
  • Automated Deep Learning Optimization via DSL-Based Source Code Transformation
  • Automating Zero-Shot Patch Porting for Hard Forks
  • Benchmarking Automated Program Repair: An Extensive Study on Both Real-World and Artificial Bugs
  • Bridge and Hint: Extending Pre-trained Language Models for Long-Range Code
  • C2D2: Extracting Critical Changes for Real-World Bugs with Dependency-Sensitive Delta Debugging
  • CEBin: A Cost-Effective Framework for Large-Scale Binary Code Similarity Detection
  • CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision
  • CoderUJB: An Executable and Unified Java Benchmark for Practical Programming Scenarios
  • CoEdPilot: Recommending Code Edits with Learned Prior Edit Relevance, Project-wise Awareness, and Interactive Nature
  • DAppFL: Just-in-Time Fault Localization for Decentralized Applications in Web3
  • Define-Use Guided Path Exploration for Better Forced Execution
  • DeFort: Automatic Detection and Analysis of Price Manipulation Attacks in DeFi Applications
  • Detecting Build Dependency Errors in Incremental Builds
  • DiaVio: LLM-Empowered Diagnosis of Safety Violations in ADS Simulation Testing
  • Distance-Aware Test Input Selection for Deep Neural Networks
  • Enhancing Robustness of Code Authorship Attribution through Expert Feature Knowledge
  • Enhancing ROS System Fuzzing through Callback Tracing
  • Evaluating the Effectiveness of Decompilers
  • Face It Yourselves: An LLM-Based Two-Stage Strategy to Localize Configuration Errors via Logs
  • FastLog: An End-to-End Method to Efficiently Generate and Insert Logging Statements
  • FortifyPatch: Towards Tamper-Resistant Live Patching in Linux-Based Hypervisor
  • FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
  • FunRedisp: Reordering Function Dispatch in Smart Contract to Reduce Invocation Gas Fees
  • Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation
  • Interprocedural Path Complexity Analysis
  • Isolation-Based Debugging for Neural Networks
  • LPR: Large Language Models-Aided Program Reduction
  • Ma11y: A Mutation Framework for Web Accessibility Testing
  • MicroRes: Versatile Resilience Profiling in Microservices via Degradation Dissemination Indexing
  • Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
  • Multi-modal Learning for WebAssembly Reverse Engineering
  • Precise Compositional Buffer Overflow Detection via Heap Disjointness
  • SCALE: Constructing Symbolic Comment Trees for Software Vulnerability Detection
  • Silent Taint-Style Vulnerability Fixes Identification
  • Total Recall? How Good Are Static Call Graphs Really?
  • Traceback: A Fault Localization Technique for Molecular Programs
  • Unimocg: Modular Call-Graph Algorithms for Consistent Handling of Language Features
  • UPBEAT: Test Input Checks of Q# Quantum Libraries

Call for Papers

ISSTA invites three kinds of submissions. The majority of submissions are expected to be “Research Papers”, but submissions that best fit the description of “Experience Papers” or “Replicability Studies” should be submitted as such.

Research Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, testing and analysis infrastructure, or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Replicability Studies

ISSTA would like to encourage researchers to replicate results from previous papers. A replicability study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. At the very least, it should apply the approach to new, significantly broadened inputs. Replicability studies are particularly encouraged to target techniques that were previously evaluated only on proprietary subject programs or inputs. A replicability study should clearly report on the results that the authors were able to replicate, as well as on aspects of the work that were not replicable. In the latter case, authors are encouraged to communicate or collaborate with the original paper’s authors to determine the cause of any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors not to focus on a single paper/artifact, but instead to perform a comparative experiment of multiple related approaches.

In particular, replicability studies should follow the ACM guidelines on replicability (different team, different experimental setup): the measurement can be obtained with stated precision by a different team, using a different measuring system, in a different location, on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts that they develop completely independently. It is therefore insufficient to focus on reproducibility (i.e., different team, same experimental setup) alone. Replicability Studies will be evaluated according to the following standards:

  • Depth and breadth of experiments
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts

We expect replicability studies to clearly point out the artifacts the study is built on, and to submit those artifacts to the artifact evaluation. Artifacts evaluated positively will be eligible to obtain the prestigious Results Reproduced badge.

Two Submission Deadlines and Major Revisions

ISSTA 2024 features two submission deadlines. The instructions in this call apply to both deadlines. You can choose to submit at either deadline. Only papers submitted to the first deadline may receive a chance to submit a major revision of the initial submission to the second deadline. Papers submitted to the second deadline will be either accepted or rejected, i.e., there is no option for a major revision.

Papers that are rejected during the first round may not be resubmitted to the second round. Authors who try to bypass this rule (e.g., by changing the paper title without significantly changing paper content, or by making small changes to the paper content) will have their papers desk-rejected without further consideration. Papers rejected from the first or second submission round can, of course, be submitted to ISSTA 2025 without any restrictions.

Submission Guidelines

Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for ISSTA. By submitting an article to an ACM publication, authors acknowledge that all co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of the paper, in addition to other potential penalties, as per ACM Publications Policy.

Research Papers, Experience Papers, and Replicability Studies should be at most 10 pages in length, with at most 2 additional pages for references only. The page limit is strict: papers that use more than 10 pages for anything other than references (including any section, figure, text, or appendix) will be desk-rejected. Experience Papers and Replicability Studies should clearly specify their category in the paper title upon submission, e.g., “XXX (Experience Paper)”. All authors should use the official “ACM Master article template”, which can be obtained from the ACM Proceedings Template pages. LaTeX users should use the “sigconf” option, as well as the “review” option (to produce line numbers for easy reference by the reviewers) and the “anonymous” option (to omit author names). To that end, the following LaTeX code can be placed at the start of the document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2024]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{16-20 September, 2024}{Vienna, Austria}
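
For orientation, a minimal compilable skeleton built around these two lines might look as follows; this is a sketch, not part of the official template, and the title, author, and abstract are placeholders (the “anonymous” option suppresses whatever author information appears in the source):

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2024]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{16-20 September, 2024}{Vienna, Austria}

\begin{document}
\title{XXX (Experience Paper)} % category marker in the title, if applicable
\author{Author Name}           % placeholder; hidden in the PDF by the anonymous option
\affiliation{\institution{Institution}\country{Country}} % placeholder affiliation
\begin{abstract}
Placeholder abstract text. % in acmart, the abstract goes before \maketitle
\end{abstract}
\maketitle
% Paper body goes here.
\end{document}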

Submit your papers via the HotCRP ISSTA 2024 submission website.

Each submission will be reviewed by at least three members of the program committee. Authors will have an opportunity to respond to reviews during a rebuttal period. Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, appropriate comparison to related work, and verifiability/transparency of the work. Some papers may receive more than three reviews, as the PC chair may solicit additional reviews based on factors such as reviewer expertise or strong disagreement among reviewers. Authors will have a chance to read any additional reviews and respond to them during an additional short response period. The program committee as a whole will make final decisions about which submissions to accept for presentation at the conference.

Double-blind Reviewing

ISSTA 2024 will conduct double-blind reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any citations to related work by themselves are written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.

Authors have the right to upload preprints on arXiv or similar sites, but they must avoid specifying that the paper was submitted to ISSTA.

Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Open Science Policy and “Data Availability” Section

ISSTA has adopted an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles, and encourage all contributing authors to disclose data to increase reproducibility and replicability.

Upon submission, authors are asked to make their code, data, etc. available to the program committee, or to comment on why this is not possible or desirable. Data must be shared in an anonymized way (e.g., no information on authors/affiliations in the code) via a site that reveals neither the authors’ nor the reviewers’ identities (e.g., not via Google Drive). At least one of the reviewers will check the provided data. While sharing the data is not mandatory for submission or acceptance, it will inform the program committee’s decision. Furthermore, we ask authors to provide a supporting statement on the data availability (or lack thereof) in their submitted papers in a section named “Data Availability” after the Conclusion section.
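
For illustration only, such a section in the submitted LaTeX source might look like the following sketch (the repository URL is a placeholder, not a real artifact link):

\section{Data Availability}
% Placeholder URL -- replace with the link to your anonymized artifact.
Our tool, benchmark subjects, and raw experimental results are available
in an anonymized repository at \url{https://example.org/anonymized-artifact}.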

Publication Date

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.

Questions? Use the ISSTA Technical Papers contact form.