ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024
Thu 19 Sep 2024 16:10 - 16:30 at EI 9 Hlawka - Analyzing Neural Models Chair(s): Saeid Tizpaz-Niari

Model optimization, such as pruning and quantization, has become the de facto pre-deployment step for running deep learning (DL) models on resource-constrained platforms.
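For readers unfamiliar with the optimization step, here is a minimal pure-Python sketch of post-training int8 quantization, the kind of transformation a framework's model optimizer (e.g., the PyTorch or TFLite quantizers) applies. This toy example is ours, for illustration only, and is not taken from the paper.

```python
# Toy post-training symmetric int8 quantization of a weight vector.
# Real optimizers operate on whole layers/graphs; the idea is the same:
# map floats to a small integer range plus a scale factor.

def quantize_int8(weights):
    """Map float weights to int8 values with a single symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale 0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Quantization is inherently lossy within about half a quantization step;
# a model optimization bug (MOB) would manifest as error far beyond that.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The rounding tolerance gives a simple oracle: error within half a quantization step is expected loss, while anything larger hints at a bug in the optimizer itself.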

However, the complexity of DL models often leads to non-trivial bugs in model optimizers, known as model optimization bugs (MOBs).

These MOBs involve the complex data types and layer structures inherent to DL models, which makes them hard to detect with traditional static analysis and dynamic testing techniques.

In this work, we leverage Large Language Models (LLMs) with prompting techniques to generate test cases for MOB detection.

We explore how LLMs can draw an understanding of the MOB domain from scattered bug instances and generalize to detect new ones, a paradigm we term concentration and diffusion.

We extract domain knowledge from the artifacts of known MOBs, such as their issue reports and fixes, and design knowledge-aware prompts to guide LLMs in generating effective test cases.

The domain knowledge of code structures and error descriptions provides a precise, in-depth depiction of the problem domain (the concentration) together with heuristic directions for generating innovative test cases (the diffusion).
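The concentration-and-diffusion idea can be sketched as a prompt-assembly step: fold a known MOB's artifacts into the prompt (concentration) and append a heuristic direction toward new API and data combinations (diffusion). The artifact fields and the template below are hypothetical illustrations of the idea, not YanHui's actual prompt format.

```python
# Hypothetical knowledge-aware prompt assembly for MOB test generation.
# The issue text and fix diff are invented examples of MOB artifacts.

KNOWN_MOB = {
    "issue": "Quantizing a Conv2D layer with per-channel scales crashes "
             "when the channel dimension is 1.",
    "fix": "- scales = weights.max(axis=0)\n"
           "+ scales = weights.max(axis=0, keepdims=True)",
}

def build_prompt(artifact):
    """Combine bug-report knowledge (concentration) with a heuristic
    direction toward new inputs (diffusion) in one test-generation prompt."""
    return (
        "You are testing a DL model optimizer.\n"
        f"Known bug report: {artifact['issue']}\n"
        f"Fix diff:\n{artifact['fix']}\n"
        "Generate a new test case that combines related optimization APIs "
        "with edge-case tensor shapes to expose similar bugs."
    )

prompt = build_prompt(KNOWN_MOB)
```

The concentration part anchors the LLM in concrete failure modes, while the diffusion instruction steers it away from merely replaying the known bug.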

Our approach is implemented as a tool named YanHui and benchmarked against existing few-shot LLM-based fuzzing techniques.

Test cases generated by YanHui demonstrate enhanced capability to find relevant API and data combinations for exposing MOBs, leading to an 11.4% increase in generating syntactically valid code and a 22.3% increase in generating on-target code specific to model optimization.

YanHui detects 17 MOBs, and among them, five are deep MOBs that are difficult to reveal without our prompting technique.

Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Analyzing Neural Models
Technical Papers at EI 9 Hlawka
Chair(s): Saeid Tizpaz-Niari University of Texas at El Paso
15:30
20m
Talk
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Technical Papers
Mingyi Zhou Monash University, Xiang Gao Beihang University, Pei Liu CSIRO’s Data61, John Grundy Monash University, Chunyang Chen TU Munich, Xiao Chen University of Newcastle, Li Li Beihang University
DOI
15:50
20m
Talk
Decomposition of Deep Neural Networks into Modules via Mutation Analysis
Technical Papers
Ali Ghanbari Auburn University
DOI
16:10
20m
Talk
Large Language Models Can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-Aware Prompts
Technical Papers
Hao Guan University of Queensland; Southern University of Science and Technology, Guangdong Bai University of Queensland, Yepang Liu Southern University of Science and Technology
DOI
16:30
20m
Talk
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
Technical Papers
Mingke Yang ShanghaiTech University, Yuqi Chen ShanghaiTech University, Yi Liu Nanyang Technological University, Ling Shi Nanyang Technological University
DOI
16:50
20m
Talk
Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
Technical Papers
Eduard Pinconschi Carnegie Mellon University, Divya Gopinath KBR; NASA Ames, Rui Abreu INESC-ID; University of Porto, Corina S. Păsăreanu Carnegie Mellon University; NASA Ames
DOI