ISSTA 2024
Mon 16 - Fri 20 September 2024, Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Thu 19 Sep 2024 16:10 - 16:30 at EI 9 Hlawka - Analyzing Neural Models

Model optimization, such as pruning and quantization, has become the de facto pre-deployment phase when deploying deep learning (DL) models on resource-constrained platforms. However, the complexity of DL models often leads to non-trivial bugs in model optimizers, known as model optimization bugs (MOBs). MOBs involve the complex data types and layer structures inherent to DL models, which makes them hard to detect through traditional static analysis and dynamic testing. In this work, we leverage Large Language Models (LLMs) with prompting techniques to generate test cases for MOB detection. We explore how LLMs can draw an understanding of the MOB domain from scattered instances and generalize to detect new ones, a paradigm we term concentration and diffusion. We extract MOB domain knowledge from the artifacts of known MOBs, such as their issue reports and fixes, and design knowledge-aware prompts to guide LLMs in generating effective test cases. The domain knowledge of code structure and error description provides a precise, in-depth depiction of the problem domain (the concentration) and heuristic directions for generating innovative test cases (the diffusion). Our approach is implemented as a tool named YanHui and benchmarked against existing few-shot LLM-based fuzzing techniques. Test cases generated by YanHui are better at finding the API and data combinations that expose MOBs, yielding an 11.4% increase in generating syntactically valid code and a 22.3% increase in generating on-target code specific to model optimization. YanHui detects 17 MOBs, five of which are deep MOBs that are difficult to reveal without our prompting technique.
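To make the concentration-and-diffusion idea concrete, the following is a minimal Python sketch of what a knowledge-aware prompt built from a known MOB's artifacts might look like. This is an illustration of the paradigm described in the abstract, not YanHui's actual code: the KnownMOB structure, the build_prompt helper, the prompt wording, and the query_llm placeholder are all assumptions.

```python
# Hypothetical sketch of a knowledge-aware prompt in the style the
# abstract describes; not YanHui's implementation.

from dataclasses import dataclass

@dataclass
class KnownMOB:
    """Artifacts of a previously reported model optimization bug."""
    api: str                # e.g. "torch.ao.quantization.quantize_dynamic"
    error_description: str  # distilled from the issue report
    fix_summary: str        # distilled from the bug-fixing commit

def build_prompt(mob: KnownMOB) -> str:
    # "Concentration": ground the LLM in a precise depiction of the
    # problem domain via a known MOB's code structure and error report.
    context = (
        f"A bug was reported in {mob.api}: {mob.error_description}\n"
        f"It was fixed by: {mob.fix_summary}\n"
    )
    # "Diffusion": give heuristic directions toward new API and data
    # combinations in the same domain, rather than reproducing the bug.
    instruction = (
        "Write a new, self-contained test case that exercises model "
        "optimization (pruning or quantization) with a different layer "
        "structure or tensor dtype that could trigger a related, "
        "as-yet-unreported bug."
    )
    return context + instruction

# query_llm stands in for whatever LLM endpoint is used; the generated
# test case would then be executed against the model optimizer.
# generated_test = query_llm(build_prompt(some_known_mob))
```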

Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Analyzing Neural Models (Technical Papers) at EI 9 Hlawka
15:30
20m
Talk
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Technical Papers
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Pei Liu (Data61 at CSIRO, Australia), John Grundy (Monash University), Chunyang Chen (Monash University), Xiao Chen (University of Newcastle, Australia), Li Li (Beihang University)
15:50
20m
Talk
Decomposition of Deep Neural Networks into Modules via Mutation Analysis
Technical Papers
Ali Ghanbari (Auburn University)
16:10
20m
Talk
Large Language Models can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-aware Prompts
Technical Papers
Hao Guan (University of Queensland), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)
16:30
20m
Talk
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
Technical Papers
Mingke Yang (ShanghaiTech University), Yuqi Chen (ShanghaiTech University, China), Yi Liu (Nanyang Technological University), Ling Shi (Nanyang Technological University)
16:50
20m
Talk
Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
Technical Papers
Eduard Pinconschi (Carnegie Mellon University), Divya Gopinath (KBR Inc., NASA Ames), Rui Abreu (Meta & University of Porto), Corina Pasareanu

Information for Participants
Thu 19 Sep 2024 15:30 - 17:10 at EI 9 Hlawka - Analyzing Neural Models
Info for room EI 9 Hlawka:

Map: https://tuw-maps.tuwien.ac.at/?q=CAEG17

Room tech: https://raumkatalog.tiss.tuwien.ac.at/room/13939