ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Thu 19 Sep 2024 15:50 - 16:10 at EI 9 Hlawka - Analyzing Neural Models

Recently, several approaches have been proposed for decomposing deep neural network (DNN) classifiers into binary classifier modules to facilitate modular development and repair of such models. These approaches address only the problem of decomposing classifier models, and some of them rely on the activation patterns of the neurons, which limits their applicability.

In this paper, we propose a DNN decomposition technique, named Incite, that uses neuron mutation to quantify the contribution of each neuron to a given output of a model. Then, for each model output, the subgraph induced by the nodes with the highest contribution scores for that output is selected and extracted as a module. Incite is agnostic to the type of the model and to the activation functions used in its construction, and it is applicable not only to classifiers but also to regression models. Furthermore, the cost of mutation analysis in Incite is reduced by heuristic clustering of neurons, enabling its application to models with millions of parameters. Lastly, Incite prunes away the neurons that do not contribute to the outcome of a module, producing compressed, efficient modules.
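The core idea of mutation-based contribution scoring can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy example, with a hypothetical two-layer network, where each hidden neuron is "mutated" by zeroing its activation and the resulting change in one chosen output is taken as that neuron's contribution score. The neurons with the highest scores would then form the module for that output.

```python
# Illustrative sketch (not the Incite implementation): mutation analysis
# on a tiny NumPy MLP. Zeroing a hidden neuron and measuring the change
# in a chosen output approximates that neuron's contribution.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden -> output

def forward(x, mask=None):
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU hidden layer
    if mask is not None:
        h = h * mask                            # zero out mutated neurons
    return h @ W2 + b2

def contribution_scores(x, output_idx):
    """Change in the chosen output when each hidden neuron is zeroed."""
    base = forward(x)[output_idx]
    scores = np.empty(8)
    for j in range(8):
        mask = np.ones(8)
        mask[j] = 0.0                           # mutate neuron j
        scores[j] = abs(base - forward(x, mask)[output_idx])
    return scores

x = rng.normal(size=4)
scores = contribution_scores(x, output_idx=0)
module = np.argsort(scores)[-4:]                # keep top-4 neurons as the module
print(sorted(module.tolist()))
```

In the actual technique, neurons would additionally be clustered heuristically so that one mutation covers a whole cluster, keeping the analysis tractable for models with millions of parameters.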

We have evaluated Incite using 16 DNN models for well-known classification and regression problems and report its effectiveness in terms of the combined accuracy (and MAE) of the modules, the overlap in model elements between the modules, and the compression ratio. We observed that, for classification models, Incite on average incurs a 3.44% loss in accuracy, the average overlap between the modules is 71.76%, and the average compression ratio is 1.89X. Meanwhile, for regression models, Incite on average incurs an 18.56% increase in MAE, the average overlap between modules is 80.14%, and the average compression ratio is 1.83X.

Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Analyzing Neural Models (Technical Papers) at EI 9 Hlawka
15:30
20m
Talk
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Technical Papers
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Pei Liu (Data61 at CSIRO, Australia), John Grundy (Monash University), Chunyang Chen (Monash University), Xiao Chen (University of Newcastle, Australia), Li Li (Beihang University)
15:50
20m
Talk
Decomposition of Deep Neural Networks into Modules via Mutation Analysis
Technical Papers
Ali Ghanbari (Auburn University)
16:10
20m
Talk
Large Language Models can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-aware Prompts
Technical Papers
Hao Guan (University of Queensland), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)
16:30
20m
Talk
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
Technical Papers
Mingke Yang (ShanghaiTech University), Yuqi Chen (ShanghaiTech University, China), Yi Liu (Nanyang Technological University), Ling Shi (Nanyang Technological University)
16:50
20m
Talk
Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
Technical Papers
Eduard Pinconschi (Carnegie Mellon University), Divya Gopinath (KBR Inc., NASA Ames), Rui Abreu (Meta & University of Porto), Corina Pasareanu