ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Thu 19 Sep 2024 16:50 - 17:10 at EI 9 Hlawka - Analyzing Neural Models

As deep neural networks (DNNs) are increasingly used in safety-critical applications, there is growing concern about their trustworthiness. Even highly trained, high-performing networks are not 100% accurate, yet it is very difficult to predict their behavior during deployment, when no ground truth is available. In this paper, we provide a comparative and replicability study of recent approaches proposed to evaluate the trustworthiness of DNNs. We find that it is very difficult to run and reproduce the results of these approaches from their replication packages, and even more difficult to run them on artifacts other than their own. Furthermore, the lack of clearly defined evaluation metrics makes it difficult to compare the effectiveness of the approaches. Our results indicate that more effort is needed in our research community to develop sound techniques for evaluating the trustworthiness of neural networks in safety-critical domains. To this end, we contribute an evaluation framework that incorporates the considered approaches and enables evaluation on common benchmarks, using common metrics. Using this framework, we run a comparative study of the three approaches.
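The kind of harness the abstract describes, which evaluates deployment-time trustworthiness predictors against common metrics, can be sketched as follows. This is a hypothetical illustration, not the authors' framework: the scorer shown here is a simple softmax-confidence baseline, and all names (`TrustScorer`, `evaluate`, `softmax_confidence_scorer`) are made up for this sketch.

```python
# Hypothetical sketch of an evaluation harness for deployment-time
# trustworthiness predictors; NOT the paper's actual framework.
from typing import Callable, List

# A "trust scorer" flags, per input, whether the DNN's prediction should be
# trusted (True) or not (False), without access to the ground-truth label.
TrustScorer = Callable[[List[float]], bool]

def softmax_confidence_scorer(probs: List[float], threshold: float = 0.9) -> bool:
    # Baseline: trust the model only when its top softmax probability is high.
    return max(probs) >= threshold

def evaluate(scorer: TrustScorer, probs_batch, preds, labels):
    """Compare trust flags against actual correctness to compute common metrics."""
    tp = fp = tn = fn = 0
    for probs, pred, label in zip(probs_batch, preds, labels):
        trusted = scorer(probs)
        correct = pred == label
        if trusted and correct:
            tp += 1          # trusted a correct prediction
        elif trusted and not correct:
            fp += 1          # trusted a wrong prediction (dangerous case)
        elif not trusted and not correct:
            tn += 1          # correctly flagged a wrong prediction
        else:
            fn += 1          # rejected a prediction that was actually correct
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall}

# Toy benchmark: three inputs, the second one misclassified with low confidence.
probs_batch = [[0.95, 0.05], [0.55, 0.45], [0.99, 0.01]]
preds  = [0, 0, 0]
labels = [0, 1, 0]
print(evaluate(softmax_confidence_scorer, probs_batch, preds, labels))
```

Running different approaches through the same `evaluate` function on shared benchmarks is one way to make their effectiveness directly comparable, which is the gap the study identifies.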


Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Analyzing Neural Models (Technical Papers) at EI 9 Hlawka
15:30
20m
Talk
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Technical Papers
Mingyi Zhou Monash University, Xiang Gao Beihang University, Pei Liu Data61 at CSIRO, Australia, John Grundy Monash University, Chunyang Chen Monash University, Xiao Chen University of Newcastle, Australia, Li Li Beihang University
15:50
20m
Talk
Decomposition of Deep Neural Networks into Modules via Mutation Analysis
Technical Papers
Ali Ghanbari Auburn University
16:10
20m
Talk
Large Language Models can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-aware Prompts
Technical Papers
Hao Guan University of Queensland, Guangdong Bai University of Queensland, Yepang Liu Southern University of Science and Technology
16:30
20m
Talk
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
Technical Papers
Mingke Yang ShanghaiTech University, Yuqi Chen ShanghaiTech University, China, Yi Liu Nanyang Technological University, Ling Shi Nanyang Technological University
16:50
20m
Talk
Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
Technical Papers
Eduard Pinconschi Carnegie Mellon University, Divya Gopinath KBR Inc., NASA Ames, Rui Abreu Meta & University of Porto, Corina Pasareanu

Information for Participants
Thu 19 Sep 2024 15:30 - 17:10 at EI 9 Hlawka - Analyzing Neural Models
Info for room EI 9 Hlawka:

Map: https://tuw-maps.tuwien.ac.at/?q=CAEG17

Room tech: https://raumkatalog.tiss.tuwien.ac.at/room/13939