Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
As deep neural networks (DNNs) are increasingly used in safety-critical applications, there is growing concern about their reliability. Even highly trained, high-performing networks are not 100% accurate, and it is very difficult to predict their behavior during deployment without ground truth. In this paper, we provide a comparative and replicability study of recent approaches proposed to evaluate the reliability of DNNs in deployment. We find that it is hard to run and reproduce the results of these approaches from their replication packages, and even more difficult to run them on artifacts other than their own. Further, it is difficult to compare the effectiveness of the approaches due to the lack of clearly defined evaluation metrics. Our results indicate that more effort is needed in our research community to develop sound techniques for evaluating the reliability of neural networks in safety-critical domains. To this end, we contribute an evaluation framework that incorporates the considered approaches and enables evaluation on common benchmarks, using common metrics.
Thu 19 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
15:30 - 17:10 | Analyzing Neural Models (Technical Papers) at EI 9 Hlawka | Chair(s): Saeid Tizpaz-Niari (University of Texas at El Paso)
15:30 20m Talk | Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models (Technical Papers)
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Pei Liu (CSIRO's Data61), John Grundy (Monash University), Chunyang Chen (TU Munich), Xiao Chen (University of Newcastle), Li Li (Beihang University)

15:50 20m Talk | Decomposition of Deep Neural Networks into Modules via Mutation Analysis (Technical Papers)
Ali Ghanbari (Auburn University)

16:10 20m Talk | Large Language Models Can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-Aware Prompts (Technical Papers)
Hao Guan (University of Queensland; Southern University of Science and Technology), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)

16:30 20m Talk | DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation (Technical Papers)
Mingke Yang (ShanghaiTech University), Yuqi Chen (ShanghaiTech University), Yi Liu (Nanyang Technological University), Ling Shi (Nanyang Technological University)

16:50 20m Talk | Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study) (Technical Papers)
Eduard Pinconschi (Carnegie Mellon University), Divya Gopinath (KBR; NASA Ames), Rui Abreu (INESC-ID; University of Porto), Corina S. Păsăreanu (Carnegie Mellon University; NASA Ames)