ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024
Wed 18 Sep 2024, 15:30 - 15:50, at EI 9 Hlawka - Testing and Repairing Neural Networks. Chair(s): Mike Papadakis

Software engineers develop, fine-tune, and deploy deep learning (DL) models using a variety of development frameworks and runtime environments. DL model converters move models between frameworks and to runtime environments. Conversion errors compromise model quality and disrupt deployment. However, the failure characteristics of DL model converters are unknown, adding risk when using DL interoperability technologies. This paper analyzes failures in DL model converters. We survey software engineers about DL interoperability tools, use cases, and pain points (N=92). Then, we characterize failures in model converters associated with the main interoperability tool, ONNX (N=200 issues in PyTorch and TensorFlow). Finally, we formulate and test two hypotheses about structural causes for the failures we studied. We find that the node conversion stage of a model converter accounts for ∼75% of the defects, and that 33% of reported failures are related to semantically incorrect models. The cause of semantically incorrect models is elusive, but models with behavioural inconsistencies share operator sequences. Our results motivate future research on making DL interoperability software simpler to maintain, extend, and validate. Research into behavioural tolerances and architectural coverage metrics would be fruitful.
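To make the "semantically incorrect model" failure mode concrete, the sketch below (not from the paper; the toy model, file name, and tolerance values are illustrative assumptions) shows the kind of differential check the abstract's call for "behavioural tolerances" implies: export a PyTorch model to ONNX, run the original and the converted model on the same input, and compare outputs within a numeric tolerance.

    # Minimal sketch of a behavioural-consistency check after ONNX conversion.
    # The two-layer model, "model.onnx" file name, and tolerances are
    # illustrative assumptions, not artifacts from the paper.
    import numpy as np
    import torch
    import onnxruntime as ort

    # Toy model standing in for a real DL model under conversion.
    model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
    model.eval()

    dummy = torch.randn(1, 8)
    torch.onnx.export(model, dummy, "model.onnx")  # conversion under test

    # Run the original model and the converted model on the same input.
    with torch.no_grad():
        expected = model(dummy).numpy()
    session = ort.InferenceSession("model.onnx")
    input_name = session.get_inputs()[0].name
    actual = session.run(None, {input_name: dummy.numpy()})[0]

    # Fixed tolerances; choosing them well is the open "behavioural
    # tolerance" question the abstract raises.
    assert np.allclose(expected, actual, rtol=1e-4, atol=1e-5), \
        "converted model is behaviourally inconsistent with the original"

A check like this catches conversions that succeed mechanically but change model semantics, which the paper reports as the hardest-to-diagnose class of converter failure.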

Wed 18 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Testing and Repairing Neural Networks (Technical Papers) at EI 9 Hlawka
Chair(s): Mike Papadakis (University of Luxembourg)
15:30 · 20m · Talk · Technical Papers
Interoperability in Deep Learning: A User Survey and Failure Analysis of ONNX Model Converters
Purvish Jajal, Wenxin Jiang, Arav Tewari, Erik Kocinare, Joseph Woo, Anusha Sarraf, and Yung-Hsiang Lu (Purdue University), George K. Thiruvathukal (Loyola University Chicago), James C. Davis (Purdue University)
15:50 · 20m · Talk · Technical Papers
Interpretability Based Neural Network Repair
Zuohui Chen (Zhejiang University of Technology; Binjiang Institute of Artificial Intelligence), Jun Zhou (Zhejiang University of Technology; Binjiang Institute of Artificial Intelligence), Youcheng Sun (University of Manchester), Jingyi Wang (Zhejiang University), Qi Xuan (Zhejiang University of Technology; Binjiang Institute of Artificial Intelligence), Xiaoniu Yang (Zhejiang University of Technology; National Key Laboratory of Electromagnetic Space Security)
16:10 · 20m · Talk · Technical Papers
See the Forest, not Trees: Unveiling and Escaping the Pitfalls of Error-Triggering Inputs in Neural Network Testing
Yuanyuan Yuan (Hong Kong University of Science and Technology), Shuai Wang (Hong Kong University of Science and Technology), Zhendong Su (ETH Zurich)
16:30 · 20m · Talk · Technical Papers
Isolation-Based Debugging for Neural Networks
Jialuo Chen (Zhejiang University), Jingyi Wang (Zhejiang University), Youcheng Sun (University of Manchester), Peng Cheng (Zhejiang University), Jiming Chen (Zhejiang University; Hangzhou Dianzi University)
16:50 · 20m · Talk · Technical Papers
Certified Continual Learning for Neural Network Regression
Long H. Pham (Singapore Management University), Jun Sun (Singapore Management University)
