ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024
Thu 19 Sep 2024 11:30 - 11:50 at EI 9 Hlawka - Testing Neural Networks Chair(s): Paolo Tonella

It is notoriously challenging to audit potential unauthorized data usage in the deep learning (DL) model development lifecycle, i.e., to judge whether certain private user data has been used to train or fine-tune a deep learning model without authorization. Yet, such data usage auditing is crucial to meet the urgent requirements of trustworthy Artificial Intelligence (AI), such as data transparency, which are promoted and enforced by recent AI regulations like the General Data Protection Regulation (GDPR) and the EU AI Act. In this work, we propose TeDA, a simple and flexible testing framework for auditing data usage in the DL model development process. Given a set of a user's private data to protect (D_p), the intuition of TeDA is to apply membership inference (a well-known privacy attack that infers whether a data record was part of the training dataset of a machine learning model [Shokri et al., 2017]), here with good intention, to judge whether the model under audit (M_a) is likely to have been trained on D_p. Notably, to significantly expose the usage under membership inference, TeDA applies imperceptible perturbation directed by boundary search to generate a carefully crafted test suite D_t (which we call the 'isotope') based on D_p. With this test suite, TeDA then combines membership inference with hypothesis testing to decide, with a statistical guarantee, whether a user's private data has been used to train M_a. We evaluated TeDA through extensive experiments on varying data volumes across various model architectures for data-sensitive face recognition tasks. TeDA demonstrates high feasibility, effectiveness, and robustness under various adaptive strategies (e.g., pruning and distillation). TeDA is publicly available at: https://anonymous.4open.science/r/Face_iso-54FE
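To make the decision step described in the abstract concrete, below is a minimal, illustrative Python sketch of membership inference combined with a one-sided hypothesis test: per-sample confidence scores on the protected (isotope) suite are compared against scores on known non-member reference data. All names here (membership_scores, audit, predict_proba, D_t, D_ref) are assumptions made for illustration only and do not reflect the authors' TeDA implementation; the isotope-crafting step via boundary-search perturbation is omitted.

```python
# Illustrative sketch only, not the TeDA implementation.
import numpy as np
from scipy import stats

def membership_scores(model, inputs, labels):
    """Confidence the model assigns to each sample's true label.
    Training-set members typically receive higher confidence."""
    probs = model.predict_proba(inputs)           # assumed model API
    return probs[np.arange(len(labels)), labels]

def audit(model, D_t, D_ref, alpha=0.01):
    """Decide whether the protected suite D_t = (inputs, labels) behaves
    like training data, using non-member reference data D_ref as baseline."""
    s_protected = membership_scores(model, *D_t)
    s_reference = membership_scores(model, *D_ref)
    # One-sided test: are isotope scores significantly higher than the
    # reference scores? Reject H0 ("data not used") at significance alpha.
    _, p_value = stats.mannwhitneyu(s_protected, s_reference,
                                    alternative="greater")
    return p_value < alpha, p_value
```

In this toy formulation, the significance level alpha bounds the false-accusation rate, which is one way a statistical guarantee of the kind mentioned in the abstract can be framed.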

Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:50
Testing Neural Networks (Technical Papers) at EI 9 Hlawka
Chair(s): Paolo Tonella USI Lugano
10:30
20m
Talk
Distance-Aware Test Input Selection for Deep Neural Networks
Technical Papers
Zhong Li, Zhengfeng Xu Nanjing University, Ruihua Ji Nanjing University, Minxue Pan Nanjing University, Tian Zhang Nanjing University, Linzhang Wang Nanjing University, Xuandong Li Nanjing University
10:50
20m
Talk
Test Selection for Deep Neural Networks using Meta-Models with Uncertainty Metrics
Technical Papers
Demet Demir Department of Information Systems, Graduate School of Informatics, Middle East Technical University, Ankara, Türkiye, Aysu Betin Can Department of Information Systems, Graduate School of Informatics, Middle East Technical University, Ankara, Türkiye, Elif Surer Department of Modeling and Simulation, Graduate School of Informatics, Middle East Technical University, Ankara, Türkiye
11:10
20m
Talk
Datactive: Data Fault Localization for Object Detection Systems
Technical Papers
Yining Yin Nanjing University, China, Yang Feng Nanjing University, Shihao Weng Nanjing University, Yuan Yao Nanjing University, Jia Liu Nanjing University, Zhihong Zhao
11:30
20m
Talk
TeDA: A Testing Framework for Data Usage Auditing in Deep Learning Model Development
Technical Papers
Xiangshan Gao Zhejiang University and Huawei Technology, Jialuo Chen Zhejiang University, Jingyi Wang Zhejiang University, Jie Shi Huawei International, Peng Cheng Zhejiang University, Jiming Chen Zhejiang University

Information for Participants
Thu 19 Sep 2024 10:30 - 11:50 at EI 9 Hlawka - Testing Neural Networks Chair(s): Paolo Tonella
Info for room EI 9 Hlawka:

Map: https://tuw-maps.tuwien.ac.at/?q=CAEG17

Room tech: https://raumkatalog.tiss.tuwien.ac.at/room/13939