ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024
Wed 18 Sep 2024 13:50 - 14:10 at EI 7 - Fairness and Safety of Neural Networks Chair(s): Jingyi Wang

Fairness is a critical issue that affects the adoption of deep learning models in practice. Many methods have been proposed to improve model fairness, each shown to be effective in its own context. However, there has been no systematic evaluation that compares them under the same conditions, which makes it hard to understand how their performance differs and hinders both research progress and practical adoption. To fill this gap, this paper conducts the first large-scale empirical study to comprehensively compare the performance of existing state-of-the-art fairness-improving techniques. Specifically, we target the widely used application scenario of image classification and use three datasets and five commonly used performance metrics to assess 13 methods from diverse categories. Our findings reveal substantial variation in each method's performance across datasets and sensitive attributes, indicating that many existing methods over-fit to specific datasets. Furthermore, different fairness evaluation metrics, owing to their distinct focuses, yield significantly different assessment results. Overall, we observe that pre-processing and in-processing methods outperform post-processing methods, with pre-processing methods performing best. Our study offers comprehensive recommendations for enhancing the fairness of deep learning models. We approach the problem from multiple dimensions, providing a uniform evaluation platform and a set of implications that we hope will inspire researchers to explore more effective fairness solutions.
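As a concrete illustration of why different fairness evaluation metrics can disagree, the sketch below computes two widely used group-fairness measures for a binary classifier with a binary sensitive attribute. It is not taken from the paper; the metric choices (demographic parity difference and equalized odds difference) and the NumPy-based implementation are assumptions for illustration only.

```python
import numpy as np

def demographic_parity_diff(y_pred, sensitive):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: compares positive-prediction rates between groups."""
    rate0 = y_pred[sensitive == 0].mean()
    rate1 = y_pred[sensitive == 1].mean()
    return abs(rate0 - rate1)

def equalized_odds_diff(y_true, y_pred, sensitive):
    """Largest gap in FPR (y_true=0) or TPR (y_true=1) between groups: conditions on the true label."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        r0 = y_pred[mask & (sensitive == 0)].mean()
        r1 = y_pred[mask & (sensitive == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Toy predictions: the same classifier looks perfectly fair under one metric
# (equal positive rates) but unfair under the other (unequal TPR/FPR).
y_true    = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred    = np.array([1, 0, 0, 1, 1, 0, 1, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, sensitive))      # 0.0
print(equalized_odds_diff(y_true, y_pred, sensitive))  # 0.5
```

Because the two metrics condition on different quantities (the prediction alone versus the prediction given the true label), a method that optimizes one can look markedly worse under the other, which is consistent with the variation across metrics reported in the abstract.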

Wed 18 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

13:30 - 14:50
Fairness and Safety of Neural Networks (Technical Papers) at EI 7
Chair(s): Jingyi Wang Zhejiang University
13:30
20m
Talk
NeuFair: Neural Network Fairness Repair with Dropout
Technical Papers
Vishnu Asutosh Dasu Pennsylvania State University, Ashish Kumar Pennsylvania State University, Saeid Tizpaz-Niari University of Texas at El Paso, Gang (Gary) Tan Pennsylvania State University
DOI
13:50
20m
Talk
A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models
Technical Papers
Junjie Yang College of Intelligence and Computing, Tianjin University, Jiajun Jiang Tianjin University, Zeyu Sun Institute of Software at Chinese Academy of Sciences, Junjie Chen Tianjin University
DOI
14:10
20m
Talk
Efficient DNN-Powered Software with Fair Sparse Models
Technical Papers
Xuanqi Gao Xi’an Jiaotong University, Weipeng Jiang Xi’an Jiaotong University, Juan Zhai University of Massachusetts at Amherst, Shiqing Ma University of Massachusetts at Amherst, Xiaoyu Zhang Xi’an Jiaotong University, Chao Shen Xi’an Jiaotong University
DOI Pre-print
14:30
20m
Talk
Synthesizing Boxes Preconditions for Deep Neural Networks
Technical Papers
Zengyu Liu National University of Defense Technology, Liqian Chen National University of Defense Technology, Wanwei Liu National University of Defense Technology, Ji Wang National University of Defense Technology
DOI
