ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024
Thu 19 Sep 2024 16:30 - 16:50 at EI 9 Hlawka - Analyzing Neural Models Chair(s): Saeid Tizpaz-Niari

Large Language Models (LLMs) have showcased remarkable capabilities in diverse domains, encompassing natural language understanding, translation, and even code generation. However, their potential to generate harmful content is a significant concern, and this risk necessitates rigorous testing and comprehensive evaluation of LLMs to ensure safe and responsible use. Extensive testing of LLMs requires substantial computational resources, making it an expensive endeavor, so exploring cost-saving strategies during the testing phase is crucial to balance the need for thorough evaluation with the constraints of resource availability. To address this, our approach, DistillSeq, first transfers moderation knowledge from an LLM to a small model. It then deploys two distinct strategies for generating malicious queries: one based on syntax trees and the other leveraging an LLM. Finally, it applies a sequential filter-test process that identifies test cases prone to eliciting toxic responses, significantly curtailing unnecessary or unproductive interactions with the LLM and thereby streamlining the testing process. We evaluated the efficacy of DistillSeq on four LLMs: GPT-3.5, GPT-4.0, Vicuna-13B, and Llama-13B. Without DistillSeq, the observed attack success rates were 31.5% for GPT-3.5, 21.4% for GPT-4.0, 28.3% for Vicuna-13B, and 30.9% for Llama-13B. With DistillSeq, these rates rose to 58.5%, 50.7%, 52.5%, and 54.4%, respectively, an average relative increase of 93.0% over testing without DistillSeq. These findings highlight how significantly DistillSeq reduces the time and resource investment required to test LLMs effectively.
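The abstract outlines a sequential filter-test process in which a small distilled moderation model screens candidate queries before the expensive target LLM is invoked. The minimal Python sketch below illustrates one way such a loop could look; it is not the authors' released code, and all names (filter_then_test, score_toxicity_risk, query_target_llm, is_toxic, THRESHOLD) are hypothetical placeholders standing in for DistillSeq's actual components.

    """Illustrative sketch of a filter-test loop: a cheap distilled moderation
    model scores candidate malicious queries, and only queries judged likely to
    elicit toxic output are forwarded to the costly LLM under test."""
    from typing import Callable, Iterable, List, Tuple

    THRESHOLD = 0.8  # hypothetical cut-off on the distilled model's risk score


    def filter_then_test(
        candidate_queries: Iterable[str],
        score_toxicity_risk: Callable[[str], float],  # small distilled model (cheap)
        query_target_llm: Callable[[str], str],       # LLM under test (expensive)
        is_toxic: Callable[[str], bool],              # oracle judging the response
    ) -> Tuple[List[str], int]:
        """Return the queries that elicited toxic responses and the number of
        LLM calls avoided by the pre-filter."""
        successful, skipped = [], 0
        for query in candidate_queries:
            # Cheap pre-filter: skip queries the distilled model deems unlikely
            # to elicit a toxic response, saving a costly LLM interaction.
            if score_toxicity_risk(query) < THRESHOLD:
                skipped += 1
                continue
            response = query_target_llm(query)
            if is_toxic(response):
                successful.append(query)
        return successful, skipped


    if __name__ == "__main__":
        # Toy stand-ins so the sketch runs end to end.
        queries = ["benign question", "crafted malicious query"]
        result = filter_then_test(
            queries,
            score_toxicity_risk=lambda q: 0.9 if "malicious" in q else 0.1,
            query_target_llm=lambda q: "unsafe reply" if "malicious" in q else "safe reply",
            is_toxic=lambda r: "unsafe" in r,
        )
        print(result)  # (['crafted malicious query'], 1)

In this sketch the attack success rate would be measured over the queries that pass the filter, while the skipped count reflects the interactions with the target LLM that the distilled model saved.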

Thu 19 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:10
Analyzing Neural Models (Technical Papers) at EI 9 Hlawka
Chair(s): Saeid Tizpaz-Niari (University of Texas at El Paso)
15:30
20m
Talk
Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Technical Papers
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Pei Liu (CSIRO's Data61), John Grundy (Monash University), Chunyang Chen (TU Munich), Xiao Chen (University of Newcastle), Li Li (Beihang University)
15:50
20m
Talk
Decomposition of Deep Neural Networks into Modules via Mutation Analysis
Technical Papers
Ali Ghanbari (Auburn University)
16:10
20m
Talk
Large Language Models Can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-Aware Prompts
Technical Papers
Hao Guan (University of Queensland; Southern University of Science and Technology), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology)
16:30
20m
Talk
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation
Technical Papers
Mingke Yang (ShanghaiTech University), Yuqi Chen (ShanghaiTech University), Yi Liu (Nanyang Technological University), Ling Shi (Nanyang Technological University)
16:50
20m
Talk
Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study)
Technical Papers
Eduard Pinconschi (Carnegie Mellon University), Divya Gopinath (KBR; NASA Ames), Rui Abreu (INESC-ID; University of Porto), Corina S. Păsăreanu (Carnegie Mellon University; NASA Ames)
