ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Wed 18 Sep 2024 11:30 - 11:50 at EI 7 - LLMs for Code Generation

Recent advancements in large language models (LLMs) have exhibited promising capabilities in addressing various tasks such as defect detection and program repair. Despite their prevalence, LLMs still face limitations in effectively handling these tasks. Common strategies to adapt them and improve their performance for specific tasks involve fine-tuning models on user data or employing in-context learning with examples of desired inputs and outputs. However, both strategies pose challenges for practical adoption due to the need for high-quality data, extensive computational resources, and continuous maintenance. Furthermore, neither strategy can explain or reason about why LLMs fall short on the given tasks.

We propose Calico to address the high cost of fine-tuning, eliminate the need for task-specific examples, and explain LLM deficiencies. At the heart of Calico is an evolutionary approach that interleaves knowledge calibration and AI deficiency diagnosis. Calico works in three steps. First, it identifies knowledge gaps in LLMs' program comprehension. Second, it performs automated code refactoring to integrate the overlooked knowledge into the source code, mitigating those gaps. Third, it employs what-if analysis and counterfactual reasoning to determine a minimum set of overlooked knowledge necessary to improve the performance of LLMs on code tasks.

We extensively evaluated Calico over 8,938 programs on three of the most common code tasks. Our experimental results show that vanilla ChatGPT cannot fully understand code structures. With knowledge calibration, Calico improves the vanilla LLM by 20% and achieves proficiency comparable to fine-tuned LLMs. Deficiency diagnosis contributes an 8% reduction in program sizes while preserving performance. These results demonstrate the feasibility of using a vanilla LLM for automated SE tasks, thereby avoiding the high computational costs associated with a fine-tuned model.

Wed 18 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:50
LLMs for Code Generation (Technical Papers) at EI 7
10:30
20m
Talk
AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
Technical Papers
Zhensu Sun Singapore Management University, Xiaoning Du Monash University, Australia, Zhou Yang Singapore Management University, Li Li Beihang University, David Lo Singapore Management University
Pre-print
10:50
20m
Talk
When to Stop? Towards Efficient Code Generation in LLMs with Excess Token Prevention
Technical Papers
Lianghong Guo Sun Yat-sen University, Yanlin Wang Sun Yat-sen University, Ensheng Shi Xi’an Jiaotong University, Wanjun Zhong Sun Yat-sen University, Hongyu Zhang Chongqing University, Jiachi Chen Sun Yat-sen University, Ruikai Zhang Huawei Cloud Computing Technologies Co., Ltd., Yuchi Ma Huawei Cloud Computing Technologies Co., Ltd., Zibin Zheng Sun Yat-sen University
11:10
20m
Talk
FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
Technical Papers
Qi Guo Tianjin University, China, Xiaohong Li Tianjin University, Xiaofei Xie Singapore Management University, Shangqing Liu Nanyang Technological University, Ze Tang Nanjing University, Ruitao Feng Singapore Management University, Junjie Wang Tianjin University, Jidong Ge Nanjing University, Lei Bu Nanjing University
DOI
11:30
20m
Talk
Calico: Automated Knowledge Calibration and Diagnosis for Elevating AI Mastery in Code Tasks
Technical Papers
Yuxin Qiu University of California, Riverside, Jie Hu University of California, Riverside, Qian Zhang University of California, Riverside, Heng Yin University of California, Riverside