ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Wed 18 Sep 2024 10:50 - 11:10 at EI 7 - LLMs for Code Generation

Code generation aims to automatically generate code snippets that meet given natural language requirements and plays an important role in software development. Although Code LLMs have shown excellent performance in this domain, their long generation time poses a significant limitation in practical use. In this paper, we first conduct an in-depth preliminary study with different Code LLMs on the code generation task and identify a significant efficiency issue, i.e., continual generation of excess tokens. This issue harms developer productivity and leads to substantial computational waste. To address it, we introduce CodeFast, an inference acceleration approach for Code LLMs on code generation. The key idea of CodeFast is to terminate the inference process in time when unnecessary excess tokens are detected. First, we propose an automatic data construction framework to obtain training data. Then, we train a unified lightweight model, GenGuard, applicable to multiple programming languages, to predict whether to terminate inference at the current step. Finally, we enhance Code LLMs with GenGuard to accelerate their inference on the code generation task. We conduct extensive experiments with CodeFast on five representative Code LLMs across four widely used code generation datasets. Experimental results show that (1) CodeFast can significantly improve the inference speed of various Code LLMs in code generation, ranging from 34% to 452%, without compromising the quality of generated code, and (2) CodeFast is stable across different parameter settings and can generalize to untrained datasets. Our code and data are available at https://anonymous.4open.science/r/CodeFast.
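The abstract describes a decoding loop in which a lightweight guard model decides, at each step, whether to terminate generation before excess tokens are emitted. The following is a minimal sketch of that idea, not the paper's implementation: `next_token`, `guard`, and the toy token stream are hypothetical stand-ins (the real GenGuard is a trained classifier over the LLM's decoding state).

```python
# Hedged sketch of guard-based early termination during decoding.
# All names here are illustrative, not from the CodeFast codebase.

def generate_with_guard(next_token, guard, max_tokens=64):
    """Greedy-style decoding loop: stop as soon as the guard predicts
    that the next token would be an unnecessary excess token."""
    tokens = []
    for _ in range(max_tokens):
        tok = next_token(tokens)   # stand-in for one LLM decode step
        if guard(tokens, tok):     # stand-in for a GenGuard-style check
            break                  # terminate inference early
        tokens.append(tok)
    return tokens

# Toy stand-ins for demonstration only.
stream = ["def", "add(a, b):", "return a + b", "<excess>", "<excess>"]
fake_next = lambda toks: stream[len(toks)] if len(toks) < len(stream) else "<excess>"
fake_guard = lambda toks, tok: tok == "<excess>"

print(generate_with_guard(fake_next, fake_guard))
# → ['def', 'add(a, b):', 'return a + b']
```

The saving comes from the `break`: every excess token the guard catches is a decode step the LLM never has to run.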

Wed 18 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:50
LLMs for Code Generation (Technical Papers) at EI 7
10:30
20m
Talk
AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
Technical Papers
Zhensu Sun (Singapore Management University), Xiaoning Du (Monash University, Australia), Zhou Yang (Singapore Management University), Li Li (Beihang University), David Lo (Singapore Management University)
Pre-print
10:50
20m
Talk
When to Stop? Towards Efficient Code Generation in LLMs with Excess Token Prevention
Technical Papers
Lianghong Guo (Sun Yat-sen University), Yanlin Wang (Sun Yat-sen University), Ensheng Shi (Xi’an Jiaotong University), Wanjun Zhong (Sun Yat-sen University), Hongyu Zhang (Chongqing University), Jiachi Chen (Sun Yat-sen University), Ruikai Zhang (Huawei Cloud Computing Technologies Co., Ltd.), Yuchi Ma (Huawei Cloud Computing Technologies Co., Ltd.), Zibin Zheng (Sun Yat-sen University)
11:10
20m
Talk
FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
Technical Papers
Qi Guo (Tianjin University, China), Xiaohong Li (Tianjin University), Xiaofei Xie (Singapore Management University), Shangqing Liu (Nanyang Technological University), Ze Tang (Nanjing University), Ruitao Feng (Singapore Management University), Junjie Wang (Tianjin University), Jidong Ge (Nanjing University), Lei Bu (Nanjing University)
DOI
11:30
20m
Talk
Calico: Automated Knowledge Calibration and Diagnosis for Elevating AI Mastery in Code Tasks
Technical Papers
Yuxin Qiu (University of California, Riverside), Jie Hu (University of California, Riverside), Qian Zhang (University of California, Riverside), Heng Yin (University of California, Riverside)