ISSTA 2024
Mon 16 - Fri 20 September 2024 Vienna, Austria
co-located with ISSTA/ECOOP 2024

This program is tentative and subject to change.

Wed 18 Sep 2024 10:30 - 10:50 at EI 7 - LLMs for Code Generation

Besides humans and machines, Artificial Intelligence (AI) models have emerged as another important audience of programming languages as we enter the era of large language models (LLMs). LLMs can now excel at coding competitions and even program like developers to address various tasks, such as mathematical calculation. Yet, the grammar and layout of existing programs are designed for humans: abundant grammar tokens and formatting tokens are included to make code more readable to people. While beneficial for human readers, such a human-centric design imposes an unnecessary computational burden on LLMs, where every token, whether consumed or generated, costs computational resources.

To improve inference efficiency and reduce computational costs, we propose the concept of AI-oriented grammar, which aims to represent code in a way that better suits the working mechanism of AI models. Code written with AI-oriented grammar discards formatting and uses a minimal number of tokens to convey code semantics effectively. To demonstrate the feasibility of this concept, we explore and implement the first AI-oriented grammar for Python, named Simple Python (SimPy). SimPy is crafted by revising the original Python grammar through a series of heuristic rules. Programs written in SimPy maintain Abstract Syntax Tree (AST) structures identical to those of standard Python, allowing execution via a modified AST parser. In addition, we explore methods to enable existing LLMs to proficiently understand and use SimPy, while keeping the changes imperceptible to human developers. Compared with the original Python grammar, SimPy not only reduces token usage by 13.5% for CodeLlama and 10.4% for GPT-4, but also achieves equivalent or even improved performance relative to models trained on standard Python code. With these promising results, we encourage further contributions to the development of AI-oriented grammar within our community. Our goal is to make LLM-based code generation more efficient and environmentally sustainable.
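The abstract's core claim is that formatting can be stripped from source code without changing its semantics, because semantics live in the AST. The sketch below illustrates that principle with Python's standard `ast` module: a readable program and a whitespace-compacted version of it parse to identical ASTs. Note this is only a crude proxy for SimPy, which the paper describes as a revised grammar that also replaces grammar tokens; the example here merely removes comments and formatting.

```python
import ast

# A human-readable program: comments, indentation, blank formatting.
readable = """
def add(a, b):
    # sum two numbers
    result = a + b
    return result
"""

# The same program with formatting stripped (a simplified stand-in
# for the token-minimal representation an AI-oriented grammar targets).
compact = "def add(a,b):\n result=a+b;return result"

# Both variants parse to the same Abstract Syntax Tree, so semantics
# are preserved even though the surface form is much shorter.
same_ast = ast.dump(ast.parse(readable)) == ast.dump(ast.parse(compact))
print(same_ast)  # True

# Character counts hint at the savings a token-minimal grammar pursues.
print(len(readable.strip()), len(compact))
```

Because the compacted form yields the standard AST, it can be executed by any tool that works at the AST level, which is the property that lets SimPy programs run via a modified AST parser.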

Wed 18 Sep

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:50
LLMs for Code Generation (Technical Papers) at EI 7
10:30
20m
Talk
AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation
Technical Papers
Zhensu Sun Singapore Management University, Xiaoning Du Monash University, Australia, Zhou Yang Singapore Management University, Li Li Beihang University, David Lo Singapore Management University
Pre-print
10:50
20m
Talk
When to Stop? Towards Efficient Code Generation in LLMs with Excess Token Prevention
Technical Papers
Lianghong Guo Sun Yat-sen University, Yanlin Wang Sun Yat-sen University, Ensheng Shi Xi’an Jiaotong University, Wanjun Zhong Sun Yat-sen University, Hongyu Zhang Chongqing University, Jiachi Chen Sun Yat-sen University, Ruikai Zhang Huawei Cloud Computing Technologies Co., Ltd., Yuchi Ma Huawei Cloud Computing Technologies Co., Ltd., Zibin Zheng Sun Yat-sen University
11:10
20m
Talk
FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
Technical Papers
Qi Guo Tianjin University, China, Xiaohong Li Tianjin University, Xiaofei Xie Singapore Management University, Shangqing Liu Nanyang Technological University, Ze Tang Nanjing University, Ruitao Feng Singapore Management University, Junjie Wang Tianjin University, Jidong Ge Nanjing University, Lei Bu Nanjing University
DOI
11:30
20m
Talk
Calico: Automated Knowledge Calibration and Diagnosis for Elevating AI Mastery in Code Tasks
Technical Papers
Yuxin Qiu University of California, Riverside, Jie Hu University of California, Riverside, Qian Zhang University of California, Riverside, Heng Yin University of California, Riverside