FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion
The rise of code pre-trained models has significantly enhanced coding tasks such as code completion and powers tools like GitHub Copilot. However, the substantial size of these models, especially large language models, makes fine-tuning them for specific downstream tasks prohibitively expensive. As an alternative, retrieval-based methods have emerged as a promising solution, augmenting model predictions without the need for fine-tuning.
Despite their potential, these methods are often designed around heuristics, leaving open critical questions: what information should be stored or retrieved, and how should that information be interpolated to augment predictions?
To tackle this challenge, we first perform a theoretical analysis of the fine-tuning process, highlighting the importance of delta logits, the change that a fine-tuning step induces in the model's output logits, as the catalyst for improving model predictions. Building on this insight, we develop FT2Ra, a novel retrieval-based method that aims to mimic genuine fine-tuning: although it relies on a retrieval mechanism, it uniquely adopts a fine-tuning-like paradigm with a learning rate and multi-epoch retrievals.
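At a high level, this paradigm can be pictured as a kNN-style datastore of (context representation, delta logits) pairs that is queried repeatedly, with a learning rate scaling each retrieved update. The Python sketch below is purely illustrative: the datastore layout, the distance weighting, and the `ft2ra_style_completion` helper are our assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def ft2ra_style_completion(hidden, logits, datastore, k=8, eta=0.5, epochs=3):
    """Illustrative sketch of a delta-logit retrieval update.

    hidden:    context representation for the current prediction, shape (d,)
    logits:    the base model's output logits over the vocabulary, shape (V,)
    datastore: list of (key, delta) pairs built offline, where each `delta`
               records how fine-tuning would have shifted the logits for
               the context represented by `key`
    """
    keys = np.stack([key for key, _ in datastore])        # (N, d) retrieval keys
    deltas = np.stack([delta for _, delta in datastore])  # (N, V) stored delta logits

    for _ in range(epochs):  # multi-epoch retrieval, analogous to fine-tuning epochs
        dists = np.linalg.norm(keys - hidden, axis=1)     # L2 distance to every key
        nn = np.argsort(dists)[:k]                        # k nearest neighbors
        weights = np.exp(-(dists[nn] - dists[nn].min()))  # closer neighbors weigh more
        weights /= weights.sum()
        agg_delta = weights @ deltas[nn]                  # weighted average of neighbor deltas
        logits = logits + eta * agg_delta                 # learning-rate-scaled logit update
        # A faithful implementation would also refresh the stored deltas between
        # epochs; re-applying the same deltas here is a deliberate simplification.
    return logits
```

In this reading, `eta` and `epochs` play the same roles as their fine-tuning counterparts, controlling how far, and how many times, the retrieved signal is allowed to move the base model's prediction.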
We conducted a comprehensive evaluation of FT2Ra on both token-level and line-level code completion. Our findings demonstrate the remarkable effectiveness of FT2Ra compared to state-of-the-art methods, as well as its potential to approach genuine fine-tuning.
In token-level completion, the relatively easier task, FT2Ra improves accuracy by 4.29% over the best baseline method on UniXcoder. In the more challenging line-level completion task, we observe a more than twofold increase in Exact Match (EM), underscoring the value of our theoretical analysis. Notably, even without actual fine-tuning, FT2Ra achieves performance competitive with models that were genuinely fine-tuned.
Session: Wed 18 Sep, 10:30 - 11:50 (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 (20m) Talk: AI Coders Are among Us: Rethinking Programming Language Grammar towards Efficient Code Generation (ACM SIGSOFT Distinguished Paper Award). Technical Papers. Zhensu Sun (Singapore Management University), Xiaoning Du (Monash University), Zhou Yang (Singapore Management University), Li Li (Beihang University), David Lo (Singapore Management University). DOI, Pre-print.
10:50 (20m) Talk: When to Stop? Towards Efficient Code Generation in LLMs with Excess Token Prevention (ACM SIGSOFT Distinguished Paper Award). Technical Papers. Lianghong Guo (Sun Yat-sen University), Yanlin Wang (Sun Yat-sen University), Ensheng Shi (Xi’an Jiaotong University), Wanjun Zhong (Sun Yat-sen University), Hongyu Zhang (Chongqing University), Jiachi Chen (Sun Yat-sen University), Ruikai Zhang (Huawei Cloud Computing Technologies), Yuchi Ma (Huawei Cloud Computing Technologies), Zibin Zheng (Sun Yat-sen University). DOI.
11:10 (20m) Talk: FT2Ra: A Fine-Tuning-Inspired Approach to Retrieval-Augmented Code Completion. Technical Papers. Qi Guo (Tianjin University), Xiaohong Li (Tianjin University), Xiaofei Xie (Singapore Management University), Shangqing Liu (Nanyang Technological University), Ze Tang (Nanjing University), Ruitao Feng (Singapore Management University), Junjie Wang (Tianjin University), Jidong Ge (Nanjing University), Lei Bu (Nanjing University). DOI.
11:30 (20m) Talk: Calico: Automated Knowledge Calibration and Diagnosis for Elevating AI Mastery in Code Tasks. Technical Papers. Yuxin Qiu (University of California at Riverside), Jie Hu (University of California at Riverside), Qian Zhang (University of California at Riverside), Heng Yin (University of California at Riverside). DOI.