Software Quality Assurance in the Era of Large Language Models
In recent years, Large Language Models (LLMs), such as GPT-4 and Claude-3.5, have shown impressive performance in various downstream applications, including software engineering. In this talk, I will discuss the potential impact of modern LLMs on the important problem of software quality assurance, along with our recent research findings. I will first talk about the new opportunities and possibilities LLMs can offer for better quality assurance of real-world software systems. Next, I will discuss the new quality assurance issues or challenges raised by LLMs themselves and deep learning in general. Lastly, I will conclude with how our software engineering community can help advance and co-evolve with code-specific or even general-purpose LLMs.
Lingming Zhang is an Associate Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. His main research interests lie in Software Engineering and Programming Languages, as well as their synergy with Machine Learning. His group has built a number of pioneering techniques for LLM-based software testing, repair, and synthesis (including TitanFuzz, AlphaRepair, and ChatRepair), and has also released a series of competitive open-source code LLMs (including StarCoder2 and Magicoder), with core techniques and datasets widely adopted in industry. He is a recipient of the ACM SIGSOFT Early Career Researcher Award, the NSF CAREER Award, and the UIUC Dean's Award for Excellence in Research, as well as research awards from Alibaba, Google, Kwai Inc., Meta, and Samsung. He currently serves as program co-chair for ASE 2025 and LLM4Code 2025, and as associate chair for OOPSLA 2024. For more details, please visit: http://lingming.cs.illinois.edu/
Wed 18 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
09:00 - 10:00 (60m) Keynote: Software Quality Assurance in the Era of Large Language Models