Fast-LLM - Can Large Language Models Synthesise Efficient Code?
Research Project, 2026–2030
Synthesising source code using Large Language Models (LLMs) is an important topic in software engineering research. Existing foundation models have shown promise in generating functionally correct code from a natural language prompt. However, little research exists on other quality characteristics, particularly efficiency in terms of execution time or memory usage. We have reason to be concerned that even functionally correct code generated by LLMs is often woefully inefficient. If more and more code is generated using LLMs, this will have a disastrous impact on future software systems. Hence, the core research question of Fast-LLM is: "Can Large Language Models Be Used to Synthesise Efficient Code?"

At a high level, the project will have three phases. Initially, we will experimentally assess the performance of LLM-synthesised code; this will also entail compiling a performance benchmark dataset. Then, we will study whether fine-tuning (using examples of highly efficient code) can improve the performance of code generated by state-of-the-art models. Finally, we will bring in performance experts and conduct a second round of fine-tuning using reinforcement learning from human feedback (RLHF).

Fast-LLM will lead to a set of fine-tuned models which can be used directly. More importantly, however, the project will also propose approaches by which any future foundation model can be adapted to improve software performance or other non-functional properties.
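To illustrate the kind of measurement the first phase entails, below is a minimal micro-benchmarking sketch that compares two functionally equivalent implementations of the same task. The task, function names, and workload are hypothetical examples chosen for illustration; they are not artefacts of the project or its benchmark dataset.

import timeit
import random

# Hypothetical example task: decide whether a list contains duplicates.
# A performance benchmark dataset would pair such tasks with reference
# solutions and representative workloads; everything here is illustrative.

def contains_duplicates_naive(items):
    # Quadratic-time solution of the kind an LLM might plausibly emit.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def contains_duplicates_fast(items):
    # Linear-time solution using a set; both functions agree on all inputs.
    return len(set(items)) != len(items)

if __name__ == "__main__":
    # A duplicate-free workload is the worst case for the naive version.
    workload = random.sample(range(1_000_000), 5_000)
    for func in (contains_duplicates_naive, contains_duplicates_fast):
        seconds = timeit.timeit(lambda: func(workload), number=10)
        print(f"{func.__name__}: {seconds / 10:.4f}s per call")

Run on such a workload, both functions return the same (correct) result, yet their execution times differ by orders of magnitude; this is exactly the gap between functional correctness and efficiency that the project targets.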
Participants
Philipp Leitner (contact)
Chalmers University of Technology, Computer Science and Engineering, Interaction Design and Software Engineering
Funding
Swedish Research Council (VR)
Project ID: 2025-04346
Funding Chalmers participation during 2026–2030
Related Areas of Advance and Infrastructure
Information and Communication Technology (Areas of Advance)