Large Language Model based Code Completion is an Effective Genetic Improvement Mutation
In this work, we introduce a novel large language model (LLM)-based masking mutation operator for Genetic Improvement (GI), which leverages the code completion capabilities of large language models to replace masked code segments with contextually relevant modifications. Our approach was tested on five open-source Java projects, where we compared its effectiveness against both traditional GI mutations and an existing LLM-based replacement mutation operator using random sampling and local search algorithms. Results show that the masking mutation operator creates a search space with more compiling and test-passing patches, reducing model response time by up to 60.7% compared to the replacement mutation. Additionally, it outperforms the replacement mutation in achieving the highest runtime improvement on four out of five projects and discovers more runtime-improving patches across all projects. However, combining the masking mutation with traditional GI mutations yielded inconsistent results, suggesting further investigation is needed. This study highlights the promise of LLM-based code completion to boost the efficiency and effectiveness of GI for automated software optimisation.
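To make the idea concrete, the sketch below illustrates one plausible form of such a masking mutation: a single statement in the target program is replaced by a mask token, the surrounding code is sent to a code-completion model as context, and the model's infill becomes the candidate edit. The mask token, the `query_llm_completion` helper, and the line-level mutation granularity are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of an LLM-based masking mutation (not the authors' implementation).
import random

MASK_TOKEN = "<MASK>"  # assumed mask marker; the actual token used may differ


def query_llm_completion(masked_source: str) -> str:
    """Hypothetical stand-in for an LLM code-completion call.

    A real implementation would send `masked_source` to a code-completion
    model and return its proposed infill for MASK_TOKEN.
    """
    raise NotImplementedError("plug in an actual LLM completion backend")


def masking_mutation(source_lines: list[str]) -> list[str]:
    """Mask one randomly chosen line and let the LLM propose a replacement."""
    target = random.randrange(len(source_lines))
    original = source_lines[target]

    # Build the masked variant that the model sees as context.
    masked = source_lines.copy()
    masked[target] = MASK_TOKEN
    completion = query_llm_completion("\n".join(masked))

    # Splice the completion back in to form the candidate patch; fall back to
    # the original line if the model returns nothing usable.
    mutant = source_lines.copy()
    mutant[target] = completion if completion.strip() else original
    return mutant
```

In a GI loop, each mutant produced this way would then be compiled and run against the test suite and the runtime fitness measure, exactly as with traditional GI mutations.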
Sun 27 Apr (displayed time zone: Eastern Time, US & Canada)

11:00 - 12:30 | Morning Session 2 (GI) at 202
Chair(s): William Langdon (University College London), Vesna Nowack (Imperial College London)

11:00 (30m, Talk) | Large Language Model based Code Completion is an Effective Genetic Improvement Mutation
Jingyuan Wang (University College London), Carol Hanna (University College London), Justyna Petke (University College London)

11:30 (30m, Talk) | Enhancing Software Runtime with Reinforcement Learning-Driven Mutation Operator Selection in Genetic Improvement
Damien Bose (University College London), Carol Hanna (University College London), Justyna Petke (University College London)

12:00 (30m, Talk) | Empirical Comparison of Runtime Improvement Approaches: Genetic Improvement, Parameter Tuning, and Their Combination
Thanatad Songpetchmongkol (University College London), Aymeric Blot (University of Rennes, IRISA / INRIA), Justyna Petke (University College London)