Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs

Software Institute

Date: 24 October 2024 / 16:30 - 17:30

USI East Campus, Room D1.13

Speaker: Samuele Pasini

Abstract: Large Language Models (LLMs) are increasingly used in software development, including to generate security functions. However, LLMs can struggle to generate code that correctly implements functions with security requirements, such as attack detectors, and it is unclear whether this usage is effective in practice. This paper addresses the critical issue of evaluating and improving the robustness of LLM-generated security attack detectors for different types of injection attacks. We propose a novel approach that integrates Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline.
Our extensive empirical study targets code generated by LLMs to detect two prevalent injection attacks in web security, showing a significant improvement in detection performance.
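For readers unfamiliar with the two techniques named in the abstract, the sketch below illustrates the general idea only; it is not the pipeline presented in the talk. It assumes a hypothetical setup in which retrieval augments the prompt with security-related examples (RAG) and several generated candidate detectors are scored so that only the best-ranked one is kept (Self-Ranking). All function names and the toy retrieval and scoring logic are illustrative assumptions.

```python
# Minimal illustrative sketch of RAG + Self-Ranking around an LLM call.
# Function names and scoring are hypothetical; real pipelines would query an actual model.
from typing import Callable, List, Tuple


def retrieve_examples(task: str, knowledge_base: List[str], k: int = 2) -> List[str]:
    """Toy retrieval: pick the k snippets sharing the most words with the task."""
    def overlap(doc: str) -> int:
        return len(set(task.lower().split()) & set(doc.lower().split()))
    return sorted(knowledge_base, key=overlap, reverse=True)[:k]


def build_prompt(task: str, examples: List[str]) -> str:
    """Augment the generation prompt with the retrieved examples (the RAG step)."""
    context = "\n".join(f"- {e}" for e in examples)
    return f"Relevant examples:\n{context}\n\nTask: {task}"


def self_rank(candidates: List[str], score: Callable[[str], float]) -> Tuple[str, float]:
    """Keep the candidate that the ranking function scores highest (the Self-Ranking step)."""
    scored = [(c, score(c)) for c in candidates]
    return max(scored, key=lambda pair: pair[1])


if __name__ == "__main__":
    kb = [
        "Example: detect SQL injection by flagging unbalanced quotes and SQL keywords.",
        "Example: detect XSS by flagging <script> tags and event handlers in input.",
        "Example: validate email addresses with a regular expression.",
    ]
    task = "Write a function that detects SQL injection in a query string."
    prompt = build_prompt(task, retrieve_examples(task, kb))

    # Stand-ins for multiple sampled LLM outputs; in practice these come from the model.
    candidates = [
        "def detect(q): return \"'\" in q",
        "def detect(q): return any(k in q.lower() for k in (\"' or\", 'union select', '--'))",
    ]
    best, best_score = self_rank(candidates, score=lambda c: float(len(c)))  # toy scoring proxy
    print(prompt)
    print("Selected detector:", best, "(score:", best_score, ")")
```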

Biography: Samuele Pasini is currently a PhD researcher in the TAU research group. He graduated from Politecnico di Milano, specializing in Deep Learning, with a thesis on the detection of illegal landfills. Before joining USI, he worked as a Computer Vision Engineer for a Swiss startup. Samuele's research concerns the use of LLMs in the security domain, focusing on the robustness of generated code and on poisoning attacks.

Chair: TBD

*************************
In February 2019, the Software Institute started its SI Seminar Series. Every Thursday afternoon, a researcher of the Institute publicly gives a short talk on a software engineering topic of their choice. Examples include, but are not limited to, interesting novel papers, seminal papers, personal research overviews, discussions of preliminary research ideas, tutorials, and small experiments.
On our YouTube playlist you can watch some of the past seminars. On the SI website you can find more details on the next seminar, the upcoming seminars, and an archive of the past speakers.