"LLM in the Shell: Generative Honeypots" to be presented at ESORICS 2023 Poster Session

We are happy to announce that our researcher, Muris Sladić, will present our latest research, “LLM in the Shell: Generative Honeypots”, at the upcoming ESORICS conference poster session in The Hague, Netherlands, on Monday, September 25, 2023. Whether you plan to attend the conference or want to learn more about this research, check out our paper. Our research proposes a novel use of Large Language Models (LLMs) for dynamic on-the-fly creation and generation of more engaging honeypot environments.

LLM in the Shell: Generative Honeypots

Honeypots are essential tools in cybersecurity. However, most of them (even the high-interaction ones) lack the required realism to engage and fool human attackers. This limitation makes them easily discernible, hindering their effectiveness. This work introduces a novel method to create dynamic and realistic software honeypots based on Large Language Models. Preliminary results indicate that LLMs can create credible and dynamic honeypots capable of addressing important limitations of previous honeypots, such as deterministic responses, lack of adaptability, etc. We evaluated the realism of each command by experimenting with human attackers who needed to say if the answer from the honeypot was fake or not. Our proposed honeypot, called shelLM, reached an accuracy rate of 0.92.

Read the short paper at ArXiv: https://arxiv.org/pdf/2309.00155.pdf

Article Link: "LLM in the Shell: Generative Honeypots" at ESORICS 2023 Poster Session — Stratosphere IPS