Participation in ICAART 2024, Rome

Last week, our team was happy to present two papers at the 16th International Conference on Agents and Artificial Intelligence (ICAART) in Rome.

Out of the Cage: How Stochastic Parrots Win in Cyber Security Environments

by Maria Rigaki, Ondřej Lukáš, Carlos Catania and Sebastian Garcia

Large Language Models (LLMs) have gained widespread popularity across diverse domains involving text generation, summarization, and various natural language processing tasks. Despite their inherent limitations, LLM-based designs have shown promising capabilities in planning and navigating open-world scenarios. This paper introduces a novel application of pre-trained LLMs as agents within cybersecurity network environments, focusing on their utility for sequential decision-making processes. We present an approach wherein pre-trained LLMs are leveraged as attacking agents in two reinforcement learning environments. In most scenarios and configurations, our proposed agents perform similarly to or better than state-of-the-art agents trained for thousands of episodes. In addition, the best LLM agents perform similarly to human testers of the environment without any additional training process. This design highlights the potential of LLMs to efficiently address complex decision-making tasks within cybersecurity. Furthermore, we introduce a new network security environment named NetSecGame. The environment is designed to eventually support complex multi-agent scenarios within the network security domain. The proposed environment mimics real network attacks and is designed to be highly modular and adaptable for various scenarios. [READ MORE]
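
For intuition, here is a minimal sketch of the decision loop such an LLM agent runs: the observed network state is serialized into a prompt, the model names the next action, and the environment returns the updated state and a reward. The environment class, the action names, and the query_llm() stub are hypothetical stand-ins for illustration, not NetSecGame's actual API.

```python
import random

VALID_ACTIONS = ["ScanNetwork", "FindServices", "ExploitService",
                 "FindData", "ExfiltrateData"]

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a pre-trained LLM. Returning a random
    valid action keeps the sketch runnable offline."""
    return random.choice(VALID_ACTIONS)

def build_prompt(state: dict, history: list[str]) -> str:
    """Serialize the observed state and action history into text -- the
    key step in using an LLM for sequential decision-making."""
    return (
        "You are an attacker in a simulated network.\n"
        f"Known hosts: {state['hosts']}\n"
        f"Known services: {state['services']}\n"
        f"Previous actions: {history}\n"
        f"Reply with one action from {VALID_ACTIONS}."
    )

class ToyNetEnv:
    """Tiny stand-in environment; real NetSecGame scenarios are richer."""
    def reset(self) -> dict:
        return {"hosts": ["10.0.0.2"], "services": []}

    def step(self, action: str) -> tuple[dict, float, bool]:
        done = action == "ExfiltrateData"   # toy goal condition
        reward = 100.0 if done else -1.0    # small per-step penalty
        state = {"hosts": ["10.0.0.2"], "services": ["ssh"]}
        return state, reward, done

def run_episode(env: ToyNetEnv, max_steps: int = 20) -> float:
    state, history, total = env.reset(), [], 0.0
    for _ in range(max_steps):
        action = query_llm(build_prompt(state, history))
        state, reward, done = env.step(action)
        history.append(action)
        total += reward
        if done:
            break
    return total

print(run_episode(ToyNetEnv()))
```

In the paper's setting, query_llm() would call a real pre-trained model and the environment would be a NetSecGame scenario; no weight updates are needed, which is what makes the comparison with RL agents trained for thousands of episodes interesting.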

Bridging the Explanation Gap in AI Security: A Task-Driven Approach to XAI Methods Evaluation

by Ondřej Lukáš and Sebastian Garcia

Deciding which XAI technique is best depends not only on the domain, but also on the given task, the dataset used, the model being explained, and the target goal of that model. We argue that the evaluation of XAI methods has not been thoroughly analyzed in the network security domain, which presents a unique type of challenge. While there are XAI methods applied in network security, there is still a large gap between the needs of security stakeholders and the selection of the optimal method. We propose to approach the problem by first defining the stakeholders in security and their prototypical tasks. Each task defines inputs and specific needs for explanations. Based on these explanation needs (e.g. understanding the performance, or stealing a model), we created five XAI evaluation techniques that are used to compare and select which XAI method is best for each task (dataset, model, and goal). Our proposed approach was evaluated by running experiments for different security stakeholders, machine learning models, and XAI methods. Results were compared with the AutoXAI technique and random selection. Results show that our proposal to evaluate and select XAI methods for network security is well-grounded and that it can help AI security practitioners find better explanations for their given tasks. [READ MORE]
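
As a rough illustration of the selection idea, the sketch below scores two candidate explanation methods with a task-motivated metric and keeps the winner. The toy model, the candidate methods, and the deletion-style fidelity score are assumptions made for illustration; the paper's five task-specific evaluation techniques are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=8)            # toy linear "detector" weights
model = lambda X: X @ W           # model score per sample

def deletion_score(explain, X):
    """Fidelity proxy: zero out each sample's top-attributed feature and
    measure how much the model output drops (bigger drop = better)."""
    attributions = explain(X)
    top = np.argmax(np.abs(attributions), axis=1)
    X_del = X.copy()
    X_del[np.arange(len(X)), top] = 0.0
    return float(np.mean(np.abs(model(X) - model(X_del))))

# Two hypothetical candidate explanation methods to choose between.
methods = {
    "gradient": lambda X: np.tile(W, (len(X), 1)),   # exact for a linear model
    "random":   lambda X: rng.normal(size=X.shape),  # uninformative baseline
}

X = rng.normal(size=(256, 8))
scores = {name: deletion_score(fn, X) for name, fn in methods.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```

The same pattern generalizes: swap in the stakeholder's task-specific metric and the real candidate methods, and the selection step stays a simple argmax over scores.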