Code Injection Attacks on LLM-based Multi-Agent Systems

Published:Dec 26, 2025 01:08
1 min read
ArXiv

Analysis

This paper highlights a critical security vulnerability in LLM-based multi-agent systems, specifically code injection attacks. It's important because these systems are becoming increasingly prevalent in software development, and this research reveals their susceptibility to malicious code. The paper's findings have significant implications for the design and deployment of secure AI-powered systems.

Reference

Embedding poisonous few-shot examples in the injected code can increase the attack success rate from 0% to 71.95%.