Research Paper · AI Security, LLMs, Multi-Agent Systems, Code Injection · 🔬 Research · Analyzed: Jan 3, 2026 16:38
Code Injection Attacks on LLM-based Multi-Agent Systems
Analysis
This paper examines a critical security vulnerability in LLM-based multi-agent systems: code injection attacks. The issue matters because such systems are increasingly used in software development, and the research shows they remain susceptible to maliciously injected code. The findings have significant implications for the design and deployment of secure AI-powered development systems.
Key Takeaways
- LLM-based multi-agent systems are vulnerable to code injection attacks.
- The coder-reviewer-tester architecture is more resilient than coder-only or coder-tester architectures.
- Adding a dedicated security analysis agent improves resilience without significantly impacting efficiency (see the sketch after this list).
- Advanced code injection techniques, such as embedding poisonous few-shot examples in the injected code, can significantly increase attack success rates.
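The takeaways above compare architectures by how many independent checks stand between generated code and its acceptance. As a rough illustration of that idea, the sketch below chains a coder agent with a security analysis agent, a reviewer, and a tester. The agent roles mirror the paper's terminology, but `call_llm`, the prompts, and the approval keywords are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of a
# coder-reviewer-tester pipeline extended with a security analysis agent.
# `call_llm` stands in for whatever chat-completion client the system uses.
from dataclasses import dataclass


def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for an LLM call; wire this to your model provider."""
    raise NotImplementedError


@dataclass
class Verdict:
    approved: bool
    feedback: str


def coder(task: str, feedback: str = "") -> str:
    # Coding agent: produces a candidate solution, optionally revising
    # based on feedback from downstream agents.
    return call_llm(
        "You are a coding agent. Write code that solves the task.",
        f"Task: {task}\nPrior feedback: {feedback}",
    )


def security_analyst(code: str) -> Verdict:
    # Dedicated security agent: screens for injected or malicious snippets
    # before the code reaches review and testing.
    report = call_llm(
        "You are a security analysis agent. Flag code injection, "
        "obfuscated payloads, and suspicious few-shot examples.",
        code,
    )
    return Verdict(approved="NO ISSUES" in report.upper(), feedback=report)


def reviewer(code: str) -> Verdict:
    report = call_llm("You are a code reviewer. Approve or request changes.", code)
    return Verdict(approved="APPROVE" in report.upper(), feedback=report)


def tester(code: str) -> Verdict:
    report = call_llm("You are a testing agent. Report PASS or FAIL.", code)
    return Verdict(approved="PASS" in report.upper(), feedback=report)


def run_pipeline(task: str, max_rounds: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        code = coder(task, feedback)
        for agent in (security_analyst, reviewer, tester):
            verdict = agent(code)
            if not verdict.approved:
                feedback = verdict.feedback
                break
        else:
            return code  # all agents approved the candidate
    return None  # no accepted solution within the round budget
```

The design point is simply that each additional gate gives injected code another chance to be caught before acceptance, which is consistent with the paper's finding that richer architectures and a security analysis agent improve resilience.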
Reference
“Embedding poisonous few-shot examples in the injected code can increase the attack success rate from 0% to 71.95%.”