Research Paper#LLMs, Prompt Injection, Adversarial Attacks, Academic Peer Review, Multilingual NLP🔬 ResearchAnalyzed: Jan 3, 2026 18:30
Multilingual Prompt Injection Attacks on LLM Academic Reviewing
Published:Dec 29, 2025 18:43
•1 min read
•ArXiv
Analysis
This paper investigates the vulnerability of LLMs used for academic peer review to hidden prompt injection attacks. It's significant because it explores a real-world application (peer review) and demonstrates how adversarial attacks can manipulate LLM outputs, potentially leading to biased or incorrect decisions. The multilingual aspect adds another layer of complexity, revealing language-specific vulnerabilities.
Key Takeaways
- •LLMs used for academic peer review are susceptible to document-level prompt injection attacks.
- •The effectiveness of these attacks varies across languages.
- •English, Japanese, and Chinese injections were successful in altering review outcomes.
- •Arabic injections showed little to no effect.
Reference
“Prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect.”