Multilingual Prompt Injection Attacks on LLM Academic Reviewing

Published:Dec 29, 2025 18:43
1 min read
ArXiv

Analysis

This paper investigates the vulnerability of LLMs used for academic peer review to hidden prompt injection attacks. It's significant because it explores a real-world application (peer review) and demonstrates how adversarial attacks can manipulate LLM outputs, potentially leading to biased or incorrect decisions. The multilingual aspect adds another layer of complexity, revealing language-specific vulnerabilities.

Reference

Prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect.