Adversarial Attacks on Android Malware Detection via LLMs

Research · Adversarial Attacks | Analyzed: Jan 10, 2026 07:31
Published: Dec 24, 2025 19:56
ArXiv

Analysis

This research examines the vulnerability of Android malware detectors to adversarial attacks generated by Large Language Models (LLMs). The study highlights a concerning trend in which sophisticated AI models are leveraged to undermine existing detection systems.
Reference / Citation
"The research focuses on LLM-driven feature-level adversarial attacks."
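Feature-level attacks in this setting typically perturb the detector's input features (e.g., declared permissions or API calls) rather than the APK binary itself. The sketch below is a minimal, hypothetical illustration of that attack class using a toy linear classifier and made-up feature weights; the paper's actual LLM-driven method is not reproduced here. It greedily adds benign-looking features (addition-only, so the app's original behavior is preserved) until the malicious score drops below the decision threshold.

```python
# Hedged sketch: greedy feature-level evasion against a linear malware
# classifier over binary Android features. Weights and features are
# illustrative assumptions, not taken from the paper; an LLM-driven
# variant would instead propose which features to add.

def greedy_evasion(x, weights, bias, max_flips=3):
    """Flip 0->1 the absent features with the most negative weight,
    reducing the malicious score while only adding functionality."""
    x = list(x)
    score = lambda v: sum(w * f for w, f in zip(weights, v)) + bias
    for _ in range(max_flips):
        if score(x) < 0:  # already classified as benign
            break
        # candidate flips: features not yet present that lower the score
        candidates = [i for i, f in enumerate(x) if f == 0 and weights[i] < 0]
        if not candidates:
            break
        best = min(candidates, key=lambda i: weights[i])
        x[best] = 1
    return x, score(x)

# Toy model: positive weights = suspicious, negative = benign-looking.
weights = [2.0, 1.5, -1.0, -2.5]
bias = -0.5
malware = [1, 1, 0, 0]  # original sample: score 2.0 + 1.5 - 0.5 = 3.0 (malicious)
adv, s = greedy_evasion(malware, weights, bias)
print(adv, s)  # → [1, 1, 1, 1] -0.5 (now scored benign)
```

The addition-only constraint mirrors a common requirement in malware evasion: removing features risks breaking the app, whereas adding unused permissions or dead API calls is cheap and functionality-preserving.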