Taxonomy of LLM Harms: A Critical Review

Ethics · LLM · Research | Analyzed: Jan 10, 2026 13:00
Published: Dec 5, 2025 18:12
1 min read
ArXiv

Analysis

This ArXiv paper makes a valuable contribution by cataloging potential harms associated with Large Language Models (LLMs). Its taxonomy enables a more structured understanding of these risks and supports the design of focused mitigation strategies.
Reference / Citation
"The paper presents a detailed taxonomy of harms related to LLMs."
ArXiv, Dec 5, 2025 18:12
* Cited for critical analysis under Article 32.