AI for High-Stakes Decision Making with Hima Lakkaraju - #387

Research | AI Explainability | Blog
Published: Jun 29, 2020 19:44
Practical AI

Analysis

This article from Practical AI discusses Hima Lakkaraju's work on the reliability of explainable AI (XAI) techniques, particularly perturbation-based methods such as LIME and SHAP. The focus is on how these techniques can be unreliable and even deliberately exploited. The article underscores the importance of understanding the limitations of XAI, especially in high-stakes decision-making scenarios where trust and accuracy are paramount. It suggests that researchers and practitioners should be aware of these methods' vulnerabilities and explore more robust, trustworthy approaches to explainability.
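To make the concern concrete, here is a minimal sketch, in plain NumPy, of how a perturbation-based explainer works and how it can be gamed. The first function is a LIME-style local surrogate (sample perturbations, query the black box, fit a weighted linear model); the wrapper below it illustrates the style of attack discussed, a model that behaves innocuously on inputs that look like explainer perturbations. All names and the crude "on-manifold" check are illustrative assumptions, not code from any specific library or from Lakkaraju's papers.

```python
import numpy as np

def lime_like_importances(model, x, n_samples=500, scale=0.5, seed=0):
    """Minimal LIME-style local explanation (illustrative sketch).

    Perturb the input, query the black-box model, and fit a
    proximity-weighted linear surrogate; its coefficients serve as
    local feature-importance estimates.
    """
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations around the instance being explained.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Query the black-box model on each perturbed input.
    y = np.array([model(z) for z in Z])
    # 3. Weight samples by proximity to x (Gaussian kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # 4. Fit a weighted linear surrogate via weighted least squares.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef[:-1]  # per-feature importance estimates

# A deliberately "biased" model that relies entirely on feature 1.
def biased_model(z):
    return 5.0 * z[1]

# Adversarial wrapper in the spirit of the exploits discussed: real
# inputs get the biased behavior, while explainer perturbations (which
# almost never match a real data point exactly) get an innocuous model
# that depends only on feature 0 -- so the explanation hides the bias.
def wrapped_model(z, real_points):
    on_manifold = any(np.allclose(z, p) for p in real_points)
    return biased_model(z) if on_manifold else 3.0 * z[0]

x0 = np.array([1.0, 2.0])
imp = lime_like_importances(lambda z: wrapped_model(z, [x0]), x0)
# The surrogate attributes the output to feature 0, masking the
# model's actual reliance on feature 1.
```

The point of the sketch is that the explainer only ever sees perturbed inputs, so a model that can distinguish perturbations from real data can present an arbitrary facade to the explanation method.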
Reference / Citation
View Original
"Hima spoke on Understanding the Perils of Black Box Explanations."
* Cited for critical analysis under Article 32.