
AI for High-Stakes Decision Making with Hima Lakkaraju - #387

Published: Jun 29, 2020 19:44
1 min read
Practical AI

Analysis

This article from Practical AI discusses Hima Lakkaraju's work on the reliability of explainable AI (XAI) techniques, particularly perturbation-based methods such as LIME and SHAP. These methods explain a black-box model's prediction by querying it on perturbed versions of an input and fitting a simple surrogate model to the responses; because those perturbed inputs often fall off the data manifold, an adversarial model can detect them and present innocuous behavior to the explainer while acting differently on real data. The article highlights the importance of understanding these limitations of XAI, especially in high-stakes decision-making scenarios where trust and accuracy are paramount, and suggests that researchers and practitioners be aware of these vulnerabilities and pursue more robust, trustworthy approaches to explainability.
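To make the concern concrete, here is a minimal sketch (not from the episode, and not the LIME or SHAP libraries themselves) of the perturbation-based explanation idea, together with a toy version of the kind of adversarial "scaffolding" exploit the discussion points to. All model names, features, and thresholds below are hypothetical illustrations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def biased_model(X):
    # Hypothetical biased classifier: its decision depends entirely on
    # feature 0 (imagine a sensitive attribute).
    return (X[:, 0] > 0).astype(float)

def innocuous_model(X):
    # Hypothetical "cover" classifier: depends only on feature 1.
    return (X[:, 1] > 0).astype(float)

def lime_style_explain(predict_fn, x, n_samples=1000, scale=1.0):
    """Core perturbation-based idea (as in LIME): sample perturbations of x,
    query the black box, weight samples by proximity to x, and fit a weighted
    linear surrogate whose coefficients serve as feature attributions."""
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict_fn(Z)
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)  # proximity kernel
    surrogate = LinearRegression().fit(Z, y, sample_weight=weights)
    return surrogate.coef_

def scaffolded_model(X, reference):
    """Toy adversarial wrapper: route inputs that look like real data to the
    biased model, and off-manifold (perturbed) inputs to the innocuous one.
    'Real-looking' is crudely approximated here by distance to a reference
    point; real attacks train a detector on the data distribution."""
    is_real = np.linalg.norm(X - reference, axis=1) < 0.5
    out = innocuous_model(X)
    out[is_real] = biased_model(X[is_real])
    return out

x = np.array([1.0, -1.0])  # instance to explain
print("honest attributions:    ", lime_style_explain(biased_model, x))
print("scaffolded attributions:", lime_style_explain(lambda Z: scaffolded_model(Z, x), x))
```

Because most perturbed samples land far from the reference point, the explainer mostly sees the innocuous model, so the surrogate attributes the decision to feature 1 even though the deployed behavior depends on feature 0. This is the gist of the vulnerability: the explanation is only as trustworthy as the queries used to produce it.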

Reference

Hima spoke on "Understanding the Perils of Black Box Explanations."