Research · AI Interpretability · Blog · Analyzed: Dec 29, 2025 08:21

Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189

Published: Oct 10, 2018 18:24
1 min read
Practical AI

Analysis

This article summarizes a podcast episode featuring Sara Hooker, an AI Resident at Google Brain. The discussion centers on the interpretability of deep neural networks: what interpretability means, and the distinction between interpreting a model's individual decisions and interpreting its overall function. The conversation also touches on how Google Brain fits into the broader Google AI ecosystem, including the significance of the Google AI Lab in Accra, Ghana. Throughout, the focus is on how to evaluate methods that claim to explain the inner workings of AI models.
Reference

We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function.