Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189
Published: Oct 10, 2018 18:24 • 1 min read • Practical AI
Analysis
This article summarizes a podcast episode featuring Sara Hooker, an AI Resident at Google Brain. The discussion centers on the interpretability of deep neural networks: what interpretability means, how interpreting a model's individual decisions differs from interpreting its overall function, and how methods for explaining a model's inner workings can be evaluated. The conversation also touches on the relationship between Google Brain and the broader Google AI ecosystem, including the significance of the Google AI Lab in Accra, Ghana.
Key Takeaways
- The episode focuses on the interpretability of deep neural networks.
- It explores the difference between interpreting model decisions and model function.
- The discussion includes the relationship between Google Brain and the broader Google AI landscape.
Reference
“We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function.”
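To make that distinction concrete, here is a minimal illustrative sketch (not from the episode, and not Hooker's method): a tiny logistic-regression "model" where a decision-level explanation asks how one specific prediction responds to its input features (an input-gradient saliency), while a function-level explanation looks at what the model has learned overall, independent of any single input. All names and values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained parameters and a single input to explain.
w = np.array([1.5, -2.0, 0.3])   # learned weights (assumed)
b = 0.1                          # learned bias (assumed)
x = rng.normal(size=3)           # one example to explain

def predict(x):
    """Sigmoid output of a linear model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Decision-level explanation: how does *this* prediction change as each
# feature of *this* input changes? Gradient of sigmoid(w·x + b) w.r.t. x.
p = predict(x)
decision_saliency = p * (1.0 - p) * w

# Function-level explanation: what has the model learned overall,
# independent of any particular input? Here, simply the weights.
function_importance = w

print("prediction:", p)
print("decision-level saliency:", decision_saliency)
print("function-level weights:", function_importance)
```

For a deep network the same split applies, but the decision-level gradient must be computed by backpropagation through the network, and the function-level view typically requires aggregating behavior over many inputs rather than reading off weights directly.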