Explaining Black Box Predictions with Sam Ritchie - TWiML Talk #73
Published: Nov 25, 2017 19:26 · 1 min read · Practical AI
Analysis
This article summarizes a podcast episode from Practical AI featuring Sam Ritchie, a software engineer at Stripe. The episode focuses on explaining black box predictions, particularly in the context of fraud detection at Stripe. The discussion covers Stripe's methods for interpreting these predictions and touches on related work, including Carlos Guestrin's LIME paper. The article highlights the importance of understanding and explaining complex AI models, especially in critical applications like fraud prevention. The episode was recorded at the Strange Loop conference, which is noted for its developer-focused, multidisciplinary approach.
Key Takeaways
- The podcast discusses the challenges of explaining black box predictions in AI.
- Stripe uses black box predictions for fraud detection, highlighting a practical application.
- The episode references the LIME paper and other approaches to explainability.
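The LIME paper referenced above explains a black box model by fitting a simple, interpretable model to the black box's behavior in a small neighborhood around one prediction. The sketch below illustrates that core idea only; it is a minimal, hypothetical example in NumPy, not the actual `lime` library or any code from Stripe or the talk. The function name `lime_explain` and all parameters are assumptions for illustration.

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=500, width=0.75, seed=0):
    """Approximate black-box predict_fn near instance x with a
    locally weighted linear model, in the spirit of LIME.
    (Hypothetical helper, not the lime library's API.)"""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise to probe the local surface.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(Z)
    # Weight each perturbation by its proximity to x (exponential kernel),
    # so the linear fit is faithful only near the instance being explained.
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width ** 2)
    # Weighted least squares with an intercept column: solve A'WA c = A'Wy.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(A.T @ Aw, A.T @ (w * y), rcond=None)
    return coef[1:]  # per-feature local importance (intercept dropped)

# Toy black box: locally dominated by feature 0, mildly nonlinear in feature 1.
black_box = lambda Z: 3.0 * Z[:, 0] + np.sin(Z[:, 1])
weights = lime_explain(black_box, np.array([1.0, 0.0]))
```

Here `weights` recovers a local linear picture of the black box: roughly 3 for the first feature and a positive value near 1 for the second (the slope of `sin` at 0), which is the kind of per-feature explanation a fraud analyst could actually read.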
Reference
“In this episode, I speak with Sam Ritchie, a software engineer at Stripe. I caught up with Sam right after his talk at the conference, where he covered his team’s work on explaining black box predictions.”