Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry - #369
Published: Apr 27, 2020 13:18 • 1 min read • Practical AI
Analysis
This Practical AI podcast episode features a discussion with Aleksander Madry about his paper arguing that adversarial examples are not bugs but features of deep learning models. The conversation likely explores the gap between the expected and actual behavior of these systems, how adversarial patterns can be characterized, and why they matter. It may also consider what these findings imply for the ongoing debate over deep learning's strengths and weaknesses. Throughout, the focus is on understanding and interpreting the behavior of AI models.
Key Takeaways
- The core topic is adversarial examples in deep learning.
- The discussion centers on the idea that adversarial examples are features, not bugs.
- The conversation explores the implications of this perspective for the deep learning debate.
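To make the core term concrete: an adversarial example is an input nudged by a tiny, targeted perturbation that flips a model's prediction. A minimal sketch on a toy linear classifier (a simplified illustration, not the paper's actual experimental setup; all names and values here are invented for the demo):

```python
import numpy as np

# Toy linear classifier: predict sign(w . x).
rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=d)

# Build an input that is correctly, but not confidently, classified as +1.
x = rng.normal(size=d)
x -= ((w @ x - 1.0) / (w @ w)) * w   # project so that w . x == 1.0

def classify(v):
    return 1 if w @ v > 0 else -1

# FGSM-style attack: step each coordinate slightly against sign(w),
# the gradient of the score with respect to the input.
eps = 0.05                            # tiny compared to typical |x_i| ~ 1
x_adv = x - eps * np.sign(w)

print(classify(x), classify(x_adv))   # the +1 prediction flips to -1
```

Because the perturbation is aligned with the gradient, its effect on the score accumulates across all dimensions, so a per-coordinate change of 0.05 is enough to cross the decision boundary.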
Reference
“The podcast discusses Aleksander Madry's paper ‘Adversarial Examples Are Not Bugs, They Are Features.’”