Adversarial Examples Discussion

Published: Jan 31, 2021 19:46
1 min read
ML Street Talk Pod

Analysis

This article summarizes a Machine Learning Street Talk podcast episode on adversarial examples in machine learning. It highlights ongoing research into why these examples exist and how they affect neural networks, mentions the 'features, not bugs' paper, and introduces the researchers involved, with links to their profiles. It also outlines the structure of the episode and the topics covered.

Reference

Adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans.
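The quoted claim concerns features that are highly predictive yet brittle under small perturbations. As a minimal illustration of that brittleness, here is an FGSM-style sketch on a toy linear classifier; this is my own assumption for illustration, not code from the episode or the paper:

```python
import numpy as np

# Toy adversarial perturbation (assumption: a fixed linear classifier
# attacked with one fast-gradient-sign step, chosen for illustration).
w = np.array([1.0, -2.0, 0.5])  # classifier weights
b = 0.1                          # bias

def predict(x):
    """Linear classifier: class 1 if w.x + b > 0, else class 0."""
    return int(w @ x + b > 0)

def fgsm(x, y, eps):
    """One FGSM step: move x by eps along the sign of the loss gradient.
    For logistic loss on a linear model, sign(dL/dx) is -sign(w) when
    the true label y is 1, and +sign(w) when y is 0."""
    grad_sign = -np.sign(w) if y == 1 else np.sign(w)
    return x + eps * grad_sign

x = np.array([0.5, 0.1, 0.2])
print(predict(x))                  # 1: correctly classified
x_adv = fgsm(x, y=1, eps=0.2)
print(predict(x_adv))              # 0: a small perturbation flips the label
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the prediction flips, which is the brittleness the quoted passage attributes to non-robust features.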