Attacking machine learning with adversarial examples

Security / Machine Learning Vulnerabilities · Analyzed: Jan 3, 2026 18:07
Published: Feb 24, 2017 08:00
1 min read
OpenAI News

Analysis

The article introduces adversarial examples: inputs intentionally designed to mislead machine learning models. It outlines how such examples work across different platforms and why securing systems against them remains difficult, underscoring how vulnerable models are to carefully crafted inputs.
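To make the idea concrete, below is a minimal sketch of how such an input can be crafted with the fast gradient sign method, one common technique for this kind of attack. The model, input tensor, and epsilon value are illustrative assumptions, not details taken from the article.

```python
# Sketch: perturb an input so a classifier is more likely to mislabel it (FGSM).
# Assumes PyTorch; the model, data, and epsilon below are placeholders.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return a copy of `x` nudged in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each coordinate by +/- epsilon along the sign of the loss gradient.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixel values in range

if __name__ == "__main__":
    # Hypothetical usage: a tiny linear classifier and a random "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # placeholder input
    label = torch.tensor([3])      # placeholder true class
    x_adv = fgsm_example(model, x, label, epsilon=0.1)
    print(model(x).argmax(1), model(x_adv).argmax(1))
```

The perturbation is small per pixel, which is why, as the article's quote below puts it, adversarial examples behave like optical illusions for machines.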
Reference / Citation
"Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines."
OpenAI News, Feb 24, 2017 08:00
* Cited for critical analysis under Article 32.