
Security and Safety in AI: Adversarial Examples, Bias and Trust w/ Moustapha Cissé - TWiML Talk #108

Published: Feb 6, 2018 00:54
1 min read
Practical AI

Analysis

This article summarizes a podcast episode on AI security and safety. The focus is Moustapha Cissé's research at Facebook AI Research (FAIR) Paris, particularly his work on adversarial examples and building robust AI systems. The conversation also covers bias in datasets and Cissé's vision for models that can identify and mitigate such biases. The article additionally promotes an AI conference in New York, highlighting key speakers and offering a discount code, and provides links to show notes and related contests and series, reflecting a focus on practical application and community engagement within the AI field.
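To make the adversarial-examples topic concrete, here is a minimal, hypothetical sketch (not from the episode) of the core idea behind gradient-sign attacks such as FGSM: for a toy linear classifier, a perturbation aligned against the sign of the weight vector flips the prediction. All names and values below are illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: class 1 if w . x > 0, else class 0.
rng = np.random.default_rng(0)
w = rng.normal(size=50)          # illustrative random weights
x = 0.1 * w                       # an input the model classifies as class 1

def predict(v):
    return int(w @ v > 0)

# FGSM-style perturbation: step against the gradient of the score w.r.t.
# the input. For a linear model that gradient is just w, so the attack
# subtracts eps * sign(w) from the input.
eps = 0.3                         # illustrative attack budget
x_adv = x - eps * np.sign(w)

print(predict(x))      # original input: class 1
print(predict(x_adv))  # perturbed input: prediction flips to class 0
```

Each coordinate changes by at most `eps`, yet because the perturbation is structured to match the model's weights, the decision flips; a random perturbation of the same size typically would not.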

Reference

We discuss the role of bias in datasets, and explore his vision for models that can identify these biases and adjust the way they train themselves in order to avoid taking on those biases.