Research Paper · Adversarial Attacks, Monocular Depth Estimation, Computer Vision
Adversarial Attack on Monocular Depth Estimation using Physics-in-the-Loop Optimization
Published: Dec 31, 2025 · ArXiv
Analysis
This paper examines the vulnerability of deep learning models for monocular depth estimation to adversarial attacks, a practical security concern for computer vision systems. Its use of Physics-in-the-Loop (PITL) optimization, which accounts for real-world device specifications and physical disturbances, makes the attack more realistic and the findings more relevant to deployed systems. The paper's main contribution is demonstrating that adversarial examples can be crafted to cause significant depth misestimations, to the point that parts of objects effectively disappear from the estimated depth map.
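To make the PITL idea concrete, here is a minimal sketch of what such a loop could look like. This is not the paper's implementation: `project_and_capture`, `depth_model`, and the toy scene are hypothetical stand-ins (the summary does not specify the hardware interface, depth network, or optimizer). Because the projector-to-camera path is physical and not differentiable, the sketch uses a simple gradient-free (1+1) evolution strategy, where each candidate pattern would be evaluated by actually projecting it and capturing a frame.

```python
"""Minimal sketch of a physics-in-the-loop (PITL) projection attack.

Hypothetical setup (NOT the paper's implementation):
  - project_and_capture(pattern): projects `pattern` into the scene and
    returns a camera frame (stubbed here with a toy simulation),
  - depth_model(image): a monocular depth estimator (stubbed here).
"""
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64                       # toy frame size
mask = np.zeros((H, W), bool)       # pixels covering the target object
mask[20:44, 20:44] = True

def project_and_capture(pattern):
    # Stand-in for the real projector + camera pipeline. A real PITL loop
    # would display `pattern` and grab a frame, so device limits (gamma,
    # blur, ambient light) are applied by the physics itself.
    scene = np.full((H, W), 0.7)
    scene[mask] = 0.4                            # object is darker
    return np.clip(scene + 0.5 * pattern, 0.0, 1.0)

def depth_model(image):
    # Toy depth convention: brighter pixels read as farther away.
    return 10.0 * image

def disappearance_loss(pattern):
    # Attack objective: push the object's predicted depth toward (or past)
    # the background, so it "disappears" from the depth map.
    depth = depth_model(project_and_capture(pattern))
    return -depth[mask].mean()      # lower loss = object estimated farther

# (1+1) evolution strategy: keep a random perturbation only if it helps.
pattern = np.zeros((H, W))
best = disappearance_loss(pattern)
for step in range(500):
    candidate = np.clip(pattern + 0.05 * rng.standard_normal((H, W)), 0, 1)
    loss = disappearance_loss(candidate)   # one physical capture per trial
    if loss < best:
        pattern, best = candidate, loss

clean = depth_model(project_and_capture(np.zeros((H, W))))[mask].mean()
adv = depth_model(project_and_capture(pattern))[mask].mean()
print(f"mean object depth: clean {clean:.2f} -> adversarial {adv:.2f}")
```

The gradient-free update is the key design choice in this sketch: with real optics in the loop there is no backpropagation path through the scene, so the attack can only query the camera and keep patterns that increase the misestimation.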
Key Takeaways
- Demonstrates the vulnerability of monocular depth estimation models to adversarial attacks.
- Proposes a projection-based adversarial attack method.
- Employs Physics-in-the-Loop (PITL) optimization for realistic attack simulation.
- Shows that adversarial examples can cause significant depth misestimations and object disappearance (one plausible formalization is sketched below).
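One plausible way to formalize the disappearance objective, using my own notation rather than the paper's exact loss: maximize the predicted depth over the target object's pixels, subject to what the projector can physically display.

```latex
% Illustrative objective (assumed notation, not the paper's exact loss):
\delta^\star \;=\; \arg\max_{\delta \in \Delta}\;
  \frac{1}{\lvert M \rvert} \sum_{p \in M}
  \hat{D}_\theta\bigl(\mathrm{cap}(x, \delta)\bigr)_p
```

Here \(\hat{D}_\theta\) is the depth model, \(M\) the set of pixels on the target object, \(\mathrm{cap}(x, \delta)\) the physically captured image of scene \(x\) under projected pattern \(\delta\), and \(\Delta\) the set of patterns the projector can realize; evaluating \(\mathrm{cap}\) through real hardware is what makes the optimization "physics-in-the-loop."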
Reference
“The proposed method successfully created adversarial examples that lead to depth misestimations, resulting in parts of objects disappearing from the target scene.”