Adversarial Attack on Monocular Depth Estimation using Physics-in-the-Loop Optimization

Research Paper | Tags: Adversarial Attacks, Monocular Depth Estimation, Computer Vision | Analyzed: Jan 3, 2026 08:41
Published: Dec 31, 2025 11:30
ArXiv

Analysis

This paper addresses the vulnerability of deep learning models for monocular depth estimation to adversarial attacks, highlighting a practical security concern for computer vision applications. The use of Physics-in-the-Loop (PITL) optimization, which accounts for real-world device specifications and physical disturbances, makes the attack realistic and the findings directly relevant to deployed systems. The paper's contribution is to demonstrate that adversarial examples can be crafted to cause significant depth misestimations, potentially making objects disappear from the estimated scene.
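The general idea, optimizing a bounded image perturbation so it still fools a depth estimator under physical disturbances, can be sketched with a toy model. The snippet below is illustrative only: it substitutes a fixed linear map for a real monocular depth network, Gaussian noise for the paper's device-specific disturbances, and a generic expectation-over-transformations-style signed-gradient attack in place of the authors' actual PITL procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a depth network: a fixed linear map from a 64-pixel
# "image" to a 16-pixel depth map. (A real attack targets a deep estimator.)
W = rng.normal(scale=0.1, size=(16, 64))

def predict_depth(x):
    return W @ x

def attack(x, target_depth, steps=200, eps=0.3, alpha=0.01, noise=0.02):
    """Iterative attack: average gradients over simulated physical
    disturbances (here plain sensor noise), then take a bounded
    signed-gradient step toward the attacker's target depth."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(8):  # crude expectation over random disturbances
            x_phys = x + delta + rng.normal(scale=noise, size=x.shape)
            err = predict_depth(x_phys) - target_depth
            grad += 2 * W.T @ err          # gradient of ||W x - t||^2
        delta -= alpha * np.sign(grad)     # signed gradient step
        delta = np.clip(delta, -eps, eps)  # keep perturbation small/printable
    return delta

x = rng.normal(size=64)
target = predict_depth(x) + 5.0            # push the object "far away"
delta = attack(x, target)
clean_err = np.linalg.norm(predict_depth(x) - target)
adv_err = np.linalg.norm(predict_depth(x + delta) - target)
```

After optimization, `adv_err` is smaller than `clean_err`: the bounded perturbation has shifted the predicted depth toward the attacker's "far" target, the toy analogue of an object vanishing from the estimated scene.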
Reference / Citation
"The proposed method successfully created adversarial examples that lead to depth misestimations, resulting in parts of objects disappearing from the target scene."
— ArXiv, Dec 31, 2025 11:30
* Cited for critical analysis under Article 32.