Adversarial Objects for Depth Estimation Attacks via Diffusion
Analysis
This paper addresses the vulnerability of monocular depth estimation (MDE) in autonomous driving to adversarial attacks. It proposes a diffusion-based generative adversarial attack framework that produces realistic and effective adversarial objects. The key innovation is generating physically plausible objects that induce significant depth shifts, overcoming the limitations of existing methods in realism, stealthiness, and deployability. This matters for assessing and improving the robustness and safety of autonomous driving systems.
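To make the notion of an induced depth shift concrete, here is a minimal sketch of how such a shift could be measured. It assumes a PyTorch MDE model and a simple mask-based compositing step; the model, tensor names, and metric are illustrative placeholders, not the paper's actual evaluation protocol.

```python
import torch

def depth_shift(mde_model, scene, adv_object_patch, mask):
    """Estimate the depth shift an inserted object induces in an MDE model.

    All arguments are hypothetical placeholders: `scene` and `adv_object_patch`
    are image tensors of matching shape, and `mask` marks where the object is
    pasted into the scene.
    """
    # Composite the adversarial object into the scene.
    patched = scene * (1 - mask) + adv_object_patch * mask
    with torch.no_grad():
        depth_clean = mde_model(scene)    # depth prediction on the clean scene
        depth_adv = mde_model(patched)    # depth prediction with the object present
    # Mean absolute change in predicted depth over the affected region.
    return ((depth_adv - depth_clean).abs() * mask).sum() / mask.sum()
```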
Key Takeaways
- Proposes a novel diffusion-based method for generating adversarial objects.
- Addresses limitations of existing adversarial attack methods against MDE.
- Focuses on generating realistic and physically plausible adversarial objects.
- Demonstrates improved effectiveness, stealthiness, and deployability compared to existing methods.
- Has strong implications for autonomous driving safety assessment.
“The framework incorporates a Salient Region Selection module and a Jacobian Vector Product Guidance mechanism to generate physically plausible adversarial objects.”
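The quoted components are not detailed further in this summary, so the sketch below is only a rough approximation: it assumes a PyTorch denoiser and MDE model, uses plain gradient guidance as a stand-in for the Jacobian Vector Product Guidance mechanism, and treats `region_mask` as a stand-in for the output of the Salient Region Selection module.

```python
import torch

def guided_denoise_step(denoiser, mde_model, x_t, t, region_mask, guidance_scale=1.0):
    """One diffusion denoising step steered toward a depth-shift objective.

    `denoiser` and `mde_model` are hypothetical callables; this is a sketch of
    gradient-guided sampling, not the paper's exact mechanism.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)          # predicted clean adversarial object
    depth = mde_model(x0_pred)          # monocular depth prediction on that object
    # Maximize predicted depth inside the selected region (push the object "far away").
    loss = -(depth * region_mask).sum() / region_mask.sum()
    grad = torch.autograd.grad(loss, x_t)[0]
    # Nudge the noisy sample against the loss gradient before the next diffusion step.
    return (x_t - guidance_scale * grad).detach()
```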