ArtiSG: Functional 3D Scene Graphs for Robotic Manipulation
Research Paper · Topics: Robotics, Scene Understanding, Articulated Objects, Manipulation
Published: Dec 31, 2025 · Analyzed: Jan 3, 2026
Source: ArXiv Analysis
This paper addresses a critical limitation in robotic scene understanding: the lack of functional information about articulated objects. Existing methods struggle with visual ambiguity and often miss fine-grained functional elements such as handles and knobs. ArtiSG offers a novel solution by incorporating human demonstrations to build functional 3D scene graphs, enabling robots to perform language-directed manipulation tasks. Its key strengths are a portable setup for data collection and the integration of kinematic priors.
Key Takeaways
- Proposes ArtiSG, a framework for constructing functional 3D scene graphs.
- Utilizes human demonstrations to overcome limitations of existing methods.
- Employs a portable setup for robust data collection.
- Integrates kinematic priors and interaction data.
- Demonstrates superior performance in real-world experiments.
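To make the idea of a functional 3D scene graph concrete, the sketch below pairs object nodes with fine-grained functional elements annotated with kinematic priors (joint type, axis, limits), and resolves a language query to a manipulable element. All names, fields, and the lookup logic are illustrative assumptions, not the paper's actual schema or method.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical schema: the paper does not publish this structure.
@dataclass
class FunctionalElement:
    """A fine-grained interactive part, e.g. a handle or knob."""
    name: str
    joint_type: str                    # kinematic prior: "revolute" or "prismatic"
    axis: Tuple[float, float, float]   # articulation axis in the object frame
    limit: Tuple[float, float]         # joint range (radians or meters)

@dataclass
class ObjectNode:
    """An articulated object node in the scene graph."""
    label: str
    elements: List[FunctionalElement] = field(default_factory=list)

def find_element(scene: List[ObjectNode],
                 query: str) -> Optional[Tuple[ObjectNode, FunctionalElement]]:
    """Naively resolve a language command to the first matching element."""
    for obj in scene:
        if obj.label in query:
            for el in obj.elements:
                return obj, el
    return None

# Usage: a cabinet door with a revolute handle, queried by a language command.
cabinet = ObjectNode("cabinet", [
    FunctionalElement("handle", "revolute", (0.0, 0.0, 1.0), (0.0, 1.57)),
])
hit = find_element([cabinet], "open the cabinet")
```

A real system would ground queries with a language model and estimate axes and limits from the human demonstrations; the point here is only the shape of the graph: objects carry functional elements, and each element carries articulation parameters a planner can act on.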
Reference / Citation
"ArtiSG significantly outperforms baselines in functional element recall and articulation estimation precision."