Embodied Visual Learning with Kristen Grauman - TWiML Talk #85
Published: Dec 13, 2017 21:18
Practical AI
Analysis
This article summarizes a podcast episode featuring computer vision researcher Kristen Grauman on embodied visual learning. The conversation stems from her talk at the Deep Learning Summit and focuses on how vision systems can learn by moving through and perceiving their environment. Grauman explores the link between an agent's motion and its visual input, policies for learning to look around actively, and mimicking human videographer tendencies to analyze unedited 360-degree video.
Reference
“Kristen considers how an embodied vision system can internalize the link between ‘how I move’ and ‘what I see’, explore policies for learning to look around actively, and learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video.”