Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane - TWiML Talk #235
Published: Mar 1, 2019 17:00 · 1 min read · Practical AI
Analysis
This article summarizes a conversation with Sicelukwanda Zwane on safer exploration in deep reinforcement learning. The focus is on action priors, a technique that makes exploration safer by biasing the agent's exploratory action choices toward behavior known to be useful from previously learned tasks, rather than exploring uniformly at random. The conversation covers what "safer exploration" means in this setting, how the approach differs from imitation learning, and how it fits into the goal of lifelong learning. The episode is part of TWiML's Black in AI series, reflecting an emphasis on diversity and inclusion within the AI community.
Key Takeaways
- The talk focuses on safer exploration in deep reinforcement learning.
- Action priors are the core technique discussed.
- The work is compared and contrasted with imitation learning and lifelong learning.
Reference
“In our conversation, we discuss what ‘safer exploration’ means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of ‘lifelong learning.’”