Eray Özkural on AGI, Simulations & Safety

Published: Dec 20, 2020 01:16
1 min read
ML Street Talk Pod

Analysis

The article summarizes a podcast episode featuring Dr. Eray Özkural, an AGI researcher, who criticizes the AI safety positions of Max Tegmark, Nick Bostrom, and Eliezer Yudkowsky. Özkural accuses them of 'doomsday fear-mongering' and neo-Luddism that hinders AI development. The episode also covers the intelligence explosion hypothesis, the simulation argument, the definition of intelligence, and neural networks.

Reference

Özkural argues that these AI safety views represent a form of neo-Luddism and are capturing valuable research budgets through doomsday fear-mongering.