Computationally-Embedded Perspective on Continual Learning
Analysis
This paper introduces a novel perspective on continual learning by framing the agent as an automaton embedded within a universal computer. This framing offers a principled way to address the challenges of continual learning under the 'big world hypothesis', in which the environment is too large for the agent to ever model completely. The paper's strength lies in its theoretical foundation, which connects embedded agents to partially observable Markov decision processes. The proposed 'interactivity' objective and the accompanying model-based reinforcement learning algorithm provide a concrete framework for evaluating and improving continual learning capabilities. The comparison between deep linear and deep nonlinear networks yields valuable insight into how model capacity affects sustained interactivity.
Key Takeaways
- Proposes a novel perspective on continual learning by embedding the agent within a universal computer.
- Introduces the 'interactivity' objective to measure an agent's ability to adapt.
- Develops a model-based reinforcement learning algorithm for interactivity-seeking.
- Finds that deep linear networks sustain higher interactivity than deep nonlinear networks as capacity increases.
“The paper introduces a computationally-embedded perspective that represents an embedded agent as an automaton simulated within a universal (formal) computer.”
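The paper itself is not reproduced here, so the 'interactivity' objective can only be illustrated, not implemented. The toy sketch below assumes one plausible reading: interactivity as an agent's ability to keep its one-step predictions accurate in an environment that never stops drifting (the 'big world' setting). The names `drifting_stream` and `interactivity`, and the choice of a linear online learner, are illustrative assumptions, not the paper's algorithm.

```python
# Toy sketch (NOT the paper's algorithm): approximate "interactivity" as
# how well an online learner keeps tracking a target that drifts forever.
import random

def drifting_stream(steps, drift=0.01, seed=0):
    """Yield (x, y) pairs where y = w * x and w drifts every step."""
    rng = random.Random(seed)
    w = 1.0
    for _ in range(steps):
        x = rng.uniform(-1.0, 1.0)
        yield x, w * x
        w += drift  # the environment keeps changing ("big world")

def interactivity(lr, steps=2000):
    """Mean squared one-step prediction error of an online linear learner.

    Lower error here stands in for higher sustained interactivity:
    the agent keeps up with its drifting world.
    """
    w_hat, total = 0.0, 0.0
    for x, y in drifting_stream(steps):
        err = w_hat * x - y
        total += err * err
        w_hat -= lr * err * x  # one SGD update per interaction
    return total / steps

# An agent that keeps adapting (lr > 0) tracks the drift far better
# than a frozen agent (lr = 0), mirroring the continual-learning claim.
adaptive = interactivity(lr=0.5)
frozen = interactivity(lr=0.0)
```

The point of the sketch is only the qualitative contrast: under perpetual drift, continual adaptation is required to sustain low error, whereas a fixed model degrades without bound.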