Alex Lewandowski, Aditya A. Ramesh, Edan Meyer, Dale Schuurmans, Marlos C. Machado
This paper introduces a computationally embedded perspective on continual learning, proposing an interactivity objective and showing that deep linear networks sustain continual adaptation better than deep nonlinear networks.
The paper explores how artificial agents can continually learn and adapt in an ever-changing world, a setting motivated by the 'big world hypothesis'. The authors propose viewing these agents as part of the environment they interact with, rather than as separate entities. They introduce 'interactivity', a measure of how well an agent can keep learning and adapting over time. Their findings suggest that deep linear networks are better than deep nonlinear networks at maintaining this adaptability as they grow deeper.