Continual learning is known to suffer from catastrophic forgetting: concepts learned earlier are forgotten as more recent training samples arrive. In this work, we present our main finding that reconstruction tasks (3D shape reconstruction, 2.5D sketch estimation, and image autoencoding) do not suffer from catastrophic forgetting ([Sec. 1]). We attempt to explain this behavior in [Sec. 2]. We further show the potential of using reconstruction as a proxy task to improve the performance of continual learning of classification in [Sec. 3]. In [Sec. 4] we introduce YASS, a novel yet simple baseline for classification. Finally, in [Sec. 5] we present DyRT, a novel tool for tracking the dynamics of representation learning in continual learning models.
1. Reconstruction Tasks Do Not Suffer from Catastrophic Forgetting
We demonstrate that, unlike classification tasks, reconstruction tasks do not suffer from catastrophic forgetting when learning continually. The training algorithm is standard SGD run on each learning exposure; we do not need additional losses, external memory, or other mechanisms to achieve good continual learning performance. Average accuracy on seen classes at each learning exposure is shown in the plots below.
[Plots: average accuracy on seen classes at each learning exposure for 3D shape reconstruction, 2.5D sketch estimation, and image autoencoding]
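To make the protocol concrete, below is a minimal sketch of the training loop, assuming PyTorch. `ReconstructionNet`, `exposure_loaders`, and `eval_seen_classes` are hypothetical placeholders, not our actual implementation.

```python
# A minimal sketch of the continual training protocol, assuming PyTorch.
# `ReconstructionNet`, `exposure_loaders`, and `eval_seen_classes` are
# hypothetical placeholders standing in for the real pipeline.
import torch
import torch.nn as nn

model = ReconstructionNet()  # e.g. image -> 3D shape, 2.5D sketch, or image
criterion = nn.MSELoss()     # a generic reconstruction loss for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

seen_classes = set()
num_epochs = 50  # arbitrary per-exposure budget, for illustration only

for exposure_id, loader in enumerate(exposure_loaders):
    seen_classes.update(loader.dataset.classes)
    model.train()
    # Plain SGD on the current exposure only: no replay buffer,
    # no distillation or regularization losses, no external memory.
    for _ in range(num_epochs):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
    # Average reconstruction accuracy over all classes seen so far,
    # i.e. the quantity plotted above.
    acc = eval_seen_classes(model, seen_classes)
    print(f"exposure {exposure_id}: avg accuracy on seen classes = {acc:.3f}")
```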
2. Positive Transfer of Representations in Reconstruction Tasks
We demonstrate that, unlike the classification task, the 3D shape reconstruction task is able to propagate the learned feature representation forward between learning exposures (referred to as positive forward transfer), which is presumably one of the keys to the success of reconstruction tasks. In the following experiment, we utilize GDumb [1] and a variant of GDumb, GDumb++, which differs from GDumb in that the model trained at each exposure is initialized with the weights learned at the previous exposure (i.e., it does not start from scratch). Since both GDumb and GDumb++ are trained on the same amount of input data, any performance gap is attributable to the value of propagating the learned representation.
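The difference between the two variants amounts to a single warm-start step, as the following sketch shows. This is a hedged illustration, not the GDumb code of [1]: `make_model`, `memory_loader_at`, and `train_to_convergence` are hypothetical helpers standing in for that pipeline.

```python
# Sketch of the GDumb vs. GDumb++ comparison, assuming PyTorch.
# `make_model`, `memory_loader_at`, and `train_to_convergence` are
# hypothetical helpers standing in for the GDumb pipeline of [1].
import copy

def run(num_exposures, warm_start=False):
    prev_state = None
    for t in range(num_exposures):
        model = make_model()  # fresh, randomly initialized network
        if warm_start and prev_state is not None:
            # GDumb++ only: initialize from the weights learned at
            # exposure t-1, so the representation can carry forward.
            model.load_state_dict(prev_state)
        # Both variants train on exactly the same stored samples at
        # exposure t, so any gap is due to the initialization alone.
        train_to_convergence(model, memory_loader_at(t))
        prev_state = copy.deepcopy(model.state_dict())

run(num_exposures=10, warm_start=False)  # GDumb [1]: retrain from scratch
run(num_exposures=10, warm_start=True)   # GDumb++: forward-transferred weights
```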
