Dynamic Knowledge Distillation Strategies for Continual Learning in Lifelong Autonomous Systems Without Catastrophic Forgetting

Authors

  • Yoshua Bengio, Data Scientist, Canada

Keywords

Continual learning, catastrophic forgetting, knowledge distillation, lifelong learning, autonomous systems, memory replay, teacher-student network.

Abstract

Lifelong learning in autonomous systems demands the ability to acquire new knowledge over time without compromising previously learned information, a challenge known as catastrophic forgetting. This paper explores dynamic knowledge distillation strategies that enable continual learning in neural models deployed in autonomous systems. By combining teacher-student architectures, selective memory replay, and adaptive regularization, the proposed framework retains prior knowledge while adapting effectively to new tasks. In comparative evaluations on benchmark datasets, the approach shows marked improvements in accuracy and task retention over existing lifelong learning techniques.
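
To make the idea concrete, the sketch below shows one common way such a teacher-student distillation objective can be written in PyTorch: a frozen copy of the previously trained model acts as the teacher, and the student is trained with a cross-entropy loss on new-task data plus a temperature-scaled distillation term, optionally computed on samples drawn from a small replay memory. The function name, the alpha and temperature hyperparameters, and the replay buffer are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a distillation-based continual-learning update.
# Assumes a frozen "teacher" (the model before the new task) and a
# trainable "student" (the model being adapted); names and the replay
# buffer are hypothetical, for illustration only.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, optimizer, new_x, new_y,
                      replay_x=None, temperature=2.0, alpha=0.5):
    """One training step: cross-entropy on the new task plus a
    soft-target distillation term that preserves the teacher's
    responses (optionally on replayed old examples)."""
    student.train()
    optimizer.zero_grad()

    # Supervised loss on the incoming task data.
    ce_loss = F.cross_entropy(student(new_x), new_y)

    # Distill the teacher's soft predictions; use replayed old samples
    # if a memory buffer is available, otherwise the new inputs.
    distill_x = replay_x if replay_x is not None else new_x
    with torch.no_grad():
        teacher_logits = teacher(distill_x)
    kd_loss = F.kl_div(
        F.log_softmax(student(distill_x) / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Weighted combination of new-task learning and knowledge retention.
    loss = alpha * ce_loss + (1.0 - alpha) * kd_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this kind of objective, alpha governs the stability-plasticity trade-off: larger values favour adaptation to the new task, while smaller values favour preserving the teacher's behaviour on earlier tasks.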

Published

2025-03-08

How to Cite

Dynamic Knowledge Distillation Strategies for Continual Learning in Lifelong Autonomous Systems Without Catastrophic Forgetting. (2025). ISCSITR-INTERNATIONAL JOURNAL OF DATA SCIENCE (ISCSITR-IJDS) - ISSN: 3067-7408, 6(2), 1-7. https://iscsitr.in/index.php/ISCSITR-IJDS/article/view/ISCSITR-IJDS_06_02_001