Enhancing Cross-Domain Generalization through Unified Representation Learning in Multi-Task Artificial Intelligence Frameworks
Keywords:
Cross-domain generalization, multi-task learning, unified representation learning, domain-invariant features, AI frameworks

Abstract
Cross-domain generalization remains a critical challenge in modern Artificial Intelligence (AI), especially within multi-task learning (MTL) frameworks. This paper investigates how unified representation learning can improve generalization across heterogeneous domains. Synthesizing prior work on representation learning, domain-invariant feature extraction, and task-shared knowledge transfer, we present a consolidated framework that promotes cross-domain robustness. Drawing on empirical results from established benchmarks, we show that learning shared representations across tasks not only improves performance on seen tasks but also enables better adaptation to unseen domains.
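To make the shared-representation idea concrete, the sketch below shows hard parameter sharing, one common way to learn a unified representation in MTL: a single encoder receives gradients from every task, encouraging task-shared features, while lightweight task-specific heads sit on top. This is a minimal illustrative PyTorch sketch, not the paper's actual architecture; the class name, layer sizes, and task names are assumptions.

    import torch
    import torch.nn as nn

    class SharedEncoderMTL(nn.Module):
        """Hard parameter sharing: one encoder learns a unified
        representation; small heads specialize per task."""

        def __init__(self, in_dim: int, hidden_dim: int, task_out_dims: dict):
            super().__init__()
            # Shared trunk: updated by gradients from all tasks, which
            # pushes it toward task-shared (and more transferable) features.
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
            )
            # One small head per task on top of the unified representation.
            self.heads = nn.ModuleDict(
                {name: nn.Linear(hidden_dim, out_dim)
                 for name, out_dim in task_out_dims.items()}
            )

        def forward(self, x: torch.Tensor, task: str) -> torch.Tensor:
            z = self.encoder(x)           # unified representation
            return self.heads[task](z)    # task-specific prediction

    # Usage: two toy tasks sharing one representation.
    model = SharedEncoderMTL(in_dim=64, hidden_dim=128,
                             task_out_dims={"cls": 10, "reg": 1})
    x = torch.randn(8, 64)
    logits = model(x, task="cls")   # shape (8, 10)
    preds = model(x, task="reg")    # shape (8, 1)

In this setup, adapting to an unseen domain amounts to reusing the shared encoder and fitting only a new head, which is the practical payoff of a unified representation.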
License
Copyright (c) 2023 Elton T Aberle (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.