Hybrid Modeling Approaches Combining Symbolic AI and Machine Learning for Enhanced Decision Making
Keywords:
Hybrid AI, Symbolic AI, Machine Learning, Neuro-symbolic Systems, Explainable AI, Decision Making, Knowledge Representation

Abstract
In recent years, hybrid modeling approaches that combine symbolic artificial intelligence (AI) with machine learning (ML) have gained traction as a promising paradigm for improving decision-making across domains such as healthcare, finance, and robotics. While ML offers high performance in pattern recognition, it often lacks explainability, robustness, and generalization. Conversely, symbolic AI provides interpretability and rule-based reasoning but struggles with adaptability in complex, data-rich environments. This paper explores the integration of these paradigms to address their respective limitations, drawing upon recent developments in neuro-symbolic AI. We review foundational literature, highlight hybrid architectures, and survey experimental frameworks in which such models have improved decision accuracy, transparency, and adaptability.
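To make the hybrid idea concrete, consider a minimal sketch (illustrative only, not taken from the paper): a learned scoring function proposes a decision, while a symbolic rule layer can override it and always returns a human-readable justification. All names here (`risk_score`, `RULES`, `hybrid_decide`) and the rule thresholds are hypothetical.

```python
import math

def risk_score(features):
    """Stand-in for an ML model: a logistic score over weighted features."""
    weights = {"age": 0.03, "bp": 0.02, "history": 1.5}
    z = sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z + 4.0))

# Symbolic layer: explicit, human-readable rules over the same features.
RULES = [
    ("history of adverse events requires review",
     lambda f: f["history"] >= 1),
    ("blood pressure above safe threshold requires review",
     lambda f: f["bp"] > 180),
]

def hybrid_decide(features, threshold=0.5):
    score = risk_score(features)
    fired = [name for name, cond in RULES if cond(features)]
    if fired:  # symbolic rules take precedence over the learned score
        return "review", fired
    decision = "flag" if score >= threshold else "clear"
    return decision, [f"model score {score:.2f} vs threshold {threshold}"]

decision, why = hybrid_decide({"age": 70, "bp": 190, "history": 0})
print(decision, why)  # the rule layer vetoes the learned score: "review"
```

The design point this sketch illustrates is the division of labor the abstract describes: the statistical component handles graded pattern evidence, while the symbolic component supplies hard constraints and an explanation trail that can be audited after the fact.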
License
Copyright (c) 2024 Richard Martin Vims (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
