Cross-Domain Comparative Analysis of Decision-Making Algorithms in Autonomous and Semi-Autonomous System Architectures

Authors

  • Sami Haddadin, Autonomous Systems Engineer, Germany

Keywords

Autonomous Systems, Semi-Autonomous Systems, Decision-Making Algorithms, Reinforcement Learning, Markov Decision Process, Industrial Automation, Autonomous Vehicles, Robotics

Abstract

This study presents a comparative analysis of decision-making algorithms employed across autonomous and semi-autonomous system architectures in transportation, robotics, and industrial automation. We evaluate the structural, computational, and real-time performance characteristics of three algorithm families: Markov Decision Processes (MDPs), Reinforcement Learning (RL), and Heuristic-based Decision Trees (HDTs). By integrating findings from cross-domain applications, we assess algorithmic suitability in terms of adaptability, interpretability, and risk handling. A mixed-method approach synthesizes quantitative benchmarks with qualitative operational analyses. The results indicate that while MDPs achieve optimality in constrained environments, RL algorithms outperform the alternatives in dynamically uncertain contexts. Our analysis also highlights the practical limitations of algorithm portability between domains, which stem from task complexity and safety-critical considerations.
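
To make the contrast between model-based MDP planning and model-free RL concrete, the minimal Python sketch below compares value iteration, which requires the full transition model, with tabular Q-learning, which learns from sampled transitions only, on a toy corridor world. The example is purely illustrative: the corridor environment, reward scheme, function names (step, value_iteration, q_learning), and hyperparameters are assumptions made for this sketch and are not drawn from the study's benchmarks.

import random

N_STATES = 6          # corridor states 0..5; state 5 is the goal
ACTIONS = (-1, +1)    # move left or move right
GAMMA = 0.95          # discount factor shared by both solvers

def step(state, action):
    # Deterministic toy dynamics: move, clip to the corridor, reward only at the goal.
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def value_iteration(tol=1e-6):
    # Model-based planning: sweeps the Bellman optimality backup until convergence.
    # Usable only because step() (the full transition model) is known in advance.
    V = [0.0] * N_STATES
    while True:
        delta = 0.0
        for s in range(N_STATES - 1):          # goal state is terminal, its value stays 0
            backups = []
            for a in ACTIONS:
                nxt, r, done = step(s, a)
                backups.append(r + (0.0 if done else GAMMA * V[nxt]))
            best = max(backups)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def q_learning(episodes=500, alpha=0.1, epsilon=0.2, max_steps=100):
    # Model-free learning: updates a tabular Q-function from sampled transitions only.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            if random.random() < epsilon:                      # explore
                a_idx = random.randrange(len(ACTIONS))
            else:                                              # exploit, random tie-break
                best_q = max(Q[s])
                a_idx = random.choice([i for i, q in enumerate(Q[s]) if q == best_q])
            nxt, r, done = step(s, ACTIONS[a_idx])
            target = r + (0.0 if done else GAMMA * max(Q[nxt]))
            Q[s][a_idx] += alpha * (target - Q[s][a_idx])
            s = nxt
            if done:
                break
    return Q

if __name__ == "__main__":
    V = value_iteration()
    Q = q_learning()
    print("Value iteration V(s):", [round(v, 3) for v in V])
    print("Q-learning max_a Q(s,a):", [round(max(q), 3) for q in Q])

On this toy problem both solvers recover roughly the same state values. The practical difference the abstract emphasizes appears when the transition model is unavailable or drifts at run time: the planning sweep in value_iteration can no longer be computed, whereas the sampling-based learner remains applicable, at the cost of exploration and slower, less interpretable convergence.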

Published

2023-06-17

How to Cite

Cross-Domain Comparative Analysis of Decision-Making Algorithms in Autonomous and Semi-Autonomous System Architectures. (2023). ISCSITR-International Journal of Data Science (ISCSITR-IJDS), ISSN: 3067-7408, 4(1), 1-7. https://iscsitr.in/index.php/ISCSITR-IJDS/article/view/ISCSITR-IJDS_04_01_001