Hierarchical Deep Learning Frameworks Enabling Dynamic Task Allocation and Real-Time Path Optimization in Mobile Robotic Fleets

Authors

  • William Jones, USA

Keywords

Hierarchical Deep Learning, Mobile Robots, Task Allocation, Path Optimization, Reinforcement Learning, Decentralized Control, Robotic Fleet Management

Abstract

This study proposes a hierarchical deep learning framework for dynamic task allocation and real-time path optimization in mobile robotic fleets operating in variable, resource-constrained environments. The model layers decision-making across neural network architectures, allowing robots to act under decentralized control while permitting central policy intervention during periods of uncertainty. We investigate reinforcement learning and convolutional layers embedded within hierarchical structures to jointly optimize task distribution and the movement paths of heterogeneous robotic agents. Experimental simulations demonstrate significant improvements in task completion rate, response time, and energy efficiency over traditional swarm-based and rule-based systems.
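The two layers described above can be illustrated with a minimal sketch. The paper's actual learned policies are not published, so the following is a hypothetical stand-in: a greedy nearest-robot allocator plays the role of the high-level task-allocation layer, and a breadth-first grid search plays the role of the low-level path-optimization layer. All function names, the grid representation, and the robot/task coordinates are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def allocate_tasks(robots, tasks):
    """Greedy allocation: each task goes to the nearest still-free robot.
    Stand-in for the learned high-level allocation policy."""
    assignment = {}
    free = dict(robots)  # robot_id -> (x, y)
    for tid, (tx, ty) in tasks.items():
        if not free:
            break
        # Manhattan distance on the grid; ties resolve to the first robot.
        rid = min(free, key=lambda r: abs(free[r][0] - tx) + abs(free[r][1] - ty))
        assignment[tid] = rid
        del free[rid]
    return assignment

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid; stand-in for the learned
    low-level path-optimization layer. grid[y][x] == 1 marks an obstacle."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:   # walk the predecessor chain back to start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

# Toy fleet: two robots, two tasks, one obstacle on a 5x5 grid.
robots = {"r1": (0, 0), "r2": (4, 4)}
tasks = {"t1": (4, 0), "t2": (0, 4)}
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1  # obstacle

assignment = allocate_tasks(robots, tasks)
paths = {t: plan_path(grid, robots[r], tasks[t]) for t, r in assignment.items()}
```

In the framework described by the abstract, both hand-coded rules would be replaced by trained networks, and a central policy would re-run the allocation step whenever a robot's local uncertainty estimate exceeded a threshold; the split into an allocation call followed by per-robot planning calls is the hierarchical structure itself.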

References

Albus, J. (1991). Outline for a theory of intelligence. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 21, Issue 3.

Brooks, R. (1991). Intelligence without representation. Artificial Intelligence, Vol. 47, Issue 1.

Levine, S., Finn, C., Darrell, T., & Abbeel, P. (2016). End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, Vol. 17, Issue 1.

Gupta, A., Devin, C., Liu, Y., Abbeel, P., & Levine, S. (2017). Learning invariant feature spaces for skill transfer. Robotics: Science and Systems.

Parker, L. (2008). Distributed intelligence: Overview of the field and its application in multi-robot systems. Journal of Physical Agents, Vol. 2, Issue 1.

Kalra, N., Ferguson, D., & Stentz, A. (2005). Hoplites: A market-based framework for planning in a multi-robot system. Journal of Field Robotics, Vol. 22, Issue 9.

Kumar, V., & Michael, N. (2012). Opportunities and challenges with autonomous micro aerial vehicles. International Journal of Robotics Research, Vol. 31, Issue 11.

Faust, A., Oslund, K., Ramirez, O., et al. (2018). PRM-RL: Long-range robotic navigation tasks by combining reinforcement learning and sampling-based planning. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).

Tai, L., Paolo, G., & Liu, M. (2017). Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Bagnell, J. A. (2015). Learning for control from demonstrations. IEEE Transactions on Automatic Control, Vol. 61, Issue 4.

Ziebart, B. D. (2010). Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Journal of Artificial Intelligence Research, Vol. 39, Issue 1.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.

Schulman, J., et al. (2015). Trust region policy optimization. Proceedings of the 32nd International Conference on Machine Learning (ICML), PMLR, Vol. 37.

Mnih, V., Kavukcuoglu, K., Silver, D., et al. (2015). Human-level control through deep reinforcement learning. Nature, Vol. 518, Issue 7540.

Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, Vol. 529, Issue 7587.

Published

2020-12-14

How to Cite

William Jones. (2020). Hierarchical Deep Learning Frameworks Enabling Dynamic Task Allocation and Real-Time Path Optimization in Mobile Robotic Fleets. ISCSITR - INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (ISCSITR-IJSRAIML), ISSN (Online): 3067-753X, 1(1), 1-6. https://iscsitr.in/index.php/ISCSITR-IJSRAIML/article/view/ISCSITR-IJSRAIML_01_01_01