Multi-Agent LLMs for Autonomous Workflow Orchestration
DOI: https://doi.org/10.15662/IJRAI.2025.0801002

Keywords: Multi-Agent Systems, Large Language Models (LLMs), Workflow Orchestration, Autonomous Systems, Task Planning, Agent Communication, Natural Language Interfaces, Prompt Engineering, Distributed AI, Self-Healing Systems

Abstract
Workflow orchestration plays a pivotal role in modern computing systems, automating complex, multi-step tasks across distributed services and agents. Traditional orchestration frameworks are typically rule-based, brittle, and require explicit programming for every scenario. With advances in Large Language Models (LLMs) and multi-agent systems, an opportunity emerges to develop autonomous, adaptive workflow orchestration mechanisms capable of reasoning, collaboration, and dynamic task allocation. This paper investigates a novel approach that leverages multi-agent LLMs for autonomous workflow orchestration in complex environments such as cloud computing, robotic systems, and enterprise automation. Each agent in the system is powered by a specialized or generalized LLM that can interpret natural language instructions, generate subtasks, communicate with other agents, and execute or delegate actions based on context. The proposed system architecture features agents with roles such as "Planner", "Executor", and "Monitor", which interact using a shared language protocol. These agents collectively manage workflows by breaking high-level goals into actionable steps, reallocating tasks in real time, and recovering from failures without human intervention. We evaluate our multi-agent orchestration system in both simulated environments (e.g., software pipelines) and real-world scenarios (e.g., document processing, robotic task planning). Results show that LLM-powered agents improve task completion rates, reduce latency, and handle unanticipated failures more effectively than traditional orchestrators. The methodology combines prompt engineering, role assignment, context sharing, and chain-of-thought reasoning across agents. Challenges include prompt drift, coordination failure, and high computational cost. This paper contributes a framework, a performance evaluation, and a discussion of trade-offs, emphasizing the feasibility and advantages of using LLMs in distributed multi-agent environments. The conclusion outlines future directions, including reinforcement learning for coordination and the integration of symbolic reasoning for verifiability.
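To make the role structure concrete, the sketch below illustrates one way a Planner/Executor/Monitor loop could be organized in Python. It is a minimal illustration under stated assumptions, not the paper's implementation: the agent classes, the call_llm stub, and the retry budget are hypothetical names introduced here, and a real system would replace the stub with calls to an actual LLM client and parse structured plans from model output.

```python
# Minimal sketch (assumed structure, not the paper's implementation):
# three role-based agents -- Planner, Executor, Monitor -- coordinating
# over a shared Task record. The LLM call is stubbed out.
from dataclasses import dataclass


@dataclass
class Task:
    description: str
    status: str = "pending"   # pending | done | failed
    attempts: int = 0


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real agent would query its own model here."""
    return f"[LLM response to: {prompt}]"


class Planner:
    """Breaks a high-level goal into ordered subtasks."""
    def plan(self, goal: str) -> list[Task]:
        # A real planner would parse structured output from the LLM response.
        call_llm(f"Decompose into steps: {goal}")
        return [Task(description=f"{goal} - step {i}") for i in range(1, 4)]


class Executor:
    """Attempts a single subtask and records the outcome."""
    def execute(self, task: Task) -> bool:
        task.attempts += 1
        result = call_llm(f"Execute: {task.description}")
        task.status = "done" if result else "failed"
        return task.status == "done"


class Monitor:
    """Watches outcomes and decides whether a failed task should be retried."""
    def __init__(self, max_attempts: int = 2):
        self.max_attempts = max_attempts

    def should_retry(self, task: Task) -> bool:
        return task.status == "failed" and task.attempts < self.max_attempts


def orchestrate(goal: str) -> list[Task]:
    planner, executor, monitor = Planner(), Executor(), Monitor()
    tasks = planner.plan(goal)
    for task in tasks:
        # Retry failed subtasks until the Monitor's attempt budget is exhausted.
        while not executor.execute(task) and monitor.should_retry(task):
            pass
    return tasks


if __name__ == "__main__":
    for t in orchestrate("Process a batch of invoices"):
        print(t.description, "->", t.status)
```

In this sketch, failure recovery is reduced to a simple retry policy; the paper's described system additionally reallocates tasks across agents in real time, which would replace the fixed retry loop with a negotiation step between the Monitor and the Planner.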