Multi-agent LLM system
Hierarchical multi-agent architecture: orchestrator agent dispatches to specialist agents with shared memory and tool access.
The prompt
Multi-agent LLM system with hierarchical orchestration. A user-facing Orchestrator Agent receives requests and decides which specialist agents to dispatch to. Specialist agents: Research Agent (web search + reading), Code Agent (sandboxed Python execution), Data Agent (SQL queries against a warehouse), and Writer Agent (long-form synthesis). Each agent has access to tools via an MCP server (web search, code interpreter, database, file system). Shared episodic memory in Redis. Long-term memory in a vector store. The orchestrator may dispatch to multiple agents in parallel and synthesise their results. Show the human-in-the-loop checkpoint, the cost-tracking layer, and the per-agent timeout and retry policies.
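To make the dispatch behaviour the prompt asks the diagram to show concrete — parallel fan-out to specialists with per-agent timeout and retry policies — here is a minimal sketch. The agent bodies, policy values, and function names are illustrative placeholders, not a real implementation:

```python
import asyncio

# Hypothetical per-agent policies; real values would be tuned per workload.
POLICIES = {
    "research": {"timeout_s": 1.0, "max_retries": 2},
    "code":     {"timeout_s": 1.0, "max_retries": 1},
}

async def research_agent(task: str) -> str:
    await asyncio.sleep(0.01)   # stands in for web search + reading
    return f"research: {task}"

async def code_agent(task: str) -> str:
    await asyncio.sleep(0.01)   # stands in for sandboxed Python execution
    return f"code: {task}"

AGENTS = {"research": research_agent, "code": code_agent}

async def dispatch(name: str, task: str) -> str:
    """Run one specialist under its own timeout and retry policy."""
    policy = POLICIES[name]
    for attempt in range(policy["max_retries"] + 1):
        try:
            return await asyncio.wait_for(AGENTS[name](task), policy["timeout_s"])
        except asyncio.TimeoutError:
            if attempt == policy["max_retries"]:
                raise

async def orchestrate(task: str) -> str:
    # Fan out to specialists in parallel, then synthesise their results.
    results = await asyncio.gather(
        dispatch("research", task), dispatch("code", task)
    )
    return " | ".join(results)

print(asyncio.run(orchestrate("compare sort algorithms")))
```

The human-in-the-loop checkpoint and cost-tracking layer would wrap `dispatch` in a production system; they are omitted here to keep the pattern visible.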
What it generates
An agent system diagram with the orchestrator, specialist agents, tool access, memory, and oversight controls.
When to use it
For complex AI tasks that benefit from specialisation — research, coding, analysis pipelines — where no single agent can do everything well.
Generate this diagram in seconds
Copy the prompt above, sign in for free, and paste it into the generator.
Related data & AI templates
ETL data pipeline
Batch + streaming ETL into a lakehouse: source → ingestion → transformation → warehouse → BI/ML consumers.
RAG (retrieval-augmented generation) pipeline
Document ingestion, embedding, vector search, LLM generation, and response streaming for a production RAG application.
MLOps training & inference pipeline
End-to-end ML lifecycle: feature store, training pipeline, model registry, online inference, and monitoring.