Deep Orchestrator: A Simple MCP Loop That Makes Deep Research Work

Qadri rebuilt a deep research agent for mcp-agent over three iterations, finding that simpler architectures beat complex adaptive workflows. The winning approach, Deep Orchestrator, loops through planning, execution, and deterministic verification; propagates selective external memory via planner-declared task dependencies; drives work from a TODO queue with parallel subtasks; and replaces elaborate mode detection with structured prompts and a minimal policy engine. It delivers reliable performance purely via MCP, with future work on remote execution, intelligent tool selection, memory as MCP resources, and dynamic model selection.
Key Points
- A simple plan–execute–verify loop with replanning outperforms complex adaptive systems for deep research workflows.
- Deterministic checks (dependency graph, server and agent existence) paired with LLM planning greatly reduce hallucinations and failures.
- Selective external memory propagation via planner-declared task dependencies improves token efficiency without losing needed context.
- Generate a full upfront plan to populate a TODO queue (with parallel subtasks) and use a small policy engine instead of elaborate mode detection.
- MCP-centric design enables general-purpose agents; future work targets remote execution, intelligent tool selection, memory as MCP resources, and dynamic model selection.
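The loop described above can be sketched in a few dozen lines. This is an illustrative reconstruction, not the actual mcp-agent code: the names `Task`, `verify_plan`, and `run`, and the callback signatures, are all assumptions. It shows the three pieces the key points emphasize: deterministic plan verification (agent/server existence plus a cycle check on the dependency graph), a TODO queue that only releases tasks whose dependencies have completed, and selective memory propagation, where each task sees only the results it declared as dependencies.

```python
from dataclasses import dataclass, field

# Hypothetical types and names for illustration; the real mcp-agent
# implementation differs.

@dataclass
class Task:
    name: str
    agent: str
    server: str
    depends_on: list[str] = field(default_factory=list)

def verify_plan(tasks, known_agents, known_servers):
    """Deterministic checks: every referenced agent, server, and
    dependency must exist, and the dependency graph must be acyclic."""
    names = {t.name for t in tasks}
    for t in tasks:
        if t.agent not in known_agents:
            raise ValueError(f"unknown agent: {t.agent}")
        if t.server not in known_servers:
            raise ValueError(f"unknown server: {t.server}")
        for dep in t.depends_on:
            if dep not in names:
                raise ValueError(f"unknown dependency: {dep}")
    # Cycle check via repeated removal of ready tasks (Kahn's algorithm).
    remaining = {t.name: set(t.depends_on) for t in tasks}
    while remaining:
        ready = [n for n, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("dependency cycle detected")
        for n in ready:
            del remaining[n]
        for deps in remaining.values():
            deps.difference_update(ready)

def run(plan_fn, execute_fn, is_done_fn, known_agents, known_servers,
        max_rounds=3):
    """Plan -> execute -> verify loop with replanning on failure."""
    memory: dict[str, str] = {}  # external memory, keyed by task name
    for _ in range(max_rounds):
        tasks = plan_fn(memory)  # full upfront plan populates the queue
        verify_plan(tasks, known_agents, known_servers)
        queue = list(tasks)
        while queue:
            # Tasks whose dependencies are satisfied; a real orchestrator
            # would dispatch these in parallel.
            ready = [t for t in queue
                     if all(d in memory for d in t.depends_on)]
            for t in ready:
                # Selective propagation: only declared dependencies'
                # results are passed along, saving tokens.
                ctx = {d: memory[d] for d in t.depends_on}
                memory[t.name] = execute_fn(t, ctx)
            queue = [t for t in queue if t.name not in memory]
        if is_done_fn(memory):
            break
        # Otherwise: replan with accumulated memory and try again.
    return memory
```

A policy engine in this shape would slot in around `is_done_fn`, deciding between finishing, replanning, or escalating, rather than relying on an LLM to detect its own mode.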
Sentiment
The Hacker News community is broadly supportive of the architectural approach and the transparency of the author's iterative process. However, practical users who've tried the tool report real friction around cost, latency, and lack of prompt caching. The discussion is constructive rather than dismissive, with most commenters interested in solving practical problems rather than questioning the premise. The author is actively engaged and responsive, which is well-received.
In Agreement
- The simplicity-first architecture is well-received; the author's evolution from over-engineered to simple resonates with readers
- Wrapping search in MCP servers for modularity is seen as the right design approach
- Using large reasoning models for the planning phase is widely accepted as critical to output quality
- The write-up itself is praised as informative and idea-generating for other agent projects
Opposed
- Real-world performance of mcp-agent is reported as disappointing: high cost, high latency, and poor grounding in source facts
- The absence of prompt caching and modern API support (GPT-5 responses API) is a practical limitation that undermines the approach
- MCP tools consuming large amounts of context window is a fundamental bottleneck not fully addressed by the architecture
- The blog post's design was criticized for readability issues (white blurry blobs behind white/grey text)