Claude’s Advanced Tool Use: On‑Demand Discovery, Code Orchestration, and Example‑Driven Calls

Added Nov 24, 2025
Article: Positive · Community: Positive/Divisive

Anthropic released three beta features—Tool Search Tool, Programmatic Tool Calling, and Tool Use Examples—to help Claude work across large tool libraries efficiently. They reduce token usage and latency by discovering tools on demand, moving orchestration into code, and teaching correct parameter usage via examples. Together they enable more accurate, scalable agents for real-world workflows.

Key Points

  • Tool Search Tool defers most tool definitions and loads only what’s needed via on-demand search, significantly reducing token usage and improving tool selection accuracy.
  • Programmatic Tool Calling moves orchestration into code (via a code execution tool), keeping intermediate data out of context, enabling parallel calls, and reducing both latency and tokens while improving accuracy.
  • Tool Use Examples complement JSON Schema by showing concrete usage patterns, improving parameter handling and reducing malformed calls.
  • Best practices: layer features based on the main bottleneck, keep 3–5 critical tools always loaded, document return formats for PTC, and provide concise, realistic examples focused on ambiguity.
  • The features are available in beta with simple configuration (defer_loading, allowed_callers, input_examples) and supported by documentation and cookbooks.
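To make the configuration concrete, here is a minimal sketch of a Messages API request wiring the three features together. The field names (defer_loading, allowed_callers, input_examples) come from the article; the tool names, model string, and the exact shape of the request are illustrative assumptions, so check Anthropic's documentation and cookbooks for the authoritative format.

```python
# Sketch: one request payload combining the three beta features.
# defer_loading / allowed_callers / input_examples are the fields named in
# the article; everything else (tool names, model id) is a placeholder.

def build_request() -> dict:
    tools = [
        # A critical tool kept always loaded, per the best-practice advice.
        {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
            # Tool Use Examples: concrete calls that teach parameter usage.
            "input_examples": [{"city": "Tokyo"}],
            # Programmatic Tool Calling: let the code execution tool call it.
            "allowed_callers": ["code_execution"],
        },
        # A long-tail tool deferred so the Tool Search Tool loads it on demand.
        {
            "name": "get_historical_weather",
            "description": "Get weather for a past date.",
            "input_schema": {"type": "object", "properties": {}},
            "defer_loading": True,
        },
    ]
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "tools": tools,
        "messages": [{"role": "user", "content": "Weather in Tokyo?"}],
    }
```

The split mirrors the "keep 3–5 critical tools always loaded" guidance: frequently used tools stay in context, while deferred definitions cost no tokens until a search surfaces them.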

Sentiment

Mixed but leaning positive. The community broadly appreciates Programmatic Tool Calling and Tool Use Examples as genuine improvements to agent workflows. However, significant skepticism exists around Tool Search, with many arguing that simpler approaches like CLI tools, GraphQL, or better agent architecture already solve the same problems without added complexity. A vocal minority views the entire tool ecosystem expansion as hype-driven churn rather than real progress.

In Agreement

  • Programmatic Tool Calling is a clear improvement that reduces latency and token usage by letting Claude write code to orchestrate multiple tool calls in a single execution pass
  • Tool Use Examples are valuable for reducing malformed calls and teaching usage conventions through concrete samples rather than abstract schemas
  • These features address real pain points around context window bloat when loading dozens of tool definitions upfront
  • Tool Search formalizes patterns that experienced agent builders were already implementing manually, and having it built into the model could improve accuracy
  • The shift from basic function calling to intelligent orchestration represents genuine architectural progress for real-world agent workflows
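The orchestration pattern praised above can be sketched as the kind of script Claude might write inside the code execution sandbox: fan tool calls out in parallel, reduce the results locally, and return only a small summary so the raw intermediate data never enters the model's context. This is a hypothetical illustration, not Anthropic's sandbox API; `fetch_expenses` stands in for a real tool binding.

```python
# Hypothetical Programmatic-Tool-Calling-style script: parallel tool calls,
# local aggregation, and only the final summary surfaced to the model.
from concurrent.futures import ThreadPoolExecutor

def fetch_expenses(employee: str) -> list[dict]:
    # Placeholder for a tool call; a real one might return thousands of rows
    # that, under PTC, never need to be serialized into Claude's context.
    data = {
        "ana": [{"amount": 120.0}, {"amount": 80.0}],
        "ben": [{"amount": 300.0}],
    }
    return data[employee]

def totals_over_limit(employees: list[str], limit: float) -> dict[str, float]:
    # Parallel fan-out: one tool call per employee in a single execution pass.
    with ThreadPoolExecutor() as pool:
        results = pool.map(fetch_expenses, employees)
    totals = {e: sum(row["amount"] for row in rows)
              for e, rows in zip(employees, results)}
    # Only this small dict is returned to the model, not the raw rows.
    return {e: t for e, t in totals.items() if t > limit}
```

Without PTC, each `fetch_expenses` result would round-trip through the model's context; here only the filtered totals do.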

Opposed

  • Services should ship better CLI tools with --help flags rather than building MCP servers, since CLI tools benefit humans and AI agents alike
  • Tool Search creates vendor lock-in and adds debugging opacity — when the wrong tool is selected, diagnosing whether the search layer failed introduces unnecessary complexity
  • Good agent architecture already handles tool selection through careful curation, sub-agent routing, and state-based limiting, making Tool Search redundant for well-designed systems
  • GraphQL may be a fundamentally better approach than MCP for AI agents due to its typed schema, introspection, and ability to batch operations in a single query
  • The rapid pace of tooling churn means investments in any particular tool-use pattern are likely to be obsoleted within months, making the whole ecosystem feel like a treadmill
  • This is essentially rebranding prompt engineering and context engineering as 'advanced tool use' while the underlying problems remain unsolved
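The "just ship a CLI" argument rests on the interface being self-describing: --help gives humans and agents the same discoverable contract with no extra schema layer. A minimal sketch of that idea, using a hypothetical weather command built with Python's argparse:

```python
# Sketch of the CLI-over-MCP argument: argparse generates a --help contract
# that both a person and an agent can read. The command and flags are
# hypothetical, not a real service.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="weather",
        description="Look up current weather (hypothetical example service).",
    )
    parser.add_argument("city", help="City name, e.g. 'Tokyo'")
    parser.add_argument(
        "--units",
        choices=["metric", "imperial"],
        default="metric",
        help="Unit system for the report",
    )
    return parser
```

An agent can run `weather --help`, read the generated usage text, and form a correct invocation, which is precisely the discoverability the Tool Search Tool provides through a different mechanism.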