[Feature]: Programmatic Tool Calling (PTC) for code based tool execution #9860
kemier started this conversation in Feature Requests
When using tool calling with LLMs, each tool invocation requires a separate model turn, leading to:
- High latency for multi-tool workflows
- Increased token consumption from repeated context
- Limited ability to perform complex logic (filtering, aggregation, conditionals) across tool results
xref:
- Anthropic's https://www.anthropic.com/engineering/advanced-tool-use
- Anthropic's https://www.anthropic.com/engineering/code-execution-with-mcp
- MCP Server with Code Execution: https://github.com/harche/ProDisco
Proposed Feature: Programmatic Tool Calling (PTC) allows models to write Python code that orchestrates multiple tool calls within a secure sandbox. Instead of returning individual tool calls, the model generates code that:
- Calls multiple tools programmatically
- Processes results with Python logic (loops, filters, aggregations)
- Returns only the final result to the model context
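To make the idea concrete, here is a minimal sketch of the kind of orchestration code the model might write inside the sandbox. The tool names (`search_files`, `read_file`) and their stub implementations are hypothetical stand-ins; in a real PTC setup they would be wrappers that proxy to MCP servers:

```python
import asyncio

# Hypothetical stand-ins for sandbox tool wrappers; real ones would
# forward each call to an MCP server.
async def search_files(pattern: str) -> list[dict]:
    return [
        {"path": "app/main.py", "size": 5200},
        {"path": "app/util.py", "size": 300},
        {"path": "tests/test_main.py", "size": 1800},
    ]

async def read_file(path: str) -> str:
    return f"# contents of {path}\n"

async def orchestrate() -> dict:
    # One sandbox run replaces several model turns: call a tool,
    # filter its results with ordinary Python, then fan out follow-ups.
    files = await search_files(pattern="*.py")
    large = [f for f in files if f["size"] > 1000]   # filtering in code
    await asyncio.gather(                            # parallel tool calls
        *(read_file(f["path"]) for f in large)
    )
    # Only this small aggregate returns to the model context, not the
    # intermediate tool outputs.
    return {"count": len(large), "total_bytes": sum(f["size"] for f in large)}

result = asyncio.run(orchestrate())
```

The intermediate results (the full file list, the file contents) never enter the model's context window; only the final dict does.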
The flow is as follows:

```text
LLM writes in sandbox:
    result = await search_files(pattern="*.py")
        ↓ (calls)
Generated wrapper function (tools/filesystem.py):
    async def search_files(pattern: str) -> dict:
        return await mcp_client.call_tool(
            server_id="filesystem-server-001",
            tool_name="searchFiles",
            arguments={"pattern": pattern},
        )
        ↓ (JSON-RPC 2.0 over stdio/HTTP)
MCP Server Process
        ↓ (returns result)
Wrapper deserializes the response
        ↓
LLM receives a Python dict
```
This could save a substantial number of tokens by keeping intermediate tool results out of the model context.