A simple example showing how to build a chat agent with tool calling.
This repository demonstrates how little code you need to build a local-first conversational agent that streams reasoning, executes function calls, and edits files on your machine. The same design is implemented four times—Python, Go, TypeScript (Bun), and Rust—so you can compare language ergonomics while reusing the exact same tool contract.
The agent is intentionally self-contained: every runtime fits inside a single source file (`main.py`, `main.go`, `main.ts`, `src/main.rs`) yet still exposes a polished terminal experience with streaming updates, tool execution feedback, and logging.
- Streams GPT-5 reasoning summaries and assistant text in real time.
- Implements a minimal tool registry with three ergonomic file system tools: `read_file`, `list_files`, and `edit_file`.
- Handles incremental function-call deltas and consolidates partial tool arguments before execution.
- Uses the same prompt contract across languages to make behavior comparisons straightforward.
- Persists session history locally (`agent.log`) so you can debug interaction flows after the fact.
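The three file system tools can be advertised to the model with JSON schemas such as the following. This is a sketch in the Responses function-tool shape; the names and argument types mirror the tool reference table later in this README, but the exact descriptions in the source files may differ:

```python
# Function-tool schemas for the three file system tools, in the shape the
# OpenAI Responses API expects in its `tools=[...]` parameter. Illustrative;
# the descriptions in main.py / main.go / main.ts may be worded differently.
TOOLS = [
    {
        "type": "function",
        "name": "read_file",
        "description": "Read up to 4000 characters from a file.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "offset": {"type": "integer"},
                "length": {"type": "integer"},
            },
            "required": ["path"],
        },
    },
    {
        "type": "function",
        "name": "list_files",
        "description": "List directory contents with [DIR]/[FILE] markers.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": [],
        },
    },
    {
        "type": "function",
        "name": "edit_file",
        "description": "Replace old_text with new_text, or create a new file.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "new_text": {"type": "string"},
                "old_text": {"type": "string"},
            },
            "required": ["path", "new_text"],
        },
    },
]
```

Because all four runtimes share these schemas, the model sees an identical tool surface regardless of which implementation you run.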
```
main.py      # Python 3.12 script using openai==1.x client
main.go      # Go 1.25 program using github.com/openai/openai-go v1.12.0
main.ts      # Bun/TypeScript entry point using openai 6.x
Cargo.toml   # Rust package manifest
src/main.rs  # Rust single-file agent using reqwest + tokio
agent.log    # Rolling log file populated while the agent runs
```
Each runtime follows the same core steps:
- Initialize the OpenAI Responses client (GPT-5 with reasoning summaries enabled).
- Register a trio of tools and their JSON schemas.
- Stream the assistant response, surfacing `thinking`, `assistant_delta`, `tool_call`, and `tool_result` events to the terminal.
- Execute tool calls locally, send outputs back to the model, and continue until the assistant returns textual output.
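The core steps above can be sketched as a single drive loop. This is a minimal, non-streaming sketch rather than the actual implementation: it assumes a Responses-style client exposing `responses.create(...)` and a caller-supplied `execute_tool` dispatcher, and it omits the streaming event handling for brevity:

```python
import json

def run_turn(client, input_items, tools, execute_tool, model="gpt-5"):
    """Drive one user turn: request a response, execute any tool calls
    locally, feed their outputs back, and repeat until the model returns
    plain text. Sketch only; the real agents stream events instead of
    waiting for whole responses."""
    while True:
        response = client.responses.create(
            model=model, input=input_items, tools=tools
        )
        calls = [item for item in response.output
                 if item["type"] == "function_call"]
        if not calls:
            # No pending tool calls: the assistant's text ends the turn.
            return response.output_text
        for call in calls:
            # Echo the call back so the model can pair it with its output.
            input_items.append(call)
            result = execute_tool(call["name"], json.loads(call["arguments"]))
            input_items.append({
                "type": "function_call_output",
                "call_id": call["call_id"],
                "output": result,
            })
```

The loop terminates only when a response contains no function calls, which matches the behavior described above: tool execution continues until the assistant produces textual output.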
Across all runtimes you need:
- An OpenAI API key with access to the GPT-5 Responses API (`export OPENAI_API_KEY=...`).
- Access to the Responses streaming endpoints (tool calling requires streaming support).
Runtime-specific requirements:
| Runtime | Requirements | Notes |
|---|---|---|
| Python | Python 3.12+, `pip install openai pydantic` | Script metadata at the top of `main.py` works with `uv`/`pipx run` if you prefer execution without manual installs. |
| Go | Go 1.25.1+, `go mod tidy` | Uses the beta responses package and shared constants for reasoning support. |
| TypeScript | Bun 1.1+, `bun install` | `main.ts` is executable via shebang (`#!/usr/bin/env bun`). |
| Rust | Rust 1.78+ (stable), `cargo` | Lives in `src/main.rs`; uses `reqwest` streaming SSE to drive the Responses API. |
Clone the repo and choose your preferred runtime. All examples below assume you are in the project root and have set OPENAI_API_KEY.
```bash
export OPENAI_API_KEY=sk-...
```

**Python**

```bash
pip install openai pydantic
python main.py
```

Optional flags:

- `--api-key` overrides the environment variable for one-off runs.

**Go**

```bash
go run .
```

This compiles and executes `main.go`. The Go version shares the same REPL semantics as the Python script, including streaming updates and tool execution.

**TypeScript (Bun)**

```bash
bun install
bun main.ts
```

If Bun is on your PATH, the script is also directly executable:

```bash
./main.ts
```

**Rust**

```bash
cargo run --release
```

Pass `--api-key` to override `OPENAI_API_KEY` for one-off runs:

```bash
cargo run -- --api-key sk-...
```

- Start the agent with one of the commands above.
- Type natural-language requests. Example: `You: add a TODO comment to main.py near the tool registry`
- Watch the terminal as the assistant:
  - Streams short reasoning snippets prefixed with `thinking`.
  - Streams assistant text deltas prefixed with `assistant`.
  - Announces tool usage, e.g. `tool: read_file (path=main.py, offset=0, length=4000)`.
  - Prints tool results before returning the final assistant reply.
- Enter `quit` or `exit` to leave the session.
All interactions are appended to `agent.log`, which includes both tool execution traces and OpenAI client errors.
| Tool | Arguments | Behavior |
|---|---|---|
| `read_file` | `path` (required), `offset`, `length` | Reads up to 4000 characters from a file and reports the slice returned. |
| `list_files` | `path` (optional) | Lists directory contents with `[DIR]`/`[FILE]` markers. Defaults to the current working directory. |
| `edit_file` | `path` (required), `new_text` (required), `old_text` (optional) | Replaces `old_text` with `new_text`, or creates a new file when `old_text` is omitted. |
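The behaviors in the table translate into small, plain file operations. The sketch below follows the table's contract (4000-character cap, `[DIR]`/`[FILE]` markers, create-or-replace semantics); the exact return strings are illustrative, not copied from the source files:

```python
import os

def read_file(path, offset=0, length=4000):
    """Return up to `length` characters starting at `offset` (4000-char cap)."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    chunk = text[offset : offset + min(length, 4000)]
    return f"[{offset}..{offset + len(chunk)} of {len(text)}]\n{chunk}"

def list_files(path="."):
    """List directory entries with [DIR]/[FILE] markers; defaults to cwd."""
    lines = []
    for name in sorted(os.listdir(path)):
        marker = "[DIR]" if os.path.isdir(os.path.join(path, name)) else "[FILE]"
        lines.append(f"{marker} {name}")
    return "\n".join(lines)

def edit_file(path, new_text, old_text=None):
    """Replace `old_text` with `new_text`, or create the file when
    `old_text` is omitted (or the file does not yet exist)."""
    if old_text is None or not os.path.exists(path):
        with open(path, "w", encoding="utf-8") as f:
            f.write(new_text)
        return f"created {path}"
    with open(path, "r", encoding="utf-8") as f:
        content = f.read()
    if old_text not in content:
        return f"error: old_text not found in {path}"
    with open(path, "w", encoding="utf-8") as f:
        f.write(content.replace(old_text, new_text, 1))
    return f"edited {path}"
```

Returning error strings (rather than raising) keeps tool failures inside the conversation, so the model can see what went wrong and retry.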
All variants share identical schemas, making it easy to add new runtimes or plug the agent into other languages. To add another tool, update the tool registry and extend the `_execute_tool` (Python), `executeTool` (TypeScript), or `dispatchTool` (Go) helper to call your implementation.
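One way to keep the schema and the dispatcher in sync is a registry that stores both together. This is a hypothetical helper, not code from the repo: the `register_tool`/`execute_tool` names and the example `file_stats` tool are invented for illustration:

```python
import json
import os

# Hypothetical registry mapping tool name -> schema + implementation.
TOOL_REGISTRY = {}

def register_tool(name, description, parameters, fn):
    """Record a tool once so it is both advertised to the model
    (via its schema) and dispatchable locally (via `fn`)."""
    TOOL_REGISTRY[name] = {
        "schema": {
            "type": "function",
            "name": name,
            "description": description,
            "parameters": parameters,
        },
        "fn": fn,
    }

def execute_tool(name, arguments_json):
    """Dispatch a completed tool call; plays the role the README assigns
    to _execute_tool / executeTool / dispatchTool."""
    entry = TOOL_REGISTRY.get(name)
    if entry is None:
        return f"error: unknown tool {name}"
    return entry["fn"](**json.loads(arguments_json))

# Example: registering a new (hypothetical) `file_stats` tool.
register_tool(
    "file_stats",
    "Report the size of a file in bytes.",
    {"type": "object",
     "properties": {"path": {"type": "string"}},
     "required": ["path"]},
    lambda path: f"{path}: {os.path.getsize(path)} bytes",
)
```

With this shape, the tool list sent to the model is just `[e["schema"] for e in TOOL_REGISTRY.values()]`, so a new tool needs exactly one `register_tool` call.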
- Add more tools: define the JSON schema, expose it in the tool list, and extend the local dispatcher. The assistant will automatically surface the new tool when relevant.
- Customize instructions: tweak the `self.instructions` string (Python), the `instructions` constant (TypeScript), or the `instructions` variable (Go) to steer style, safety checks, or capability hints.
- Change model defaults: adjust the `model` argument when constructing the Responses stream. Any model that supports tool calling and reasoning summaries will work.
- Persist conversations: the current implementation is stateless between turns. To add memory, maintain a conversation array and append user/assistant messages before each call.
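The conversation-memory idea in the last bullet can be sketched as a growing history list. The helper names below are invented for illustration; the essential point is that each turn's user message and the model's output items are appended before the next call:

```python
# Minimal sketch of cross-turn memory: one growing `conversation` list.
conversation = []

def remember_user(text):
    """Append the user's message before making the next Responses call."""
    conversation.append({"role": "user", "content": text})

def remember_assistant(output_items):
    """Append the model's `response.output` items verbatim, so the next
    turn can see prior assistant replies and tool-call history."""
    conversation.extend(output_items)

def next_input():
    """The input payload for the next call: the whole history so far."""
    return list(conversation)
```

Passing `next_input()` as the `input` of each new request turns the stateless agent into one with full session memory, at the cost of a growing token footprint per turn.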
- Authentication errors: verify `OPENAI_API_KEY` is set or pass `--api-key`. The agent exits early if the key is missing.
- Permission errors: the `edit_file` tool writes to disk. Ensure you run the agent from a directory you can modify.
- Unexpected tool JSON: partial argument payloads are buffered and decoded only after the stream signals completion. If decoding fails, a warning is written to `agent.log`.
- Terminal artifacts: the Python version uses ANSI erase codes to update streaming output. If the display looks odd, ensure your terminal supports carriage returns and ANSI escape sequences.
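The buffering described in the "Unexpected tool JSON" bullet amounts to accumulating string fragments per call id and decoding only at completion. A sketch of that idea, with invented class and method names (the repo's event handling may be structured differently):

```python
import json

class ToolCallBuffer:
    """Accumulate streamed function-call argument fragments per call id
    and decode them only once the stream marks the call complete."""
    def __init__(self):
        self._parts = {}

    def add_delta(self, call_id, fragment):
        """Record one partial-arguments fragment for a call in flight."""
        self._parts.setdefault(call_id, []).append(fragment)

    def finish(self, call_id):
        """Join and decode the fragments; None signals a decode failure
        (the real agent would log a warning to agent.log instead)."""
        raw = "".join(self._parts.pop(call_id, []))
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            return None
```

Decoding eagerly per fragment would fail on almost every delta, since each fragment is an arbitrary slice of the final JSON string; deferring the `json.loads` to completion is what makes streamed tool calls reliable.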
Providing equivalent implementations lets you compare:
- Streaming APIs in the Python (`openai`), Go (`openai-go`), Rust (`reqwest` + SSE), and Bun/TypeScript (`openai`) SDKs.
- How each language handles partial function-call deltas and tool registries.
- Ergonomic trade-offs between dynamic vs. static typing for tool argument decoding across four ecosystems.
Pick the ecosystem that fits your stack, or mix and match to prototype quickly in Python before migrating to Go or TypeScript for production.