diff --git a/README.md b/README.md
index 4321ce1..6930dc3 100644
--- a/README.md
+++ b/README.md
@@ -1,249 +1,349 @@
-# OpenTelemetry-MCP-Server
+# OpenTelemetry MCP Server
-Unified MCP server for querying OpenTelemetry traces across multiple backends (Jaeger, Tempo, Traceloop, etc.), enabling AI agents to analyze distributed traces for automated debugging and observability.
+[![Python 3.11+](https://img.shields.io/badge/python-3.11%2B-blue.svg)](https://www.python.org/downloads/)
+[![PyPI](https://img.shields.io/pypi/v/opentelemetry-mcp.svg)](https://pypi.org/project/opentelemetry-mcp/)
+[![License: Apache 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
+
-An MCP (Model Context Protocol) server for querying OpenTelemetry traces from LLM applications, with specialized support for OpenLLMetry semantic conventions.
+**Query and analyze LLM traces with AI assistance.** Ask Claude to find expensive API calls, debug errors, compare model performance, or track token usage—all from your IDE.
+
+An MCP (Model Context Protocol) server that connects AI assistants to OpenTelemetry trace backends (Jaeger, Tempo, Traceloop), with specialized support for LLM observability through OpenLLMetry semantic conventions.
-## Features
+
+---
-- **Multiple Backend Support**: Query traces from Jaeger, Grafana Tempo, or Traceloop
-- **OpenLLMetry Integration**: Automatic parsing of `gen_ai.*` semantic conventions
-- **5 Powerful Tools**:
- - `search_traces` - Search traces with advanced filters
- - `get_trace` - Get complete trace details
- - `get_llm_usage` - Aggregate token usage metrics
- - `list_services` - List available services
- - `find_errors` - Find traces with errors
-- **Token Usage Tracking**: Aggregate prompt/completion tokens across models and services
-- **CLI Overrides**: Configure via environment or command-line arguments
-- **Type-Safe**: Built with Pydantic for robust data validation
+## Table of Contents
-## Installation
+- [Quick Start](#quick-start)
+- [Installation](#installation)
+- [Features](#features)
+- [Configuration](#configuration)
+- [Tools Reference](#tools-reference)
+- [Example Queries](#example-queries)
+- [Common Workflows](#common-workflows)
+- [Troubleshooting](#troubleshooting)
+- [Development](#development)
+- [Support](#support)
-### Prerequisites
+
+---
-- Python 3.11 or higher
-- [UV package manager](https://github.com/astral-sh/uv) (recommended) or pip
+## Quick Start
-### Install with UV
+**No installation required!** Configure your client to run the server directly from PyPI:
-```bash
-# Clone the repository
-cd openllmetry-mcp
+
+Add this to `claude_desktop_config.json`:
+
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+ }
+}
+```
-# Install dependencies
-uv sync
+Or use `uvx` (alternative):
-# Or install in development mode
-uv pip install -e ".[dev]"
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "uvx",
+ "args": ["opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+ }
+}
```
-### Install with pip
+**That's it!** Ask Claude: _"Show me traces with errors from the last hour"_
-```bash
-pip install -e .
-```
+
+---
-## Quick Start
+## Installation
-The easiest way to run the server locally is using the provided startup script:
+### For End Users (Recommended)
```bash
-# 1. Configure your backend in start_locally.sh
-# Edit the file and uncomment your preferred backend (Jaeger, Traceloop, or Tempo)
+# Run without installing (recommended)
+pipx run opentelemetry-mcp --backend jaeger --url http://localhost:16686
-# 2. Run the script
-./start_locally.sh
+# Or with uvx
+uvx opentelemetry-mcp --backend jaeger --url http://localhost:16686
```
-The script will:
-
-- Auto-detect the project directory (works from anywhere)
-- Verify `uv` is installed
-- Set up your backend configuration
-- Start the MCP server in stdio mode (ready for Claude Desktop)
-
-**Supported Backends:**
-
-- **Jaeger** (local): `http://localhost:16686`
-- **Traceloop** (cloud): `https://api.traceloop.com` (requires API key)
-- **Tempo** (local): `http://localhost:3200`
-
-Edit [start_locally.sh](start_locally.sh) to switch between backends or adjust configuration.
+This approach:
-## Configuration
-
-### Environment Variables
+- ✅ Always uses the latest version
+- ✅ No global installation needed
+- ✅ Isolated environment automatically
+- ✅ Works on all platforms
-Create a `.env` file (see `.env.example`):
+### Per-Client Integration
-```bash
-# Backend type: jaeger, tempo, or traceloop
-BACKEND_TYPE=jaeger
+<details>
+<summary>Claude Desktop</summary>
+
-# Backend URL
-BACKEND_URL=http://localhost:16686
-
-# Optional: API key (mainly for Traceloop)
-BACKEND_API_KEY=
-
-# Optional: Request timeout (default: 30s)
-BACKEND_TIMEOUT=30
-
-# Optional: Logging level
-LOG_LEVEL=INFO
-
-# Optional: Max traces per query (default: 100)
-MAX_TRACES_PER_QUERY=100
-```
+Configure the MCP server in your Claude Desktop config file:
-### Backend-Specific Configuration
+- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
-#### Jaeger
+
+**Using pipx (recommended):**
-```bash
-BACKEND_TYPE=jaeger
-BACKEND_URL=http://localhost:16686
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+ }
+}
```
-#### Grafana Tempo
+**Using uvx (alternative):**
-```bash
-BACKEND_TYPE=tempo
-BACKEND_URL=http://localhost:3200
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "uvx",
+ "args": ["opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+ }
+}
```
-#### Traceloop
+**For Traceloop backend:**
-```bash
-BACKEND_TYPE=traceloop
-BACKEND_URL=https://api.traceloop.com/v2
-BACKEND_API_KEY=your_api_key_here
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "traceloop",
+ "BACKEND_URL": "https://api.traceloop.com",
+ "BACKEND_API_KEY": "your_traceloop_api_key_here"
+ }
+ }
+ }
+}
```
-**Note**: The API key contains the project information. The backend uses a hardcoded project slug of `"default"` and Traceloop resolves the actual project and environment from the API key.
+<details>
+<summary>Using the repository instead of pipx?</summary>
+
-### CLI Overrides
+If you're developing locally with the cloned repository, use one of these configurations:
-You can override environment variables with CLI arguments:
+
+**Option 1: Wrapper script (easy backend switching)**
-```bash
-openllmetry-mcp --backend jaeger --url http://localhost:16686
-openllmetry-mcp --backend traceloop --url https://api.traceloop.com --api-key YOUR_KEY
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp": {
+ "command": "/absolute/path/to/opentelemetry-mcp-server/start_locally.sh"
+ }
+ }
+}
```
-## Usage
-
-### Quick Start with start_locally.sh (Recommended)
+**Option 2: UV directly (for multiple backends)**
-The easiest way to run the server:
-
-```bash
-./start_locally.sh
+```json
+{
+ "mcpServers": {
+ "opentelemetry-mcp-jaeger": {
+ "command": "uv",
+ "args": [
+ "--directory",
+ "/absolute/path/to/opentelemetry-mcp-server",
+ "run",
+ "opentelemetry-mcp"
+ ],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+ }
+}
```
-This script handles all configuration and starts the server in stdio mode (perfect for Claude Desktop integration). To switch backends, simply edit the script and uncomment your preferred backend.
+</details>
-### Manual Running
-
-For advanced use cases or custom configurations, you can run the server manually.
+</details>
-#### stdio Transport (for Claude Desktop)
+<details>
+<summary>Claude Code</summary>
+
-Start the MCP server with stdio transport for local/Claude Desktop integration:
+Claude Code manages MCP servers through its own CLI. Register this server with `claude mcp add`, or import your Claude Desktop configuration with `claude mcp add-from-claude-desktop`. Then verify and use it:
```bash
-openllmetry-mcp
-# or with UV
-uv run openllmetry-mcp
+# Verify the server is available
+claude mcp list
-# With backend override
-uv run openllmetry-mcp --backend jaeger --url http://localhost:16686
+# Use Claude Code with access to your OpenTelemetry traces
+claude-code "Show me traces with errors from the last hour"
```
-#### HTTP Transport (for Network Access)
+</details>
-Start the MCP server with HTTP/SSE transport for remote access:
+<details>
+<summary>Codeium (Windsurf)</summary>
+
-```bash
-# Start HTTP server on default port 8000
-openllmetry-mcp --transport http
+1. Open Windsurf
+2. Navigate to **Settings → MCP Servers**
+3. Click **Add New MCP Server**
+4. Add this configuration:
-# Or with UV
-uv run openllmetry-mcp --transport http
+
+**Using pipx (recommended):**
-# Specify custom host and port
-uv run openllmetry-mcp --transport http --host 127.0.0.1 --port 9000
+```json
+{
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+}
```
-The HTTP server will be accessible at `http://localhost:8000/sse` by default.
-
-**Transport Use Cases:**
+**Using uvx (alternative):**
-- **stdio transport**: Local use, Claude Desktop integration, single process
-- **HTTP transport**: Remote access, multiple clients, network deployment, sample applications
+```json
+{
+ "opentelemetry-mcp": {
+ "command": "uvx",
+ "args": ["opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+}
+```
-### Integrating with Claude Desktop
+<details>
+<summary>Using the repository instead?</summary>
+
-Configure the MCP server in your Claude Desktop config file:
+```json
+{
+ "opentelemetry-mcp": {
+ "command": "uv",
+ "args": [
+ "--directory",
+ "/absolute/path/to/opentelemetry-mcp-server",
+ "run",
+ "opentelemetry-mcp"
+ ],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+}
+```
-- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
-- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
+</details>
-#### Why Two Configuration Approaches?
+</details>
-This server supports **3 different backends** (Jaeger, Tempo, Traceloop). We provide two integration methods to suit different use cases:
+<details>
+<summary>Cursor</summary>
+
-- **Wrapper Script** (`start_locally.sh`) → Easy backend switching for development/testing
-- **Direct Configuration** → Standard MCP pattern, better for production or single-backend setups
+1. Open Cursor
+2. Navigate to **Settings → MCP**
+3. Click **Add new MCP Server**
+4. Add this configuration:
-Choose the approach that fits your workflow. See [Best Practices](#best-practices-choosing-an-approach) below for guidance.
+
+**Using pipx (recommended):**
-#### Option 1: Using start_locally.sh (Recommended for Development)
+```json
+{
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+}
+```
-**Best for:** Frequent backend switching, local development, testing multiple backends
+**Using uvx (alternative):**
```json
{
- "mcpServers": {
- "opentelemetry-mcp": {
- "command": "/absolute/path/to/opentelemetry-mcp-server/start_locally.sh"
+ "opentelemetry-mcp": {
+ "command": "uvx",
+ "args": ["opentelemetry-mcp"],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
}
}
}
```
-**Pros:**
+<details>
+<summary>Using the repository instead of pipx?</summary>
+
-- Switch backends by editing one file (`start_locally.sh`)
-- Centralized configuration
-- Includes validation (checks if `uv` is installed)
-
-**Cons:**
+```json
+{
+ "opentelemetry-mcp": {
+ "command": "uv",
+ "args": [
+ "--directory",
+ "/absolute/path/to/opentelemetry-mcp-server",
+ "run",
+ "opentelemetry-mcp"
+ ],
+ "env": {
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
+ }
+ }
+}
+```
-- Requires absolute path
-- macOS/Linux only (no Windows support yet)
+</details>
-**To switch backends:** Edit `start_locally.sh` and uncomment your preferred backend section.
+</details>
-#### Option 2: Direct Configuration (Standard MCP Pattern)
+<details>
+<summary>Gemini CLI</summary>
+
-**Best for:** Production, single backend, Windows users, following MCP ecosystem conventions
+Configure the MCP server in your Gemini CLI settings file (`~/.gemini/settings.json`):
+
-##### Jaeger Backend (Local)
+**Using pipx (recommended):**
```json
{
"mcpServers": {
- "opentelemetry-mcp-jaeger": {
- "command": "uv",
- "args": [
- "--directory",
- "/absolute/path/to/opentelemetry-mcp-server",
- "run",
- "opentelemetry-mcp"
- ],
+ "opentelemetry-mcp": {
+ "command": "pipx",
+ "args": ["run", "opentelemetry-mcp"],
"env": {
"BACKEND_TYPE": "jaeger",
"BACKEND_URL": "http://localhost:16686"
@@ -253,34 +353,36 @@ Choose the approach that fits your workflow. See [Best Practices](#best-practice
}
```
-##### Tempo Backend (Local)
+**Using uvx (alternative):**
```json
{
"mcpServers": {
- "opentelemetry-mcp-tempo": {
- "command": "uv",
- "args": [
- "--directory",
- "/absolute/path/to/opentelemetry-mcp-server",
- "run",
- "opentelemetry-mcp"
- ],
+ "opentelemetry-mcp": {
+ "command": "uvx",
+ "args": ["opentelemetry-mcp"],
"env": {
- "BACKEND_TYPE": "tempo",
- "BACKEND_URL": "http://localhost:3200"
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
}
}
}
}
```
-##### Traceloop Backend (Cloud)
+Then use Gemini CLI with your traces:
+
+```bash
+gemini "Analyze token usage for gpt-4 requests today"
+```
+
+<details>
+<summary>Using the repository instead?</summary>
+
```json
{
"mcpServers": {
- "opentelemetry-mcp-traceloop": {
+ "opentelemetry-mcp": {
"command": "uv",
"args": [
"--directory",
@@ -289,54 +391,254 @@ Choose the approach that fits your workflow. See [Best Practices](#best-practice
"opentelemetry-mcp"
],
"env": {
- "BACKEND_TYPE": "traceloop",
- "BACKEND_URL": "https://api.traceloop.com",
- "BACKEND_API_KEY": "your_traceloop_api_key_here"
+ "BACKEND_TYPE": "jaeger",
+ "BACKEND_URL": "http://localhost:16686"
}
}
}
}
```
-**Pros:**
+</details>
-- Standard MCP ecosystem pattern
-- Works on all platforms (Windows/macOS/Linux)
-- Can configure multiple backends simultaneously (use different server names)
-- No wrapper script dependency
+</details>
+
-**Cons:**
+**Prerequisites:**
-- Must edit JSON config to switch backends
-- Backend configuration split between script and config file
+- Python 3.11 or higher
+- [pipx](https://pipx.pypa.io/) or [uv](https://github.com/astral-sh/uv) installed
+
+<details>
+<summary>Optional: Install globally</summary>
+
-**Tip:** You can configure multiple backends at once (e.g., `opentelemetry-mcp-jaeger` and `opentelemetry-mcp-tempo`) and Claude will show both as available MCP servers.
+If you prefer to install the command globally:
-### Best Practices: Choosing an Approach
+```bash
+# Install with pipx
+pipx install opentelemetry-mcp
-| Scenario | Recommended Approach | Why |
-| ------------------------------------ | ----------------------------------- | -------------------------------------------- |
-| **Development & Testing** | Wrapper Script (`start_locally.sh`) | Easy to switch backends, centralized config |
-| **Testing multiple backends** | Wrapper Script | Edit one file to switch, no JSON editing |
-| **Production deployment** | Direct Configuration | Standard MCP pattern, explicit configuration |
-| **Single backend only** | Direct Configuration | Simpler, no wrapper needed |
-| **Windows users** | Direct Configuration | Wrapper script not yet supported on Windows |
-| **macOS/Linux users** | Either approach | Choose based on your workflow |
-| **Multiple backends simultaneously** | Direct Configuration | Configure all backends with different names |
-| **Shared team configuration** | Direct Configuration | More portable, follows MCP conventions |
+# Verify
+opentelemetry-mcp --help
-**General Guidelines:**
+# Upgrade
+pipx upgrade opentelemetry-mcp
+```
-- **Start with the wrapper script** if you're testing different backends or doing local development
-- **Switch to direct configuration** once you've settled on a backend for production use
-- **On Windows**, use direct configuration (wrapper script support coming soon)
-- **For CI/CD**, use direct configuration with environment variables
-- **For shared teams**, document the direct configuration approach for consistency
+Or with pip:
+
+```bash
+pip install opentelemetry-mcp
+```
-**Platform Support:**
+</details>
+
-- **macOS/Linux**: Both approaches fully supported
-- **Windows**: Direct configuration only (PRs welcome for `.bat`/`.ps1` wrapper scripts!)
+## Features
+
+### Core Capabilities
+
+- **🔌 Multiple Backend Support** - Connect to Jaeger, Grafana Tempo, or Traceloop
+- **🤖 LLM-First Design** - Specialized tools for analyzing AI application traces
+- **🔍 Advanced Filtering** - Generic filter system with powerful operators
+- **📊 Token Analytics** - Track and aggregate LLM token usage across models and services
+- **⚡ Fast & Type-Safe** - Built with async Python and Pydantic validation
+
+### Tools
+
+| Tool | Description | Use Case |
+| -------------------------- | ----------------------------------- | ---------------------------------- |
+| `search_traces` | Search traces with advanced filters | Find specific requests or patterns |
+| `search_spans` | Search individual spans | Analyze specific operations |
+| `get_trace` | Get complete trace details | Deep-dive into a single trace |
+| `get_llm_usage` | Aggregate token usage metrics | Track costs and usage trends |
+| `list_services` | List available services | Discover what's instrumented |
+| `find_errors` | Find traces with errors | Debug failures quickly |
+| `list_llm_models` | Discover models in use | Track model adoption |
+| `get_llm_model_stats` | Get model performance stats | Compare model efficiency |
+| `get_llm_expensive_traces` | Find highest token usage | Optimize costs |
+| `get_llm_slow_traces` | Find slowest operations | Improve performance |
+
+### Backend Support Matrix
+
+| Feature | Jaeger | Tempo | Traceloop |
+| ---------------- | :----: | :---: | :-------: |
+| Search traces | ✓ | ✓ | ✓ |
+| Advanced filters | ✓ | ✓ | ✓ |
+| Span search | ✓\* | ✓ | ✓ |
+| Token tracking | ✓ | ✓ | ✓ |
+| Error traces | ✓ | ✓ | ✓ |
+| LLM tools | ✓ | ✓ | ✓ |
+
+\* Jaeger requires `service_name` parameter for span search
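+
+For example, a minimal `search_spans` call against Jaeger might look like this (a sketch: `service_name` is the required field noted above; the time and limit filters are assumed to follow the same conventions as the other tools):
+
+```json
+{
+ "service_name": "my-app",
+ "start_time": "2024-01-15T00:00:00Z",
+ "limit": 20
+}
+```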
+
+### For Developers
+
+If you're contributing to the project or want to make local modifications:
+
+```bash
+# Clone the repository
+git clone https://github.com/traceloop/opentelemetry-mcp-server.git
+cd opentelemetry-mcp-server
+
+# Install dependencies with UV
+uv sync
+
+# Or install in development mode with editable install
+uv pip install -e ".[dev]"
+```
+
+---
+
+## Configuration
+
+### Supported Backends
+
+| Backend | Type | URL Example | Notes |
+| ------------- | ----------- | --------------------------- | -------------------------- |
+| **Jaeger** | Local | `http://localhost:16686` | Popular open-source option |
+| **Tempo** | Local/Cloud | `http://localhost:3200` | Grafana's trace backend |
+| **Traceloop** | Cloud | `https://api.traceloop.com` | Requires API key |
+
+### Quick Configuration
+
+**Option 1: Environment Variables** (Create `.env` file - see [.env.example](.env.example))
+
+```bash
+BACKEND_TYPE=jaeger
+BACKEND_URL=http://localhost:16686
+```
+
+**Option 2: CLI Arguments** (Override environment)
+
+```bash
+opentelemetry-mcp --backend jaeger --url http://localhost:16686
+opentelemetry-mcp --backend traceloop --url https://api.traceloop.com --api-key YOUR_KEY
+```
+
+> **Configuration Precedence:** CLI arguments > Environment variables > Defaults
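+
+For example, a CLI flag beats a conflicting environment variable (a quick illustration of the precedence rule above):
+
+```bash
+# The environment points at Tempo...
+export BACKEND_TYPE=tempo
+export BACKEND_URL=http://localhost:3200
+
+# ...but the CLI flags win, so the server starts against Jaeger
+opentelemetry-mcp --backend jaeger --url http://localhost:16686
+```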
+
+<details>
+<summary>All Configuration Options</summary>
+
+| Variable | Type | Default | Description |
+| ---------------------- | ------- | -------- | -------------------------------------------------- |
+| `BACKEND_TYPE` | string | `jaeger` | Backend type: `jaeger`, `tempo`, or `traceloop` |
+| `BACKEND_URL` | URL | `http://localhost:16686` | Backend API endpoint |
+| `BACKEND_API_KEY` | string | - | API key (required for Traceloop) |
+| `BACKEND_TIMEOUT` | integer | `30` | Request timeout in seconds |
+| `LOG_LEVEL` | string | `INFO` | Logging level: `DEBUG`, `INFO`, `WARNING`, `ERROR` |
+| `MAX_TRACES_PER_QUERY` | integer | `100` | Maximum traces to return per query (1-1000) |
+
+**Complete `.env` example:**
+
+```bash
+# Backend configuration
+BACKEND_TYPE=jaeger
+BACKEND_URL=http://localhost:16686
+
+# Optional: API key (mainly for Traceloop)
+BACKEND_API_KEY=
+
+# Optional: Request timeout (default: 30s)
+BACKEND_TIMEOUT=30
+
+# Optional: Logging level
+LOG_LEVEL=INFO
+
+# Optional: Max traces per query (default: 100)
+MAX_TRACES_PER_QUERY=100
+```
+
+</details>
+
+<details>
+<summary>Backend-Specific Setup</summary>
+
+### Jaeger
+
+```bash
+BACKEND_TYPE=jaeger
+BACKEND_URL=http://localhost:16686
+```
+
+### Grafana Tempo
+
+```bash
+BACKEND_TYPE=tempo
+BACKEND_URL=http://localhost:3200
+```
+
+### Traceloop
+
+```bash
+BACKEND_TYPE=traceloop
+BACKEND_URL=https://api.traceloop.com
+BACKEND_API_KEY=your_api_key_here
+```
+
+> **Note:** The API key contains project information. The backend uses a project slug of `"default"` and Traceloop resolves the actual project/environment from the API key.
+
+</details>
+
+---
+
+## Usage
+
+### Quick Start with start_locally.sh (Cloned Repository)
+
+If you're running from a clone of the repository, the easiest way to start the server:
+
+```bash
+./start_locally.sh
+```
+
+This script handles all configuration and starts the server in stdio mode (perfect for Claude Desktop integration). To switch backends, simply edit the script and uncomment your preferred backend.
+
+### Manual Running
+
+For advanced use cases or custom configurations, you can run the server manually.
+
+#### stdio Transport (for Claude Desktop)
+
+Start the MCP server with stdio transport for local/Claude Desktop integration:
+
+```bash
+# If installed with pipx/pip
+opentelemetry-mcp
+
+# If running from cloned repository with UV
+uv run opentelemetry-mcp
+
+# With backend override (pipx/pip)
+opentelemetry-mcp --backend jaeger --url http://localhost:16686
+
+# With backend override (UV)
+uv run opentelemetry-mcp --backend jaeger --url http://localhost:16686
+```
+
+#### HTTP Transport (for Network Access)
+
+Start the MCP server with streamable HTTP transport for remote access:
+
+```bash
+# If installed with pipx/pip
+opentelemetry-mcp --transport http
+
+# If running from cloned repository with UV
+uv run opentelemetry-mcp --transport http
+
+# Specify custom host and port (pipx/pip)
+opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
+
+# With UV
+uv run opentelemetry-mcp --transport http --host 127.0.0.1 --port 9000
+```
+
+The HTTP server will be accessible at `http://localhost:8000/mcp` by default.
+
+**Transport Use Cases:**
+
+- **stdio transport**: Local use, Claude Desktop integration, single process
+- **HTTP transport**: Remote access, multiple clients, network deployment, sample applications
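+
+As a quick connectivity check for the HTTP transport, you can use the official MCP Python SDK (a minimal sketch, assuming the `mcp` package is installed and the server is running with `--transport http` on the default port):
+
+```python
+import asyncio
+
+from mcp import ClientSession
+from mcp.client.streamable_http import streamablehttp_client
+
+
+async def main() -> None:
+    # Open a streamable-HTTP connection to the server's /mcp endpoint
+    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
+        async with ClientSession(read, write) as session:
+            await session.initialize()
+            tools = await session.list_tools()
+            print([tool.name for tool in tools.tools])
+
+
+asyncio.run(main())
+```
+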
## Tools Reference
@@ -440,56 +742,214 @@ Find traces with errors:
## Example Queries
-### Find expensive LLM operations
+### Find Expensive OpenAI Operations
+
+**Natural Language:** _"Show me OpenAI traces from the last hour that took longer than 5 seconds"_
+
+**Tool Call:** `search_traces`
+
+```json
+{
+ "service_name": "my-app",
+ "gen_ai_system": "openai",
+ "min_duration_ms": 5000,
+ "start_time": "2024-01-15T10:00:00Z",
+ "limit": 20
+}
+```
+
+**Response:**
+```json
+{
+ "traces": [
+ {
+ "trace_id": "abc123...",
+ "service_name": "my-app",
+ "duration_ms": 8250,
+ "total_tokens": 4523,
+ "gen_ai_system": "openai",
+ "gen_ai_model": "gpt-4"
+ }
+ ],
+ "count": 1
+}
```
-Use search_traces to find traces from the last hour where:
-- gen_ai_system is "openai"
-- min_duration_ms is 5000
+
+---
+
+### Analyze Token Usage by Model
+
+**Natural Language:** _"How many tokens did we use for each model today?"_
+
+**Tool Call:** `get_llm_usage`
+
+```json
+{
+ "start_time": "2024-01-15T00:00:00Z",
+ "end_time": "2024-01-15T23:59:59Z",
+ "service_name": "my-app"
+}
```
-### Analyze token usage by model
+**Response:**
+```json
+{
+ "summary": {
+ "total_tokens": 125430,
+ "prompt_tokens": 82140,
+ "completion_tokens": 43290,
+ "request_count": 487
+ },
+ "by_model": {
+ "gpt-4": {
+ "total_tokens": 85200,
+ "request_count": 156
+ },
+ "gpt-3.5-turbo": {
+ "total_tokens": 40230,
+ "request_count": 331
+ }
+ }
+}
```
-Use get_llm_usage for the last 24 hours to see token usage breakdown by model
+
+---
+
+### Find Traces with Errors
+
+**Natural Language:** _"Show me all errors from the last hour"_
+
+**Tool Call:** `find_errors`
+
+```json
+{
+ "start_time": "2024-01-15T14:00:00Z",
+ "service_name": "my-app",
+ "limit": 10
+}
```
-### Debug recent errors
+**Response:**
+```json
+{
+ "errors": [
+ {
+ "trace_id": "def456...",
+ "service_name": "my-app",
+ "error_message": "RateLimitError: Too many requests",
+ "error_type": "openai.error.RateLimitError",
+ "timestamp": "2024-01-15T14:23:15Z"
+ }
+ ],
+ "count": 1
+}
```
-Use find_errors to show all error traces from the last hour
+
+---
+
+### Compare Model Performance
+
+**Natural Language:** _"What's the performance difference between GPT-4 and Claude?"_
+
+**Tool Call 1:** `get_llm_model_stats` for gpt-4
+
+```json
+{
+ "model_name": "gpt-4",
+ "start_time": "2024-01-15T00:00:00Z"
+}
```
-### Investigate a specific trace
+**Tool Call 2:** `get_llm_model_stats` for claude-3-opus
+```json
+{
+ "model_name": "claude-3-opus-20240229",
+ "start_time": "2024-01-15T00:00:00Z"
+}
```
-Use get_trace with trace_id "abc123" to see all spans and LLM attributes
+
+---
+
+### Investigate High Token Usage
+
+**Natural Language:** _"Which requests used the most tokens today?"_
+
+**Tool Call:** `get_llm_expensive_traces`
+
+```json
+{
+ "limit": 10,
+ "start_time": "2024-01-15T00:00:00Z",
+ "min_tokens": 5000
+}
```
-## OpenLLMetry Semantic Conventions
+---
+
+## Common Workflows
+
+### Cost Optimization
+
+1. **Identify expensive operations:**
+
+ ```
+ Use get_llm_expensive_traces to find high-token requests
+ ```
+
+2. **Analyze by model:**
+
+ ```
+ Use get_llm_usage to see which models are costing the most
+ ```
+
+3. **Investigate specific traces** (see the sketch after this list):
+ ```
+ Use get_trace with the trace_id to see exact prompts/responses
+ ```
+
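+For step 3, the `get_trace` call is just the trace ID returned by the other tools (the ID below is illustrative):
+
+```json
+{
+ "trace_id": "abc123..."
+}
+```
+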
+### Performance Debugging
+
+1. **Find slow operations:**
+
+ ```
+ Use get_llm_slow_traces to identify latency issues
+ ```
+
+2. **Check for errors:**
+
+ ```
+ Use find_errors to see failure patterns
+ ```
+
+3. **Analyze finish reasons:**
+ ```
+ Use get_llm_model_stats to see if responses are being truncated
+ ```
+
+### Model Adoption Tracking
-This server automatically parses OpenLLMetry semantic conventions:
+1. **Discover models in use** (see the call sketch after this list):
-### Supported Attributes
+ ```
+ Use list_llm_models to see all models being called
+ ```
-- `gen_ai.system` - Provider (openai, anthropic, cohere, etc.)
-- `gen_ai.request.model` - Requested model
-- `gen_ai.response.model` - Actual model used
-- `gen_ai.operation.name` - Operation type (chat, completion, embedding)
-- `gen_ai.request.temperature` - Temperature parameter
-- `gen_ai.request.top_p` - Top-p parameter
-- `gen_ai.request.max_tokens` - Max tokens
-- `gen_ai.usage.prompt_tokens` - Input tokens (also supports `input_tokens` for Anthropic)
-- `gen_ai.usage.completion_tokens` - Output tokens (also supports `output_tokens` for Anthropic)
-- `gen_ai.usage.total_tokens` - Total tokens
+2. **Compare model statistics:**
-### Provider Compatibility
+ ```
+ Use get_llm_model_stats for each model to compare performance
+ ```
-The server handles different token naming conventions:
+3. **Identify shadow AI:**
+ ```
+ Look for unexpected models or services in list_llm_models results
+ ```
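+
+For step 1, a minimal `list_llm_models` call might look like this (a sketch; the time-range filter is assumed to follow the same conventions as the other tools):
+
+```json
+{
+ "start_time": "2024-01-15T00:00:00Z"
+}
+```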
-- **OpenAI**: `prompt_tokens`, `completion_tokens`
-- **Anthropic**: `input_tokens`, `output_tokens`
-- **Others**: Falls back to standard OpenLLMetry names
+---
## Development
@@ -536,7 +996,7 @@ Make sure your API key is set correctly:
```bash
export BACKEND_API_KEY=your_key_here
# Or use --api-key CLI flag
-openllmetry-mcp --api-key your_key_here
+opentelemetry-mcp --api-key your_key_here
```
### No Traces Found
@@ -584,5 +1044,6 @@ Apache 2.0 License - see LICENSE file for details
For issues and questions:
-- GitHub Issues: https://github.com/yourusername/openllmetry-mcp/issues
+- GitHub Issues: https://github.com/traceloop/opentelemetry-mcp-server/issues
+- PyPI Package: https://pypi.org/project/opentelemetry-mcp/
- Traceloop Community: https://traceloop.com/slack
diff --git a/src/opentelemetry_mcp/config.py b/src/opentelemetry_mcp/config.py
index 8d1baff..ead8b8c 100644
--- a/src/opentelemetry_mcp/config.py
+++ b/src/opentelemetry_mcp/config.py
@@ -34,11 +34,7 @@ def validate_url(cls, v: HttpUrl) -> HttpUrl:
def from_env(cls) -> "BackendConfig":
"""Load configuration from environment variables."""
backend_type = os.getenv("BACKEND_TYPE", "jaeger")
- backend_url = os.getenv("BACKEND_URL")
-
- if not backend_url:
- raise ValueError("BACKEND_URL environment variable is required")
-
+ backend_url = os.getenv("BACKEND_URL", "http://localhost:16686")
if backend_type not in ["jaeger", "tempo", "traceloop"]:
raise ValueError(
f"Invalid BACKEND_TYPE: {backend_type}. Must be one of: jaeger, tempo, traceloop"
diff --git a/src/opentelemetry_mcp/server.py b/src/opentelemetry_mcp/server.py
index 4435620..7ab57ea 100644
--- a/src/opentelemetry_mcp/server.py
+++ b/src/opentelemetry_mcp/server.py
@@ -687,7 +687,9 @@ def main(
logger.info(f"Connect clients to: http://{host}:{port}/mcp")
mcp.run(transport="streamable-http", host=host, port=port)
else:
- logger.info("Starting MCP server with stdio transport")
+ logger.info(
+ f"Starting MCP server with stdio transport (backend: {_config.backend.type}, url: {_config.backend.url})"
+ )
mcp.run(transport="stdio")
except KeyboardInterrupt: