1 change: 1 addition & 0 deletions .env
@@ -0,0 +1 @@
OPENROUTER_API_KEY=your-api-key-here
32 changes: 32 additions & 0 deletions README.md
@@ -55,6 +55,38 @@ This list is not a definitive list of models supported by OpenRouter, as it cons

You can find the latest list of models on OpenRouter that support tool calling [here](https://openrouter.ai/models?order=newest&supported_parameters=tools). (Note: This list may contain models that are not compatible with the AI SDK.)

For detailed information about specific models, including text generation capabilities and advanced usage examples, see our [Models Documentation](./docs/models.md).

### 0G Compute Network Models

The OpenRouter provider now supports models from the [0G Compute Network](https://docs.0g.ai/0g-compute/for-developers/inference-sdk), a decentralized AI inference network with verified computation capabilities:

| Model | Provider Address | Description | Verification |
|-------|------------------|-------------|--------------|
| `0g/llama-3.3-70b-instruct` | `0xf07240Efa67755B5311bc75784a061eDB47165Dd` | State-of-the-art 70B parameter model for general AI tasks | TEE (TeeML) |
| `0g/deepseek-r1-70b` | `0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3` | Advanced reasoning model optimized for complex problem solving | TEE (TeeML) |

These models run on the 0G Compute Network and provide verified AI inference through Trusted Execution Environments (TEE). For detailed text generation capabilities and examples, see the [Models Documentation](./docs/models.md#0g-compute-network-models).

#### Example Usage with 0G Compute Models

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

// Using the Llama 3.3 70B model from 0G Compute Network
const { text } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Explain quantum computing in simple terms.',
});

// Using the DeepSeek R1 70B reasoning model
const { text: reasoning } = await generateText({
  model: openrouter('0g/deepseek-r1-70b'),
  prompt: 'Solve this step by step: If a train travels 120 km in 2 hours, what is its average speed?',
});
```

## Passing Extra Body to OpenRouter

There are 3 ways to pass extra body to OpenRouter:
431 changes: 431 additions & 0 deletions bun.lock

Large diffs are not rendered by default.

219 changes: 219 additions & 0 deletions docs/models.md
@@ -0,0 +1,219 @@
# Supported Models

This document provides detailed information about the models supported by the OpenRouter AI SDK Provider.

## Overview

The OpenRouter provider gives you access to over 300 large language models through the OpenRouter API. You can find the complete and up-to-date list of models at [openrouter.ai/models](https://openrouter.ai/models).

## 0G Compute Network Models

The OpenRouter provider supports models from the [0G Compute Network](https://docs.0g.ai/0g-compute/for-developers/inference-sdk), a decentralized AI inference network with verified computation capabilities.

### Available Models

#### Llama 3.3 70B Instruct (`0g/llama-3.3-70b-instruct`)

- **Provider Address**: `0xf07240Efa67755B5311bc75784a061eDB47165Dd`
- **Parameters**: 70 billion
- **Type**: Instruction-tuned language model
- **Verification**: TEE (Trusted Execution Environment) via TeeML
- **Best For**: General AI tasks, conversation, instruction following, creative writing

**Text Generation Capabilities:**

- High-quality conversational responses
- Code generation and explanation
- Creative writing (stories, poems, scripts)
- Educational content and explanations
- Problem-solving and analysis
- Multi-language support

**Example Usage:**

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

// General conversation
const { text } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Explain the concept of machine learning to a 10-year-old.',
});

// Code generation
const { text: code } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Write a Python function to calculate the factorial of a number.',
});

// Creative writing
const { text: story } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Write a short science fiction story about AI and humans working together.',
});
```

#### DeepSeek R1 70B (`0g/deepseek-r1-70b`)

- **Provider Address**: `0x3feE5a4dd5FDb8a32dDA97Bed899830605dBD9D3`
- **Parameters**: 70 billion
- **Type**: Advanced reasoning model
- **Verification**: TEE (Trusted Execution Environment) via TeeML
- **Best For**: Complex problem solving, mathematical reasoning, logical analysis

**Text Generation Capabilities:**

- Step-by-step reasoning and problem solving
- Mathematical computations and proofs
- Logical analysis and deduction
- Scientific explanations with detailed reasoning
- Code debugging and optimization
- Complex question answering with reasoning chains

**Example Usage:**

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

// Mathematical reasoning
const { text } = await generateText({
  model: openrouter('0g/deepseek-r1-70b'),
  prompt: 'Solve this step by step: If a train travels 120 km in 2 hours, what is its average speed in km/h and m/s?',
});

// Logical problem solving
const { text: logic } = await generateText({
  model: openrouter('0g/deepseek-r1-70b'),
  prompt: 'Three friends have different ages. Alice is older than Bob but younger than Charlie. If their ages sum to 60 and are consecutive integers, what are their ages?',
});

// Code analysis and debugging
const { text: analysis } = await generateText({
  model: openrouter('0g/deepseek-r1-70b'),
  prompt: `Analyze this code and explain why it might be inefficient:

def find_duplicates(arr):
    duplicates = []
    for i in range(len(arr)):
        for j in range(i+1, len(arr)):
            if arr[i] == arr[j] and arr[i] not in duplicates:
                duplicates.append(arr[i])
    return duplicates`,
});
```

### Model Configuration Options

Both 0G Compute models accept the AI SDK's standard call settings, passed alongside the prompt:

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Write a technical explanation of blockchain technology.',

  // Control randomness (0.0 = deterministic, higher = more varied)
  temperature: 0.7,

  // Maximum number of tokens to generate
  maxTokens: 500,

  // Stop generation at these sequences
  stopSequences: ['\n\n', '###'],

  // Nucleus sampling parameter
  topP: 0.9,

  // Frequency penalty to reduce repetition
  frequencyPenalty: 0.1,

  // Presence penalty to encourage new topics
  presencePenalty: 0.1,
});
```

### Streaming Text Generation

Both models support streaming for real-time text generation:

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { streamText } from 'ai';

const { textStream } = await streamText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'Write a detailed explanation of quantum computing.',
});

for await (const delta of textStream) {
  process.stdout.write(delta);
}
```

### Verification and Trust

All 0G Compute Network models run in Trusted Execution Environments (TEE) using TeeML technology, which provides:

- **Computational Integrity**: Cryptographic proof that computations were performed correctly
- **Data Privacy**: Input and output data is protected during processing
- **Transparency**: Verifiable execution without revealing sensitive information
- **Decentralization**: Distributed across multiple independent providers

### Performance Characteristics

| Model | Latency | Throughput | Context Length | Best Use Cases |
|-------|---------|------------|----------------|----------------|
| `0g/llama-3.3-70b-instruct` | Medium | High | 8K tokens | General chat, creative writing, code generation |
| `0g/deepseek-r1-70b` | Medium-High | Medium | 8K tokens | Complex reasoning, math, analysis, debugging |
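
Given the 8K-token context window, it can help to sanity-check prompt size before a call. The sketch below uses the rough 4-characters-per-token heuristic — an approximation only, not these models' actual tokenizer:

```typescript
// Rough token estimate: ~4 characters per token for English text.
// Heuristic only; the models' real tokenizers will differ.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const CONTEXT_LIMIT = 8192;

// True if the prompt plus the requested completion budget fits in context.
function fitsInContext(prompt: string, maxTokens: number): boolean {
  return estimateTokens(prompt) + maxTokens <= CONTEXT_LIMIT;
}

console.log(fitsInContext('Explain quantum computing.', 500)); // logs true
```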

### Pricing and Availability

0G Compute Network models are available through OpenRouter's standard pricing model. Check [openrouter.ai/models](https://openrouter.ai/models) for current pricing information.

The decentralized nature of the 0G network often provides competitive pricing compared to centralized alternatives while maintaining high availability through distributed infrastructure.

## Other Supported Models

For a complete list of all supported models including OpenAI, Anthropic, Google, Meta, and other providers, visit [openrouter.ai/models](https://openrouter.ai/models).

Popular model families include:

- **OpenAI**: GPT-4, GPT-3.5, GPT-4 Turbo
- **Anthropic**: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- **Google**: Gemini Pro, Gemini Flash
- **Meta**: Llama 2, Llama 3, Code Llama
- **Mistral**: Mistral 7B, Mixtral 8x7B, Mistral Large
- **And many more...**

## Tool Support

Many models support function calling and tool use. Check the [tool-supported models list](https://openrouter.ai/models?order=newest&supported_parameters=tools) for models compatible with the AI SDK's tool functionality.

Both 0G Compute models support tool calling for enhanced functionality:

```typescript
import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: openrouter('0g/llama-3.3-70b-instruct'),
  prompt: 'What is the weather like in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a location',
      parameters: z.object({
        location: z.string().describe('The city and state'),
      }),
      execute: async ({ location }) => {
        // Implementation would call a weather API
        return `The weather in ${location} is sunny and 72°F`;
      },
    }),
  },
});
```
68 changes: 68 additions & 0 deletions examples/0g-compute-example.ts
@@ -0,0 +1,68 @@
/**
 * Example usage of 0G Compute Network models with OpenRouter AI SDK Provider
 *
 * This example demonstrates how to use the 0G Compute Network models
 * through the OpenRouter provider for the Vercel AI SDK.
 */

import { openrouter } from '@openrouter/ai-sdk-provider';
import { generateText } from 'ai';

async function main() {
  // Example 1: Using Llama 3.3 70B Instruct model from 0G Compute Network
  console.log('🦙 Testing 0G Llama 3.3 70B Instruct model...');

  try {
    const { text } = await generateText({
      model: openrouter('0g/llama-3.3-70b-instruct'),
      prompt: 'Explain quantum computing in simple terms for a beginner.',
    });

    console.log('Response from 0G Llama 3.3 70B:');
    console.log(text);
    console.log('\n' + '='.repeat(80) + '\n');
  } catch (error) {
    console.error('Error with 0G Llama model:', error);
  }

  // Example 2: Using DeepSeek R1 70B reasoning model from 0G Compute Network
  console.log('🧠 Testing 0G DeepSeek R1 70B reasoning model...');

  try {
    const { text } = await generateText({
      model: openrouter('0g/deepseek-r1-70b'),
      prompt: 'Solve this step by step: If a train travels 120 km in 2 hours, what is its average speed in km/h and m/s?',
    });

    console.log('Response from 0G DeepSeek R1 70B:');
    console.log(text);
    console.log('\n' + '='.repeat(80) + '\n');
  } catch (error) {
    console.error('Error with 0G DeepSeek model:', error);
  }

  // Example 3: Using 0G models with custom settings
  console.log('⚙️ Testing 0G model with custom settings...');

  try {
    const { text } = await generateText({
      model: openrouter('0g/llama-3.3-70b-instruct'),
      temperature: 0.7,
      maxTokens: 150,
      prompt: 'Write a short poem about artificial intelligence and decentralization.',
    });

    console.log('Response from 0G Llama with custom settings:');
    console.log(text);
  } catch (error) {
    console.error('Error with custom settings:', error);
  }
}

// Run the examples. This file uses ES modules, so call main directly
// rather than using the CommonJS `require.main === module` check.
main().catch(console.error);

export { main };
60 changes: 60 additions & 0 deletions examples/README.md
@@ -0,0 +1,60 @@
# OpenRouter AI SDK Provider Examples

This directory contains example usage of the OpenRouter AI SDK Provider with various models and configurations.

## 0G Compute Network Examples

### `0g-compute-example.ts`

Basic introduction to using the 0G Compute Network models through the OpenRouter provider:

- **0G Llama 3.3 70B Instruct**: State-of-the-art 70B parameter model for general AI tasks
- **0G DeepSeek R1 70B**: Advanced reasoning model optimized for complex problem solving

### `text-generation-0g.ts`

Comprehensive text generation examples showcasing the full capabilities of 0G Compute models:

- **Creative Writing**: Short stories, poetry, dialogue generation
- **Technical Explanations**: Complex concepts explained simply
- **Code Generation**: Python, JavaScript, and other programming languages
- **Reasoning & Problem Solving**: Mathematical problems, logic puzzles, code debugging
- **Streaming Generation**: Real-time text streaming
- **Multi-turn Conversations**: Context-aware dialogue

Both models run on the 0G Compute Network and provide verified AI inference through Trusted Execution Environments (TEE).

### Running the Examples

1. Install dependencies:

```bash
npm install
```

2. Set your OpenRouter API key:

```bash
export OPENROUTER_API_KEY="your-api-key-here"
```

3. Run the examples:

```bash
# Basic 0G Compute example
npx tsx examples/0g-compute-example.ts

# Comprehensive text generation examples
npx tsx examples/text-generation-0g.ts
```
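
If exporting the environment variable isn't convenient, the API key can also be supplied explicitly — a minimal sketch using the provider's `createOpenRouter` factory (by default the provider reads `OPENROUTER_API_KEY` from the environment):

```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

// Explicit configuration instead of relying on the environment variable.
const openrouter = createOpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY,
});

const model = openrouter('0g/llama-3.3-70b-instruct');
```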

### About 0G Compute Network

The 0G Compute Network is a decentralized AI inference network that provides:

- **Verified Computation**: All inference runs in Trusted Execution Environments (TEE)
- **Decentralized Infrastructure**: Distributed across multiple providers
- **Cost-Effective**: Competitive pricing through decentralized competition
- **High Performance**: State-of-the-art models with fast inference

Learn more at [0G Compute Documentation](https://docs.0g.ai/0g-compute/for-developers/inference-sdk).