<a href="https://badge.fury.io/rb/chatgpt-ruby"><img src="https://img.shields.io/gem/v/chatgpt-ruby?style=for-the-badge" alt="Gem Version"></a>
<a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow?style=for-the-badge" alt="License"></a>
<a href="https://codeclimate.com/github/nagstler/chatgpt-ruby/test_coverage"><img src="https://img.shields.io/codeclimate/coverage/nagstler/chatgpt-ruby?style=for-the-badge" alt="Test Coverage"></a>
<a href="https://github.com/nagstler/chatgpt-ruby/actions/workflows/ci.yml"><img src="https://img.shields.io/github/actions/workflow/status/nagstler/chatgpt-ruby/ci.yml?branch=main&style=for-the-badge" alt="CI"></a>
<a href="https://github.com/nagstler/chatgpt-ruby/stargazers"><img src="https://img.shields.io/github/stars/nagstler/chatgpt-ruby?style=for-the-badge" alt="GitHub stars"></a>

🤖💎 A lightweight Ruby wrapper for the OpenAI API, designed for simplicity and ease of integration.
## Features

- API integration for chat completions and text completions
- Streaming capability for handling real-time response chunks
- Custom exception classes for different API error types
- Configurable timeout, retries, and default parameters
- Complete test suite with mocked API responses
## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Rails Integration](#rails-integration)
- [Configuration](#configuration)
- [Error Handling](#error-handling)
- [Current Capabilities](#current-capabilities)
  - [Chat Completions](#chat-completions)
  - [Text Completions](#text-completions)
- [Roadmap](#roadmap)
- [Development](#development)
- [Contributing](#contributing)
- [License](#license)
## Rails Integration

In a Rails application, create an initializer:

```ruby
# config/initializers/chat_gpt.rb
require 'chatgpt'

ChatGPT.configure do |config|
  config.api_key = Rails.application.credentials.openai[:api_key]
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
end
```

Then use it from your controllers or services:
```ruby
# app/services/chat_gpt_service.rb
class ChatGPTService
  def initialize
    @client = ChatGPT::Client.new
  end

  def ask_question(question)
    response = @client.chat([
      { role: "user", content: question }
    ])

    response.dig("choices", 0, "message", "content")
  end
end

# Usage in a controller
def show
  service = ChatGPTService.new
  @response = service.ask_question("Tell me about Ruby on Rails")
end
```
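Because the service wraps the client behind a single method, it is straightforward to unit-test without making real API calls; here is a minimal sketch using a hand-rolled stub (`StubClient` is illustrative, not part of the gem):

```ruby
# Hand-rolled stub standing in for ChatGPT::Client in a unit test.
# StubClient is illustrative only, not part of the gem.
class StubClient
  def chat(_messages)
    { "choices" => [{ "message" => { "role" => "assistant", "content" => "stubbed reply" } }] }
  end
end

# Inject the stub wherever a real client would be used
client = StubClient.new
reply = client.chat([{ role: "user", content: "Hi" }])
              .dig("choices", 0, "message", "content")
puts reply  # => "stubbed reply"
```

The same idea works with any mocking library: as long as the object responds to `chat` with a hash in the response shape shown above, the service code runs unchanged.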
## Configuration

```ruby
ChatGPT.configure do |config|
  config.api_key = ENV['OPENAI_API_KEY']
  config.api_version = 'v1'
  config.default_engine = 'gpt-3.5-turbo'
  config.request_timeout = 30
  config.max_retries = 3
  config.default_parameters = {
    max_tokens: 16,
    temperature: 0.5,
    top_p: 1.0,
    n: 1
  }
end
```
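For context, `ChatGPT.configure` follows Ruby's common configuration-block idiom: yield a memoized settings object and let the caller assign attributes. A self-contained sketch of that idiom under a hypothetical `DemoConfig` module (this is not the gem's actual implementation):

```ruby
# Sketch of the configure-block idiom. DemoConfig is a stand-in module;
# attribute names and defaults mirror the README example above.
module DemoConfig
  class Configuration
    attr_accessor :api_key, :api_version, :default_engine,
                  :request_timeout, :max_retries, :default_parameters

    def initialize
      @api_version = 'v1'
      @default_engine = 'gpt-3.5-turbo'
      @request_timeout = 30
      @max_retries = 3
      @default_parameters = { max_tokens: 16, temperature: 0.5, top_p: 1.0, n: 1 }
    end
  end

  # Memoize a single settings object
  def self.config
    @config ||= Configuration.new
  end

  # Yield it so callers can assign attributes in a block
  def self.configure
    yield config
  end
end

DemoConfig.configure { |c| c.api_key = 'sk-test' }
puts DemoConfig.config.api_key         # => "sk-test"
puts DemoConfig.config.default_engine  # => "gpt-3.5-turbo"
```

Settings you do not assign in the block keep their defaults, which is why the Rails initializer above only needs to set the values it overrides.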
## Error Handling

```ruby
begin
  response = client.chat([
    { role: "user", content: "Hello!" }
  ])
rescue ChatGPT::AuthenticationError => e
  puts "Authentication error: #{e.message}"
rescue ChatGPT::RateLimitError => e
  puts "Rate limit hit: #{e.message}"
rescue ChatGPT::InvalidRequestError => e
  puts "Bad request: #{e.message}"
rescue ChatGPT::APIError => e
  puts "API error: #{e.message}"
end
```
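One detail worth noting: `rescue` clauses are tried top-down, so if the specific errors subclass `ChatGPT::APIError`, the catch-all `APIError` clause must come last. A standalone sketch of that ordering rule, using hypothetical error classes rather than the gem's actual hierarchy:

```ruby
# Hypothetical hierarchy: a specific error inheriting from a base APIError
class APIError < StandardError; end
class RateLimitError < APIError; end

def classify(error)
  raise error
rescue RateLimitError => e
  "rate limit: #{e.message}"
rescue APIError => e  # catches any remaining APIError subclass
  "api error: #{e.message}"
end

puts classify(RateLimitError.new("slow down"))  # => "rate limit: slow down"
puts classify(APIError.new("boom"))             # => "api error: boom"
```

If the `APIError` clause came first, it would swallow `RateLimitError` too, since a rescue clause matches the named class and all of its subclasses.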
## Current Capabilities

### Chat Completions

```ruby
# Basic chat
response = client.chat([
  { role: "user", content: "What is Ruby?" }
])

# With streaming
client.chat_stream([{ role: "user", content: "Tell me a story" }]) do |chunk|
  print chunk.dig("choices", 0, "delta", "content")
end
```
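Each streamed chunk carries an incremental `delta`, so concatenating the `content` fields rebuilds the full message. A sketch over hard-coded sample chunks shaped like the streamed payloads:

```ruby
# Sample chunks shaped like streamed chat completion deltas: the first
# carries only the role, the last only a finish_reason.
chunks = [
  { "choices" => [{ "delta" => { "role" => "assistant" } }] },
  { "choices" => [{ "delta" => { "content" => "Once" } }] },
  { "choices" => [{ "delta" => { "content" => " upon a time" } }] },
  { "choices" => [{ "delta" => {}, "finish_reason" => "stop" }] }
]

story = +""
chunks.each do |chunk|
  piece = chunk.dig("choices", 0, "delta", "content")
  story << piece if piece  # skip chunks with no content field
end
puts story  # => "Once upon a time"
```

This is also why the streaming block above uses `dig`: it returns `nil` instead of raising when a chunk has no `content`, so the first and last chunks pass through harmlessly.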
### Text Completions

```ruby
# Basic completion with gpt-3.5-turbo-instruct
response = client.completions("What is Ruby?")
puts response.dig("choices", 0, "text")
```
## Roadmap

While ChatGPT Ruby is functional, several areas are planned for improvement:

- [ ] Response object wrapper and Rails integration with a Railtie (v2.2)
- [ ] Token counting, function calling, and rate limiting (v2.3)
- [ ] Batch operations and async support (v3.0)
- [ ] DALL-E image generation and fine-tuning (future)

❤️ Contributions in any of these areas are welcome!
## Development