Commit b4f8bd2
integrate helicone to llama-index (#20131)
* integrate helicone to llama-index
* update pyproject contributions
* add helicone llm docs
* update docs parameters page
* add more tests for helicone integration

Co-authored-by: Hammad Shami <hammad@helicone.ai>
1 parent 327c4c3 commit b4f8bd2

File tree

12 files changed

+4878
-0
lines changed

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
::: llama_index.llms.helicone
    options:
      members:
        - Helicone
Lines changed: 153 additions & 0 deletions
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
The MIT License

Copyright (c) Jerry Liu

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help: ## Show all Makefile targets.
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format: ## Run code autoformatters (black).
	pre-commit install
	git ls-files | xargs pre-commit run black --files

lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy
	pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test: ## Run tests via pytest.
	pytest tests

watch-docs: ## Build and watch documentation.
	sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
# LlamaIndex LLMs Integration: Helicone

## Installation

To install the required packages, run:

```bash
pip install llama-index-llms-helicone
pip install llama-index
```

## Setup

### Initialize Helicone

Set your Helicone API key via `HELICONE_API_KEY` (or pass it directly). No provider API keys are needed when using the Helicone AI Gateway.

```python
from llama_index.llms.helicone import Helicone
from llama_index.core.llms import ChatMessage

llm = Helicone(
    api_key="<helicone-api-key>",  # or set HELICONE_API_KEY env var
    model="gpt-4o-mini",  # works across providers via gateway
)
```

## Generate Chat Responses

You can generate a chat response by sending a list of `ChatMessage` instances:

```python
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
```

### Streaming Responses

To stream responses, use the `stream_chat` method:

```python
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```

### Complete with Prompt

You can also generate completions with a prompt using the `complete` method:

```python
resp = llm.complete("Tell me a joke")
print(resp)
```

### Streaming Completion

To stream completions, use the `stream_complete` method:

```python
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")
```

## Model Configuration

To use a specific model, specify it during initialization. For example, to use OpenAI's `gpt-4o-mini`, set it like this:

```python
from llama_index.llms.helicone import Helicone

llm = Helicone(model="gpt-4o-mini")
resp = llm.complete("Write a story about a dragon who can code in Rust")
print(resp)
```

### Notes

- The default Helicone base URL is `https://ai-gateway.helicone.ai/v1`. Override it with `api_base` or `HELICONE_API_BASE` if needed.
- Only `HELICONE_API_KEY` is required. The gateway routes to the correct provider based on the `model` string.
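The configuration rules in the notes above can be sketched as a small resolution helper: explicit arguments win, then environment variables, then the documented default. This is a minimal stand-in, not part of the integration's API; the helper name `resolve_gateway_config` is hypothetical, while the default URL and env-var names come from this README.

```python
import os

# Default gateway base documented in the notes above.
DEFAULT_API_BASE = "https://ai-gateway.helicone.ai/v1"


def resolve_gateway_config(api_key=None, api_base=None):
    """Prefer explicit arguments, then environment variables, then defaults."""
    key = api_key or os.environ.get("HELICONE_API_KEY")
    if not key:
        raise ValueError("Set HELICONE_API_KEY or pass api_key explicitly.")
    base = api_base or os.environ.get("HELICONE_API_BASE", DEFAULT_API_BASE)
    return {"api_key": key, "api_base": base}
```

Passing a local `api_base` (for example, a self-hosted gateway) overrides the env var and the default in exactly this order.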
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
from llama_index.llms.helicone.base import Helicone

__all__ = ["Helicone"]
Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
from typing import Any, Dict, Optional

from llama_index.core.bridge.pydantic import Field
from llama_index.core.constants import (
    DEFAULT_NUM_OUTPUTS,
    DEFAULT_TEMPERATURE,
)
from llama_index.core.base.llms.generic_utils import get_from_param_or_env
from llama_index.llms.openai_like import OpenAILike

# Default Helicone AI Gateway base. Override with HELICONE_API_BASE if needed.
DEFAULT_API_BASE = "https://ai-gateway.helicone.ai/v1"
# Default model routed via gateway; users may override to any supported provider.
DEFAULT_MODEL = "gpt-4o-mini"


class Helicone(OpenAILike):
    """
    Helicone (OpenAI-compatible) LLM.

    Route OpenAI-compatible requests through Helicone for observability and control.

    Authentication:
        - Set your Helicone API key via the `api_key` parameter or `HELICONE_API_KEY`.
          No OpenAI/third-party provider keys are required when using the AI Gateway.

    Examples:
        `pip install llama-index-llms-helicone`

        ```python
        from llama_index.llms.helicone import Helicone

        llm = Helicone(
            api_key="<helicone-api-key>",
            model="gpt-4o-mini",  # works across providers
        )

        response = llm.complete("Hello World!")
        print(str(response))
        ```
    """

    model: str = Field(
        description=(
            "OpenAI-compatible model name routed via the Helicone AI Gateway. "
            "Learn more about [provider routing](https://docs.helicone.ai/gateway/provider-routing). "
            "All models are visible [here](https://www.helicone.ai/models)."
        )
    )
    api_base: Optional[str] = Field(
        default=DEFAULT_API_BASE,
        description=(
            "Base URL for the Helicone AI Gateway. Can also be set via the "
            "HELICONE_API_BASE environment variable. See the "
            "[Gateway overview](https://docs.helicone.ai/gateway/overview)."
        ),
    )
    api_key: Optional[str] = Field(
        description=(
            "Helicone API key used to authorize requests (Authorization: Bearer). "
            "Provide directly or set via HELICONE_API_KEY. Generate your API key "
            "in the [dashboard settings](https://us.helicone.ai/settings/api-keys)."
        ),
    )
    default_headers: Optional[Dict[str, str]] = Field(
        default=None,
        description=(
            "Additional HTTP headers to include with requests. The Helicone "
            "Authorization header is added automatically from api_key. See "
            "[custom properties](https://docs.helicone.ai/features/advanced-usage/custom-properties)/[headers](https://docs.helicone.ai/helicone-headers/header-directory)."
        ),
    )

    def __init__(
        self,
        model: str = DEFAULT_MODEL,
        temperature: float = DEFAULT_TEMPERATURE,
        max_tokens: int = DEFAULT_NUM_OUTPUTS,
        additional_kwargs: Optional[Dict[str, Any]] = None,
        max_retries: int = 5,
        api_base: Optional[str] = DEFAULT_API_BASE,
        api_key: Optional[str] = None,
        default_headers: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> None:
        additional_kwargs = additional_kwargs or {}

        api_base = get_from_param_or_env("api_base", api_base, "HELICONE_API_BASE")
        api_key = get_from_param_or_env("api_key", api_key, "HELICONE_API_KEY")

        if default_headers:
            default_headers.update({"Authorization": f"Bearer {api_key}"})
        else:
            default_headers = {"Authorization": f"Bearer {api_key}"}

        super().__init__(
            model=model,
            temperature=temperature,
            max_tokens=max_tokens,
            api_base=api_base,
            default_headers=default_headers,
            additional_kwargs=additional_kwargs,
            max_retries=max_retries,
            **kwargs,
        )

    @classmethod
    def class_name(cls) -> str:
        return "Helicone_LLM"
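The constructor's header handling deserves a note: any user-supplied `default_headers` pass through, but the Helicone key always wins as the `Authorization` bearer token. A simplified, dependency-free stand-in for that logic (the helper name `merge_auth_headers` is illustrative, not part of the integration's API):

```python
def merge_auth_headers(api_key, default_headers=None):
    """Mirror the constructor's header logic: user headers pass through,
    and the Helicone key overwrites any existing Authorization value."""
    headers = dict(default_headers or {})  # copy, so the caller's dict isn't mutated
    headers["Authorization"] = f"Bearer {api_key}"
    return headers
```

One design difference worth noting: the constructor updates the caller's dict in place via `dict.update`, while this sketch copies first; copying avoids surprising the caller when the same headers dict is reused across multiple clients.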

llama-index-integrations/llms/llama-index-llms-helicone/llama_index/llms/helicone/py.typed

Whitespace-only changes.
