416 changes: 251 additions & 165 deletions poetry.lock

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -3,7 +3,7 @@ name = "humanloop"

[tool.poetry]
name = "humanloop"
version = "0.8.29b1"
version = "0.8.31"
description = ""
readme = "README.md"
authors = []
14 changes: 7 additions & 7 deletions reference.md
@@ -56,7 +56,7 @@ client.prompts.log(
messages=[{"role": "user", "content": "What really happened at Roswell?"}],
inputs={"person": "Trump"},
created_at=datetime.datetime.fromisoformat(
"2024-07-19 00:29:35.178000+00:00",
"2024-07-18 23:29:35.178000+00:00",
),
provider_latency=6.5931549072265625,
output_message={
@@ -193,7 +193,7 @@ client.prompts.log(
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
-- `'required'` means the model can decide to call one or more of the provided tools.
+- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {name': <TOOL_NAME>}}` forces the model to use the named function.

</dd>
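As context for the wording fix above, a minimal sketch of how the `tool_choice` options differ in practice. The Prompt path, tool definition, and message are illustrative assumptions, not values taken from this diff; only the documented option values come from reference.md.

```python
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")  # placeholder key

# The tool definition and Prompt path are made up for illustration; the exact
# payload shape accepted by `prompt=` should be checked against reference.md.
response = client.prompts.call(
    path="examples/weather-bot",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    prompt={
        "model": "gpt-4o",
        "tools": [
            {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    },
    # "auto": the model may choose whether to call get_weather.
    # "required": the model must call at least one provided tool (the corrected meaning).
    # "none": the model answers directly without calling tools.
    tool_choice="required",
)
```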
@@ -512,7 +512,7 @@ client.prompts.update_log(
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
-- `'required'` means the model can decide to call one or more of the provided tools.
+- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {name': <TOOL_NAME>}}` forces the model to use the named function.

</dd>
@@ -743,7 +743,7 @@ for chunk in response:
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
-- `'required'` means the model can decide to call one or more of the provided tools.
+- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {name': <TOOL_NAME>}}` forces the model to use the named function.

</dd>
@@ -1017,7 +1017,7 @@ client.prompts.call(
Controls how the model uses tools. The following options are supported:
- `'none'` means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
- `'auto'` means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
-- `'required'` means the model can decide to call one or more of the provided tools.
+- `'required'` means the model must call one or more of the provided tools.
- `{'type': 'function', 'function': {name': <TOOL_NAME>}}` forces the model to use the named function.

</dd>
@@ -6760,10 +6760,10 @@ client.flows.log(
output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.",
log_status="incomplete",
start_time=datetime.datetime.fromisoformat(
"2024-07-08 22:40:35+00:00",
"2024-07-08 21:40:35+00:00",
),
end_time=datetime.datetime.fromisoformat(
"2024-07-08 22:40:39+00:00",
"2024-07-08 21:40:39+00:00",
),
)
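Both timestamp edits above shift the wall-clock hour while keeping the explicit `+00:00` offset, so the logged values remain timezone-aware UTC datetimes. A plain standard-library check (not SDK code):

```python
import datetime

# The corrected values from the flows.log example above.
start = datetime.datetime.fromisoformat("2024-07-08 21:40:35+00:00")
end = datetime.datetime.fromisoformat("2024-07-08 21:40:39+00:00")

print(start.tzinfo)                    # UTC
print((end - start).total_seconds())   # 4.0 — the example Flow spans four seconds
```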

2 changes: 1 addition & 1 deletion src/humanloop/core/client_wrapper.py
@@ -16,7 +16,7 @@ def get_headers(self) -> typing.Dict[str, str]:
headers: typing.Dict[str, str] = {
"X-Fern-Language": "Python",
"X-Fern-SDK-Name": "humanloop",
"X-Fern-SDK-Version": "0.8.29b1",
"X-Fern-SDK-Version": "0.8.31",
}
headers["X-API-KEY"] = self.api_key
return headers
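The only change here is the SDK version advertised in the `X-Fern-SDK-Version` header. After installing this release, the wheel metadata should report the same number; a quick check using only the standard library (nothing humanloop-specific is assumed beyond the package name):

```python
from importlib.metadata import version

# Should match the X-Fern-SDK-Version header set in client_wrapper.py.
print(version("humanloop"))  # expected: 0.8.31
```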
4 changes: 4 additions & 0 deletions src/humanloop/eval_utils/__init__.py
@@ -0,0 +1,4 @@
from .run import log_with_evaluation_context, run_eval
from .types import File

__all__ = ["run_eval", "log_with_evaluation_context", "File"]
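The new package re-exports two helpers and a type. Their signatures live in run.py and types.py, which are not shown in this diff, so the snippet below sticks to the imports that the `__all__` above guarantees:

```python
# Exactly the names exposed by humanloop/eval_utils/__init__.py; no calls are
# sketched because the helpers' signatures are not part of this diff.
from humanloop.eval_utils import File, log_with_evaluation_context, run_eval

print(run_eval.__module__)  # humanloop.eval_utils.run
```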
26 changes: 26 additions & 0 deletions src/humanloop/eval_utils/context.py
@@ -0,0 +1,26 @@
from typing import Callable, TypedDict


class EvaluationContext(TypedDict):
    """Context for Logs sent to Humanloop.

    Per-datapoint state that is set when an Evaluation is run.
    """

    source_datapoint_id: str
    """Required for associating a Log with the Evaluation Run."""

    upload_callback: Callable[[str], None]
    """Overloaded .log method call."""

    file_id: str
    """ID of the evaluated File."""

    path: str
    """Path of the evaluated File."""

    run_id: str
    """Required for associating a Log with the Evaluation Run."""


EVALUATION_CONTEXT_VARIABLE_NAME = "__EVALUATION_CONTEXT"
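To make the shape of the new TypedDict concrete, a small construction sketch. Every ID value and the callback body are placeholders; how the SDK actually stores the context under `EVALUATION_CONTEXT_VARIABLE_NAME` is defined elsewhere (eval_utils/run.py) and is not assumed here.

```python
from humanloop.eval_utils.context import (
    EVALUATION_CONTEXT_VARIABLE_NAME,
    EvaluationContext,
)


def _on_upload(log_id: str) -> None:
    # Placeholder matching the Callable[[str], None] field.
    print(f"Log {log_id} associated with the Evaluation Run")


# All five keys are required by the TypedDict; the values are made up for illustration.
context: EvaluationContext = {
    "source_datapoint_id": "dp_123",
    "upload_callback": _on_upload,
    "file_id": "fl_456",
    "path": "Evals demo/My Prompt",
    "run_id": "run_789",
}

print(EVALUATION_CONTEXT_VARIABLE_NAME)  # __EVALUATION_CONTEXT
```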