# SGR Description API
SGR Agent Core provides a comprehensive REST API that is fully compatible with OpenAI's API format, making it easy to integrate with existing applications.
Base URL: `http://localhost:8010`
No authentication required for local development. For production deployments, configure authentication as needed.
## Health Check - Check API status and availability

`GET /health`
Check if the API is running and healthy.
Response:

```json
{
  "status": "healthy",
  "service": "sgr-agent-core API"
}
```

Example:

```bash
curl http://localhost:8010/health
```

## Available Models - Get list of supported agent models

`GET /v1/models`
Retrieve a list of available agent models.
Response:

```json
{
  "data": [
    {
      "id": "sgr-agent",
      "object": "model",
      "created": 1234567890,
      "owned_by": "sgr-deep-research"
    },
    {
      "id": "sgr-tools-agent",
      "object": "model",
      "created": 1234567890,
      "owned_by": "sgr-deep-research"
    }
  ],
  "object": "list"
}
```

Available Models:
- `sgr-agent` - Pure SGR (Schema-Guided Reasoning)
- `sgr-tools-agent` - SGR + Function Calling hybrid
- `sgr-auto-tools-agent` - SGR + Auto Function Calling
- `sgr-so-tools-agent` - SGR + Structured Output
- `tools-agent` - Pure Function Calling
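Because the response follows OpenAI's list format, the available model ids can be extracted in a few lines of Python. A minimal sketch, working directly on the sample response shown above:

```python
import json

# Sample /v1/models response, copied from the example above.
models_response = json.loads("""
{
  "data": [
    {"id": "sgr-agent", "object": "model", "created": 1234567890, "owned_by": "sgr-deep-research"},
    {"id": "sgr-tools-agent", "object": "model", "created": 1234567890, "owned_by": "sgr-deep-research"}
  ],
  "object": "list"
}
""")

# The "data" key holds one entry per model, mirroring OpenAI's list format.
model_ids = [model["id"] for model in models_response["data"]]
print(model_ids)  # ['sgr-agent', 'sgr-tools-agent']
```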
Example:

```bash
curl http://localhost:8010/v1/models
```

## Chat Completions - Main research endpoint with streaming support

`POST /v1/chat/completions`
Create a chat completion for research tasks. This is the main endpoint for interacting with SGR agents.
Request Body:

```json
{
  "model": "sgr-agent",
  "messages": [
    {
      "role": "user",
      "content": "Research BMW X6 2025 prices in Russia"
    }
  ],
  "stream": true,
  "max_tokens": 1500,
  "temperature": 0.4
}
```

Parameters:
- `model` (string, required): Agent type or existing agent ID
- `messages` (array, required): List of chat messages
- `stream` (boolean, default: `true`): Enable streaming mode
- `max_tokens` (integer, optional): Maximum number of tokens
- `temperature` (float, optional): Generation temperature (0.0-1.0)
Response Headers:
- `X-Agent-ID`: Unique agent identifier
- `X-Agent-Model`: Agent model used
- `Cache-Control`: no-cache
- `Connection`: keep-alive
Streaming Response: The response is streamed as Server-Sent Events (SSE) with real-time updates.
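Assuming the stream follows the standard SSE/OpenAI convention (`data: <json>` lines terminated by a `data: [DONE]` sentinel), the events can be decoded with a small helper. A sketch only; the exact chunk schema is not specified here, so the sample chunks below are illustrative:

```python
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from an iterable of SSE lines.

    Skips blank keep-alive lines and stops at the conventional
    'data: [DONE]' sentinel.
    """
    for raw in lines:
        line = raw.strip()
        if not line or not line.startswith("data:"):
            continue
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Synthetic stream (chunk contents are illustrative, not the real schema):
sample = [
    'data: {"choices": [{"delta": {"content": "BMW"}}]}',
    "",
    'data: {"choices": [{"delta": {"content": " X6"}}]}',
    "data: [DONE]",
]
chunks = list(iter_sse_events(sample))
```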
Example:
curl -X POST "http://localhost:8010/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "sgr-agent",
"messages": [{"role": "user", "content": "Research AI market trends"}],
"stream": true,
"temperature": 0
}'π Agent Management - List and monitor active agents
Get a list of all active agents.
Response:

```json
{
  "agents": [
    {
      "agent_id": "sgr_agent_12345-67890-abcdef",
      "task": "Research BMW X6 2025 prices",
      "state": "RESEARCHING"
    }
  ],
  "total": 1
}
```

Agent States:
- `INITED` - Agent initialized
- `RESEARCHING` - Agent is actively researching
- `WAITING_FOR_CLARIFICATION` - Agent needs clarification
- `COMPLETED` - Research completed
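These states suggest a simple polling loop: keep checking until the agent either completes or asks for clarification. A sketch with the HTTP call factored out as a callable (in practice `fetch_state` would wrap the agent state endpoint and return the `state` field), so the logic stands on its own:

```python
import time

# States where polling should stop: the agent is either done or
# blocked waiting for user input.
TERMINAL_STATES = {"COMPLETED", "WAITING_FOR_CLARIFICATION"}

def wait_for_agent(fetch_state, *, poll_interval: float = 1.0,
                   max_polls: int = 60) -> str:
    """Poll fetch_state() until a terminal state or max_polls is reached.

    fetch_state is any zero-argument callable returning the current
    state string. Returns the last observed state.
    """
    state = fetch_state()
    polls = 1
    while state not in TERMINAL_STATES and polls < max_polls:
        time.sleep(poll_interval)
        state = fetch_state()
        polls += 1
    return state
```

If the loop ends in `WAITING_FOR_CLARIFICATION`, the next step is the clarification endpoint described below; `COMPLETED` means the research is finished.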
Example:

```bash
curl http://localhost:8010/agents
```

## Agent State - Get detailed information about a specific agent

`GET /agents/{agent_id}/state`
Get detailed state information for a specific agent.
Response:

```json
{
  "agent_id": "sgr_agent_12345-67890-abcdef",
  "task": "Research BMW X6 2025 prices",
  "state": "RESEARCHING",
  "iteration": 3,
  "searches_used": 2,
  "clarifications_used": 0,
  "sources_count": 5,
  "current_step_reasoning": {
    "action": "web_search",
    "query": "BMW X6 2025 price Russia",
    "reason": "Need current market data"
  }
}
```

Parameters:
- `agent_id` (string, required): Unique agent identifier
Example:

```bash
curl http://localhost:8010/agents/sgr_agent_12345-67890-abcdef/state
```

## Provide Clarification - Respond to agent clarification requests

`POST /agents/{agent_id}/provide_clarification`
Provide clarification to an agent that is waiting for input.
Request Body:

```json
{
  "clarifications": "Focus on luxury models only, price range 5-8 million rubles"
}
```

Parameters:
- `agent_id` (string, required): Unique agent identifier
- `clarifications` (string, required): Clarification text
Response: Streaming response with continued research after clarification.
Example:
```bash
curl -X POST "http://localhost:8010/agents/sgr_agent_12345-67890-abcdef/provide_clarification" \
  -H "Content-Type: application/json" \
  -d '{
    "clarifications": "Focus on luxury models only"
  }'
```
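As with the chat endpoint, the clarification call can be sketched with only the Python standard library. The request is built but not sent, and since the response streams the continued research, it should be read the same way as the chat completions SSE stream:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8010"

def build_clarification_request(agent_id: str,
                                clarifications: str) -> urllib.request.Request:
    """Build (but do not send) the provide_clarification POST request."""
    return urllib.request.Request(
        f"{BASE_URL}/agents/{agent_id}/provide_clarification",
        data=json.dumps({"clarifications": clarifications}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Against a running server, send with urllib.request.urlopen(...) and
# consume the body as an SSE stream of research updates.
```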