
Add response_json_schema support to RunConfig for per-request structured output #3866

@lithammer

Description


Is your feature request related to a problem? Please describe.

When building A2A (Agent-to-Agent) servers using ADK, the calling agent can request structured output by including a JSON schema in the message metadata, per the A2A Structured Data Exchange spec:

{
  "message": {
    "parts": [{
      "text": "Show me a list of my open IT tickets",
      "metadata": {
        "mediaType": "application/json",
        "schema": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "ticketNumber": { "type": "string" },
              "description": { "type": "string" }
            }
          }
        }
      }
    }]
  }
}
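
On the server side, pulling that schema out of the parsed message is a few lines of dict access (a minimal sketch; extract_response_schema is a hypothetical helper, not part of ADK or any A2A SDK):

from typing import Any, Optional

def extract_response_schema(message: dict[str, Any]) -> Optional[dict[str, Any]]:
    # Return the first JSON schema found in the message parts' metadata.
    for part in message.get("parts", []):
        metadata = part.get("metadata") or {}
        if metadata.get("mediaType") == "application/json" and "schema" in metadata:
            return metadata["schema"]
    return None

The open question is what to do with the schema once extracted, which is where RunConfig comes in.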

The schema is per-request, not per-agent. Different requests to the same agent may request different output schemas (or no schema at all).

Currently, there's no direct way to pass a response schema through Runner.run_async(). The available options are:

  1. LlmAgent.output_schema - Set at agent creation time, not per-request
  2. LlmAgent.generate_content_config - Also set at agent creation time
  3. before_model_callback / Plugin - Works, but requires storing the schema in session state and reading it back in a callback, which is indirect

Describe the solution you'd like

Add optional fields to RunConfig:

class RunConfig(BaseModel):
    # ... 

    response_json_schema: Optional[dict[str, Any]] = None
    """JSON Schema for structured output. When set, configures the LLM
    to return responses conforming to this schema."""

    response_mime_type: Optional[str] = None
    """MIME type for the response. Defaults to 'application/json' when
    response_json_schema is set."""

Then in BaseLlmFlow, apply these to the LlmRequest.config before calling the LLM:

if invocation_context.run_config and invocation_context.run_config.response_json_schema:
    # Make sure a config object exists before mutating it.
    llm_request.config = llm_request.config or types.GenerateContentConfig()
    llm_request.config.response_json_schema = invocation_context.run_config.response_json_schema
    # Structured output implies JSON unless the caller overrides the MIME type.
    llm_request.config.response_mime_type = (
        invocation_context.run_config.response_mime_type or "application/json"
    )

Example Usage

schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"}
    }
}

async for event in runner.run_async(
    user_id="user-123",
    session_id="session-456",
    new_message=content,
    run_config=RunConfig(response_json_schema=schema),
):
    # LLM responses will conform to the schema
    ...
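
The conforming payload can then be recovered from the final event, for example (a sketch, assuming the model returns the JSON in the first text part of the final response):

import json

# Inside the loop above: parse the structured output once the final event arrives.
if event.is_final_response() and event.content and event.content.parts:
    data = json.loads(event.content.parts[0].text)
    print(data["answer"], data["confidence"])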

Describe alternatives you've considered

Using a plugin/callback. This works but requires:

  1. Storing the schema in session state before calling run_async()
  2. Creating a plugin that reads from session state in before_model_callback
  3. Setting it on llm_request.config.response_json_schema

This is indirect and couples the schema to session state rather than the request itself.
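
For reference, the callback version looks roughly like this (a sketch; the session state key "response_json_schema" is an arbitrary choice, and the caller must write it before invoking run_async()):

from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest
from google.genai import types

def apply_schema_from_state(
    callback_context: CallbackContext, llm_request: LlmRequest
) -> None:
    # Read the schema the caller stashed in session state before the run.
    schema = callback_context.state.get("response_json_schema")
    if schema:
        llm_request.config = llm_request.config or types.GenerateContentConfig()
        llm_request.config.response_json_schema = schema
        llm_request.config.response_mime_type = "application/json"

# Wired up via: LlmAgent(..., before_model_callback=apply_schema_from_state)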

Additional context

This started with me trying to send A2A requests to kagent (which uses ADK under the hood), including a JSON schema in the message metadata as the A2A documentation describes. But the schema information is dropped by ADK.

Metadata

Assignees: No one assigned

Labels:
    a2a: [Component] This issue is related to a2a support inside ADK.
    request clarification: [Status] The maintainer needs clarification or more information from the author.
    stale: [Status] Issues which have been marked inactive since there is no user response.

Projects: None
Milestone: None
Development: No branches or pull requests