A comprehensive guide and working example for building deep agents using Google's Gemini 2.5 Flash model with the deepagents framework.
This project demonstrates how to build intelligent research agents that can:
- Plan complex tasks using built-in planning tools
- Search the internet for information
- Manage file systems to handle large datasets
- Delegate subtasks to specialized subagents
- Synthesize research into polished reports
All powered by Google's fast and cost-efficient Gemini 2.5 Flash model.
## Installation

```bash
pip install -r requirements.txt
```

Or with uv:

```bash
uv add -r requirements.txt
```

## Configuration

Create a `.env` file from the template:

```bash
cp .env.example .env
```

Then add your API keys:
- GOOGLE_API_KEY: Get from Google AI Studio
- TAVILY_API_KEY: Get from Tavily
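For reference, a minimal `.env` might look like this (placeholder values — substitute your real keys):

```shell
# .env — placeholder values, never commit real keys
GOOGLE_API_KEY=your-google-api-key
TAVILY_API_KEY=your-tavily-api-key
```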
## Quick Start

```bash
python gemini_quickstart.py
```

## Project Files

| File | Purpose |
|---|---|
| `GEMINI_QUICKSTART.md` | Comprehensive guide to using Gemini with deep agents |
| `gemini_quickstart.py` | Ready-to-run example script |
| `requirements.txt` | Python dependencies |
| `.env.example` | Template for environment variables |
| `README.md` | This file |
## Choosing the Model

```python
# Anthropic (Claude)
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Google (Gemini) - This project
from langchain_google_genai import ChatGoogleGenerativeAI
model = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
```

Why Gemini 2.5 Flash?

- Speed: Optimized for fast inference
- Cost: More economical pricing
- Context: 1 million token window
- Quality: Excellent performance-to-latency ratio
## Architecture

```
┌─────────────────────────────────────────────┐
│         User Query (Research Task)          │
└──────────────┬──────────────────────────────┘
               │
               ▼
┌─────────────────────────────────────────────┐
│        Gemini 2.5 Flash Deep Agent          │
│  ┌───────────────────────────────────────┐  │
│  │    System Prompt (Research Expert)    │  │
│  └───────────────────────────────────────┘  │
└──────────────┬──────────────────────────────┘
               │
      ┌────────┼───────┬──────────┬─────────┐
      ▼        ▼       ▼          ▼         ▼
   ┌─────┐ ┌──────┐ ┌────────┐ ┌──────┐ ┌──────┐
   │Plan │ │Search│ │Organize│ │Write │ │Report│
   │Work │ │Web   │ │Files   │ │File  │ │Synth │
   │Todos│ │      │ │        │ │      │ │      │
   └──┬──┘ └──┬───┘ └───┬────┘ └──┬───┘ └──┬───┘
      │       │         │         │        │
      └───────┴─────────┼─────────┴────────┘
                        │
                        ▼
        ┌──────────────────────────────┐
        │        Built-in Tools        │
        │  • write_todos               │
        │  • internet_search           │
        │  • read_file/write_file      │
        │  • grep/glob                 │
        │  • execute                   │
        │  • task (subagents)          │
        └──────────────────────────────┘
                        │
                        ▼
        ┌──────────────────────────────┐
        │         Final Report         │
        │  • Structured findings       │
        │  • Citations                 │
        │  • Analysis & conclusions    │
        └──────────────────────────────┘
```
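For context, the `internet_search` tool referenced throughout this README can be sketched roughly as below — a hedged example assuming the `tavily-python` client; the real implementation lives in `gemini_quickstart.py` and may differ:

```python
import os

def internet_search(query: str, max_results: int = 5, topic: str = "general") -> dict:
    """Run a Tavily web search. `topic` can be "general", "news", or "finance"."""
    # Lazy import so this module loads even before `pip install tavily-python`
    from tavily import TavilyClient

    client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
    # Returns a dict with a "results" list of {title, url, content, ...} entries
    return client.search(query, max_results=max_results, topic=topic)
```

Passing this function in the `tools` list lets the agent decide when and how often to call it.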
## Usage

```python
from gemini_quickstart import agent

result = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "What are the latest developments in AI agents?"
    }]
})

print(result["messages"][-1].content)
```

## Customization

### Adjust search depth

```python
# Research with more search results
system_prompt = """You are an expert researcher...
Use up to 10 search results per query for comprehensive coverage."""

agent = create_deep_agent(
    tools=[internet_search],
    system_prompt=system_prompt,
    model=model,
)
```

### Change the search topic

Modify the `internet_search` call in your agent's instructions:

```python
# For finance-specific research
topic = "finance"

# For current news
topic = "news"

# For general topics
topic = "general"
```

### Add custom tools

Extend the agent with additional tools:
```python
def calculator(expression: str) -> float:
    """Evaluate mathematical expressions."""
    # NOTE: eval() on untrusted input is unsafe; fine for a local demo,
    # but use a dedicated expression parser in production.
    return eval(expression)

agent = create_deep_agent(
    tools=[internet_search, calculator],
    system_prompt=research_instructions,
    model=model,
)
```

### Use subagents

Delegate specific tasks to specialized agents:
```python
# Define system prompts for specialized agents
data_analyst_prompt = "You are an expert data analyst..."
summarizer_prompt = "You are an expert at summarizing content..."
```
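As a hedged sketch, prompts like these can be wired into subagent definitions passed to `create_deep_agent`; the dict keys below (`name`, `description`, `prompt`) follow the deepagents README, but verify the schema against your installed version:

```python
# Hypothetical subagent definitions for create_deep_agent's `subagents` parameter
subagents = [
    {
        "name": "data-analyst",
        "description": "Analyzes datasets and extracts quantitative findings",
        "prompt": "You are an expert data analyst...",
    },
    {
        "name": "summarizer",
        "description": "Condenses long documents into concise summaries",
        "prompt": "You are an expert at summarizing content...",
    },
]

# Sketch only — requires deepagents and the model/tools defined earlier:
# agent = create_deep_agent(
#     tools=[internet_search],
#     system_prompt=research_instructions,
#     model=model,
#     subagents=subagents,
# )
```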
The main agent will automatically handle delegation via the built-in `task` tool.

## Troubleshooting

### Missing `GOOGLE_API_KEY`

Solution: Make sure you've set the environment variable:
```bash
export GOOGLE_API_KEY="your-key-here"
```

Or use a `.env` file with python-dotenv:

```python
from dotenv import load_dotenv
load_dotenv()
```

### Rate limit errors

Solution: Reduce request frequency or add delays:

```python
import time
time.sleep(1)  # 1 second delay between searches
```

### Large result sets

Solution: Use the built-in `write_file` tool to save results:

```python
# The agent automatically manages this with FilesystemMiddleware
# No additional configuration needed
```

## Best Practices

- Use the Flash model: It's faster and cheaper than Pro/Ultra models
- Limit search results: Set `max_results=3-5` for focused searches
- Cache API keys: Set environment variables once, reuse across sessions
- Enable LangSmith: Monitor agent behavior and optimize
- Use file management: Let the agent offload context to files
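To enable LangSmith tracing, for example, set the standard LangSmith environment variables (placeholder key below):

```shell
# Placeholder value — get a real key from LangSmith
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="your-langsmith-key"
```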
## Resources

- Deepagents: https://github.com/langchain-ai/deepagents
- Google Gemini API: https://ai.google.dev
- Tavily Search: https://tavily.com
- LangChain: https://python.langchain.com
- LangGraph: https://langgraph.dev
## License

This project is provided as an educational example.

## Support

For issues with:

- Deep agents: Check the deepagents docs
- Gemini API: Visit Google AI support
- Tavily: Contact Tavily support