LlamaIndex is a framework for building knowledge agents using LLMs connected to your data. This example shows you how to build a multi-agent workflow for a Research Agent. In LlamaIndex, Workflows
are the building blocks of agent or multi-agent systems.
You need a Gemini API key. If you don't already have one, you can get one in Google AI Studio. First, install all required LlamaIndex libraries. LlamaIndex uses the google-genai package under the hood.
```shell
pip install llama-index llama-index-utils-workflow llama-index-llms-google-genai llama-index-tools-google
```
## Set up Gemini 2.5 Pro in LlamaIndex
The engine of any LlamaIndex agent is an LLM that handles reasoning and text processing. This example uses Gemini 2.5 Pro. Make sure you set your API key as an environment variable.
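For example, on macOS or Linux you can export the key before running your script (`GEMINI_API_KEY` is the variable name used throughout the Gemini docs; replace the placeholder with your own key):

```shell
export GEMINI_API_KEY="YOUR_API_KEY"
```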
```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.5-pro")
```
## Build tools
Agents use tools to interact with the outside world, like searching the web or storing information. Tools in LlamaIndex can be regular Python functions, or imported from pre-existing `ToolSpec`s. Gemini comes with a built-in tool for Google Search, which is used here.
```python
from google.genai import types

google_search_tool = types.Tool(
    google_search=types.GoogleSearch()
)

llm_with_search = GoogleGenAI(
    model="gemini-2.5-pro",
    generation_config=types.GenerateContentConfig(tools=[google_search_tool]),
)
```
Now test the LLM instance with a query that requires search:
```python
response = llm_with_search.complete("What's the weather like today in Biarritz?")
print(response)
```
The Research Agent will use Python functions as tools. There are a lot of ways you could go about building a system to perform this task. In this example, you will use the following:

- `search_web` uses Gemini with Google Search to search the web for information on the given topic.
- `record_notes` saves research found on the web to the state so that the other tools can use it.
- `write_report` writes the report using the information found by the `ResearchAgent`.
- `review_report` reviews the report and provides feedback.
The `Context` class passes the state between agents/tools, and each agent will have access to the current state of the system.
```python
from llama_index.core.workflow import Context

async def search_web(ctx: Context, query: str) -> str:
    """Useful for searching the web about a specific query or topic"""
    response = await llm_with_search.acomplete(f"""Please research given this query or topic,
    and return the result\n<query_or_topic>{query}</query_or_topic>""")
    # Convert the CompletionResponse to plain text for the tool output.
    return str(response)

async def record_notes(ctx: Context, notes: str, notes_title: str) -> str:
    """Useful for recording notes on a given topic."""
    current_state = await ctx.store.get("state")
    if "research_notes" not in current_state:
        current_state["research_notes"] = {}
    current_state["research_notes"][notes_title] = notes
    await ctx.store.set("state", current_state)
    return "Notes recorded."

async def write_report(ctx: Context, report_content: str) -> str:
    """Useful for writing a report on a given topic."""
    current_state = await ctx.store.get("state")
    current_state["report_content"] = report_content
    await ctx.store.set("state", current_state)
    return "Report written."

async def review_report(ctx: Context, review: str) -> str:
    """Useful for reviewing a report and providing feedback."""
    current_state = await ctx.store.get("state")
    current_state["review"] = review
    await ctx.store.set("state", current_state)
    return "Report reviewed."
```
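Each tool follows the same get-update-set pattern against the shared state. Since `ctx.store` only exists inside a running workflow, here is a minimal plain-Python sketch of that pattern (the `state` dict, the helper name, and the sample note are illustrative, not LlamaIndex API):

```python
# Plain-dict stand-in for the workflow's shared state store.
state = {"research_notes": {}, "report_content": "Not written yet."}

def record_notes_sketch(state: dict, notes: str, notes_title: str) -> str:
    # Mirrors the async tool above: create the bucket if missing, then write.
    state.setdefault("research_notes", {})[notes_title] = notes
    return "Notes recorded."

record_notes_sketch(state, "The web was proposed at CERN in 1989.", "Origins")
print(state["research_notes"]["Origins"])  # The web was proposed at CERN in 1989.
```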
## Build a multi-agent assistant
To build a multi-agent system, you define the agents and their interactions. Your system will have three agents:

- A `ResearchAgent` searches the web for information on the given topic.
- A `WriteAgent` writes the report using the information found by the `ResearchAgent`.
- A `ReviewAgent` reviews the report and provides feedback.
This example uses the `AgentWorkflow` class to create a multi-agent system that will execute these agents in order. Each agent takes a `system_prompt` that tells it what it should do, and suggests how to work with the other agents. Optionally, you can help your multi-agent system by specifying which other agents it can talk to using `can_handoff_to` (if not, it will try to figure this out on its own).
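To make the effect of `can_handoff_to` concrete, here is a small illustrative sketch (plain Python, not LlamaIndex API) of the handoff graph this example uses: each agent may only pass control to the agents listed for it.

```python
# Which agents each agent is allowed to hand control to.
HANDOFFS = {
    "ResearchAgent": ["WriteAgent"],
    "WriteAgent": ["ReviewAgent", "ResearchAgent"],
    "ReviewAgent": ["ResearchAgent", "WriteAgent"],
}

def handoff_allowed(source: str, target: str) -> bool:
    return target in HANDOFFS.get(source, [])

print(handoff_allowed("ResearchAgent", "WriteAgent"))   # True
print(handoff_allowed("ResearchAgent", "ReviewAgent"))  # False
```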
```python
from llama_index.core.agent.workflow import (
    AgentInput,
    AgentOutput,
    ToolCall,
    ToolCallResult,
    AgentStream,
)
from llama_index.core.agent.workflow import FunctionAgent, ReActAgent

research_agent = FunctionAgent(
    name="ResearchAgent",
    description="Useful for searching the web for information on a given topic and recording notes on the topic.",
    system_prompt=(
        "You are the ResearchAgent that can search the web for information on a given topic and record notes on the topic. "
        "Once notes are recorded and you are satisfied, you should hand off control to the WriteAgent to write a report on the topic."
    ),
    llm=llm,
    tools=[search_web, record_notes],
    can_handoff_to=["WriteAgent"],
)

write_agent = FunctionAgent(
    name="WriteAgent",
    description="Useful for writing a report on a given topic.",
    system_prompt=(
        "You are the WriteAgent that can write a report on a given topic. "
        "Your report should be in a markdown format. The content should be grounded in the research notes. "
        "Once the report is written, you should get feedback at least once from the ReviewAgent."
    ),
    llm=llm,
    tools=[write_report],
    can_handoff_to=["ReviewAgent", "ResearchAgent"],
)

review_agent = FunctionAgent(
    name="ReviewAgent",
    description="Useful for reviewing a report and providing feedback.",
    system_prompt=(
        "You are the ReviewAgent that can review a report and provide feedback. "
        "Your feedback should either approve the current report or request changes for the WriteAgent to implement."
    ),
    llm=llm,
    tools=[review_report],
    can_handoff_to=["ResearchAgent", "WriteAgent"],
)
```
Now that the agents are defined, you can create the `AgentWorkflow` and run it.
```python
from llama_index.core.agent.workflow import AgentWorkflow

agent_workflow = AgentWorkflow(
    agents=[research_agent, write_agent, review_agent],
    root_agent=research_agent.name,
    initial_state={
        "research_notes": {},
        "report_content": "Not written yet.",
        "review": "Review required.",
    },
)
```
During execution of the workflow, you can stream events, tool calls and updates to the console.
```python
from llama_index.core.agent.workflow import (
    AgentInput,
    AgentOutput,
    ToolCall,
    ToolCallResult,
    AgentStream,
)

research_topic = """Write me a report on the history of the web.
Briefly describe the history of the world wide web, including
the development of the internet and the development of the web,
including 21st century developments"""

handler = agent_workflow.run(
    user_msg=research_topic
)

current_agent = None
current_tool_calls = ""

async for event in handler.stream_events():
    if (
        hasattr(event, "current_agent_name")
        and event.current_agent_name != current_agent
    ):
        current_agent = event.current_agent_name
        print(f"\n{'='*50}")
        print(f"🤖 Agent: {current_agent}")
        print(f"{'='*50}\n")
    elif isinstance(event, AgentOutput):
        if event.response.content:
            print("📤 Output:", event.response.content)
        if event.tool_calls:
            print(
                "🛠️ Planning to use tools:",
                [call.tool_name for call in event.tool_calls],
            )
    elif isinstance(event, ToolCallResult):
        print(f"🔧 Tool Result ({event.tool_name}):")
        print(f"  Arguments: {event.tool_kwargs}")
        print(f"  Output: {event.tool_output}")
    elif isinstance(event, ToolCall):
        print(f"🔨 Calling Tool: {event.tool_name}")
        print(f"  With arguments: {event.tool_kwargs}")
```
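The branching in this loop is a common pattern: `isinstance`-based dispatch over a stream of typed events. Stripped of LlamaIndex, the idea looks like this (the `Toy*` event classes and `render` helper below are illustrative stand-ins, not LlamaIndex types):

```python
from dataclasses import dataclass

# Toy stand-ins for workflow event types (illustrative only).
@dataclass
class ToyToolCall:
    tool_name: str

@dataclass
class ToyAgentOutput:
    content: str

def render(event) -> str:
    # Route each event by its concrete type, as the streaming loop does.
    if isinstance(event, ToyToolCall):
        return f"calling {event.tool_name}"
    if isinstance(event, ToyAgentOutput):
        return f"output: {event.content}"
    return "unknown event"

print(render(ToyToolCall("search_web")))  # calling search_web
```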
After the workflow is complete, you can print the final output of the report, as well as the final review state from the review agent.
```python
state = await handler.ctx.store.get("state")
print("Report Content:\n", state["report_content"])
print("\n------------\nFinal Review:\n", state["review"])
```
## Go further with custom workflows
The `AgentWorkflow` is a great way to get started with multi-agent systems. But what if you need more control? You can build a workflow from scratch. Here are some reasons why you might want to build your own workflow:
- More control over the process: You can decide the exact path your agents take. This includes creating loops, making decisions at certain points, or having agents work in parallel on different tasks.
- Use complex data: Go beyond simple text. Custom workflows let you use more structured data, like JSON objects or custom classes, for your inputs and outputs.
- Work with different media: Build agents that can understand and process not just text, but also images, audio, and video.
- Smarter planning: You can design a workflow that first creates a detailed plan before the agents start working. This is useful for complex tasks that require multiple steps.
- Enable self-correction: Create agents that can review their own work. If the output isn't good enough, the agent can try again, creating a loop of improvement until the result is perfect.
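For instance, the self-correction idea can be sketched in plain Python as a draft-review loop, with a toy reviewer standing in for a real review agent (everything below is illustrative, not LlamaIndex API):

```python
def toy_review(draft: str) -> str:
    # Toy reviewer: approve once the draft mentions the 21st century.
    if "21st century" in draft:
        return "approved"
    return "please add 21st century developments"

def write_with_review(max_rounds: int = 3) -> str:
    draft = "A short history of the web."
    for _ in range(max_rounds):
        if toy_review(draft) == "approved":
            break
        # Revise the draft based on the feedback, then loop to re-review.
        draft += " It also covers 21st century developments."
    return draft

print(write_with_review())
```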
To learn more about LlamaIndex Workflows, see the LlamaIndex Workflows Documentation.