In this tutorial, we build a complete multi-agent research team system using LangGraph and Google's Gemini API. We use role-specific agents, Researcher, Analyst, Writer, and Supervisor, each responsible for a specific part of the research pipeline. Together, these agents collaboratively gather information, analyze insights, synthesize a report, and coordinate the workflow. We also incorporate features like memory persistence, agent coordination, custom agents, and performance monitoring. By the end of the setup, we can run automated, intelligent research sessions that generate structured reports on any given topic.
!pip install langgraph langchain-google-genai langchain-community langchain-core python-dotenv
import os
from typing import Annotated, List, Tuple, Union
from typing_extensions import TypedDict
import operator
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
import functools
import getpass
GOOGLE_API_KEY = getpass.getpass("Enter your Google API Key: ")
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
We begin by installing the required libraries, including LangGraph and LangChain's Google Gemini integration. Then, we import the essential modules and set up the environment by securely entering the Google API key using the getpass module. This ensures we can authenticate our Gemini LLM without exposing the key in the code.
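Before wiring up any agents, it can help to confirm that the key and model are reachable. The following is a minimal, optional sanity check, assuming the imports and API key step above have already run; it makes one small live Gemini call and is not part of the original pipeline.
# Optional smoke test: verify the Gemini credentials before building the graph.
# Assumes GOOGLE_API_KEY is already set in the environment (see the getpass step above).
test_llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)
print(test_llm.invoke("Reply with the single word: ready").content)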
class AgentState(TypedDict):
"""State shared between all brokers inside the graph"""
messages: Annotated[list, operator.add]
next: str
current_agent: str
research_topic: str
findings: dict
final_report: str
class AgentResponse(TypedDict):
"""Commonplace response format for all brokers"""
content material materials: str
next_agent: str
findings: dict
def create_llm(temperature: float = 0.1, model: str = "gemini-1.5-flash") -> ChatGoogleGenerativeAI:
"""Create a configured Gemini LLM event"""
return ChatGoogleGenerativeAI(
model=model,
temperature=temperature,
google_api_key=os.environ["GOOGLE_API_KEY"]
)
We define two TypedDict classes to maintain structured state and responses shared across all agents in the LangGraph. AgentState tracks messages, workflow status, the research topic, and collected findings, while AgentResponse standardizes each agent's output. We also create a helper function to instantiate the Gemini LLM with a specified model and temperature, ensuring consistent behavior across all agents.
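The Annotated[list, operator.add] reducer on the messages field is what lets each node return only its new messages while LangGraph appends them to the running history. A rough illustration of that merge behavior, outside the graph:
# Illustrative only: how the operator.add reducer combines message lists.
# LangGraph applies this merge for us when nodes return partial state updates.
existing = [HumanMessage(content="Research the topic: AI in Healthcare")]
update = [AIMessage(content="Initial findings: ...")]
merged = operator.add(existing, update)  # equivalent to existing + update
print(len(merged))  # 2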
def create_research_agent(llm: ChatGoogleGenerativeAI) -> callable:
"""Creates a evaluation specialist agent for preliminary information gathering"""
research_prompt = ChatPromptTemplate.from_messages([
("system", """You are a Research Specialist AI. Your role is to:
1. Analyze the research topic thoroughly
2. Identify key areas that need investigation
3. Provide initial research findings and insights
4. Suggest specific angles for deeper analysis
Focus on providing comprehensive, accurate information and clear research directions.
Always structure your response with clear sections and bullet points.
"""),
MessagesPlaceholder(variable_name="messages"),
("human", "Research Topic: {research_topic}")
])
research_chain = research_prompt | llm
def research_agent(state: AgentState) -> AgentState:
"""Execute evaluation analysis"""
attempt:
response = research_chain.invoke({
"messages": state["messages"],
"research_topic": state["research_topic"]
})
findings = {
"research_overview": response.content material materials,
"key_areas": ["area1", "area2", "area3"],
"initial_insights": response.content material materials[:500] + "..."
}
return {
"messages": state["messages"] + [AIMessage(content=response.content)],
"subsequent": "analyst",
"current_agent": "researcher",
"research_topic": state["research_topic"],
"findings": {**state.get("findings", {}), "evaluation": findings},
"final_report": state.get("final_report", "")
}
except Exception as e:
error_msg = f"Research agent error: {str(e)}"
return {
"messages": state["messages"] + [AIMessage(content=error_msg)],
"subsequent": "analyst",
"current_agent": "researcher",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
return research_agent
We now create our first specialized agent, the Research Specialist AI. This agent is prompted to deeply analyze a given topic, extract key areas of interest, and suggest directions for further exploration. Using a ChatPromptTemplate, we define its behavior and connect it with our Gemini LLM. The research_agent function executes this logic, updates the shared state with findings and messages, and passes control to the next agent in line, the Analyst.
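If we want to sanity-check a single node before wiring the full graph, we can call the researcher directly on a hand-built state. This is a minimal sketch, assuming the API key step above has run; it makes one live Gemini call, and the sample topic is arbitrary.
# Exercise the researcher node in isolation on a hand-built AgentState.
llm = create_llm()
researcher = create_research_agent(llm)
sample_state: AgentState = {
    "messages": [HumanMessage(content="Research the topic: AI in Healthcare")],
    "next": "researcher",
    "current_agent": "start",
    "research_topic": "AI in Healthcare",
    "findings": {},
    "final_report": ""
}
updated = researcher(sample_state)   # one Gemini call
print(updated["next"])               # -> "analyst"
print(updated["findings"]["research"]["research_overview"][:200])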
def create_analyst_agent(llm: ChatGoogleGenerativeAI) -> callable:
"""Creates a information analyst agent for deep analysis"""
analyst_prompt = ChatPromptTemplate.from_messages([
("system", """You are a Data Analyst AI. Your role is to:
1. Analyze data and information provided by the research team
2. Identify patterns, trends, and correlations
3. Provide statistical insights and data-driven conclusions
4. Suggest actionable recommendations based on analysis
Focus on quantitative analysis, data interpretation, and evidence-based insights.
Use clear metrics and concrete examples in your analysis.
"""),
MessagesPlaceholder(variable_name="messages"),
("human", "Analyze the research findings for: {research_topic}")
])
analyst_chain = analyst_prompt | llm
def analyst_agent(state: AgentState) -> AgentState:
"""Execute information analysis"""
attempt:
response = analyst_chain.invoke({
"messages": state["messages"],
"research_topic": state["research_topic"]
})
analysis_findings = {
"analysis_summary": response.content material materials,
"key_metrics": ["metric1", "metric2", "metric3"],
"solutions": response.content material materials.lower up("solutions:")[-1] if "solutions:" in response.content material materials.lower() else "No explicit solutions found"
}
return {
"messages": state["messages"] + [AIMessage(content=response.content)],
"subsequent": "writer",
"current_agent": "analyst",
"research_topic": state["research_topic"],
"findings": {**state.get("findings", {}), "analysis": analysis_findings},
"final_report": state.get("final_report", "")
}
except Exception as e:
error_msg = f"Analyst agent error: {str(e)}"
return {
"messages": state["messages"] + [AIMessage(content=error_msg)],
"subsequent": "writer",
"current_agent": "analyst",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
return analyst_agent
We now define the Data Analyst AI, which dives deeper into the research findings generated by the previous agent. This agent identifies key patterns, trends, and metrics, offering actionable insights backed by evidence. Using a tailored system prompt and the Gemini LLM, the analyst_agent function enriches the state with structured analysis, preparing the groundwork for the report writer to synthesize everything into a final document.
def create_writer_agent(llm: ChatGoogleGenerativeAI) -> callable:
"""Creates a report writer agent for remaining documentation"""
writer_prompt = ChatPromptTemplate.from_messages([
("system", """You are a Report Writer AI. Your role is to:
1. Synthesize all research and analysis into a comprehensive report
2. Create clear, professional documentation
3. Ensure proper structure with executive summary, findings, and conclusions
4. Make complex information accessible to various audiences
Focus on clarity, completeness, and professional presentation.
Include specific examples and actionable insights.
"""),
MessagesPlaceholder(variable_name="messages"),
("human", "Create a comprehensive report for: {research_topic}")
])
writer_chain = writer_prompt | llm
def writer_agent(state: AgentState) -> AgentState:
"""Execute report writing"""
try:
response = writer_chain.invoke({
"messages": state["messages"],
"research_topic": state["research_topic"]
})
return {
"messages": state["messages"] + [AIMessage(content=response.content)],
"subsequent": "supervisor",
"current_agent": "writer",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": response.content material materials
}
except Exception as e:
error_msg = f"Writer agent error: {str(e)}"
return {
"messages": state["messages"] + [AIMessage(content=error_msg)],
"subsequent": "supervisor",
"current_agent": "writer",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": f"Error producing report: {str(e)}"
}
return writer_agent
We now create the Report Writer AI, which is responsible for transforming the collected research and analysis into a polished, structured document. This agent synthesizes all earlier insights into a clear, professional report with an executive summary, detailed findings, and conclusions. By invoking the Gemini model with a structured prompt, the writer agent updates the final report in the shared state and hands control over to the Supervisor agent for review.
def create_supervisor_agent(llm: ChatGoogleGenerativeAI, members: List[str]) -> callable:
"""Creates a supervisor agent to coordinate the team"""
options = ["FINISH"] + members
supervisor_prompt = ChatPromptTemplate.from_messages([
("system", f"""You are a Supervisor AI managing a research team. Your team members are:
{', '.join(members)}
Your responsibilities:
1. Coordinate the workflow between team members
2. Ensure each agent completes their specialized tasks
3. Determine when the research is complete
4. Maintain quality standards throughout the process
Given the conversation, determine the next step:
- If research is needed: route to "researcher"
- If analysis is needed: route to "analyst"
- If report writing is needed: route to "writer"
- If work is complete: route to "FINISH"
Available options: {options}
Respond with just the name of the next agent or "FINISH".
"""),
MessagesPlaceholder(variable_name="messages"),
("human", "Current status: {current_agent} just completed their task for topic: {research_topic}")
])
supervisor_chain = supervisor_prompt | llm
def supervisor_agent(state: AgentState) -> AgentState:
"""Execute supervisor coordination"""
try:
response = supervisor_chain.invoke({
"messages": state["messages"],
"current_agent": state.get("current_agent", "none"),
"research_topic": state["research_topic"]
})
next_agent = response.content.strip().lower()
if "finish" in next_agent or "complete" in next_agent:
next_step = "FINISH"
elif "research" in next_agent:
next_step = "researcher"
elif "analy" in next_agent:
next_step = "analyst"
elif "writ" in next_agent:
next_step = "writer"
else:
current = state.get("current_agent", "")
if current == "researcher":
next_step = "analyst"
elif current == "analyst":
next_step = "writer"
elif current == "writer":
next_step = "FINISH"
else:
next_step = "researcher"
return {
"messages": state["messages"] + [AIMessage(content=f"Supervisor decision: Next agent is {next_step}")],
"subsequent": next_step,
"current_agent": "supervisor",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
except Exception as e:
error_msg = f"Supervisor error: {str(e)}"
return {
"messages": state["messages"] + [AIMessage(content=error_msg)],
"subsequent": "FINISH",
"current_agent": "supervisor",
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
return supervisor_agent
We now bring in the Supervisor AI, which oversees and orchestrates the entire multi-agent workflow. This agent evaluates the current progress, recognizing which team member just completed their task, and intelligently decides the next step: whether to continue with research, proceed to analysis, initiate report writing, or mark the project as complete. By parsing the conversation context and using Gemini for reasoning, the supervisor agent ensures clean transitions and quality control throughout the research pipeline.
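Note that if the model's reply contains no recognizable keyword, the supervisor falls back to a fixed progression rather than stalling. A compact restatement of that fallback, useful when reasoning about the control flow (this mapping is illustrative, not part of the agent code):
# Deterministic fallback order used when the supervisor's reply is unparseable.
FALLBACK_ORDER = {"researcher": "analyst", "analyst": "writer", "writer": "FINISH"}
# e.g. after the analyst completes, an unparseable reply still advances to the writer
print(FALLBACK_ORDER.get("analyst", "researcher"))  # -> writer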
def create_research_team_graph() -> StateGraph:
"""Creates the entire evaluation workers workflow graph"""
llm = create_llm()
members = ["researcher", "analyst", "writer"]
researcher = create_research_agent(llm)
analyst = create_analyst_agent(llm)
writer = create_writer_agent(llm)
supervisor = create_supervisor_agent(llm, members)
workflow = StateGraph(AgentState)
workflow.add_node("researcher", researcher)
workflow.add_node("analyst", analyst)
workflow.add_node("writer", writer)
workflow.add_node("supervisor", supervisor)
workflow.add_edge("researcher", "supervisor")
workflow.add_edge("analyst", "supervisor")
workflow.add_edge("writer", "supervisor")
workflow.add_conditional_edges(
"supervisor",
lambda x: x["next"],
{
"researcher": "researcher",
"analyst": "analyst",
"writer": "writer",
"FINISH": END
}
)
workflow.set_entry_point("supervisor")
return workflow
def compile_research_team():
"""Compile the evaluation workers graph with memory"""
workflow = create_research_team_graph()
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
return app
def run_research_team(topic: str, thread_id: str = "research_session_1"):
"""Run the complete research team workflow"""
app = compile_research_team()
initial_state = {
"messages": [HumanMessage(content=f"Research the topic: {topic}")],
"research_topic": matter,
"subsequent": "researcher",
"current_agent": "start",
"findings": {},
"final_report": ""
}
config = {"configurable": {"thread_id": thread_id}}
print(f"🔍 Starting evaluation on: {matter}")
print("=" * 50)
try:
final_state = None
for step, state in enumerate(app.stream(initial_state, config=config)):
print(f"n📍 Step {step + 1}: {guidelines(state.keys())[0]}")
current_state = guidelines(state.values())[0]
if current_state["messages"]:
last_message = current_state["messages"][-1]
if isinstance(last_message, AIMessage):
print(f"💬 {last_message.content material materials[:200]}...")
final_state = current_state
if step > 10:
print("⚠️ Most steps reached. Stopping execution.")
break
return final_state
except Exception as e:
print(f"❌ Error during execution: {str(e)}")
return None
We now assemble and execute the complete multi-agent workflow using LangGraph. First, we define the research team graph, which consists of nodes for each agent, Researcher, Analyst, Writer, and Supervisor, connected by logical transitions. Then, we compile this graph with memory using MemorySaver to persist conversation history. Finally, the run_research_team() function initializes the process with a topic and streams execution step by step, allowing us to track each agent's contribution in real time. This orchestration ensures a fully automated, collaborative research pipeline.
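Because the graph is compiled with a MemorySaver checkpointer, the state of a session can be inspected afterwards through its thread_id on the same compiled app (MemorySaver keeps checkpoints in process memory only). The following is a small sketch, not part of the original tutorial; the topic and thread_id are arbitrary, and get_state returns the latest checkpoint for graphs compiled with a checkpointer.
# Run one session, then read back its checkpointed state by thread_id.
app = compile_research_team()
config = {"configurable": {"thread_id": "inspect_demo"}}
initial_state = {
    "messages": [HumanMessage(content="Research the topic: Edge AI")],
    "research_topic": "Edge AI",
    "next": "researcher",
    "current_agent": "start",
    "findings": {},
    "final_report": ""
}
app.invoke(initial_state, config=config)   # run the full workflow once
snapshot = app.get_state(config)           # latest checkpoint for this thread
print(snapshot.values["current_agent"], len(snapshot.values["messages"]))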
if __name__ == "__main__":
result = run_research_team("Artificial Intelligence in Healthcare")
if result:
print("\n" + "=" * 50)
print("📊 FINAL RESULTS")
print("=" * 50)
print(f"🏁 Final Agent: {result['current_agent']}")
print(f"📋 Findings: {len(result['findings'])} sections")
print(f"📄 Report Length: {len(result['final_report'])} characters")
if result['final_report']:
print("\n📄 FINAL REPORT:")
print("-" * 30)
print(result['final_report'])
def interactive_research_session():
"""Run an interactive evaluation session"""
app = compile_research_team()
print("🎯 Interactive Evaluation Employees Session")
print("Enter 'surrender' to exitn")
session_count = 0
whereas True:
matter = enter("🔍 Enter evaluation matter: ").strip()
if matter.lower() in ['quit', 'exit', 'q']:
print("👋 Goodbye!")
break
if not matter:
print("❌ Please enter a sound matter.")
proceed
session_count += 1
thread_id = f"interactive_session_{session_count}"
finish end result = run_research_team(matter, thread_id)
if finish end result and finish end result['final_report']:
print(f"n✅ Evaluation achieved for: {matter}")
print(f"📄 Report preview: {finish end result['final_report'][:300]}...")
show_full = enter("n📖 Current full report? (y/n): ").lower()
if show_full.startswith('y'):
print("n" + "=" * 60)
print("📄 COMPLETE RESEARCH REPORT")
print("=" * 60)
print(finish end result['final_report'])
print("n" + "-" * 50)
def create_custom_agent(role: str, instructions: str, llm: ChatGoogleGenerativeAI) -> callable:
"""Create a custom agent with a specific role and instructions"""
custom_prompt = ChatPromptTemplate.from_messages([
("system", f"""You are a {role} AI.
Your specific instructions:
{instructions}
Always provide detailed, professional responses relevant to your role.
"""),
MessagesPlaceholder(variable_name="messages"),
("human", "Task: {task}")
])
custom_chain = custom_prompt | llm
def custom_agent(state: AgentState) -> AgentState:
"""Execute custom-made agent job"""
attempt:
response = custom_chain.invoke({
"messages": state["messages"],
"job": state["research_topic"]
})
return {
"messages": state["messages"] + [AIMessage(content=response.content)],
"subsequent": "supervisor",
"current_agent": place.lower().change(" ", "_"),
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
except Exception as e:
error_msg = f"{role} agent error: {str(e)}"
return {
"messages": state["messages"] + [AIMessage(content=error_msg)],
"subsequent": "supervisor",
"current_agent": place.lower().change(" ", "_"),
"research_topic": state["research_topic"],
"findings": state.get("findings", {}),
"final_report": state.get("final_report", "")
}
return custom_agent
We wrap up our system with runtime and customization capabilities. The __main__ block lets us trigger a research run directly, making it ideal for testing the pipeline with a real-world topic such as Artificial Intelligence in Healthcare. For more dynamic use, interactive_research_session() allows multiple topic queries in a loop, simulating real-time exploration. Finally, the create_custom_agent() function lets us integrate new agents with unique roles and instructions, making the framework flexible and extensible for specialized workflows; a sketch of wiring such an agent into the graph follows below.
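The following is a minimal sketch of how a custom agent could be wired into a rebuilt graph, assuming an extra "fact_checker" member; the role name and its instructions are hypothetical, not part of the original team. Note that the supervisor's keyword parsing only recognizes researcher/analyst/writer, so for the new node to actually be selected, the supervisor prompt and parsing would also need to mention it.
# Sketch: rebuild the graph with one additional custom agent node.
llm = create_llm()
fact_checker = create_custom_agent(
    "Fact Checker",
    "Verify the claims made in the research findings and flag anything unsupported.",
    llm
)
members = ["researcher", "analyst", "writer", "fact_checker"]
workflow = StateGraph(AgentState)
workflow.add_node("researcher", create_research_agent(llm))
workflow.add_node("analyst", create_analyst_agent(llm))
workflow.add_node("writer", create_writer_agent(llm))
workflow.add_node("fact_checker", fact_checker)
workflow.add_node("supervisor", create_supervisor_agent(llm, members))
for member in members:
    workflow.add_edge(member, "supervisor")   # every worker reports back to the supervisor
workflow.add_conditional_edges(
    "supervisor",
    lambda x: x["next"],
    {
        "researcher": "researcher",
        "analyst": "analyst",
        "writer": "writer",
        "fact_checker": "fact_checker",
        "FINISH": END,
    },
)
workflow.set_entry_point("supervisor")
custom_app = workflow.compile(checkpointer=MemorySaver())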
def visualize_graph():
"""Visualize the evaluation workers graph building"""
attempt:
app = compile_research_team()
graph_repr = app.get_graph()
print("🗺️ Evaluation Employees Graph Building")
print("=" * 40)
print(f"Nodes: {guidelines(graph_repr.nodes.keys())}")
print(f"Edges: {[(edge.source, edge.target) for edge in graph_repr.edges]}")
try:
graph_repr.draw_mermaid()
except Exception:
print("📊 Visual graph output requires the mermaid-py package")
print("Install with: !pip install mermaid-py")
except Exception as e:
print(f"❌ Error visualizing graph: {str(e)}")
import time
from datetime import datetime
def monitor_research_performance(topic: str):
"""Monitor and report performance metrics"""
start_time = time.time()
print(f"⏱️ Starting performance monitoring for: {topic}")
result = run_research_team(topic, f"perf_test_{int(time.time())}")
end_time = time.time()
duration = end_time - start_time
metrics = {
"duration": duration,
"total_messages": len(result["messages"]) if result else 0,
"findings_sections": len(result["findings"]) if result else 0,
"report_length": len(result["final_report"]) if result and result["final_report"] else 0,
"success": result is not None
}
print("\n📊 PERFORMANCE METRICS")
print("=" * 30)
print(f"⏱️ Duration: {duration:.2f} seconds")
print(f"💬 Total Messages: {metrics['total_messages']}")
print(f"📋 Findings Sections: {metrics['findings_sections']}")
print(f"📄 Report Length: {metrics['report_length']} chars")
print(f"✅ Success: {metrics['success']}")
return metrics
def quick_start_demo():
"""Full demo of the evaluation workers system"""
print("🚀 LangGraph Evaluation Employees - Quick Start Demo")
print("=" * 50)
topics = [
"Climate Change Impact on Agriculture",
"Quantum Computing Applications",
"Digital Privacy in the Modern Age"
]
for i, matter in enumerate(topics, 1):
print(f"n🔍 Demo {i}: {matter}")
print("-" * 40)
attempt:
finish end result = run_research_team(matter, f"demo_{i}")
if finish end result and finish end result['final_report']:
print(f"✅ Evaluation achieved effectively!")
print(f"📊 Report preview: {finish end result['final_report'][:150]}...")
else:
print("❌ Evaluation failed")
moreover Exception as e:
print(f"❌ Error in demo {i}: {str(e)}")
print("n" + "="*30)
print("🎉 Demo achieved!")
quick_start_demo()
We finalize the system by adding powerful utilities for graph visualization, performance monitoring, and a quick-start demo. The visualize_graph() function provides a structural overview of agent connections, ideal for debugging or presentation purposes. monitor_research_performance() tracks runtime, message volume, and report length, helping us evaluate the system's efficiency. Finally, quick_start_demo() runs several sample research topics in sequence, showing how seamlessly the agents collaborate to generate insightful reports.
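Neither visualize_graph() nor monitor_research_performance() is invoked by the demo itself, so here is a short usage sketch; the topic string is only an example, and the monitoring call runs a full live research session.
# Inspect the graph structure, then time a single end-to-end run.
visualize_graph()
metrics = monitor_research_performance("Renewable Energy Storage Trends")
print(metrics["duration"], metrics["report_length"])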
In conclusion, we have successfully built and tested a fully functional, modular AI research assistant framework using LangGraph. With clear agent roles and automated task routing, we streamline research from raw topic input to a well-structured final report. Whether we use the quick-start demo, run interactive sessions, or monitor performance, this system lets us tackle complex research tasks with minimal intervention. We are now equipped to adapt or extend this setup further by integrating custom agents, visualizing workflows, or even deploying it into real-world applications.

