In this advanced tutorial, we aim to build a multi-agent task automation system using the PrimisAI Nexus framework, which is fully integrated with the OpenAI API. Our main goal is to show how hierarchical supervision, intelligent tool use, and structured outputs can coordinate multiple AI agents to carry out sophisticated tasks, ranging from planning and development to quality assurance and data analysis. As we walk through each part, we don't merely build individual agents; we architect a collaborative ecosystem where each agent has a clear role, responsibilities, and purpose-built tools to accomplish the task.
!pip install primisai openai nest-asyncio
import os
import nest_asyncio
from primisai.nexus.core import AI, Agent, Supervisor
from primisai.nexus.utils.debugger import Debugger
import json
nest_asyncio.apply()
We begin by installing the core dependencies: PrimisAI for agent orchestration, OpenAI for LLM access, and nest_asyncio to handle Colab's event loop quirks. Applying nest_asyncio ensures the notebook can execute asynchronous tasks seamlessly, a key requirement for multi-agent execution.
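As a quick sanity check on the patched loop, the sketch below (our own illustration, not part of the tutorial's agent code) runs a trivial coroutine the way a notebook cell would; the `ping` coroutine is a hypothetical stand-in for an agent's async call:

```python
import asyncio

try:
    # In Colab/Jupyter a loop is already running; nest_asyncio patches
    # asyncio so nested run calls do not raise "loop is already running".
    import nest_asyncio
    nest_asyncio.apply()
except ImportError:
    pass  # outside a notebook, a fresh event loop works without patching

async def ping():
    # trivial coroutine standing in for an asynchronous agent request
    return "pong"

result = asyncio.run(ping())
print(result)  # → pong
```

With the patch applied, the same pattern works inside a notebook cell even though Jupyter already owns an event loop.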
print("🚀 PrimisAI Nexus Advanced Tutorial with OpenAI API")
print("=" * 55)
os.environ["OPENAI_API_KEY"] = "Use Your API Key Here"
# llm_config = {
# "api_key": os.environ["OPENAI_API_KEY"],
# "model": "gpt-4o-mini",
# "base_url": "https://api.openai.com/v1",
# "temperature": 0.7
# }
llm_config = {
"api_key": os.environ["OPENAI_API_KEY"],
"model": "gpt-3.5-turbo",
"base_url": "https://api.openai.com/v1",
"temperature": 0.7
}
print("📋 API Configuration:")
print(f"• Model: {llm_config['model']}")
print(f"• Base URL: {llm_config['base_url']}")
print("• Note: OpenAI has limited free tokens through April 2025")
print("• Alternative: Consider Puter.js for unlimited free access")
To power our agents, we connect to OpenAI's models, starting with gpt-3.5-turbo for cost-efficient tasks. We store our API key in an environment variable and build a configuration dictionary specifying the model, temperature, and base URL. This setup lets us flexibly switch between models, such as gpt-4o-mini or gpt-4o, depending on task complexity and cost.
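Because the configuration is a plain dictionary, switching models is a one-line change. A minimal sketch (the keys mirror the llm_config above; the gpt-4o-mini variant and the lower temperature are just illustrative choices):

```python
import os

# Base config, as defined in the tutorial; the key is read from the
# environment (placeholder fallback so the sketch runs standalone)
llm_config = {
    "api_key": os.environ.get("OPENAI_API_KEY", "sk-placeholder"),
    "model": "gpt-3.5-turbo",
    "base_url": "https://api.openai.com/v1",
    "temperature": 0.7,
}

# Derive a variant for harder tasks without mutating the original dict
llm_config_strong = {**llm_config, "model": "gpt-4o-mini", "temperature": 0.3}

print(llm_config["model"], "->", llm_config_strong["model"])
```

Passing a different config dict to an individual Agent or Supervisor lets each tier of the hierarchy use a model matched to its workload.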
code_schema = {
    "type": "object",
    "properties": {
        "description": {"type": "string", "description": "Code explanation"},
        "code": {"type": "string", "description": "Python code implementation"},
        "language": {"type": "string", "description": "Programming language"},
        "complexity": {"type": "string", "enum": ["beginner", "intermediate", "advanced"]},
        "test_cases": {"type": "array", "items": {"type": "string"}, "description": "Example usage"}
    },
    "required": ["description", "code", "language"]
}
analysis_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string", "description": "Brief analysis summary"},
        "insights": {"type": "array", "items": {"type": "string"}, "description": "Key insights"},
        "recommendations": {"type": "array", "items": {"type": "string"}, "description": "Action items"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        "methodology": {"type": "string", "description": "Analysis method used"}
    },
    "required": ["summary", "insights", "confidence"]
}
planning_schema = {
    "type": "object",
    "properties": {
        "tasks": {"type": "array", "items": {"type": "string"}, "description": "List of tasks to complete"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        "estimated_time": {"type": "string", "description": "Time estimate"},
        "dependencies": {"type": "array", "items": {"type": "string"}, "description": "Task dependencies"}
    },
    "required": ["tasks", "priority"]
}
We define JSON schemas for three agent types: CodeWriter, Data Analyst, and Project Planner. These schemas enforce structure in the agents' responses, making the output machine-readable and predictable. This helps ensure the system returns consistent data, such as code blocks, insights, or project timelines, even when different LLMs are behind the scenes.
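Before wiring the schemas into agents, it is worth checking what the "required" list buys us. The helper below is a hypothetical convenience of our own (not part of Nexus or jsonschema); it reports which required keys are missing from a parsed agent response:

```python
import json

# Mirrors the "required" list of code_schema defined above
code_schema_required = ["description", "code", "language"]

def missing_required(response_text, required_keys):
    """Return the required keys absent from a JSON agent response."""
    data = json.loads(response_text)
    return [key for key in required_keys if key not in data]

# A well-formed structured reply passes; one lacking "code" is flagged
good = '{"description": "adds numbers", "code": "def add(a, b): return a + b", "language": "python"}'
bad = '{"description": "adds numbers", "language": "python"}'
print(missing_required(good, code_schema_required))  # → []
print(missing_required(bad, code_schema_required))   # → ['code']
```

For full keyword support (enum, minimum/maximum, items) a real validator such as the jsonschema package would be the natural upgrade.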
def calculate_metrics(data_str):
    """Calculate comprehensive statistics for numerical data"""
    try:
        data = json.loads(data_str) if isinstance(data_str, str) else data_str
        if isinstance(data, list) and all(isinstance(x, (int, float)) for x in data):
            import statistics
            return {
                "mean": statistics.mean(data),
                "median": statistics.median(data),
                # mode is only meaningful when some value repeats
                "mode": statistics.mode(data) if len(set(data)) < len(data) else 0,
                "max": max(data),
                "min": min(data),
                "count": len(data),
                "sum": sum(data)
            }
        return {"error": "Invalid data format - expecting array of numbers"}
    except Exception as e:
        return {"error": f"Could not parse data: {str(e)}"}
def validate_code(code):
    """Advanced code validation with syntax and basic security checks"""
    try:
        dangerous_imports = ['os', 'subprocess', 'eval', 'exec', '__import__']
        security_warnings = []
        for danger in dangerous_imports:
            if danger in code:
                security_warnings.append(f"Potentially dangerous: {danger}")
        compile(code, '<string>', 'exec')
        return {
            "valid": True,
            "message": "Code syntax is valid",
            "security_warnings": security_warnings,
            "lines": len(code.split('\n'))
        }
    except SyntaxError as e:
        return {
            "valid": False,
            "message": f"Syntax error: {e}",
            "line": getattr(e, 'lineno', 'unknown'),
            "security_warnings": []
        }
def search_documentation(query):
    """Simulate searching documentation (placeholder function)"""
    docs = {
        "python": "Python is a high-level programming language",
        "list": "Lists are ordered, mutable collections in Python",
        "function": "Functions are reusable blocks of code",
        "class": "Classes define objects with attributes and methods"
    }
    results = []
    for key, value in docs.items():
        if query.lower() in key.lower():
            results.append(f"{key}: {value}")
    return {
        "query": query,
        "results": results if results else ["No documentation found"],
        "total_results": len(results)
    }
Next, we add custom tools that agents can call, such as calculate_metrics for statistical summaries, validate_code for syntax and security checks, and search_documentation for simulated programming help. These tools extend the agents' abilities, turning them from simple chatbots into interactive, utility-driven workers capable of autonomous reasoning and validation.
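Every tool registration later in the tutorial repeats the same metadata-plus-callable dictionary shape, so a small factory can keep that boilerplate in one place. This helper is our own convenience, not a Nexus API; it simply emits the single-string-parameter dict format the Agent constructors use (the stub validator below is a stand-in so the sketch runs on its own; in the notebook you would pass the real validate_code):

```python
def make_tool(fn, description, param_name, param_desc):
    """Wrap a plain function in the {metadata, tool} dict shape used for agent tools."""
    return {
        "metadata": {
            "function": {
                "name": fn.__name__,
                "description": description,
                "parameters": {
                    "type": "object",
                    "properties": {
                        param_name: {"type": "string", "description": param_desc}
                    },
                    "required": [param_name],
                },
            }
        },
        "tool": fn,  # the callable the framework invokes on the agent's behalf
    }

def validate_code(code):
    # stand-in for the real validator defined earlier in the tutorial
    return {"valid": True}

spec = make_tool(validate_code, "Validates Python code", "code", "Python code to validate")
print(spec["metadata"]["function"]["name"])  # → validate_code
```

A list of such specs can then be passed straight to an Agent's tools argument, shrinking each constructor call considerably.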
print("\n📋 Setting Up Multi-Agent Hierarchy with OpenAI")
main_supervisor = Supervisor(
    name="ProjectManager",
    llm_config=llm_config,
    system_message="You are a senior project manager coordinating development and analysis tasks. Delegate appropriately, provide clear summaries, and ensure quality delivery. Always consider time estimates and dependencies."
)
dev_supervisor = Supervisor(
    name="DevManager",
    llm_config=llm_config,
    is_assistant=True,
    system_message="You manage development tasks. Coordinate between coding, testing, and code review. Ensure best practices and security."
)
analysis_supervisor = Supervisor(
    name="AnalysisManager",
    llm_config=llm_config,
    is_assistant=True,
    system_message="You manage data analysis and research tasks. Ensure thorough analysis, statistical rigor, and actionable insights."
)
qa_supervisor = Supervisor(
    name="QAManager",
    llm_config=llm_config,
    is_assistant=True,
    system_message="You manage quality assurance and testing. Ensure thorough validation and documentation."
)
To simulate a real-world management structure, we create a multi-tiered hierarchy. A ProjectManager serves as the root supervisor, overseeing three assistant supervisors (DevManager, AnalysisManager, and QAManager), each responsible for domain-specific agents. This modular hierarchy allows tasks to flow down from high-level strategy to granular execution.
code_agent = Agent(
    name="CodeWriter",
    llm_config=llm_config,
    system_message="You are an expert Python developer. Write clean, efficient, well-documented code with proper error handling. Always include test cases and follow PEP 8 standards.",
    output_schema=code_schema,
    tools=[{
        "metadata": {
            "function": {
                "name": "validate_code",
                "description": "Validates Python code syntax and checks for security issues",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "code": {"type": "string", "description": "Python code to validate"}
                    },
                    "required": ["code"]
                }
            }
        },
        "tool": validate_code
    }, {
        "metadata": {
            "function": {
                "name": "search_documentation",
                "description": "Search for programming documentation and examples",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Documentation topic to search for"}
                    },
                    "required": ["query"]
                }
            }
        },
        "tool": search_documentation
    }],
    use_tools=True
)
review_agent = Agent(
    name="CodeReviewer",
    llm_config=llm_config,
    system_message="You are a senior code reviewer. Analyze code for best practices, efficiency, security, maintainability, and potential issues. Provide constructive feedback and suggestions.",
    keep_history=True,
    tools=[{
        "metadata": {
            "function": {
                "name": "validate_code",
                "description": "Validates code syntax and security",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "code": {"type": "string", "description": "Code to validate"}
                    },
                    "required": ["code"]
                }
            }
        },
        "tool": validate_code
    }],
    use_tools=True
)
analyst_agent = Agent(
    name="DataAnalyst",
    llm_config=llm_config,
    system_message="You are a data scientist specializing in statistical analysis and insight generation. Provide thorough analysis with confidence metrics and actionable recommendations.",
    output_schema=analysis_schema,
    tools=[{
        "metadata": {
            "function": {
                "name": "calculate_metrics",
                "description": "Calculates comprehensive statistics for numerical data",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "data_str": {"type": "string", "description": "JSON string of numerical data array"}
                    },
                    "required": ["data_str"]
                }
            }
        },
        "tool": calculate_metrics
    }],
    use_tools=True
)
planner_agent = Agent(
    name="ProjectPlanner",
    llm_config=llm_config,
    system_message="You are a project planning specialist. Break down complex projects into manageable tasks with realistic time estimates and clear dependencies.",
    output_schema=planning_schema
)
tester_agent = Agent(
    name="QATester",
    llm_config=llm_config,
    system_message="You are a QA specialist focused on comprehensive testing strategies, edge cases, and quality assurance.",
    tools=[{
        "metadata": {
            "function": {
                "name": "validate_code",
                "description": "Validates code for testing",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "code": {"type": "string", "description": "Code to test"}
                    },
                    "required": ["code"]
                }
            }
        },
        "tool": validate_code
    }],
    use_tools=True
)
We then build a diverse set of specialized agents: CodeWriter for generating Python code, CodeReviewer for reviewing logic and security, DataAnalyst for performing structured data analysis, ProjectPlanner for task breakdown, and QATester for quality checks. Each agent has domain-specific tools, output schemas, and system instructions tailored to its role.
dev_supervisor.register_agent(code_agent)
dev_supervisor.register_agent(review_agent)
analysis_supervisor.register_agent(analyst_agent)
qa_supervisor.register_agent(tester_agent)
main_supervisor.register_agent(dev_supervisor)
main_supervisor.register_agent(analysis_supervisor)
main_supervisor.register_agent(qa_supervisor)
main_supervisor.register_agent(planner_agent)
All agents are registered under their respective supervisors, and the assistant supervisors are, in turn, registered with the main supervisor. This setup creates a fully connected agent ecosystem, where instructions can cascade from the top-level agent to any specialist agent in the group.
print("\n🌳 Agent Hierarchy:")
main_supervisor.display_agent_graph()
print("\n🧪 Testing Full Multi-Agent Communication")
print("-" * 45)
try:
    test_response = main_supervisor.chat("Hello! Please introduce your team and explain how you coordinate complex projects.")
    print("✅ Supervisor communication test successful!")
    print(f"Response preview: {test_response[:200]}...")
except Exception as e:
    print(f"❌ Supervisor test failed: {str(e)}")
    print("Falling back to direct agent testing...")
We visualize the entire hierarchy using display_agent_graph() to confirm our structure. It gives a clear view of how each agent fits into the broader task management flow, a helpful diagnostic before deployment.
print("\n🎯 Complex Multi-Agent Task Execution")
print("-" * 40)
complex_task = """Create a Python function that implements a binary search algorithm,
have it reviewed for optimization, tested thoroughly, and provide a project plan
for integrating it into a larger search system."""
print(f"Complex Task: {complex_task}")
try:
    complex_response = main_supervisor.chat(complex_task)
    print("✅ Complex task completed")
    print(f"Response: {complex_response[:300]}...")
except Exception as e:
    print(f"❌ Complex task failed: {str(e)}")
We give the full system a real-world task: create a binary search function, review it, test it, and plan its integration into a larger project. The ProjectManager seamlessly coordinates agents across development, QA, and planning, demonstrating the real power of hierarchical, tool-driven agent orchestration.
print("\n🔧 Tool Integration & Structured Outputs")
print("-" * 43)
print("Testing Code Agent with tools...")
try:
    code_response = code_agent.chat("Create a function to calculate fibonacci numbers with memoization")
    print("✅ Code Agent with tools: Working")
    print(f"Response type: {type(code_response)}")
    if isinstance(code_response, str) and code_response.strip().startswith('{'):
        code_data = json.loads(code_response)
        print(f" - Description: {code_data.get('description', 'N/A')[:50]}...")
        print(f" - Language: {code_data.get('language', 'N/A')}")
        print(f" - Complexity: {code_data.get('complexity', 'N/A')}")
    else:
        print(f" - Raw response: {code_response[:100]}...")
except Exception as e:
    print(f"❌ Code Agent error: {str(e)}")
print("\nTesting Analyst Agent with tools...")
try:
    analysis_response = analyst_agent.chat("Analyze this sales data: [100, 150, 120, 180, 200, 175, 160, 190, 220, 185]. What trends do you see?")
    print("✅ Analyst Agent with tools: Working")
    if isinstance(analysis_response, str) and analysis_response.strip().startswith('{'):
        analysis_data = json.loads(analysis_response)
        print(f" - Summary: {analysis_data.get('summary', 'N/A')[:50]}...")
        print(f" - Confidence: {analysis_data.get('confidence', 'N/A')}")
        print(f" - Insights count: {len(analysis_data.get('insights', []))}")
    else:
        print(f" - Raw response: {analysis_response[:100]}...")
except Exception as e:
    print(f"❌ Analyst Agent error: {str(e)}")
We directly test the capabilities of two specialized agents using real prompts. We first ask the CodeWriter agent to generate a Fibonacci function with memoization and validate that it returns structured output containing a code description, language, and complexity level. Then, we evaluate the DataAnalyst agent by feeding it sample sales data to extract trends.
print("\n🔨 Manual Tool Usage")
print("-" * 22)
# Test all tools manually
sample_data = "[95, 87, 92, 88, 91, 89, 94, 90, 86, 93]"
metrics_result = calculate_metrics(sample_data)
print(f"Statistics for {sample_data}:")
for key, value in metrics_result.items():
    print(f" {key}: {value}")
print("\nCode validation test:")
test_code = """
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
"""
validation_result = validate_code(test_code)
print(f"Validation result: {validation_result}")
print("\nDocumentation search test:")
doc_result = search_documentation("python function")
print(f"Search results: {doc_result}")
We step outside the agent framework to test each tool directly. First, we run the calculate_metrics tool on a dataset of ten numbers, confirming it correctly returns statistics such as the mean, median, and mode. Next, we run the validate_code tool on a sample binary search function, which confirms syntactic correctness and flags no security warnings. Finally, we test the search_documentation tool with the query "python function" and receive relevant documentation snippets, verifying its ability to simulate contextual lookup effectively.
print("\n🚀 Advanced Multi-Agent Workflow")
print("-" * 35)
workflow_stages = [
("Planning", "Create a project plan for building a web scraper for news articles"),
("Development", "Implement the web scraper with error handling and rate limiting"),
("Review", "Review the web scraper code for security and efficiency"),
("Testing", "Create comprehensive test cases for the web scraper"),
("Analysis", "Analyze sample scraped data: [45, 67, 23, 89, 12, 56, 78, 34, 91, 43]")
]
workflow_results = {}
for stage, task in workflow_stages:
    print(f"\n{stage} Stage: {task}")
    try:
        if stage == "Planning":
            response = planner_agent.chat(task)
        elif stage == "Development":
            response = code_agent.chat(task)
        elif stage == "Review":
            response = review_agent.chat(task)
        elif stage == "Testing":
            response = tester_agent.chat(task)
        elif stage == "Analysis":
            response = analyst_agent.chat(task)
        workflow_results[stage] = response
        print(f"✅ {stage} completed: {response[:80]}...")
    except Exception as e:
        print(f"❌ {stage} failed: {str(e)}")
        workflow_results[stage] = f"Error: {str(e)}"
We simulate a five-stage project lifecycle: planning, development, review, testing, and analysis. Each task is routed to the most relevant agent, and responses are collected to evaluate performance. This demonstrates the framework's capacity to handle end-to-end workflows without manual intervention.
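Once the loop finishes, workflow_results maps each stage name to either a response string or an "Error: ..." marker, so a quick tally shows how much of the pipeline succeeded. A minimal sketch using that convention (the sample dict and its simulated failure are illustrative, not real agent output):

```python
def summarize_workflow(results):
    """Count completed vs. failed stages, using the 'Error:' prefix convention."""
    failed = [stage for stage, resp in results.items() if str(resp).startswith("Error:")]
    return {
        "total": len(results),
        "completed": len(results) - len(failed),
        "failed_stages": failed,
    }

# Hypothetical results dict with one simulated failure
workflow_results = {
    "Planning": '{"tasks": ["scope", "design"], "priority": "high"}',
    "Development": "def scrape(url): ...",
    "Review": "Error: rate limit exceeded",
}
print(summarize_workflow(workflow_results))
# → {'total': 3, 'completed': 2, 'failed_stages': ['Review']}
```

A summary like this is handy at the end of a notebook run to spot which stages need a retry before moving to production.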
print("\n📊 System Monitoring & Performance")
print("-" * 37)
debugger = Debugger(name="OpenAITutorialDebugger")
debugger.log("Advanced OpenAI tutorial execution completed successfully")
print(f"Main Supervisor ID: {main_supervisor.workflow_id}")
We activate the Debugger tool to trace the session and log system events. We also print the main supervisor's workflow_id as a traceable identifier, useful when managing multiple workflows in production.
In conclusion, we have successfully built a fully automated, OpenAI-compatible multi-agent system using PrimisAI Nexus. Each agent operates with clarity, precision, and autonomy, whether writing code, validating logic, analyzing data, or breaking down complex workflows. Our hierarchical structure allows for seamless task delegation and modular scalability. The PrimisAI Nexus framework establishes a strong foundation for automating real-world tasks, whether in software development, research, planning, or data operations, through intelligent collaboration between specialized agents.
Check out the Codes. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.

