In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face into a single, fully functional framework that runs without paid APIs. We begin by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we see how reasoning, planning, and execution can seamlessly combine into autonomous, intelligent behavior, entirely within our own control and environment. Check out the FULL CODES here.
import warnings
warnings.filterwarnings('ignore')

from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json

print("🚀 Loading models...\n")

# FLAN-T5 is an instruction-tuned seq2seq model, so we use the
# matching "text2text-generation" pipeline task.
pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    max_length=200,
    temperature=0.7
)
llm = HuggingFacePipeline(pipeline=pipe)
print("✓ Models loaded!\n")
We start by setting up the environment and bringing in all the necessary libraries. We initialize a Hugging Face FLAN-T5 pipeline as our local language model, ensuring it can generate coherent, contextually rich text. We confirm that everything loads correctly, laying the groundwork for the agentic experiments that follow.
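Before wiring agents to the real model, it helps to see the exact output shape the transformers pipeline returns: a list of dicts with a `generated_text` key. The stand-in below (a hypothetical helper, not part of the tutorial code) mimics that call signature, so the later agent code can be dry-run without downloading any model weights:

```python
# Stand-in that mimics the call signature used throughout this tutorial:
# pipe(prompt, max_length=...) -> [{"generated_text": "..."}].
def fake_pipeline(prompt: str, max_length: int = 200, **kwargs):
    reply = f"(stub) response to: {prompt}"
    # Truncate the reply, roughly as a real max_length cap would.
    return [{"generated_text": reply[:max_length]}]

out = fake_pipeline("Task: write a haiku", max_length=50)
print(out[0]["generated_text"])  # → "(stub) response to: Task: write a haiku"
```

Any object with this `pipe(prompt, ...)` interface can be passed wherever `pipe` is used below, which makes the agent logic testable offline.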
def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate the Fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✓ LangChain demo complete\n")
def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✓ Multi-step reasoning complete\n")
We explore LangChain's capabilities by building intelligent prompt templates that let our model reason through tasks. We assemble both a simple one-step chain and a multi-step reasoning flow that breaks complex goals into clear subtasks. We observe how LangChain enables structured thinking and turns plain instructions into detailed, actionable responses.
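The plan-then-execute pattern in Demo 2 can be sketched in plain Python, with the LLM abstracted as a callable so the control flow is visible on its own (the toy LLM below is a deterministic stand-in, not FLAN-T5):

```python
# Sketch of the plan-then-execute pattern: one prompt produces a plan,
# then each planned step is fed back through a second prompt.
from typing import Callable, List

def plan_and_execute(goal: str, llm: Callable[[str], str]) -> List[str]:
    plan = llm(f"Break down this goal into 3 steps: {goal}")
    steps = [s.strip() for s in plan.split("\n") if s.strip()]
    # Execute each planned step with an executor prompt, as exec_chain does.
    return [llm(f"Explain how to execute this step: {step}") for step in steps]

# Toy LLM: plans deterministically, "executes" by echoing the request.
toy_llm = lambda p: ("1. collect data\n2. train model\n3. evaluate"
                     if p.startswith("Break") else f"DONE: {p}")
results = plan_and_execute("Build a machine learning model", toy_llm)
print(len(results))  # → 3 (one execution result per planned step)
```

Swapping `toy_llm` for a real `chain.run`-style callable gives the same flow with actual model output.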
class SimpleAgent:
    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"

def demo_simple_agents():
    print("="*70)
    print("DEMO 3: Simple Multi-Agent System")
    print("="*70 + "\n")
    researcher = SimpleAgent("Researcher", "research specialist", pipe)
    coder = SimpleAgent("Coder", "Python developer", pipe)
    reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
    print("Agents created:", researcher, coder, reviewer, "\n")
    task = "Create a function to sort a list"
    print(f"Task: {task}\n")
    print(f"[{researcher.name}] Researching...")
    research = researcher.process(f"What is the best approach to: {task}")
    print(f"Research: {research[:100]}...\n")
    print(f"[{coder.name}] Coding...")
    code = coder.process(f"Write Python code to: {task}")
    print(f"Code: {code[:100]}...\n")
    print(f"[{reviewer.name}] Reviewing...")
    review = reviewer.process(f"Review this approach: {code[:50]}")
    print(f"Review: {review[:100]}...\n")
    print("✓ Multi-agent workflow complete\n")
We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging information and building on each other's outputs. We see how a coordinated multi-agent workflow can emulate teamwork, creativity, and self-organization in an automated setting.
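The researcher → coder → reviewer handoff can be reduced to a relay, where each agent's output becomes the next agent's input. The sketch below uses a stub LLM (a stand-in, not the tutorial's pipeline) so only the relay of context between roles is shown:

```python
# Role relay: each agent sees the previous agent's output as its input,
# and the full transcript is kept per role.
def stub_llm(prompt: str) -> str:
    return f"[answer to: {prompt}]"

def run_relay(task: str, roles: list) -> dict:
    transcript = {}
    context = task
    for role in roles:
        reply = stub_llm(f"You are a {role}. Handle: {context}")
        transcript[role] = reply
        context = reply  # pass the result downstream
    return transcript

log = run_relay("sort a list",
                ["research specialist", "Python developer", "code reviewer"])
print(list(log))  # → ['research specialist', 'Python developer', 'code reviewer']
```

This is the same shape as Demo 3, minus the per-agent memory, which `SimpleAgent.memory` adds on top.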
def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\n📝 AutoGen Key Features:")
    print(" • Automated agent chat conversations")
    print(" • Code execution capabilities")
    print(" • Human-in-the-loop support")
    print(" • Multi-agent collaboration")
    print(" • Tool/function calling\n")
    print("✓ AutoGen concepts explained\n")
class MockLLM:
    def __init__(self):
        self.responses = {
            "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
            "explain": "This is a recursive implementation of the Fibonacci sequence.",
            "review": "The code is correct but could be optimized with memoization.",
            "default": "I understand. Let me help with that task."
        }

    def generate(self, prompt: str) -> str:
        prompt_lower = prompt.lower()
        # Check "review"/"explain" before "code": a review request
        # usually mentions the word "code" as well.
        if "review" in prompt_lower:
            return self.responses["review"]
        elif "explain" in prompt_lower:
            return self.responses["explain"]
        elif "code" in prompt_lower or "function" in prompt_lower:
            return self.responses["code"]
        return self.responses["default"]
def demo_autogen_with_mock():
    print("="*70)
    print("DEMO 5: AutoGen with Custom LLM Backend")
    print("="*70 + "\n")
    mock_llm = MockLLM()
    conversation = [
        ("User", "Create a fibonacci function"),
        ("CodeAgent", mock_llm.generate("write code for fibonacci")),
        ("ReviewAgent", mock_llm.generate("review this code")),
    ]
    print("Simulated AutoGen Multi-Agent Conversation:\n")
    for speaker, message in conversation:
        print(f"[{speaker}]")
        print(f"{message}\n")
    print("✓ AutoGen simulation complete\n")
We illustrate AutoGen's core idea by defining a conceptual configuration of agents and their workflow. We then simulate an AutoGen-style conversation using a custom mock LLM that generates realistic but fully controllable responses. We see how this approach lets multiple agents reason, test, and refine ideas collaboratively without relying on any external APIs.
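The essence of the mock backend is keyword routing over canned responses. A minimal sketch (names here are illustrative, not AutoGen APIs) shows why check order matters when one keyword is a near-superset of another:

```python
# Keyword router in the spirit of MockLLM: canned responses keyed by
# substring, with a default fallback. Order matters: "review" is checked
# before "code" because a review request usually mentions code too.
CANNED = {
    "review": "Looks correct; consider adding type hints.",
    "code": "def add(a, b):\n    return a + b",
}

def mock_generate(prompt: str) -> str:
    low = prompt.lower()
    for key, reply in CANNED.items():  # dicts preserve insertion order
        if key in low:
            return reply
    return "I understand. Let me help with that task."

print(mock_generate("Please review this code"))   # → the review response
print(mock_generate("Write code for addition"))   # → the code snippet
```

Deterministic backends like this make multi-agent control flow unit-testable before any real model is attached.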
def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✓ Hybrid system complete\n")
if __name__ == "__main__":
    print("="*70)
    print("🤖 ADVANCED AGENTIC AI TUTORIAL")
    print("AutoGen + LangChain + HuggingFace")
    print("="*70 + "\n")
    demo_langchain_basics()
    demo_langchain_multi_step()
    demo_simple_agents()
    demo_autogen_conceptual()
    demo_autogen_with_mock()
    demo_hybrid_system()
    print("="*70)
    print("🎉 TUTORIAL COMPLETE!")
    print("="*70)
    print("\n📚 What You Learned:")
    print(" ✓ LangChain prompt engineering and chains")
    print(" ✓ Multi-step reasoning with LangChain")
    print(" ✓ Building custom multi-agent systems")
    print(" ✓ AutoGen architecture and concepts")
    print(" ✓ Combining LangChain + agents")
    print(" ✓ Using HuggingFace models (no API needed!)")
    print("\n💡 Key Takeaway:")
    print(" You can build powerful agentic AI systems without expensive APIs!")
    print(" Combine LangChain's chains with multi-agent architectures for")
    print(" intelligent, autonomous AI systems.")
    print("="*70 + "\n")
We combine LangChain's structured reasoning with our simple agentic system to create a hybrid intelligent framework. We let LangChain analyze problems while the agents plan and execute the corresponding actions in sequence. We conclude the demonstration by running all modules together, showcasing how open-source tools can mesh seamlessly to build adaptive, autonomous AI systems.
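The hybrid flow reduces to three stages sharing one backend: analyze, plan, execute. A compact sketch with a stub LLM (a stand-in, not FLAN-T5 or LangChain objects) makes the sequencing explicit:

```python
# The analyze -> plan -> execute handoff of Demo 6, with each stage as a
# plain function over a shared stub LLM so the staging is visible.
def stub_llm(prompt: str) -> str:
    return f"<response:{prompt}>"

def hybrid_solve(problem: str) -> dict:
    analysis = stub_llm(f"Analyze this problem: {problem}")
    plan = stub_llm(f"You are a strategic planner. Plan how to: {problem}")
    result = stub_llm(f"You are a task executor. Execute the plan: {plan}")
    return {"analysis": analysis, "plan": plan, "result": result}

stages = hybrid_solve("Optimize a slow database query")
print(sorted(stages))  # → ['analysis', 'plan', 'result']
```

In the full demo, `analysis` comes from an LLMChain while `plan` and `result` come from role agents; the sequencing is identical.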
In conclusion, we see how Agentic AI moves from concept to reality through a simple, modular design. We combine the reasoning depth of LangChain with the cooperative power of agents to build adaptable systems that think, plan, and act independently. The result is a clear demonstration that powerful, autonomous AI systems can be built without expensive infrastructure, leveraging open-source tools, creative design, and a little experimentation.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.