In this tutorial, we show how to design a contract-first agentic decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We define a strict decision model that encodes policy compliance, risk analysis, confidence calibration, and actionable next steps directly into the agent's output schema. By combining Pydantic validators with PydanticAI's retry and self-correction mechanisms, we ensure that the agent cannot produce logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the FULL CODES here.
!pip -q install -U pydantic-ai pydantic openai nest_asyncio
import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal
import nest_asyncio
nest_asyncio.apply()
from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent, ModelRetry
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()
We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and make sure the runtime can handle async agent calls. This establishes a stable foundation for running the contract-first agent without environment-related issues. Check out the FULL CODES here.
class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)


class DecisionOutput(BaseModel):
    # identified_risks and compliance_passed are declared before decision and
    # confidence so the cross-field validators below can see them via info.data
    # (Pydantic v2 validates fields in declaration order).
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    @field_validator("confidence")
    @classmethod
    def confidence_vs_risk(cls, v, info):
        risks = info.data.get("identified_risks") or []
        if any(r.severity == "high" for r in risks) and v > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return v

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, info):
        if info.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be rejected")
        return v

    @field_validator("conditions")
    @classmethod
    def conditions_required_for_conditional_approval(cls, v, info):
        d = info.data.get("decision")
        if d == "approve_with_conditions" and (not v or len(v) < 2):
            raise ValueError("approve_with_conditions requires at least two conditions")
        if d == "approve" and v:
            raise ValueError("approve should not include conditions")
        return v
We define the core decision contract using strict Pydantic models that precisely describe what a valid decision looks like. We encode logical constraints such as confidence–risk alignment, compliance-driven rejection, and conditional approvals directly into the schema. This ensures that any agent output must satisfy the business logic, not merely the syntactic structure. Check out the FULL CODES here.
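To see the contract working in isolation, here is a quick sanity check with made-up values (no model call involved): an output that claims approval while failing compliance violates the schema and is rejected before it could ever reach downstream code.
from pydantic import ValidationError

try:
    DecisionOutput(
        identified_risks=[
            RiskItem(risk="Unencrypted transaction metadata at rest", severity="high",
                     mitigation="Require customer-managed encryption keys"),
            RiskItem(risk="Vendor lock-in on the analytics stack", severity="medium",
                     mitigation="Negotiate data export guarantees in the contract"),
        ],
        compliance_passed=False,
        decision="approve",  # inconsistent with compliance_passed=False
        confidence=0.4,
        rationale="Illustrative rationale text that is intentionally padded so it satisfies the eighty-character minimum length constraint.",
        next_steps=["Review vendor contract", "Enable audit logging", "Re-run the assessment"],
    )
except ValidationError as err:
    # The decision validator fires because a non-compliant request cannot be approved.
    print("Contract violation caught:", err.errors()[0]["msg"])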
@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision evaluation agent.
You must carefully weigh risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)
We inject business context through a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to produce only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model reasoning. Check out the FULL CODES here.
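If we also want the model itself to see the injected policy, PydanticAI's dynamic system prompt hook can surface the typed dependency at run time. The following is a minimal sketch of that pattern and is not part of the original setup:
from pydantic_ai import RunContext

@agent.system_prompt
def add_policy_context(ctx: RunContext[DecisionContext]) -> str:
    # Appends the company policy from the typed deps to the static system prompt.
    return f"Company policy to enforce: {ctx.deps.company_policy}"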
@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result


@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    # Company policy text from the injected deps (held in a module-level global).
    policy = CURRENT_DEPS.company_policy.lower()
    text = (
        result.rationale
        + " " + " ".join(result.next_steps)
        + " " + " ".join(result.conditions)
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ["encryption", "audit", "logging", "access control", "key management"]):
            # ModelRetry sends the violation back to the model for self-correction.
            raise ModelRetry("missing concrete security controls")
    return result
We add output validators that act as governance checkpoints after the model generates a response. We force the agent to identify meaningful risks and to explicitly reference concrete security controls whenever it claims compliance. If these constraints are violated, we trigger automatic retries so the agent self-corrects. Check out the FULL CODES here.
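The retry budget is configurable, and exhausted retries should fail safely rather than silently. The following is a hedged sketch (it assumes the retries parameter on the Agent constructor and uses generic exception handling rather than a specific PydanticAI exception type) of wrapping the run so that unvalidatable decisions are escalated to a human instead of guessed:
async def run_with_guardrails(prompt: str, deps: DecisionContext):
    # Note: Agent(...) also accepts retries=N to raise the number of
    # validator-driven correction rounds before the run is treated as failed.
    global CURRENT_DEPS
    CURRENT_DEPS = deps  # keep the policy-aware output validator in sync
    try:
        result = await agent.run(prompt, deps=deps)
        return result.output
    except Exception as exc:  # retries exhausted or another model failure
        # Fail safely: escalate to a human reviewer instead of guessing.
        print(f"Decision could not be validated automatically: {exc}")
        return None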
async def run_decision():
    global CURRENT_DEPS
    CURRENT_DEPS = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not yet implemented and customer-managed keys are not guaranteed.
"""
    result = await agent.run(prompt, deps=CURRENT_DEPS)
    return result.output


decision = asyncio.run(run_decision())

from pprint import pprint
pprint(decision.model_dump())
We run the agent on a realistic decision request and capture the validated structured output. We show how the agent weighs risk, policy compliance, and confidence before producing a final decision. This completes the end-to-end contract-first decision workflow in a production-style setup.
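Because the result is a validated DecisionOutput rather than free text, downstream systems can branch on it directly. A hypothetical routing helper (the route names are illustrative) might look like this:
def route_decision(output: DecisionOutput) -> str:
    # Downstream routing based on the validated contract, no text re-parsing needed.
    if not output.compliance_passed or output.decision == "reject":
        return "escalate_to_compliance_team"
    if output.decision == "approve_with_conditions":
        return "create_remediation_tickets"
    return "proceed_to_deployment"

print(route_decision(decision))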
In conclusion, we demonstrate how to move from free-form LLM outputs to governed, reliable decision systems using PydanticAI. We show that by enforcing hard contracts at the schema level, we can automatically align decisions with policy requirements, risk severity, and confidence realism without manual prompt tuning. This approach lets us build agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can trust. Ultimately, we show that contract-first agent design allows us to deploy agentic AI as a dependable decision layer within production and enterprise environments.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.

