In this tutorial, we demonstrate how to design a contract-first agentic decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We show how to define a strict decision model that encodes policy compliance, risk assessment, confidence calibration, and actionable next steps directly into the agent's output schema. By combining Pydantic validators with PydanticAI's retry and self-correction mechanisms, we ensure that the agent cannot produce logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the FULL CODES here.
!pip -q install -U pydantic-ai pydantic openai nest_asyncio
import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal
import nest_asyncio
nest_asyncio.apply()
from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None

if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and make sure the runtime is ready to handle async agent calls. This establishes a stable foundation for running the contract-first agent without environment-related issues. Check out the FULL CODES here.
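As an optional sanity check (not part of the original walkthrough), we can confirm that the patched event loop and the API key are in place before any agent calls:

async def _ping() -> str:
    return "async ok"

# With nest_asyncio applied, asyncio.run() works even though Colab already runs an event loop.
print(asyncio.run(_ping()))
print("API key loaded:", bool(OPENAI_API_KEY))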
class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)


class DecisionOutput(BaseModel):
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    @field_validator("confidence")
    @classmethod
    def confidence_vs_risk(cls, v, info):
        # Confidence must stay calibrated against the severity of identified risks.
        risks = info.data.get("identified_risks") or []
        if any(r.severity == "high" for r in risks) and v > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return v

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, info):
        # A decision that fails compliance can only be a rejection.
        if info.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return v

    @field_validator("conditions")
    @classmethod
    def conditions_required_for_conditional_approval(cls, v, info):
        # Conditional approvals need real conditions; plain approvals must carry none.
        d = info.data.get("decision")
        if d == "approve_with_conditions" and (not v or len(v) < 2):
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if d == "approve" and v:
            raise ValueError("approve must not include conditions")
        return v

We define the core decision contract using strict Pydantic models that precisely describe what a valid decision looks like. We encode logical constraints such as confidence-risk alignment, compliance-driven rejection, and conditional approvals directly into the schema. This ensures that any agent output must satisfy the business logic, not just the syntactic structure. Check out the FULL CODES here.
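Before involving the model at all, we can exercise the contract locally. The snippet below is a minimal illustrative sketch with placeholder values: it hand-builds an approval that also carries conditions, which the conditions validator rejects immediately:

from pydantic import ValidationError

try:
    DecisionOutput(
        decision="approve",
        confidence=0.9,
        rationale="Illustrative placeholder rationale " * 4,  # long enough to satisfy min_length=80
        identified_risks=[
            RiskItem(risk="vendor lock-in exposure", severity="medium",
                     mitigation="negotiate exit clauses and data export terms"),
            RiskItem(risk="unclear data residency", severity="low",
                     mitigation="confirm hosting regions with the vendor"),
        ],
        compliance_passed=True,
        conditions=["enable audit logging"],  # contradiction: plain approvals must carry no conditions
        next_steps=["review findings", "obtain sign-off", "schedule deployment"],
    )
except ValidationError as err:
    print(err)  # reports: approve must not include conditions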
@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
    You are a corporate decision review agent.
    You must evaluate risk, compliance, and uncertainty.
    All outputs must strictly satisfy the DecisionOutput schema.
    """,
)
We inject business context through a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to produce only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model reasoning. Check out the FULL CODES here.
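As an optional extension (assuming PydanticAI's RunContext and the @agent.system_prompt decorator), the typed dependency can also be surfaced to the model through a dynamic system prompt rather than only through validators:

from pydantic_ai import RunContext

@agent.system_prompt
def add_policy_context(ctx: RunContext[DecisionContext]) -> str:
    # Appends the company policy from the typed dependency to the system prompt at run time.
    return f"Company policy under evaluation: {ctx.deps.company_policy}"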
from pydantic_ai import ModelRetry  # raising ModelRetry asks the model to try again instead of failing hard


@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    # Governance checkpoint: the decision must surface enough meaningful risks.
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result


@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    # Governance checkpoint: compliance claims must reference concrete security controls.
    policy = CURRENT_DEPS.company_policy.lower()
    text = (
        result.rationale
        + " ".join(result.next_steps)
        + " ".join(result.conditions)
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ["encryption", "audit", "logging", "access control", "key management"]):
            raise ModelRetry("missing concrete security controls")
    return result

We add output validators that act as governance checkpoints after the model generates a response. We force the agent to identify meaningful risks and to explicitly reference concrete security controls when it claims compliance. If these constraints are violated, we trigger automatic retries to enforce self-correction. Check out the FULL CODES here.
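The same governance rules can be reused outside the agent loop, for example when auditing previously stored decisions. The audit_decision helper below is a hypothetical sketch that mirrors the checks above but collects findings instead of triggering retries:

def audit_decision(result: DecisionOutput) -> List[str]:
    # Hypothetical offline audit helper mirroring the output validators above.
    findings: List[str] = []
    if len(result.identified_risks) < 2:
        findings.append("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        findings.append("at least one medium or high risk required")
    text = (result.rationale + " ".join(result.next_steps) + " ".join(result.conditions)).lower()
    if result.compliance_passed and not any(
        k in text for k in ["encryption", "audit", "logging", "access control", "key management"]
    ):
        findings.append("missing concrete security controls")
    return findings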
async def run_decision():
    global CURRENT_DEPS
    CURRENT_DEPS = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = """
    Decision request:
    Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
    The system processes user behavior and transaction metadata.
    Audit logging is not implemented and customer-managed keys are uncertain.
    """
    result = await agent.run(prompt, deps=CURRENT_DEPS)
    return result.output


decision = asyncio.run(run_decision())

from pprint import pprint
pprint(decision.model_dump())

We run the agent on a realistic decision request and capture the validated, structured output. We demonstrate how the agent weighs risk, policy compliance, and confidence before producing a final decision. This completes the end-to-end contract-first decision workflow in a production-style setup.
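Because the result is a validated Pydantic model rather than free text, downstream systems can branch on typed fields directly; the routing sketch below is illustrative:

import json

record = json.dumps(decision.model_dump(), indent=2)  # auditable, machine-readable record
if decision.decision == "reject" or not decision.compliance_passed:
    print("Escalating to compliance review:\n", record)
elif decision.decision == "approve_with_conditions":
    print("Tracking conditions:", decision.conditions)
else:
    print("Approved with confidence", decision.confidence)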
In conclusion, we demonstrate how to move from free-form LLM outputs to governed, reliable decision systems using PydanticAI. We show that by enforcing hard contracts at the schema level, we can automatically align decisions with policy requirements, risk severity, and confidence realism without manual prompt tuning. This approach allows us to build agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can trust. Ultimately, we demonstrate that contract-first agent design enables us to deploy agentic AI as a trustworthy decision layer within production and enterprise environments.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
