Friday, June 13, 2025

Develop a Multi-Tool AI Agent with Secure Python Execution Using Riza and Gemini


In this tutorial, we'll harness Riza's secure Python execution as the cornerstone of a powerful, tool-augmented AI agent in Google Colab. Starting with seamless API key management (via Colab secrets, environment variables, or hidden prompts), we'll configure your Riza credentials to enable sandboxed, audit-ready code execution. We'll integrate Riza's ExecPython tool into a LangChain agent alongside Google's Gemini generative model, define an AdvancedCallbackHandler that captures both tool invocations and Riza execution logs, and build custom utilities for advanced math and in-depth text analysis.

%pip install --upgrade --quiet langchain-community langchain-google-genai rizaio python-dotenv


import os
from typing import Dict, Any, List
from datetime import datetime
import json
import getpass
from google.colab import userdata

We install and upgrade the core libraries (LangChain Community extensions, Google Gemini integration, Riza's secure execution package, and dotenv support) quietly in Colab. We then import standard utilities such as os, datetime, and json, along with typing annotations, secure input via getpass, and Colab's user data API, to manage environment variables and user secrets seamlessly.

def setup_api_keys():
    """Set up API keys using multiple secure methods."""

    try:
        os.environ['GOOGLE_API_KEY'] = userdata.get('GOOGLE_API_KEY')
        os.environ['RIZA_API_KEY'] = userdata.get('RIZA_API_KEY')
        print("✅ API keys loaded from Colab secrets")
        return True
    except Exception:
        pass

    if os.getenv('GOOGLE_API_KEY') and os.getenv('RIZA_API_KEY'):
        print("✅ API keys found in environment")
        return True

    try:
        if not os.getenv('GOOGLE_API_KEY'):
            google_key = getpass.getpass("🔑 Enter your Google Gemini API key: ")
            os.environ['GOOGLE_API_KEY'] = google_key

        if not os.getenv('RIZA_API_KEY'):
            riza_key = getpass.getpass("🔑 Enter your Riza API key: ")
            os.environ['RIZA_API_KEY'] = riza_key

        print("✅ API keys set securely via input")
        return True
    except Exception:
        print("❌ Failed to set API keys")
        return False


if not setup_api_keys():
    print("⚠️  Please set up your API keys using one of these methods:")
    print("   1. Colab Secrets: Go to 🔑 in the left panel, add GOOGLE_API_KEY and RIZA_API_KEY")
    print("   2. Environment: Set GOOGLE_API_KEY and RIZA_API_KEY before running")
    print("   3. Manual input: Run the cell and enter keys when prompted")
    exit()

The cell above defines a setup_api_keys() function that securely retrieves your Google Gemini and Riza API keys by first attempting to load them from Colab secrets, then falling back to existing environment variables, and finally prompting you to enter them via hidden input if needed. If none of these methods succeed, it prints instructions on how to provide your keys and exits the notebook.

from langchain_community.tools.riza.command import ExecPython
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, AIMessage
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import Tool
from langchain.callbacks.base import BaseCallbackHandler

We import Riza's ExecPython tool alongside LangChain's core components for building a tool-calling agent, specifically the Gemini LLM wrapper (ChatGoogleGenerativeAI), the agent executor and creation functions (AgentExecutor, create_tool_calling_agent), the prompt and message templates, the conversation memory buffer, the generic Tool wrapper, and the base callback handler for logging and monitoring agent actions. These building blocks let you assemble, configure, and observe a memory-enabled, multi-tool AI agent in Colab.

class AdvancedCallbackHandler(BaseCallbackHandler):
    """Enhanced callback handler for detailed logging and metrics."""

    def __init__(self):
        self.execution_log = []
        self.start_time = None
        self.token_count = 0

    def on_agent_action(self, action, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        self.execution_log.append({
            "timestamp": timestamp,
            "action": action.tool,
            "input": str(action.tool_input)[:100] + "..." if len(str(action.tool_input)) > 100 else str(action.tool_input)
        })
        print(f"🔧 [{timestamp}] Using tool: {action.tool}")

    def on_agent_finish(self, finish, **kwargs):
        timestamp = datetime.now().strftime("%H:%M:%S")
        print(f"✅ [{timestamp}] Agent completed successfully")

    def get_execution_summary(self):
        return {
            "total_actions": len(self.execution_log),
            "execution_log": self.execution_log
        }


class MathTool:
    """Advanced mathematical operations tool."""

    @staticmethod
    def complex_calculation(expression: str) -> str:
        """Evaluate complex mathematical expressions safely."""
        try:
            import math
            import numpy as np

            safe_dict = {
                "__builtins__": {},
                "abs": abs, "round": round, "min": min, "max": max,
                "sum": sum, "len": len, "pow": pow,
                "math": math, "np": np,
                "sin": math.sin, "cos": math.cos, "tan": math.tan,
                "log": math.log, "sqrt": math.sqrt, "pi": math.pi, "e": math.e
            }

            result = eval(expression, safe_dict)
            return f"Result: {result}"
        except Exception as e:
            return f"Math Error: {str(e)}"


class TextAnalyzer:
    """Advanced text analysis tool."""

    @staticmethod
    def analyze_text(text: str) -> str:
        """Perform comprehensive text analysis."""
        try:
            char_freq = {}
            for char in text.lower():
                if char.isalpha():
                    char_freq[char] = char_freq.get(char, 0) + 1

            words = text.split()
            word_count = len(words)
            avg_word_length = sum(len(word) for word in words) / max(word_count, 1)

            specific_chars = {}
            for char in set(text.lower()):
                if char.isalpha():
                    specific_chars[char] = text.lower().count(char)

            analysis = {
                "total_characters": len(text),
                "total_words": word_count,
                "average_word_length": round(avg_word_length, 2),
                "character_frequencies": dict(sorted(char_freq.items(), key=lambda x: x[1], reverse=True)[:10]),
                "specific_character_counts": specific_chars
            }

            return json.dumps(analysis, indent=2)
        except Exception as e:
            return f"Analysis Error: {str(e)}"

The cell above brings together three essential pieces: an AdvancedCallbackHandler that captures each tool invocation with a timestamped log and can summarize the total actions taken; a MathTool class that safely evaluates complex mathematical expressions in a restricted environment to prevent unwanted operations; and a TextAnalyzer class that computes detailed text statistics, such as character frequencies, word counts, and average word length, and returns the results as formatted JSON.
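To preview the kind of JSON that analyze_text returns, here is a trimmed-down standalone computation of the same statistics over a sample sentence (the sample text and the reduced field set are ours, for illustration):

```python
import json

text = "The quick brown fox"
words = text.split()

# The same core statistics analyze_text reports, minus the frequency tables.
stats = {
    "total_characters": len(text),
    "total_words": len(words),
    "average_word_length": round(sum(len(w) for w in words) / max(len(words), 1), 2),
    "r_count": text.lower().count("r"),
}
print(json.dumps(stats, indent=2))
```

For this input the analyzer would report 19 characters, 4 words, an average word length of 4.0, and a single "r" (in "brown"), which is the same per-character counting the agent later uses to answer the strawberry question.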

def validate_api_keys():
    """Validate API keys before creating agents."""
    try:
        test_llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            temperature=0
        )
        test_llm.invoke("test")
        print("✅ Gemini API key validated")

        test_tool = ExecPython()
        print("✅ Riza API key validated")

        return True
    except Exception as e:
        print(f"❌ API key validation failed: {str(e)}")
        print("Please check your API keys and try again")
        return False


if not validate_api_keys():
    exit()


python_tool = ExecPython()
math_tool = Tool(
    name="advanced_math",
    description="Perform complex mathematical calculations and evaluations",
    func=MathTool.complex_calculation
)
text_analyzer_tool = Tool(
    name="text_analyzer",
    description="Analyze text for character frequencies, word statistics, and specific character counts",
    func=TextAnalyzer.analyze_text
)


tools = [python_tool, math_tool, text_analyzer_tool]


try:
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0.1,
        max_tokens=2048,
        top_p=0.8,
        top_k=40
    )
    print("✅ Gemini model initialized successfully")
except Exception as e:
    print(f"⚠️  Primary Gemini config failed, falling back: {e}")
    llm = ChatGoogleGenerativeAI(
        model="gemini-1.5-flash",
        temperature=0.1,
        max_tokens=2048
    )

In this cell, we first define and run validate_api_keys() to ensure that both the Gemini and Riza credentials work, attempting a dummy LLM call and instantiating the Riza ExecPython tool, and we exit the notebook if validation fails. We then instantiate python_tool for secure code execution, wrap our MathTool and TextAnalyzer methods into LangChain Tool objects, and collect them into the tools list. Finally, we initialize the Gemini model with custom settings (temperature, max_tokens, top_p, top_k), and if that configuration fails, we gracefully fall back to a simpler one.
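The try/except around model construction is a general init-with-fallback pattern worth isolating. A minimal sketch with hypothetical factory callables (the names and the simulated failure are ours):

```python
def init_with_fallback(primary, fallback):
    """Return primary() if it succeeds; otherwise construct the fallback."""
    try:
        return primary()
    except Exception as e:
        print(f"Primary init failed ({e}); using fallback")
        return fallback()

def flaky_pro():
    # Simulate a heavyweight model config that fails, e.g. on quota.
    raise RuntimeError("quota exceeded")

model_name = init_with_fallback(flaky_pro, lambda: "gemini-1.5-flash")
print(model_name)  # gemini-1.5-flash
```

Keeping both branches as callables means the fallback object is only built when it is actually needed.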

prompt_template = ChatPromptTemplate.from_messages([
    ("system", """You are an advanced AI assistant with access to powerful tools.


Key capabilities:
- Python code execution for complex computations
- Advanced mathematical operations
- Text analysis and character counting
- Problem decomposition and step-by-step reasoning


Instructions:
1. Always break down complex problems into smaller steps
2. Use the most appropriate tool for each task
3. Verify your results when possible
4. Provide clear explanations of your reasoning
5. For text analysis questions (like counting characters), use the text_analyzer tool first, then verify with Python if needed


Be precise, thorough, and helpful."""),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])


memory = ConversationBufferWindowMemory(
    k=5,
    return_messages=True,
    memory_key="chat_history"
)


callback_handler = AdvancedCallbackHandler()


agent = create_tool_calling_agent(llm, tools, prompt_template)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
    callbacks=[callback_handler],
    max_iterations=10,
    early_stopping_method="generate"
)

This cell constructs the agent's "brain" and workflow: it defines a structured ChatPromptTemplate that instructs the system on its toolset and reasoning style, sets up a sliding-window conversation memory to retain the last five exchanges, and instantiates the AdvancedCallbackHandler for real-time logging. It then creates a tool-calling agent by binding the Gemini LLM, custom tools, and prompt template, and wraps it in an AgentExecutor that manages execution (up to ten steps), leverages memory for context, streams verbose output, and halts cleanly once the agent generates a final response.
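Conceptually, ConversationBufferWindowMemory with k=5 behaves like a bounded queue of (human, ai) exchanges that silently drops the oldest entry. A minimal sketch of that sliding window (our simplification, not the LangChain implementation):

```python
from collections import deque

class WindowMemory:
    """Keep only the most recent k exchanges, discarding the oldest."""
    def __init__(self, k: int):
        self.buffer = deque(maxlen=k)  # deque evicts from the left when full

    def add_exchange(self, human: str, ai: str):
        self.buffer.append((human, ai))

mem = WindowMemory(k=2)
for i in range(4):
    mem.add_exchange(f"question {i}", f"answer {i}")

print(list(mem.buffer))  # only the last two exchanges remain
```

This bounded window keeps prompt size (and token cost) flat no matter how long the conversation runs, at the price of forgetting anything older than k turns.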

def ask_question(question: str) -> Dict[str, Any]:
    """Ask a question to the advanced agent and return detailed results."""
    print(f"\n🤖 Processing: {question}")
    print("=" * 50)

    try:
        result = agent_executor.invoke({"input": question})

        output = result.get("output", "No output generated")

        print("\n📊 Execution Summary:")
        summary = callback_handler.get_execution_summary()
        print(f"Tools used: {summary['total_actions']}")

        return {
            "question": question,
            "answer": output,
            "execution_summary": summary,
            "success": True
        }

    except Exception as e:
        print(f"❌ Error: {str(e)}")
        return {
            "question": question,
            "error": str(e),
            "success": False
        }


test_questions = [
    "How many r's are in strawberry?",
    "Calculate the compound interest on $1000 at 5% for 3 years",
    "Analyze the word frequency in the sentence: 'The quick brown fox jumps over the lazy dog'",
    "What's the fibonacci sequence up to the 10th number?"
]


print("🚀 Advanced Gemini Agent with Riza - Ready!")
print("🔐 API keys configured securely")
print("Testing with sample questions...\n")


results = []
for question in test_questions:
    result = ask_question(question)
    results.append(result)
    print("\n" + "="*80 + "\n")


print("📈 FINAL SUMMARY:")
successful = sum(1 for r in results if r["success"])
print(f"Successfully processed: {successful}/{len(results)} questions")

Finally, we define a helper function, ask_question(), that sends a user query to the agent executor, prints the question header, captures the agent's response (or error), and then outputs a brief execution summary showing how many tool calls were made. It then provides a list of sample questions, covering counting characters, computing compound interest, analyzing word frequency, and generating a Fibonacci sequence, and iterates through them, invoking the agent on each and collecting the results. After running all tests, it prints a concise "FINAL SUMMARY" indicating how many queries were processed successfully, confirming that your advanced Gemini + Riza agent is up and running in Colab.
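Two of the sample questions are easy to sanity-check offline against the agent's answers. Assuming annual compounding (the question doesn't specify a schedule), the compound-interest figure and the first ten Fibonacci numbers come out as:

```python
# Compound interest on $1000 at 5% for 3 years, compounded annually (assumed).
principal, rate, years = 1000, 0.05, 3
amount = principal * (1 + rate) ** years
print(f"Amount: ${amount:.2f}, interest earned: ${amount - principal:.2f}")

# Fibonacci sequence up to the 10th number.
fib = [0, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The expected amount is $1157.63 (about $157.63 of interest), so a correct agent run should report figures close to these.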

In conclusion, by centering the architecture on Riza's secure execution environment, we've created an AI agent that generates insightful responses via Gemini while also running arbitrary Python code in a fully sandboxed, monitored context. The integration of Riza's ExecPython tool ensures that every computation, from advanced numerical routines to dynamic text analyses, is executed with rigorous security and transparency. With LangChain orchestrating tool calls and a memory buffer maintaining context, we now have a modular framework ready for real-world tasks such as automated data processing, research prototyping, or educational demos.


Check out the Notebook. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 99k+ ML SubReddit and subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
