Securing AgentExecutor
LangChain's default agent architecture is probability-driven. It blindly trusts the LLM to choose the correct tool and parameters. Exogram's wrapper intercepts every `Tool` payload and executes a cryptographic context-check before allowing the framework to proceed.
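Conceptually, the wrapper sits between the agent's tool selection and the tool's execution: the payload is evaluated against a policy, and only an allow verdict lets the call through. A dependency-free sketch of that gate (all names and the toy policy here are illustrative assumptions, not Exogram's actual API; the real check runs against Exogram's policy service):

```python
def run_tool(tool_fn, payload: dict, validate):
    """Evaluate the payload against a policy before the tool ever runs."""
    verdict = validate(payload)  # stand-in for a remote policy evaluation
    if verdict != "ALLOW":
        return {"error": "POLICY_VIOLATION", "detail": verdict}
    return {"result": tool_fn(**payload)}

# Toy policy: refunds over $100 are rejected
def refund_policy(payload):
    return "ALLOW" if payload.get("amount", 0) <= 100 else "AMOUNT_EXCEEDS_LIMIT"

def issue_refund(amount):
    return f"refunded ${amount}"

print(run_tool(issue_refund, {"amount": 50}, refund_policy))   # executes
print(run_tool(issue_refund, {"amount": 900}, refund_policy))  # blocked
```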
01. Installation
pip install exogram-langchain
02. Implementation
Replace your standard `from langchain.agents import AgentExecutor` import. The Exogram wrapper handles the `403 Forbidden` API logic internally, so you don't have to write custom exception handlers for rejected tool calls.
import os

from langchain.agents import create_openai_tools_agent
from langchain_openai import ChatOpenAI

from exogram_langchain import ExogramSecureExecutor
# 1. Initialize your model and tools normally
llm = ChatOpenAI(temperature=0)
tools = [execute_sql_query, issue_refund]
# 2. Build the standard LangChain Agent
agent = create_openai_tools_agent(llm, tools, prompt)
# 3. Secure the Execution Loop (The Only Change)
secure_executor = ExogramSecureExecutor(
    agent=agent,
    tools=tools,
    exogram_api_key=os.environ["EXOGRAM_API_KEY"],
    policy_group="fin_tech_strict",
)
# 4. Invoke normally.
response = secure_executor.invoke({
"input": "User wants a full refund due to shipping delay."
})
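When a call is rejected, the violation has to come back to the agent as something it can reason about rather than as an exception. A minimal sketch of that translation (the helper name and message format are illustrative assumptions, not Exogram's actual internals):

```python
def tool_observation(outcome: dict) -> str:
    """Render a tool outcome as a scratchpad observation for the agent.

    A blocked call becomes a corrective instruction instead of raising,
    so the LLM can revise its arguments on the next reasoning step.
    """
    if outcome.get("error") == "POLICY_VIOLATION":
        detail = outcome.get("detail", "unspecified")
        return (f"Tool call rejected by policy: {detail}. "
                "Revise the arguments and try again.")
    return str(outcome.get("result"))

print(tool_observation({"error": "POLICY_VIOLATION", "detail": "AMOUNT_EXCEEDS_LIMIT"}))
print(tool_observation({"result": "refunded $50"}))
```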
""" If the LLM hallucinates an invalid refund amount, or attempts to run DDL in the SQL tool, Exogram intercepts the evaluation. The Executor receives a 'POLICY_VIOLATION' Error Code and injects it back into the agent's scratchpad, forcing the LLM to cleanly correct itself instead of crashing your application. """⚠️ Solving the "Infinite ReAct Loop"
LangChain's ReAct (Reasoning and Acting) loop often fails catastrophically when an API call errors out: the agent blindly retries the exact same failing tool syntax until it exhausts your token budget (the familiar `AgentExecutor` output-parser error).
The Exogram Fix: `ExogramSecureExecutor` tracks execution idempotency. If Exogram blocks a payload and the LLM submits a payload with the exact same deterministic hash on consecutive iterations, Exogram hard-halts the run, overriding LangChain's internal `max_iterations` and saving you thousands of dollars in runaway OpenAI inference costs.
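The repeated-payload detection described above can be sketched with a canonical-JSON hash: serialize the tool name and arguments with sorted keys so semantically identical calls hash identically, then halt when the same digest appears twice in a row. This is a rough mental model, not Exogram's implementation; the class and method names are invented for illustration:

```python
import hashlib
import json


class RepeatGuard:
    """Flag when the exact same (tool, args) payload is submitted twice in a row."""

    def __init__(self):
        self.last_hash = None

    def is_repeat(self, tool_name: str, args: dict) -> bool:
        # Canonical JSON (sorted keys) so equivalent payloads hash identically
        digest = hashlib.sha256(
            json.dumps({"tool": tool_name, "args": args}, sort_keys=True).encode()
        ).hexdigest()
        repeated = digest == self.last_hash
        self.last_hash = digest
        return repeated  # True -> hard-halt the loop


guard = RepeatGuard()
print(guard.is_repeat("issue_refund", {"amount": 900}))  # False: first attempt
print(guard.is_repeat("issue_refund", {"amount": 900}))  # True: identical retry
print(guard.is_repeat("issue_refund", {"amount": 90}))   # False: arguments changed
```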