In recent AI systems, LLM (Large Language Model) agents are increasingly used in various contexts.
For example:

- Intent understanding
- Summarization
- Reasoning
- Decision support
- Rule interpretation
However, there is a critical problem here. LLMs are extremely powerful, but when used directly, they make decision structures opaque.
The Problem: Hidden Decision Logic
Consider a case where LLMs are used for rule interpretation and decision support:
```python
prompt = f"""
Based on the following internal rules, determine whether this
transaction should be approved.

Rules:
- Transfers to high-risk countries require caution
- First-time transactions should be carefully evaluated
- Large amounts require additional verification

Transaction:
- Amount: 500,000 JPY
- Country: Japan → Nigeria
- Transaction history: none

Respond with 'approve' or 'reject'.
"""
```
Here, the LLM is not merely generating text. Instead, it is:

- Interpreting rules
- Integrating multiple conditions
- Making judgments under ambiguous criteria

In other words, it appears to perform human-like decision-making.
The Critical Issue
However, this implementation has a fundamental problem. Specifically:

- Why was this decision made?
- Which rules had the strongest influence?
- Which conditions were ignored?
- Where does responsibility lie?

None of these are visible or recorded.
Inside the LLM, the following processes occur:

- Probabilistic modeling
- Contextual reasoning
- Pattern similarity

However, these processes cannot be directly observed from the outside.
This means we cannot retrieve:

- What information was prioritized
- Which rules were emphasized
- What reasoning path was taken

As a result, the basis of the decision cannot be recorded as a structured artifact.
This is not merely a problem of explainability. It means that the decision-making process does not exist as an externally referable structure. In other words, a Decision Trace does not exist.
The Core Solution
To solve this problem, the decision structure must be defined outside of the LLM. This requires treating the LLM as one agent within an AI orchestrator.

In other words: the LLM generates signals, while an external decision engine makes the decisions.
The Role of LLM Agents (Micro Structure)
The role of LLM agents is consistent: to transform unstructured information into structured signals.
1. Intent Understanding (Intent → Signal)
```
"I want to cancel my subscription"
        ↓
intent = cancel_subscription
confidence = 0.91
```

- Interprets natural language
- Extracts user intent
- Converts it into structured data with confidence
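This intent-to-signal step can be sketched as a small parser. The JSON shape and the `IntentSignal` type are illustrative assumptions, not a fixed API:

```python
import json
from dataclasses import dataclass

@dataclass
class IntentSignal:
    """Structured signal emitted by the intent-understanding agent."""
    intent: str
    confidence: float

def parse_intent(raw: str) -> IntentSignal:
    # `raw` stands in for the LLM's JSON response.
    data = json.loads(raw)
    return IntentSignal(intent=data["intent"], confidence=float(data["confidence"]))

signal = parse_intent('{"intent": "cancel_subscription", "confidence": 0.91}')
print(signal.intent, signal.confidence)  # cancel_subscription 0.91
```

Forcing the output through a typed record like this is what makes the signal consumable by downstream rules.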
2. Knowledge Reasoning (Context → Signal)
```
user_segment = "VIP"
risk_score = 0.15
        ↓
recommended_offer = VIP_discount
```

- Interprets contextual information
- Considers risk
- Infers an appropriate action candidate
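A deterministic stub can illustrate the shape of this context-to-signal transformation. In a real system the LLM would propose the candidate; the threshold below is purely illustrative:

```python
def recommend_offer(user_segment: str, risk_score: float) -> str:
    """Map context (segment, risk) to an action candidate.
    A stub standing in for the LLM's reasoning step."""
    if user_segment == "VIP" and risk_score < 0.2:
        return "VIP_discount"
    return "standard_offer"

offer = recommend_offer("VIP", 0.15)  # "VIP_discount"
```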
3. Policy Interpretation (Policy → Signal)
```
"Transfers to high-risk countries require caution"
        ↓
risk_flag = high_risk_country
severity = medium
```

- Interprets natural language rules
- Converts abstract expressions into concrete meaning
- Produces structured policy signals
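The policy signal can be represented as a typed record. The field names follow the example above; the parser itself is a hypothetical sketch:

```python
from dataclasses import dataclass

@dataclass
class PolicySignal:
    risk_flag: str
    severity: str

def parse_policy(raw: dict) -> PolicySignal:
    # `raw` stands in for the LLM's structured interpretation of a rule.
    return PolicySignal(risk_flag=raw["risk_flag"], severity=raw["severity"])

policy = parse_policy({"risk_flag": "high_risk_country", "severity": "medium"})
```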
4. Explanation Generation (Trace → Explanation)
```
decision_trace → explanation
```

- References recorded decision traces
- Organizes influencing factors
- Generates human-readable explanations
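A minimal trace-to-explanation step might look like the following. The trace fields are assumptions for illustration; a production system would hand this trace to the LLM for fluent wording:

```python
def explain(trace: dict) -> str:
    """Render a recorded decision trace as a human-readable sentence."""
    factors = ", ".join(trace["signals"])
    return (f"Decision '{trace['decision']}' was driven by: {factors} "
            f"(rule applied: {trace['rule']}).")

text = explain({
    "decision": "escalate",
    "signals": ["high_risk_country", "no_transaction_history"],
    "rule": "manual_review_required",
})
```

Because the explanation is derived from a recorded trace rather than from the LLM's hidden state, it stays faithful to what actually happened.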
All of these follow the same transformation:
Unstructured Data ↓ LLM ↓ Structured Signal
Internal Structure of an LLM Agent
LLM agents are not treated as black boxes.
```
Prompt Builder
      ↓
     LLM
      ↓
   Parser
      ↓
Structured Signal
```
Outputs are always structured:

```json
{
  "intent": "cancel_subscription",
  "confidence": 0.91
}
```
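The Prompt Builder → LLM → Parser pipeline can be sketched end to end. Here `fake_llm` is a stand-in for a real model call, and the prompt wording is an assumption:

```python
import json

def build_prompt(utterance: str) -> str:
    # Prompt Builder: wrap the raw utterance in task instructions.
    return (f'Classify the intent of: "{utterance}". '
            'Respond as JSON with keys "intent" and "confidence".')

def fake_llm(prompt: str) -> str:
    # Stand-in for the actual LLM call.
    return '{"intent": "cancel_subscription", "confidence": 0.91}'

def parse(raw: str) -> dict:
    # Parser: enforce a structured output, never free text.
    return json.loads(raw)

signal = parse(fake_llm(build_prompt("I want to cancel my subscription")))
```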
AI Orchestrator (Macro Structure)
The overall system is structured as follows:

```
Event
  ↓
Signal Agents (including LLM)
  ↓
Decision Engine (Contract / DSL)
  ↓
Policy Check
  ↓
Boundary
  ↓
Execution
```
👉 Decisions are always made outside the LLM.
Separation of Responsibilities
```
LLM          = Signal Generation (meaning & context)
Orchestrator = Decision Control  (rules, responsibility, trace)
```

```
LLM      → Signal     (interpretation)
Rules    → Decision   (evaluation)
Policy   → Validation (compliance)
Boundary → Control    (stop / escalation)
```
Roles Explained
- LLM → Signal: interprets language and context into structured inputs
- Rules → Decision: applies logic and determines outcomes
- Policy → Validation: ensures compliance with constraints
- Boundary → Control: handles thresholds, escalation, and stopping conditions
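These four roles can be composed into a minimal decision engine. Everything below is a sketch: the rule, the boundary threshold, and the field names are invented for illustration, and only the signals would come from the LLM:

```python
def decide(signals: dict) -> dict:
    """External decision engine: rules, policy check, and boundary
    control all live outside the LLM and are fully recorded."""
    trace = {"signals": dict(signals), "rules_fired": [], "decision": "approve"}

    # Rules → Decision: high-risk country with no history escalates.
    if signals.get("risk_flag") == "high_risk_country" and signals.get("history") == "none":
        trace["rules_fired"].append("high_risk_no_history")
        trace["decision"] = "escalate"

    # Boundary → Control: very large amounts always stop for review.
    if signals.get("amount", 0) >= 1_000_000:
        trace["rules_fired"].append("large_amount_boundary")
        trace["decision"] = "escalate"

    return trace

result = decide({"risk_flag": "high_risk_country", "history": "none", "amount": 500_000})
```

Note that the returned trace records which rules fired, so the outcome can be audited without inspecting the LLM at all.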
Why This Separation Is Necessary
Using LLMs directly for decisions introduces:

- Reproducibility issues: outputs may vary for the same input
- Governance issues: rules and constraints are not explicitly encoded
- Accountability issues: LLMs cannot assume responsibility
Root Cause
All of these stem from the same issue:
👉 Decision logic is embedded inside the LLM
This leads to:

- Non-observable decision criteria
- Non-controllable logic
- Non-recordable processes

As a result, decisions cannot be managed as a system.
Solution Direction
👉 Decisions must be defined as external structures
This means:

- Explicitly defining decision logic
- Managing rules and policies separately
- Recording execution processes
LLMs are then limited to:
👉 Generating signals, not making decisions
LLM Agents and Decision Trace
All processes, including LLM outputs, are recorded:

```
Event
  ↓
Signal
  └ LLM Output
  ↓
Decision
  ↓
Policy
  ↓
Boundary
  ↓
Execution
```
This enables full traceability:

- Which signals were used
- Which rules were applied
- Why the decision was made
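One way to realize this is an append-only ledger that records every stage. The class and stage names here are illustrative, not a prescribed schema:

```python
import json

class DecisionLedger:
    """Append-only record of each pipeline stage."""
    def __init__(self):
        self.entries = []

    def record(self, stage: str, payload: dict) -> None:
        self.entries.append({"stage": stage, "payload": payload})

    def dump(self) -> str:
        # Serialize the full trace for audit or replay.
        return json.dumps(self.entries, indent=2)

ledger = DecisionLedger()
ledger.record("signal", {"intent": "cancel_subscription", "confidence": 0.91})
ledger.record("decision", {"outcome": "approve", "rule": "standard_cancellation"})
```

Because every stage appends rather than overwrites, the ledger doubles as the Decision Trace the earlier sections argued is missing.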
Final Perspective
```
LLM          = Reasoning Engine
Orchestrator = Decision Governance
```
LLMs excel at:

- Understanding meaning
- Interpreting documents
- Inferring context
But they do NOT provide:

- Rule governance
- Responsibility management
- Process traceability
The New AI System Architecture
```
Models
+ LLM Agents        (Signal Generation)
+ Rules / Contracts (Decision)
+ Policies          (Constraints)
+ Human             (Responsibility)
+ Ledger            (Trace)
```
At the center of this system is:
👉 The AI Orchestrator
Conclusion
The AI Orchestrator:

- Externalizes decisions
- Structures decision logic
- Records decision processes
- Controls execution
In other words: the AI Orchestrator turns opaque LLM judgments into decisions that are observable, controllable, and recordable.