In recent years, many AI systems have begun shifting from single-model architectures to multi-agent structures.
For example:
- Signal Agent (prediction generation)
- Decision Agent (decision proposal)
- Policy Agent (rule validation)
- Risk Agent (risk evaluation)
- Execution Agent (execution)
Such systems are generally referred to as multi-agent AI.
The Core Problem of Multi-Agent AI
However, an important problem arises:
👉 How do we govern decisions made by multiple AI agents?
As the number of agents increases, the system behavior becomes more complex.
If the decision structure is not explicitly defined:
👉 The AI system becomes a collection of black boxes.
The Role of the Decision Trace Model
To address this issue, the Decision Trace Model becomes essential.
What Multi-Agent AI Needs: Orchestration
In multi-agent AI, what matters is not the number of agents.
👉 What matters is orchestration.
That is:
👉 A structure that controls the flow of decisions
Decision Flow in Multi-Agent AI
A typical decision flow looks like this:
Event
 ↓
Signal Agents
 ↓
Decision Agents
 ↓
Policy Agent
 ↓
Boundary
 ↓
Human
 ↓
Ledger
Each component plays a distinct role:
- Event: Real-world occurrence
- Signal Agents: Generate predictions
- Decision Agents: Propose decisions
- Policy Agent: Validate rules
- Boundary: Enforce safety constraints
- Human: Final responsibility
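The flow above can be sketched as a minimal pipeline. All class and field names here are illustrative stubs, not a real framework, and the human sign-off is reduced to a placeholder:

```python
# Minimal sketch of the decision flow above; agent logic is stubbed out
# and every name is an illustrative assumption.

class SignalAgent:
    def predict(self, event):
        return {"signal": "defect_detected", "source": event}

class DecisionAgent:
    def propose(self, signals):
        return {"action": "stop_shipping", "based_on": signals}

class PolicyAgent:
    def validate(self, proposal):
        # Rule validation: only known actions pass
        return proposal["action"] in {"stop_shipping", "allow_shipping"}

class Boundary:
    def within_limits(self, proposal):
        return True  # e.g. rate limits, blast-radius checks

ledger = []  # Ledger: append-only decision history

def handle_event(event):
    signals = [SignalAgent().predict(event)]       # Signal Agents
    proposal = DecisionAgent().propose(signals)    # Decision Agent
    if not PolicyAgent().validate(proposal):       # Policy Agent
        return {"status": "rejected", "reason": "policy"}
    if not Boundary().within_limits(proposal):     # Boundary
        return {"status": "rejected", "reason": "boundary"}
    decision = {"status": "approved", **proposal}  # Human sign-off would go here
    ledger.append({"event": event, "decision": decision})  # Ledger
    return decision

result = handle_event({"type": "inspection_failure"})
```

The point of the sketch is the ordering: a proposal only reaches the human and the ledger after passing both policy and boundary checks.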
With this structure, the AI system begins to function as a decision-making organization.
Why the Decision Trace Model Is Necessary
In multi-agent AI systems, the following problems inevitably arise:
- Which AI made the decision?
- Why was that decision made?
- Where does responsibility lie?
If this is not recorded:
👉 AI decisions cannot be explained.
Decision Trace
To solve this, we introduce Decision Trace.
A Decision Trace consists of:
Event → Signal → Decision → Policy → Boundary → Human
👉 AI decisions become traceable and explainable.
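One possible shape for a single trace record, with each of the six components above as an explicit field (all field values here are illustrative assumptions):

```python
from dataclasses import dataclass

# Sketch of a Decision Trace record; field names mirror the six
# components above, and the sample values are made up for illustration.
@dataclass(frozen=True)
class DecisionTrace:
    event: str     # what happened in the real world
    signal: str    # prediction the Signal Agent produced
    decision: str  # action the Decision Agent proposed
    policy: str    # rule the Policy Agent applied
    boundary: str  # safety constraint that was checked
    human: str     # who carried final responsibility

trace = DecisionTrace(
    event="inspection_failure",
    signal="functional_defect_likely",
    decision="stop_shipping",
    policy="functional_defects_block_shipping",
    boundary="within_recall_budget",
    human="quality_manager",
)
```

Because the record is immutable and names a responsible human, each entry in the ledger answers the three questions above: which AI decided, why, and who is accountable.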
Structure for Representing Decisions
The Decision Trace Model represents decisions using three layers:
- Ontology
- DSL
- Behavior Tree
Ontology
An ontology is not just a set of definitions.
It defines:
- What to distinguish
- What to treat as equivalent
- How to partition the world for decision-making
In other words:
👉 It determines the resolution of meaning before a Signal is generated.
Manufacturing Example
Without Ontology
status = defective or normal
→ The nature of defects is unclear
→ All defects are treated the same
With Ontology
- appearance defect (scratch, dirt)
- functional defect (does not work)
- dimensional defect (out of specification)
What Changes
The single label "defective" becomes:
- functional defect → operational issue
- dimensional defect → precision issue
Decision Changes
- Appearance defect → inspection / conditional shipping
- Functional defect → stop shipping / recall
- Dimensional defect → process adjustment
👉 Without ontology: all defects are the same
👉 With ontology: actions differ by cause
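The defect ontology above can be made explicit as a type, so that the distinction exists in the system rather than in anyone's head. The enum values and action names below are illustrative assumptions:

```python
from enum import Enum

# Sketch of the defect ontology above as an explicit type.
class DefectType(Enum):
    APPEARANCE = "appearance"    # scratch, dirt
    FUNCTIONAL = "functional"    # does not work
    DIMENSIONAL = "dimensional"  # out of specification

# Each category maps to a different action, as in the list above.
ACTION = {
    DefectType.APPEARANCE: "inspect_or_conditionally_ship",
    DefectType.FUNCTIONAL: "stop_shipping_and_recall",
    DefectType.DIMENSIONAL: "adjust_process",
}
```

Without the enum, every defect collapses into one `defective` flag; with it, the downstream rules can branch by cause.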
DSL (Domain Specific Language)
DSL explicitly defines decision conditions.
Example:
```
IF defect.type == "appearance" AND defect.severity == "minor"
THEN allow_shipping()

IF defect.type == "functional"
THEN stop_shipping() AND trigger_recall()

IF defect.type == "dimensional" AND defect.deviation > threshold
THEN adjust_process()
```
What This Means
- Ontology = decomposing meaning
- DSL = fixing decision rules
👉 Ontology defines the world
👉 DSL defines how to act within it
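The DSL rules above could compile down to plain conditionals. The threshold value, field names, and the fallback branch below are assumptions for illustration:

```python
# One way the DSL rules above might execute; `defect` is a plain dict
# and THRESHOLD is an assumed value, not a real specification.
THRESHOLD = 0.5  # allowed dimensional deviation (assumed unit)

def decide(defect):
    if defect["type"] == "appearance" and defect["severity"] == "minor":
        return "allow_shipping"
    if defect["type"] == "functional":
        return "stop_shipping_and_trigger_recall"
    if defect["type"] == "dimensional" and defect["deviation"] > THRESHOLD:
        return "adjust_process"
    return "escalate_to_human"  # no rule matched: fall back to a person

decision = decide({"type": "functional", "severity": "major", "deviation": 0.0})
```

The explicit fallback matters: a decision the DSL does not cover should surface to a human rather than silently pass.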
Behavior Tree
Behavior Tree defines:
👉 Execution order, branching, and stopping logic
Example
```
SELECTOR
├─ Sequence
│  ├─ IsFunctionalDefect
│  └─ StopShipping
├─ Sequence
│  ├─ IsDimensionalDefect
│  └─ AdjustProcess
├─ Sequence
│  ├─ IsMinorAppearanceDefect
│  └─ AllowConditionalShipping
└─ AcceptProduct
```
What It Represents
Even for the same event:
- Check critical defects first
- Then evaluate process issues
- Then check minor defects
- Otherwise accept
👉 This defines the execution order of decisions
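The selector/sequence semantics of the tree above fit in a few lines: a sequence fails at the first failing child, a selector succeeds at the first succeeding child. This is a minimal sketch, not a real behavior-tree library:

```python
# Minimal selector/sequence sketch of the tree above.
def sequence(*children):
    # Succeeds only if every child succeeds; stops at the first failure.
    return lambda ctx: all(child(ctx) for child in children)

def selector(*children):
    # Succeeds at the first succeeding child; later children never run.
    return lambda ctx: any(child(ctx) for child in children)

def condition(key):
    return lambda ctx: ctx.get(key, False)

def action(name):
    def run(ctx):
        ctx["executed"] = name  # record which action fired
        return True
    return run

tree = selector(
    sequence(condition("is_functional_defect"), action("stop_shipping")),
    sequence(condition("is_dimensional_defect"), action("adjust_process")),
    sequence(condition("is_minor_appearance_defect"),
             action("allow_conditional_shipping")),
    action("accept_product"),
)

ctx = {"is_dimensional_defect": True}
tree(ctx)
```

Because the selector checks branches top to bottom, the ordering of the children *is* the priority of the decisions: critical defects are always evaluated before minor ones.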
Summary of the Three Layers
- Ontology = how to classify
- DSL = how to respond
- Behavior Tree = how to execute
From Structure to Execution
Event
 ↓
Signal (Ontology)
 ↓
Decision (DSL)
 ↓
Execution (Behavior Tree)
This structure represents AI decision-making as:
👉 Meaning
👉 Rules
👉 Execution
Multi-Agent AI and Behavior Tree
Behavior Tree, originally used in game AI, is highly suitable for orchestrating multi-agent systems.
Because it naturally expresses:
- Branching
- Priority
- Fallback
- Stop conditions
Example Multi-Agent Setup
- RiskAgent
- QualityAgent
- ProcessAgent
- ExecutionAgent
Orchestration via Behavior Tree
```
SELECTOR
├─ Sequence
│  ├─ RiskCheckAgent
│  └─ StopShipping
├─ Sequence
│  ├─ QualityCheckAgent
│  └─ AdjustProcess
├─ Sequence
│  ├─ MinorDefectCheckAgent
│  └─ AllowConditionalShipping
└─ AcceptProduct
```
Key Insight
👉 Behavior Tree defines how decisions are structured
The Critical Problem
However:
👉 How does this actually run in a real system?
Behavior Tree Alone Cannot Run AI
A Behavior Tree is only:
👉 A blueprint
It does not define:
- Which agent executes each node
- Which APIs are called
- How failures are handled
- Where logs are stored
AI Orchestrator
This leads to the need for an AI Orchestrator.
👉 The AI Orchestrator is the execution control layer that runs the Behavior Tree.
Role of the Orchestrator
It performs:
- Node execution (agent invocation)
- Result collection
- Branch control
- Policy and boundary enforcement
- Logging
Execution Flow
1. Select node
2. Execute agent
3. Get result
4. Branch
5. Log
6. Repeat
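The loop above can be sketched directly. Here the "tree" is flattened to an ordered list of nodes, agents are stubbed as callables, and all names are illustrative assumptions:

```python
# Sketch of an orchestrator loop: select a node, invoke its agent,
# log the result, and branch on the outcome. Agents are stubs.
def orchestrate(nodes, agents, log):
    for name in nodes:               # select the next node in priority order
        result = agents[name]()      # execute the agent and get its result
        log.append((name, result))   # log every invocation, success or not
        if result:                   # branch: first successful node wins
            return name
    return None                      # no node fired; nothing to execute

agents = {
    "RiskCheckAgent": lambda: False,    # no critical risk found
    "QualityCheckAgent": lambda: True,  # quality issue detected
}
log = []
fired = orchestrate(["RiskCheckAgent", "QualityCheckAgent"], agents, log)
```

Note that the log records *every* agent invocation, not just the winning branch; that is what later lets the Ledger explain why the other branches did not fire.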
Final Structure
- Orchestrator = execution control
- Ledger = decision history
Key Insight
👉 Behavior Tree defines decisions
👉 Orchestrator makes them run
Conclusion
AI systems are not just software.
👉 They are decision production systems
And the Decision Trace Model provides:
👉 The structure to control and explain those decisions

Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
