Many AI platforms are designed with a single dominant focus:
How can we make the model smarter?
Improve accuracy.
Increase inference speed.
Raise automation rates.
However, in real-world decision environments, the truly important question is different:
Where does judgment reside?
Is it inside the model?
Inside the code?
Or hidden within implicit operational rules?
The Decision-Oriented Signal Platform offers a structural answer to this question.
It does not place the model at the center.
It places judgment at the center.
And most importantly, it moves judgment outside the model.
This article explains how three core structural components—
- Ontology
- Domain-Specific Language (DSL)
- Behavior Tree
—work together to create a decision infrastructure that is explainable, auditable, and structurally accountable.
1. Why Externalizing Judgment Is Necessary
In conventional machine learning systems, judgment is typically embedded in:
- Model weights
- Threshold logic
- Application code
- Operational rules
This structure creates several fundamental problems.
① The Location of Judgment Becomes Unclear
It becomes impossible to trace:
- Who defined the criteria?
- When were they changed?
- Under whose authority?
② Explanation and Responsibility Cannot Be Separated
We may be able to provide an Explanation (how it was computed),
but we cannot provide a Justification (why this decision is acceptable).
③ Change Costs Escalate
Changing thresholds or decision criteria requires code modifications.
The Decision-Oriented Signal Platform adopts a clear principle:
Judgment must not be a side effect of computation.
Judgment must be managed as a first-class structure.
To achieve this, three components are required:
- Fixing meaning (Ontology)
- Externalizing decision rules (DSL)
- Making execution structure explicit (Behavior Tree)
2. Ontology — Fixing the Boundaries of Meaning
The first step in externalizing judgment is fixing how we divide the world.
AI processes continuous data.
But decision-making always requires discrete boundaries:
Where does one state end and another begin?
In retail or facility operations, daily decisions depend on definitions such as:
- What counts as a “visit”?
- What qualifies as “zone roaming”?
- When is “purchase intent” considered high?
- Who qualifies as a “VIP”?
- What counts as an “anomaly”?
- When does something become “high risk”?
These categories do not naturally exist inside data.
They are interpretive decisions.
Example: “Visit” Can Be Defined Differently
One organization defines a visit as:
- Entering a geofence.
Another defines it as:
- Passing through an entrance gate.
Another requires:
- Staying inside for at least five minutes.
All are technically valid.
But the choice changes KPIs, marketing strategies, and operational actions.
This is not data processing.
It is a decision about meaning.
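To make this concrete, here is a minimal Python sketch (field names and numbers are illustrative, not from the platform) showing how the same raw dwell data yields three different visit counts under the three definitions above:

# The same raw dwell data, counted under three "visit" definitions.
stays = [
    {"entered_geofence": True, "passed_gate": False, "dwell_minutes": 2},
    {"entered_geofence": True, "passed_gate": True,  "dwell_minutes": 7},
]

visits_geofence = sum(s["entered_geofence"] for s in stays)    # geofence entry
visits_gate     = sum(s["passed_gate"] for s in stays)         # entrance gate
visits_dwell    = sum(s["dwell_minutes"] >= 5 for s in stays)  # 5-minute stay

print(visits_geofence, visits_gate, visits_dwell)  # 2, 1, 1: three different KPIs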
What Ontology Does
Ontology explicitly fixes these meaning boundaries.
Example definitions:
Visit
→ Stay within facility geofence for 3 continuous minutes.
Zone Roaming
→ Transition between two different zone IDs within 10 minutes.
High Purchase Intent
→ Intent score ≥ 0.8 calculated from browsing/cart behavior in the last 24 hours.
VIP Tier
→ Top 5% of cumulative purchase amount over past 90 days.
Anomaly
→ Deviation exceeding 3σ from 8-week moving average.
High Risk
→ Risk score ≥ 0.7 AND consecutive negative signals.
These are not model outputs.
They are definitions of reality.
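As a minimal sketch, such definitions can be pinned down as versioned, machine-readable records. The Python structure below is illustrative, not the platform's actual schema:

from dataclasses import dataclass

@dataclass(frozen=True)
class OntologyTerm:
    name: str        # the category being defined
    definition: str  # the interpretive decision, stated explicitly
    version: str     # definitions are versioned, never silently changed

ONTOLOGY = [
    OntologyTerm("visit", "stay within facility geofence for 3 continuous minutes", "1.0"),
    OntologyTerm("zone_roaming", "transition between two zone IDs within 10 minutes", "1.0"),
    OntologyTerm("vip_tier", "top 5% of cumulative purchase amount over past 90 days", "1.0"),
]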
Without fixed ontology:
- Signal meanings drift across teams
- KPIs change retroactively
- Model updates alter semantic interpretation
- Justification becomes impossible
- Audits fail
Ontology is:
The coordinate system of the decision world.
Only after this coordinate system is fixed can DSL, Behavior Trees, and Signals operate stably.
3. DSL — Freeing Decision Logic from Code
Once meaning boundaries are fixed, the next question arises:
Where should decisions be written?
Most systems answer implicitly:
Write them in code.
Example:
if score > 0.8:
    trigger_offer()
It appears simple and rational.
But structurally, this approach has serious flaws.
Problems with Code-Embedded Decisions
① Judgment Becomes Invisible
That single line hides multiple implicit decisions:
- Why 0.8?
- Who decided it?
- When was it changed?
- Was 0.7 considered?
- Who owns this threshold?
The code reveals implementation changes, not decision changes.
② Non-Engineers Cannot Participate
In reality, decision owners are often:
- Business leaders
- Marketing teams
- Risk management
- Operations staff
Yet embedding logic in code forces all changes through development workflows.
This is not an inconvenience.
It is a structural bottleneck.
③ Decision Diff Cannot Be Audited
Code diff shows implementation change.
It does not show:
- Risk tolerance adjustment
- Strategy shift
- Expansion of target population
- Approval authority
④ Justification Is Lost
We can trace how something was computed.
But we cannot recover:
Why was this condition considered acceptable?
DSL as Decision Contract
The platform’s principle:
Do not write decisions in code.
Write them as DSL.
Example:
rule: high_value_user
when:
  - signal: purchase_intent
    op: ">="
    value: 0.8
then:
  action: issue_voucher
Structural Benefits of DSL
- Human-readable judgment logic
- Direct auditing of threshold changes
- Non-engineer participation
- Contract validation
- Version control
- Fail-closed compatibility
DSL is not configuration.
DSL = Decision Contract
It becomes:
- Auditable
- Versioned
- Approvable
- Validated before execution
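As a minimal sketch of that last point, a contract can be parsed and checked before anything executes. The example below assumes the YAML-like rule format above and uses PyYAML; the required keys and allowed operators are illustrative assumptions, not a fixed schema:

import yaml  # PyYAML

RULE_TEXT = """
rule: high_value_user
when:
  - signal: purchase_intent
    op: ">="
    value: 0.8
then:
  action: issue_voucher
"""

ALLOWED_OPS = {">=", "<=", ">", "<", "=="}

def validate_contract(text: str) -> dict:
    """Parse the DSL and reject malformed contracts before execution (fail-closed)."""
    contract = yaml.safe_load(text)
    for key in ("rule", "when", "then"):
        if key not in contract:
            raise ValueError(f"missing required key: {key}")
    for cond in contract["when"]:
        if cond.get("op") not in ALLOWED_OPS:
            raise ValueError(f"unknown operator: {cond.get('op')}")
        if not isinstance(cond.get("value"), (int, float)):
            raise ValueError("condition value must be numeric")
    return contract

contract = validate_contract(RULE_TEXT)
print(contract["rule"], "validated")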
4. Behavior Tree — Making Execution Topology Explicit
Even with Ontology and DSL, one critical question remains:
In what order are decisions evaluated?
In practice, decision logic is multi-layered:
- Is the user VIP?
- Is purchase intent high?
- Is risk acceptable?
- Has the distribution limit been reached?
- Are there exception conditions?
Order matters.
If order is implicit, the system becomes opaque again.
Why Behavior Tree?
Behavior Trees (BT), originally developed for game AI, are ideal for representing decision execution structure declaratively.
They explicitly represent:
- Evaluation order
- Branching logic
- Fallback paths
- Termination conditions
- Fail-closed propagation
Example
Selector
├─ Check High Intent
│   └─ Issue Offer
└─ Check Medium Intent
    └─ Log Only
This structure communicates:
- Priority ordering
- Early termination rules
- Fallback behavior
- Safe default handling
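For concreteness, here is a minimal Python sketch of such a tree, mirroring the Selector above. Node and signal names are illustrative; a production runtime would add ticks, decorators, and richer status handling:

from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2

class Selector:
    """Try children in priority order; stop at the first SUCCESS."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

class Sequence:
    """Run children in order; fail as soon as one fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Condition:
    def __init__(self, signal, threshold):
        self.signal, self.threshold = signal, threshold
    def tick(self, ctx):
        value = ctx.get(self.signal)
        if value is None:       # missing signal never counts as success
            return Status.FAILURE
        return Status.SUCCESS if value >= self.threshold else Status.FAILURE

class Action:
    def __init__(self, name):
        self.name = name
    def tick(self, ctx):
        print(f"action: {self.name}")
        return Status.SUCCESS

tree = Selector(
    Sequence(Condition("purchase_intent", 0.8), Action("issue_offer")),
    Sequence(Condition("purchase_intent", 0.5), Action("log_only")),
)
tree.tick({"purchase_intent": 0.6})  # prints "action: log_only"

Note the fail-closed behavior: a missing signal never evaluates as success, so no action fires on an unknown state.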
In this architecture:
- Ontology defines the semantic coordinate system.
- DSL defines the decision contract.
- Behavior Tree defines execution topology.
5. Three-Layer Integration — The Core Architecture
Individually:
- Ontology fixes meaning but cannot execute decisions.
- DSL expresses conditions but cannot guarantee order.
- Behavior Tree controls flow but requires semantic grounding.
Together, they enable:
① Complete Externalization of Judgment
Judgment becomes a managed asset.
② Separation of Explanation and Justification
- Explanation → Signal generation layer
- Justification → DSL + Behavior Tree
③ Institutionalized Fail-Closed Execution
Uncertain or invalid states halt execution.
④ Full Auditability
We can trace:
- Which ontology defined reality
- Which rule applied
- In what order rules were evaluated
- Where execution stopped
- Who modified what
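A minimal sketch of the decision trace record that supports this, with illustrative field names:

# One decision, fully traceable.
decision_trace = {
    "ontology_version": "1.0",      # which ontology defined reality
    "rule": "high_value_user",      # which rule applied
    "rule_version": "2.3",
    "evaluation_path": [            # in what order nodes were evaluated
        "check_high_intent: FAILURE",
        "check_medium_intent: SUCCESS",
        "log_only: SUCCESS",
    ],
    "halted_at": None,              # where execution stopped, if anywhere
    "modified_by": "risk-team",     # who last changed the rule
}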
⑤ Safe Automation
Automation becomes accountable rather than opaque.
6. From Model-Centric to Judgment-Centric AI
Conventional AI pipeline:
Data → Model → Score → Threshold → Action
Decision-Oriented Signal Platform:
Data → Signal → Decision Structure → Action
The model does not disappear.
But its role changes.
Redefined Role of the Model
Old:
Model = Decision maker
New:
Model = Signal generator
The model estimates states.
The external structure makes the final decision.
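A minimal sketch of this division of labor, with illustrative names: the model only emits a signal carrying provenance, and a stand-in for the external structure makes the call:

def model_predict(features):
    """Stand-in for any scoring model; it only estimates a state."""
    return 0.83

signal = {
    "name": "purchase_intent",
    "value": model_predict({"cart_adds": 3, "dwell_minutes": 12}),
    "source": "intent_model_v4",  # provenance travels with the signal
}

def decide(signals):
    """Stand-in for the external structure (ontology + DSL + Behavior Tree)."""
    value = signals.get("purchase_intent")
    if value is None:  # fail-closed: unknown state, no action
        return "halt"
    return "issue_voucher" if value >= 0.8 else "log_only"

print(decide({signal["name"]: signal["value"]}))  # the structure decides, not the score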
This does not weaken AI.
It separates model capability from organizational responsibility.
Conclusion — AI Should Be Designed Around Judgment Structure
AI progress has focused on:
- Larger models
- Higher accuracy
- Faster inference
But in real-world systems, the essential question is:
Who is responsible for this decision, and through what structure was it made?
The Decision-Oriented Signal Platform takes a clear stance:
Judgment must not remain inside the model.
Judgment must be externalized as structure.
And the instruments that make this possible are:
- Ontology
- DSL
- Behavior Tree
Together, they form a unified architecture for building AI systems around judgment rather than model intelligence.
From Philosophy to Implementation
The architectural ideas outlined in this article are being organized in the following repository:
👉 https://github.com/masao-watanabe-ai/judgment-structure-core
At this stage, the repository contains a conceptual README describing the core design principles.
Implementation components will be added incrementally, including:
- Ontology definitions
- Decision contract DSL
- Behavior Tree runtime
- Fail-closed enforcement mechanisms
- Audit structures
Externalizing judgment is not merely a philosophical position.
It must eventually become executable, verifiable structure.
This repository serves as the public design foundation for that ongoing work.
