In previous articles, we have described AI systems as factories that mass-produce decisions. AI is not merely software. It receives Events, generates Signals, makes Decisions, and manages quality through Boundaries. This structure closely resembles a manufacturing production line.

Furthermore, to stabilize the quality of AI systems, we need AI Quality Engineering. And for that, it is necessary to preserve a Decision Trace: a record of how decisions were made. The mechanism that guarantees this record is a Ledger (an immutable history).

However, one important question still remains: how should those "decisions" themselves be described?
For the Decision Trace Model to function, the decision flow

Event → Signal → Decision → Boundary → Human

must exist in a structure that can be explicitly represented. If decisions are embedded inside

- model weights
- application code
- implicit operational rules

then it becomes impossible to record Decision Traces accurately.
In other words, the precondition of the Decision Trace Model is that decisions must be representable. To achieve this, we need three structures that explicitly describe decisions:

- Ontology
- DSL
- Behavior Tree

Through this three-layer structure, decisions in AI systems can be represented as meaning, conditions, and execution structure. In this article, we will organize the decision representation architecture required to make the Decision Trace Model possible.
Decision Trace Model
The Decision Trace Model is a framework that defines the historical structure of decisions in AI systems. In an AI system, the following flow constantly occurs:

```
Event
  ↓
Signal
  ↓
Decision
  ↓
Boundary
  ↓
Human
```
For example, in a fraud detection AI system:

- Event → a transaction occurs
- Signal → fraud_probability = 0.82
- Decision → freeze the account
- Boundary → threshold = 0.8
- Human → manual review
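As a sketch, a single Decision Trace record for the example above could be captured as a plain data structure. The field names and values below are illustrative assumptions, not a fixed schema:

```python
# One hypothetical Decision Trace record for the fraud example above.
# Field names are illustrative, not a prescribed schema.
trace = {
    "event":    {"type": "transaction", "amount": 9800},
    "signal":   {"name": "fraud_probability", "value": 0.82},
    "boundary": {"name": "freeze_threshold", "value": 0.8},
    "decision": {"action": "freeze_account"},
    "human":    {"status": "sent_to_manual_review"},
}

# Because the Signal and the Boundary are both recorded, the Decision
# can be re-derived and checked against the stored record:
if trace["signal"]["value"] >= trace["boundary"]["value"]:
    recomputed = "freeze_account"
else:
    recomputed = "allow"
assert recomputed == trace["decision"]["action"]
```

The point is that every stage of the flow, not just the model output, appears as an explicit field that can later be stored and audited.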
This sequence represents a decision process. The purpose of the Decision Trace Model is to define this process as a structure that can be stored as history.

The key point here is that the essence of AI is not prediction but decision history. AI produces Signals. However, what matters in society is how those Signals were used to make decisions. To preserve this decision history, a Ledger is required.

However, a critical problem arises here. If decisions themselves do not exist as explicit structures, then Decision Traces cannot be accurately recorded.
AI Systems Without Explicit Decision Descriptions
In many AI systems, decisions appear in forms such as:

- model weights
- code branches
- configuration files
- operational rules
For example:

```python
if score > 0.8:
    freeze_account()
```

The value 0.8 here is not just a number. It simultaneously encodes:

- a threshold
- risk tolerance
- a business judgment
However, from the code alone, we cannot know:

- who decided this threshold
- why it is 0.8
- when it changed
In this case, decisions exist only as side effects of computation. Under this structure, the Decision Trace Model cannot be implemented, because the structure of the decision itself does not exist. What is needed instead is a decision representation architecture.

In the Decision Trace Model, decisions are described through three layers:

- Ontology
- DSL
- Behavior Tree
Ontology — Semantic Boundaries
The first element required to describe decisions is the definition of meaning. Data handled by AI systems are typically continuous numerical values. For example, a fraud detection system may process values such as:

- transaction_amount
- location_distance
- transaction_frequency
- fraud_probability
However, decision-making requires discrete semantic boundaries. For instance, in fraud detection we need categories such as:

- normal transaction
- suspicious transaction
- fraudulent transaction

But the important point is this: these concepts do not naturally exist in the data.
Even the definition of a "fraudulent transaction" may vary between organizations:

```
fraud_probability ≥ 0.9
fraud_probability ≥ 0.8
fraud_probability ≥ 0.7
```

And even with the same Signal, the resulting decision rules can differ:

```
fraud_probability ≥ 0.9 → freeze_account
fraud_probability ≥ 0.8 → freeze_account
```

```
fraud_probability ≥ 0.8  → manual_review
fraud_probability ≥ 0.95 → freeze_account
```

In other words, the boundary of what counts as "fraud" is not automatically determined by the data. It is an organizational risk decision.
Changing this boundary affects:

- the fraud detection rate
- false positives
- customer experience
- operational cost

Thus, what is happening here is not merely data processing. It is a choice of meaning.
Ontology defines these semantic boundaries. Within the Decision Trace Model, Ontology functions as the coordinate system of the decision world. Only after defining

- what counts as fraud
- what counts as suspicious
- what counts as normal

do Signals, Decisions, and Boundaries gain meaning.
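As a minimal sketch, an Ontology for this example can be written as an explicit, ordered table of semantic boundaries. The thresholds below are hypothetical organizational choices, not values derived from the data:

```python
# Hypothetical semantic boundaries: (lower bound, category), highest first.
# These thresholds are an organizational risk decision, not a property of the data.
FRAUD_ONTOLOGY = [
    (0.8, "fraudulent_transaction"),
    (0.5, "suspicious_transaction"),
    (0.0, "normal_transaction"),
]

def classify(fraud_probability):
    """Map a continuous Signal onto a discrete semantic category."""
    for lower_bound, category in FRAUD_ONTOLOGY:
        if fraud_probability >= lower_bound:
            return category
    return "normal_transaction"  # defensive fallback for out-of-range inputs
```

With these boundaries, `classify(0.82)` yields `"fraudulent_transaction"`. Changing the 0.8 entry changes the meaning of "fraud" across the whole system, which is exactly the organizational decision the Ontology makes visible.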
DSL — Decision Conditions
Once semantic boundaries are defined (Ontology), the next step is describing decision conditions. In the previous section, we saw how a boundary such as fraud_probability ≥ 0.8 defines what counts as fraud. However, defining meaning alone does not cause the AI to act. The next step is specifying how decisions should be made based on that meaning.
In many systems, such conditions are written directly in application code:

```python
if fraud_probability >= 0.8:
    freeze_account()
```

This single line implicitly contains:

- risk tolerance
- business decisions
- operational policies
But the code does not reveal:

- who decided this
- why the threshold is 0.8
- when it changed

In other words, the decision is buried inside the code. Under this structure,

- visualization
- auditing
- modification

all become difficult.
Therefore, in the Decision Trace Model, decision conditions are written in a DSL (Domain-Specific Language). For example:

```yaml
rule: freeze_suspected_fraud
when:
  signal: fraud_probability
  op: ">="
  value: 0.8
then:
  action: freeze_account
```

Written this way, decision conditions become a Decision Contract. That means the following elements are explicitly defined:

- which signal is used
- under what condition
- which action is taken
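A minimal evaluator for such a rule might look like the sketch below. The rule is shown after parsing (the YAML loading step is omitted), and the function and variable names are assumptions of this sketch:

```python
import operator

# The DSL rule from above, already parsed into a plain dictionary.
RULE = {
    "rule": "freeze_suspected_fraud",
    "when": {"signal": "fraud_probability", "op": ">=", "value": 0.8},
    "then": {"action": "freeze_account"},
}

# Allowed comparison operators; restricting the set keeps the DSL auditable.
OPS = {">=": operator.ge, ">": operator.gt, "<=": operator.le, "<": operator.lt}

def evaluate(rule, signals):
    """Return the rule's action if its condition holds, otherwise None."""
    cond = rule["when"]
    if OPS[cond["op"]](signals[cond["signal"]], cond["value"]):
        return rule["then"]["action"]
    return None
```

Here `evaluate(RULE, {"fraud_probability": 0.82})` returns `"freeze_account"`, while a value of 0.5 returns `None`. The rule data, not application code, carries the decision, so changing the threshold is a reviewable data change.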
A DSL enables:

- readable decision logic
- auditing of decision changes
- review by non-engineers

For example, if the threshold changes from value: 0.8 to value: 0.85, that is not a mere configuration tweak. It represents a change in risk tolerance. By expressing decisions through a DSL, such changes can be managed at the organizational level.

Thus, the DSL is not merely a configuration file. It functions as a Decision Contract. Through this structure, decisions in AI systems can be managed as organizational assets.
Behavior Tree — Decision Flow
Even when decision conditions (DSL) are defined, another important issue remains: the order of evaluation. Real-world decision-making rarely relies on a single condition. In fraud detection, for example, we might consider:

- fraud probability
- unusually large transaction amounts
- suspicious transaction history
- whether the customer is a VIP

Multiple conditions interact, and the crucial question becomes: which condition should be evaluated first?
For example:

- extremely high fraud probability → freeze the account immediately
- moderate fraud probability → send to manual review

However, in many systems this order is embedded in:

- application code
- service calls
- workflow logic

As a result:

- the overall decision structure is unclear
- exception paths are difficult to trace
- fallback conditions are hidden

In other words, the decision conditions are visible, but the decision process is not.
To solve this, we use Behavior Trees. For example, a fraud detection decision flow may be represented as:

```
Selector
├ HighFraudProbability
│ └ FreezeAccount
└ MediumFraudProbability
  └ ManualReview
```

This tree reads as follows:

- if fraud probability is extremely high → freeze the account
- otherwise, if fraud probability is high → send to manual review
- if neither condition applies → process as a normal transaction
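Selector semantics can be sketched in a few lines of Python. The node names and thresholds mirror the hypothetical tree above and are assumptions, not part of any standard:

```python
# Condition/action leaves: each returns an action string, or None on failure.
def high_fraud_probability(signals):
    return "freeze_account" if signals["fraud_probability"] >= 0.95 else None

def medium_fraud_probability(signals):
    return "manual_review" if signals["fraud_probability"] >= 0.8 else None

def selector(children, signals):
    """Try children in their declared order; the first success wins."""
    for child in children:
        result = child(signals)
        if result is not None:
            return result
    return "process_normally"  # explicit fallback when every child fails

# The declared order of this list IS the decision order.
tree = [high_fraud_probability, medium_fraud_probability]
```

With this structure, `selector(tree, {"fraud_probability": 0.97})` yields `"freeze_account"`, 0.85 yields `"manual_review"`, and 0.3 falls through to `"process_normally"`: both the evaluation order and the fallback are visible in the data, not buried in control flow.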
Behavior Trees represent

- conditions
- execution order
- fallback logic

as explicit structure. As a result,

- decision order
- exception paths
- fallback handling

can be preserved as design assets. Behavior Trees therefore make the structure of decision flows explicit.
The Three-Layer Structure
In the Decision Trace Model, decisions are expressed through three layers:

- Ontology → semantic boundaries
- DSL → decision conditions
- Behavior Tree → execution structure

Through these layers, decisions in AI systems are explicitly represented as meaning, conditions, and execution structure. However, these are not merely conceptual ideas. They are structures that must be embedded within the actual system.
A typical AI decision pipeline looks like this:

```
Event → Signal → Decision → Action
```

In the Decision Trace Model, a Judgment Engine is inserted into this pipeline:

```
Event → Signal → Judgment Engine → Decision → Action
```

Inside the Judgment Engine, the three layers are stacked:

```
Ontology
  ↓
DSL
  ↓
Behavior Tree
```
Their roles are:

- Ontology → interpret the meaning of Signals
- DSL → evaluate decision conditions
- Behavior Tree → control the evaluation order

The Judgment Engine receives Signals as input, interprets their meaning through the Ontology, evaluates conditions through the DSL, controls execution order through Behavior Trees, and finally produces results such as:

- Decision
- Boundary
- Human override
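Put together, a Judgment Engine can be sketched as follows. Every name and threshold here is illustrative; the point is only that each layer is an explicit, inspectable step that feeds a complete trace record:

```python
def interpret(signals):
    """Ontology layer: map the continuous Signal to a semantic category."""
    return "fraud" if signals["fraud_probability"] >= 0.8 else "normal"

def condition_holds(signals):
    """DSL layer: evaluate the decision condition."""
    return signals["fraud_probability"] >= 0.8

def judge(event, signals):
    """Behavior Tree layer: ordered evaluation with an explicit fallback,
    emitting a full Decision Trace record as the result."""
    if interpret(signals) == "fraud" and condition_holds(signals):
        decision, human = "freeze_account", "manual_review"
    else:
        decision, human = "allow", None
    return {
        "event": event,
        "signal": signals,
        "decision": decision,
        "boundary": {"threshold": 0.8},
        "human": human,
    }
```

Because `judge` returns the whole record rather than just an action, the output of the Judgment Engine is already in the shape a Decision Trace needs.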
This entire process is recorded as a Decision Trace:

```
Event
  ↓
Signal
  ↓
Decision
  ↓
Boundary
  ↓
Human
```

When this history is stored in a Ledger, the decisions of AI systems become:

- traceable
- explainable
- auditable
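The append-only property of the Ledger can be sketched with a simple hash chain: each entry commits to the hash of the previous one, so rewriting any past Decision Trace breaks verification. This is an illustrative sketch, not a production design:

```python
import hashlib
import json

ledger = []  # append-only list of {"trace", "prev", "hash"} entries

def _digest(trace, prev_hash):
    # Deterministic serialization plus the previous hash forms the chain link.
    payload = json.dumps(trace, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_trace(trace):
    """Append a Decision Trace, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"trace": trace, "prev": prev_hash,
                   "hash": _digest(trace, prev_hash)})

def verify(entries):
    """Recompute the chain; any tampered entry makes verification fail."""
    prev = "0" * 64
    for entry in entries:
        if entry["prev"] != prev or entry["hash"] != _digest(entry["trace"], prev):
            return False
        prev = entry["hash"]
    return True
```

After appending traces, `verify(ledger)` succeeds; silently editing any stored trace afterwards makes it fail, which is the immutability guarantee the Decision Trace Model relies on.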
Therefore, to realize the Decision Trace Model, decisions must be described using the three-layer structure of Ontology, DSL, and Behavior Tree.

The future of AI will not be defined by bigger models, but by more transparent decision structures.
Conclusion
For a long time, the evolution of AI has been described in terms of:

- model size
- accuracy
- inference speed

However, what truly matters in real-world systems is decision structure. The Decision Trace Model redefines AI systems as decision history systems. And the foundation of that model is a decision representation architecture built on:

- Ontology
- DSL
- Behavior Tree
