In the previous article, I explained a three-layer structure for describing AI decisions:
- Ontology
- DSL (Domain-Specific Language)
- Behavior Tree
With this structure, the decisions made by an AI system can be explicitly represented as:
- Meaning (Ontology)
- Conditions (DSL)
- Execution structure (Behavior Tree)
As a result, it becomes possible to record a Decision Trace in the following form:
- Signal
- Decision
- Boundary
- Human
However, at this point many people ask an important question.
Are all of these created by humans?
If Ontologies, DSLs, and Behavior Trees must all be designed manually, building AI systems would become extremely heavy and difficult.
That concern is valid.
But there is another problem as well.
We cannot leave everything to AI either.
This is where the fundamental difficulty of AI system design appears.
To understand why, we need to look more closely at what AI is actually doing.
AI Approximates the World
AI can approximate the world remarkably well.
With astonishing precision and smoothness, it captures the structure of reality.
Yet we sometimes experience a strong sense of discomfort when interacting with AI systems.
We often feel:
“What it says is correct.
But it doesn’t really understand the meaning.”
This discomfort is not simply due to insufficient performance.
There is a deeper issue.
The problem lies in the difference between what can be approximated and what cannot be approximated in principle.
AI Operates in a Continuous World
Modern AI systems are built on mechanisms such as:
- distribution estimation
- function approximation
- gradient-based optimization
In other words, AI operates through the approximation of continuous spaces.
If two things are similar, they are close.
If they are slightly different, the output changes slightly.
For example, inputs such as 0.49, 0.50, and 0.51 sit next to each other in the input space, and the model's outputs for them shift just as gradually.
Because of this property, AI can generate natural and flexible behavior.
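This smoothness can be sketched with a toy stand-in for a trained model (a sigmoid, purely illustrative):

```python
import math

# A sigmoid as a stand-in for a continuous approximator:
# nearby inputs produce nearby outputs.
def model_score(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

for x in (0.49, 0.50, 0.51):
    print(f"input={x:.2f} -> score={model_score(x):.4f}")

# The outputs differ only slightly, mirroring the small input changes.
assert abs(model_score(0.51) - model_score(0.49)) < 0.01
```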
Technologies such as
- LLMs
- CNNs
- Transformers
operate as approximators in vector space.
But Meaning Is Not Continuous
Here lies the problem.
Meaning is not continuous.
Meaning does not emerge within smooth transitions.
Meaning emerges at discontinuities.
For example, 0.49 and 0.51 are almost identical as numbers, yet they can fall on opposite sides of a boundary such as:
- Pass / Fail
- Allow / Deny
- Safe / Dangerous
- Legal / Illegal
Here we see a jump.
This jump exists outside continuous approximation.
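A minimal sketch of such a jump, assuming an illustrative threshold of 0.50:

```python
# A designed threshold (0.50 is an assumed example value) turns a
# continuous score into a discrete Pass/Fail judgment.
def judge(score: float, threshold: float = 0.50) -> str:
    return "Pass" if score >= threshold else "Fail"

# 0.49 and 0.51 are nearly identical as numbers...
print(judge(0.49))  # Fail
print(judge(0.51))  # Pass
# ...but the meanings assigned to them jump across the boundary.
```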
Why AI Feels “Correct but Wrong”
The more smoothly AI approximates the world,
the more:
- boundaries become blurred
- taboos become diluted
- contextual tension disappears
As a result,
the points where meaning originally emerged—the discontinuities—get smoothed away.
That is why we often feel:
“It is correct, but somehow wrong.”
The Continuous World and the World of Meaning
From this perspective, we can see that AI technology operates across two different worlds.
1. The World of Continuous Approximation
This is the domain where technologies such as:
- LLMs
- CNNs
- Transformers
operate.
It is the world of vector spaces.
In this world, information is represented as:
- similarity
- distance
- probability
2. The World of Meaning
The second world is the world of meaning.
Here we use logical structures such as:
- Ontology
- Semantic Web
- Rules
- DSLs
In this world, decisions are made through boundaries, such as:
- Legal / Illegal
- Allow / Deny
- Safe / Dangerous
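A minimal sketch of a decision made in this world, with an illustrative rule set (the field names and rules are assumptions, not a real DSL):

```python
# Explicit rules over discrete categories, in contrast to similarity
# in vector space. Evaluated top to bottom; first match wins.
RULES = [
    (lambda req: req["region"] == "embargoed", "Deny"),
    (lambda req: req["amount"] > 10_000, "Deny"),
    (lambda req: True, "Allow"),  # default rule
]

def decide(request: dict) -> str:
    for condition, verdict in RULES:
        if condition(request):
            return verdict
    return "Deny"

print(decide({"region": "domestic", "amount": 500}))     # Allow
print(decide({"region": "domestic", "amount": 50_000}))  # Deny
```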
What Exists Between These Two Worlds
Thus, the world of AI contains two layers: the continuous world and the world of meaning.
The fundamental challenge in AI system design is:
how to connect these two worlds.
One important technology that lies between them is
GNN (Graph Neural Network).
GNNs can simultaneously handle:
- learning in vector space
- relational structures represented as graphs
In other words, they combine:
- continuous approximation (neural networks)
- relational structure (graphs)
Because of this property, GNNs have the potential to act as a bridge between the continuous world and the world of meaning.
GNN Learns Relational Structures
Traditional deep learning models learn similarity in feature space.
For example, in:
- image recognition
- natural language processing
models learn to place similar objects close together in vector space.
What is learned is feature similarity.
In contrast, GNNs learn relationships.
A graph consists of:
- nodes (entities)
- edges (relationships)
Consider a fraud detection system.
Nodes might include:
- customers
- transactions
- devices
- locations
- bank accounts
Edges represent relationships such as:
- access from the same device
- suspicious geographic movement
- connections to previously fraudulent accounts
- abnormal transaction paths
GNNs propagate information across this network and learn:
- where anomalous structures appear
- which relationships are important
- which patterns are typical
In other words, GNNs learn not the similarity of data, but the structure of relationships.
In this sense, GNNs can extract structures that are closer to meaning.
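One message-passing step of this kind can be sketched with toy data (the graph, features, and weights below are illustrative; a real GNN learns the weight matrix by gradient descent):

```python
import numpy as np

# Nodes: 0=customer, 1=transaction, 2=device, 3=account
A = np.array([  # adjacency matrix: edges are relationships
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

X = np.random.default_rng(0).normal(size=(4, 3))  # node feature vectors
W = np.random.default_rng(1).normal(size=(3, 3))  # (assumed) learned weights

# One propagation step: each node averages its neighbors' features,
# mixes in its own, and applies a ReLU nonlinearity.
deg = A.sum(axis=1, keepdims=True)
H = np.maximum(0.0, (A @ X) / deg @ W + X @ W)

# Each row of H now encodes information about that node's neighborhood,
# not just its own features.
print(H.shape)  # (4, 3)
```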
But Even GNNs Cannot Decide Meaning
However, there is a crucial point.
GNNs do not determine meaning.
What GNNs can do is:
- discover relational structures
- extract patterns
- detect anomalies
But they cannot determine the boundary of meaning.
For example:
- When should something be considered fraud?
- When should a case be sent to manual review?
- When should an account be frozen?
These decisions involve:
- risk management
- institutional rules
- responsibility
- operational policies
In other words,
the discontinuity of meaning does not exist inside the model.
It exists as a designed boundary.
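A minimal sketch of such a designed boundary, with assumed policy thresholds layered on top of a model's continuous score:

```python
# The model only produces a continuous fraud score; the thresholds and
# actions below are expert-defined policy, not model output.
FREEZE_AT = 0.90   # assumed values, set by risk management
REVIEW_AT = 0.60

def policy(fraud_score: float) -> str:
    if fraud_score >= FREEZE_AT:
        return "freeze_account"
    if fraud_score >= REVIEW_AT:
        return "manual_review"
    return "allow"

print(policy(0.95))  # freeze_account
print(policy(0.70))  # manual_review
print(policy(0.10))  # allow
```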
The Roles of AI and Experts
We can now return to the earlier question.
Are Ontologies, DSLs, and Behavior Trees all created manually?
The answer is:
Half yes, half no.
AI can discover:
- relational structures
- patterns
- anomalies
- clusters
Especially with GNNs, AI can extract structures that are close to meaning.
But deciding where to draw the line is a design decision.
Ultimately, determining:
- what counts as fraud
- when automated decisions should stop
- when control should return to humans
is the responsibility of human experts.
The structure becomes:
AI discovers structure.
Experts define discontinuities.
AI does not replace human judgment.
Instead, it supports the design of decision structures.
The Relationship with the Decision Trace Model
This structure is closely related to the Decision Trace Model.
The Decision Trace Model does not simply record outcomes.
It records the structure of decisions, including:
- which conceptual definitions were used
- which decision conditions were applied
- in what order evaluations were performed
- where execution stopped
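A minimal sketch of such a record (the field names are illustrative assumptions):

```python
from dataclasses import dataclass

# A Decision Trace captures the structure of a decision, not just
# its outcome.
@dataclass
class DecisionTrace:
    ontology_terms: list      # which conceptual definitions were used
    conditions: list          # which decision conditions were applied
    evaluation_order: list    # in what order evaluations were performed
    stopped_at: str           # where execution stopped
    decided_by: str           # e.g. "model" or "human"

trace = DecisionTrace(
    ontology_terms=["Transaction", "HighRiskAccount"],
    conditions=["amount > 10000", "linked_to_flagged_account"],
    evaluation_order=["amount > 10000", "linked_to_flagged_account"],
    stopped_at="linked_to_flagged_account",
    decided_by="human",
)
print(trace.stopped_at)
```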
If AI proposes structures and experts define boundaries,
then that process itself becomes part of the Decision Trace.
Decision Trace is therefore not merely a record of AI decisions.
It is also a record of how decision structures were designed.
Conclusion
AI approximates the world continuously.
GNNs learn relational structures.
But meaning emerges at discontinuities.
Those discontinuities do not arise inside models.
They exist as designed boundaries.
AI discovers structures.
Experts define discontinuities.
This is the form of human–AI collaboration in the era of the Decision Trace Model.
Technical details related to GNNs are explained in the article on Graph Neural Networks.
In addition, AI technologies that deal with meaning include Semantic Web technologies and Ontology-based systems.
Readers interested in these topics are encouraged to refer to those articles as well.
