AI approximates the world remarkably well.
With astonishing precision,
smoothly and seamlessly,
it captures the structure of reality.
And yet, from time to time,
we feel a strong sense of discomfort.
“What it says is correct.
But it does not understand the meaning.”
The source of this discomfort is not a lack of performance.
It is something more fundamental.
It is the difference between
what can be approximated
and what cannot be approximated in principle.
What AI Does: Continuous Approximation
Let us begin with the facts.
At the core of modern AI are:
- distribution estimation
- function approximation
- gradient-based updates
In other words, AI performs
approximation of a continuous world.
Similar inputs end up close together.
If the input changes slightly, the output changes slightly.
0.49
0.50
0.51
These differences are handled
smoothly.
This property is exactly why AI can behave so naturally.
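This smoothness can be seen in a minimal sketch, using the sigmoid function as a stand-in for any smooth model output:

```python
import math

def smooth_score(x: float) -> float:
    """A smooth function: nearby inputs yield nearby outputs."""
    return 1.0 / (1.0 + math.exp(-x))

# A small change in the input produces a small change in the output.
print(round(smooth_score(-0.04), 2))  # 0.49
print(round(smooth_score(0.0), 2))    # 0.5
print(round(smooth_score(0.04), 2))   # 0.51
```

There are no jumps anywhere on this curve; that is the whole point of continuous approximation.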
The World Appears Continuous
The physical world
and much of our data
appear continuous at first glance.
- Temperature
- Speed
- Revenue
- Probability
Because of this, we tend to assume:
“Meaning must also be something that can be approximated continuously.”
But here lies a critical misunderstanding.
Meaning Emerges from Discontinuity
Meaning does not exist within continuity.
Meaning appears at the moment of
a cut.
For example:
0.49
0.51
At some point this difference becomes
- Pass / Fail
- Allowed / Denied
- Safe / Dangerous
- Legal / Illegal
Here we observe a
jump.
This jump exists
outside continuous approximation.
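The jump can be made concrete in a few lines. The 0.50 threshold here is an illustrative choice, not something the numbers themselves dictate:

```python
def cut(x: float, threshold: float = 0.50) -> str:
    # An arbitrarily small difference in x can flip the outcome entirely.
    return "Pass" if x >= threshold else "Fail"

print(cut(0.49))  # Fail
print(cut(0.51))  # Pass
# The jump at 0.50 is not in the numbers; it is a line someone drew.
```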
The Better the Approximation, the More Meaning Disappears
A paradox appears here.
The more smoothly AI approximates, the more:

- boundaries blur
- taboos fade
- contextual tension disappears
As a result,
the points of discontinuity where meaning arises
are filled in and smoothed away.
This is why we sometimes feel:
“It is correct, but it is wrong.”
Meaning Is Where We Draw the Line
Structurally speaking,
meaning can be described as:
- what is included
- what is excluded
- where the boundary is drawn
This is
a decision.
And decisions cannot be directly derived from
- probabilities
- gradients
- optimization.
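One way to see why: the cut itself gives gradient-based optimization nothing to hold on to. A small illustrative check, using a finite difference on a hard threshold:

```python
def decision(x: float, threshold: float) -> float:
    # A hard cut: 1.0 at or above the threshold, 0.0 below it.
    return 1.0 if x >= threshold else 0.0

# Finite-difference "gradient" of the decision with respect to the threshold.
eps = 1e-6
x = 0.7
grad = (decision(x, 0.5 + eps) - decision(x, 0.5 - eps)) / (2 * eps)
print(grad)  # 0.0: almost everywhere, the cut gives optimization nothing to follow
```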
Where Does Meaning Remain?
Meaning does not fully reside
inside the model.
Nor is it completely contained in the data.
So where does it exist?
Meaning remains at
points of logical discontinuity.
For example:
- Do not apply under this condition
- Stop the decision process in this case
- Return this case to human judgment
In other words, meaning appears as
- boundaries
- stop conditions
- exception handling
This is where meaning resides.
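As an illustrative sketch, with all names hypothetical, here is what a boundary, a stop condition, and exception handling can look like when written down explicitly:

```python
class EscalateToHuman(Exception):
    """Raised when a case must leave the automated path."""

def decide(score: float, category: str) -> str:
    # Boundary: below this confidence, do not decide automatically.
    if score < 0.7:
        raise EscalateToHuman("confidence below the designed threshold")
    # Stop condition: this category is excluded from automation by design.
    if category == "medical":
        raise EscalateToHuman("category excluded by design")
    return "approved"

try:
    decide(0.95, "medical")
except EscalateToHuman as reason:
    print(reason)  # category excluded by design
```

None of these lines can be learned from data; each one records a decision.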
Graph Neural Networks (GNN)
If we summarize the discussion so far,
AI technologies can be divided into two worlds.
The World of Continuous Approximation
- LLM
- CNN
- Transformer
These operate in
vector space.
The World of Meaning
Here we see:
- Ontology
- Semantic Web
- Rules
- DSL
These are
logical structures.
Thus the AI landscape contains two layers:
- the continuous world
- the world of meaning.
At this point, an interesting technology appears.
That technology is
Graph Neural Networks (GNN).
GNN Learns Structure
GNNs are a somewhat unusual case.
Most AI models operate on
vectors.
But GNNs operate on
graphs.
A graph consists of:
- nodes (entities)
- edges (relationships)
For example:
Product
├ Category
├ Reviews
└ Purchase history
Patient
├ Symptoms
├ Tests
└ Diagnosis
Design proposal
├ Options
├ Concerns
├ Rejected reasons
└ Stop conditions
All of these represent
relational structures.
GNNs learn by propagating information through these relationships.
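A toy sketch of this propagation, in plain Python with mean aggregation. Real GNNs use learned weight matrices, but the basic pattern, aggregate from neighbors and combine with the node's own state, is the same:

```python
# One round of message passing over a toy graph.
graph = {
    "product":  ["category", "reviews"],
    "category": [],
    "reviews":  [],
}
features = {"product": 1.0, "category": 2.0, "reviews": 4.0}

def propagate(graph, features):
    updated = {}
    for node, neighbors in graph.items():
        msgs = [features[n] for n in neighbors]
        agg = sum(msgs) / len(msgs) if msgs else 0.0
        # Combine the node's own feature with the aggregated neighbor message.
        updated[node] = 0.5 * features[node] + 0.5 * agg
    return updated

print(propagate(graph, features))  # {'product': 2.0, 'category': 1.0, 'reviews': 2.0}
```

After one round, the product node's value already reflects its category and its reviews. Information has flowed along relationships.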
GNN Moves Closer to Meaning
This is an important point.
Meaning often emerges from
relationships.
For example,
the reason a design proposal was rejected may depend on:
- component constraints
- safety standards
- past accidents
- cost
These decisions exist within a
network of relationships.
In other words,
much of meaning exists as
graph structures.
GNNs can learn these structures.
For this reason,
GNNs can be considered
the machine learning method closest to meaning.
But Even GNN Cannot Fully Handle Meaning
However, an important clarification must be made.
GNNs do not understand meaning.
What GNNs learn are
statistics of relationships.
For example,
they can learn that a structure like
A → B → C
frequently appears.
But deciding
“stop here”
is not a statistical pattern.
It is a
designed rule.
In other words,
GNNs can
approximate meaning structures,
but they cannot determine
where meaning should be cut.
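A minimal sketch of this division of labor. The score here is a hypothetical stand-in for a trained GNN's output; the stop set is designed, not learned:

```python
DESIGNED_STOPS = {"past_accident", "safety_violation"}

def decide(case: dict) -> str:
    # Designed rule first: "stop here" is a cut, not a statistic.
    if DESIGNED_STOPS & set(case.get("flags", [])):
        return "stopped_by_rule"
    # Learned part: case["score"] stands in for a GNN's relational score.
    return "accept" if case["score"] >= 0.8 else "reject"

print(decide({"score": 0.95, "flags": ["past_accident"]}))  # stopped_by_rule
print(decide({"score": 0.95, "flags": []}))                 # accept
```

However high the learned score, the designed cut takes precedence; that ordering is itself a design decision.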
The Three-Layer Structure of AI
We can summarize the discussion as a three-layer structure.
Continuous approximation (LLM / Deep Learning)
  ↓
Relational structure (GNN)
  ↓
Semantic discontinuity (Ontology / Rules / Boundary)
In other words:
AI approximates the world.
GNN learns relationships.
Design defines the discontinuities.
What Is Design?
This brings us back to the initial question.
If AI cannot handle meaning,
who writes meaning?
The answer is:
design.
Design is the act of understanding the limits of AI
and defining in advance
the points of discontinuity
where meaning emerges.
For example:
- Do not make decisions below this probability
- Do not automate this type of case
- Return this case to human judgment
These
boundaries
are where meaning resides.
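The three boundaries above can be written down as explicit, inspectable rules. A sketch, with hypothetical field names:

```python
# Each rule is a designed discontinuity: a condition and the action it forces.
DESIGN_RULES = [
    (lambda c: c["probability"] < 0.6,       "no_decision"),
    (lambda c: c["case_type"] == "excluded", "no_automation"),
    (lambda c: c["impact"] == "high",        "human_judgment"),
]

def apply_design(case: dict) -> str:
    for condition, action in DESIGN_RULES:
        if condition(case):
            return action
    return "automated"

print(apply_design({"probability": 0.5, "case_type": "normal", "impact": "low"}))
# no_decision
```

Because the rules live outside any model, they can be read, audited, and changed without retraining anything.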
Conclusion
AI can approximate the world continuously.
GNN can learn relational structures.
But meaning emerges from
discontinuity.
And that discontinuity
does not arise inside the model.
It exists as
designed boundaries.
AI reflects the world.
GNN learns relationships.
But the thing that creates meaning
is the human who decides
where to cut.
The real question in the age of AI is not:
“Can we build AI that understands meaning?”
The real question is:
Who is writing the discontinuities where meaning emerges — and where are they written?
Technical details related to GNN can be found in the article “Graph Neural Networks.”
For AI technologies that handle meaning structures, readers may also consult resources on Semantic Web technologies and Ontology engineering.

Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
