Meta-learning is often described as
“learning how to learn.”
It enables models to adapt quickly to new tasks from very little data.
At first glance, it almost seems as if AI has begun to understand structure itself.
But there is a deeper question we should ask:
Can meta-learning learn “types”?
What Meta-Learning Actually Does
In ordinary machine learning:
Data → Pattern learning
In meta-learning:
Tasks → Learning how to adapt to tasks
In other words, meta-learning optimizes things like:
- how gradients should update
- the initial values of parameters
- the speed of adaptation
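To make this concrete, here is a minimal, first-order MAML-style sketch (an assumed toy setup, not any particular paper's implementation): each task is a 1-D linear regression with a different slope, and the meta-learner optimizes the *initialization* so that a single inner gradient step adapts well to any sampled task.

```python
import numpy as np

# Toy first-order MAML sketch (illustrative assumptions throughout):
# each task is y = a * x with a task-specific slope a; we meta-learn
# the initial parameter theta0 so one inner step adapts well.

rng = np.random.default_rng(0)
theta0 = 0.0                 # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01

def loss_grad(theta, a, x):
    # squared error for task slope `a`; returns (loss, dloss/dtheta)
    err = theta * x - a * x
    return np.mean(err ** 2), np.mean(2 * err * x)

for _ in range(2000):                       # outer (meta) loop
    a = rng.uniform(-2, 2)                  # sample a task
    x_support = rng.uniform(-1, 1, 5)
    x_query = rng.uniform(-1, 1, 5)
    _, g = loss_grad(theta0, a, x_support)
    theta_task = theta0 - inner_lr * g      # inner adaptation step
    # first-order approximation: update theta0 with the post-adaptation
    # query gradient, ignoring second-order terms
    _, gq = loss_grad(theta_task, a, x_query)
    theta0 -= outer_lr * gq

print(round(theta0, 2))  # settles near the center of the task family
```

Note what is being optimized: not the solution to any one task, but where adaptation should start, exactly the "adaptation structure within a continuous space" described below.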
Meta-learning is essentially learning an adaptation structure within a continuous space.
This means that:
- similar tasks map to similar representations
- different tasks map to slightly distant representations
Within the parameter space, the model learns how to adapt itself.
This is extremely powerful.
Even for unseen tasks, the model can adapt quickly using only a small amount of data.
However, there is an implicit assumption here.
It assumes that
the world changes continuously.
Similar things are close together.
Different things are further apart.
Changes occur smoothly.
Because of this, models can adapt simply by following gradients.
But there are domains of real-world decision-making where this assumption does not hold.
In those domains,
a small difference can produce
an entirely different meaning.
For example:
- a person vs. a corporation
- an accident vs. a near miss
- illegal vs. legal
These are concepts that cannot be approached continuously.
Here, what matters is not distance,
but distinction.
In other words,
what we need is not a continuous space,
but a discrete structure.
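A tiny illustration of this discreteness (the 0.05 threshold is an assumed figure, purely for the example): two numerically adjacent values fall on opposite sides of a legal boundary, and the category flips entirely rather than changing by degree.

```python
from enum import Enum

# Hedged sketch: legality is a discrete distinction, not a distance.
# The threshold is set by law, not learned from data (value assumed).

class Legality(Enum):
    LEGAL = "legal"
    ILLEGAL = "illegal"

LIMIT = 0.05  # statutory boundary (illustrative assumption)

def classify(value: float) -> Legality:
    # 0.049 and 0.051 are numerically close but semantically disjoint:
    # crossing the boundary changes the category, not the degree
    return Legality.LEGAL if value < LIMIT else Legality.ILLEGAL

print(classify(0.049).value, classify(0.051).value)  # legal illegal
```

No gradient connects the two outputs: the distinction, not the distance, carries the meaning.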
Ontology as a Structure for Discrete Meaning
This is where ontology appears.
An ontology is a structure that organizes the world through:
- concepts
- relationships
- distinctions
It is a structure of meaning.
The crucial point is that an ontology does not deal with continuous quantities.
What an ontology deals with is boundaries of meaning.
For example:
- What counts as a customer?
- What counts as fraud?
- What counts as an accident?
- What counts as a contract violation?
These are not questions of probability like 0.63 or 0.82.
They are questions such as:
- Where does a customer begin?
- Where does an accident begin?
In other words,
these are questions about conceptual boundaries.
Ontology therefore represents distinctions, not distances.
In this space,
something is not “slightly different.”
It becomes something else entirely.
For that reason,
ontology is not a continuous structure.
It is a discrete structure.
Ontology is a structure that expresses the breaks in meaning that exist in the world.
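The three ingredients named above can be sketched in a few lines (all names here are illustrative, not drawn from any standard ontology language): concepts are declared, relations are asserted rather than inferred, and distinctions are explicit disjointness claims with no continuous path between them.

```python
# Minimal sketch of an ontology as explicit concepts, relations,
# and distinctions (names are hypothetical, for illustration only).

concepts = {"Person", "Corporation", "Customer", "Contract"}

# relations are declared, not estimated statistically
relations = {
    ("Customer", "is_a", "Person"),
    ("Customer", "party_to", "Contract"),
}

# distinctions: pairs declared disjoint -- a break in meaning
disjoint = {frozenset({"Person", "Corporation"})}

def is_distinct(a: str, b: str) -> bool:
    # a boundary of meaning is yes/no, never a probability like 0.63
    return frozenset({a, b}) in disjoint

print(is_distinct("Person", "Corporation"))  # True
```

The point of the sketch is what is absent: there is no metric, no score, no gradient. Membership and disjointness are declared.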
The Misalignment Between Meta-Learning and Ontology
Meta-learning improves:
- speed of adaptation
- efficiency of pattern extraction
- generalization within continuous spaces
In other words, it learns how to learn better within an already defined representational space.
Even with little data, it can quickly adapt.
Similar tasks cluster together, while different tasks separate.
But there is an important limitation.
Meta-learning operates only within
an already defined world.
Ontology design performs a completely different task.
It decides:
- where to cut the world
- what distinctions to make
- what to treat as the same
In other words,
ontology design defines the meaning structure of the world itself.
This is the fundamental difference.
Meta-learning can handle the world
after it has been partitioned.
But
where the world should be partitioned
is not something learning itself determines.
Meta-learning learns adaptation.
Ontology determines distinction.
They may both appear to belong to “intelligence,”
but they operate at entirely different layers.
What Few-Shot Learning Reveals
In low-data environments,
meta-learning is indeed effective.
Even with few samples, it can adapt by leveraging prior representations.
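A hedged sketch of what this adaptation looks like, in the style of prototypical networks (the 2-D points stand in for a learned embedding; the labels and values are invented for illustration): class prototypes are the mean of a handful of support embeddings, and a query is assigned to the nearest prototype.

```python
import numpy as np

# Few-shot classification sketch, prototypical-network style.
# Synthetic 2-D "embeddings" stand in for a learned representation.

support = {
    "fraud":  np.array([[0.9, 0.8], [1.1, 1.0], [1.0, 0.9]]),    # 3 shots
    "normal": np.array([[-1.0, -0.9], [-0.8, -1.1], [-1.1, -1.0]]),
}
prototypes = {label: pts.mean(axis=0) for label, pts in support.items()}

def classify(query: np.ndarray) -> str:
    # nearest-prototype rule: adaptation from only a few examples,
    # but only WITHIN a label set that someone already defined
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(np.array([0.95, 1.0])))  # fraud
```

Notice where the label set comes from: the model adapts within `{"fraud", "normal"}`, but it never asked what "fraud" means.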
But there are still questions it cannot answer.
For example:
- What is fraud?
- What counts as a contract violation?
- Where does an accident begin?
- How should risks be categorized?
No matter how powerful meta-learning becomes,
these questions cannot be answered automatically.
Because they are not questions of pattern recognition.
They are questions of semantic boundaries.
When data is scarce, models cannot discover those boundaries.
But even with abundant data,
the fundamental problem remains.
Because these boundaries are not determined statistically.
They are determined through:
- law
- contracts
- institutional rules
- social agreements
In other words,
they are social and institutional choices.
The problem is not that there is too little data.
Few-shot scenarios simply make one fact more visible:
Semantic boundaries are not discovered through learning.
Meta-Learning Learns Inside the Type
Ontology defines boundaries of meaning.
But where do those boundaries appear in implementation?
In practice, they become things like:
- categories
- labels
- rules
- schemas
- branching conditions for responsibility and authority
These boundaries become types.
Here, “type” does not refer only to programming language types.
In real systems, a type is
a framework that fixes how the world has been partitioned in an operational form.
For example, distinctions like:
- customer
- fraud
- accident
- contract violation
do not remain abstract concepts.
They become:
- database fields
- UI selection options
- notification triggers
- decision branches
- responsibility assignments
In other words,
a boundary ultimately becomes an operational type.
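A small sketch of that chain (all names hypothetical): an ontological boundary frozen into operational form becomes a field type, a decision branch, and an assignment of responsibility.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch: "what counts as fraud?" answered once, then
# frozen into an operational type. All names are hypothetical.

class CaseType(Enum):          # the boundary, as a database/UI type
    ACCIDENT = "accident"
    FRAUD = "fraud"

@dataclass
class Case:
    case_id: str
    case_type: CaseType        # a schema field, not a probability

def route(case: Case) -> str:
    # the decision branch and the responsibility assignment both
    # follow mechanically from the type
    if case.case_type is CaseType.FRAUD:
        return "escalate_to_compliance"
    return "handle_in_claims"

print(route(Case("C-1", CaseType.FRAUD)))  # escalate_to_compliance
```

The enum is doing ontological work: whoever wrote it decided where "accident" ends and "fraud" begins, and the routing logic inherits that decision.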
With this in mind, an important insight appears.
Meta-learning learns adaptation inside the type.
It can optimize:
- learning within a defined type
- compression of task structure
- fast adaptation with limited data
But meta-learning always operates within
a predefined framework.
Yet real-world decision-making involves a deeper question:
- Which categories should exist?
- Where should meaning be divided?
- What should be considered identical?
- What should be distinguished?
This is the question of
how the type itself should be defined.
Meta-learning can adapt inside types.
But it cannot determine types themselves in a responsible way.
Because a type is not merely a data structure.
A type declares:
- who holds responsibility
- which discontinuities are preserved
- what distinctions are ignored
A type is a declaration of how the world is organized.
For that reason,
this is not a problem of learning.
It is a problem of design.
The Illusion of the Foundation Model Era
Modern foundation models are astonishingly powerful.
They can classify with zero-shot prompts.
They adapt with only a few examples.
With just a handful of prompts,
they can perform entirely new tasks.
Seeing this, many people conclude:
Perhaps ontology design is no longer necessary.
Maybe categories do not need to be predefined.
Maybe labels do not need strict definitions.
Perhaps the model can infer everything from context.
But in reality, the opposite is happening.
The more powerful the model becomes,
the more implicit classifications,
implicit values,
and implicit boundaries
become embedded inside the model.
And these are:
- not explicitly defined
- not owned by anyone
- not observable from the outside
In other words,
boundaries that should have been explicitly defined as ontology
become hidden inside the model.
The problem is that we can no longer explain:
- where the boundary is
- why it exists
- who decided it
The type has not disappeared.
It has simply become invisible.
And the stronger meta-learning becomes,
the stronger this effect becomes.
Because models can perform classification and reasoning
using only internal representations.
The result is clear:
Powerful meta-learning produces powerful “type-less systems.”
Boundaries still exist,
but they are written nowhere.
Ontology Design
As foundation models become more powerful,
boundaries sink deeper into the model.
The model can classify.
It can reason.
But we cannot see
which boundary the classification depends on.
The boundary has not disappeared.
It has become implicit.
But in real society,
boundaries cannot remain implicit.
For example:
- What counts as fraud?
- What counts as a contract violation?
- What counts as an accident?
These distinctions must be defined explicitly.
Because they are tied to:
- legal responsibility
- organizational decisions
- institutional systems
This is where ontology design becomes necessary.
Ontology design means:
- fixing conceptual boundaries
- making semantic discontinuities explicit
- taking responsibility for classification
This is not merely data organization.
It defines
how the world should be understood and operated through distinctions.
Ontology design is therefore
the act of writing semantic boundaries.
Importantly,
this role is not Human-in-the-Loop.
Human-in-the-Loop means humans verify model outputs.
Ontology design is something different.
Humans are not at the end of the decision process.
Humans become
the authors of the decision structure.
This is
Human-as-Author.
The Proper Division of Labor
Meta-learning and ontology are not competing ideas.
They operate at different layers.
Ontology defines the structure of meaning.
It determines where the world should be divided.
Meta-learning performs adaptation within that structure.
The proper architecture is this:
First, humans write the ontology.
Conceptual boundaries are defined.
These boundaries become types, expressed through DSLs or schemas.
Then meta-learning adapts inside those types.
Patterns are learned.
Exceptions are handled.
Generalization becomes possible.
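The architecture just described can be sketched as two layers (all names and thresholds are illustrative): an authored type layer, and a learned scorer whose continuous output is forced through explicit, owned boundaries before it touches any decision.

```python
# Sketch of the division of labor: humans author the types; a learned
# model only ever produces values valid under them. Names/thresholds
# are illustrative assumptions, not a real system.

RISK_LEVELS = ("low", "medium", "high")    # authored boundary, not learned

def model_score(features: dict) -> float:
    # stand-in for any learned or meta-learned scorer: continuous output
    return min(1.0, 0.1 * features.get("late_payments", 0))

def classify_risk(features: dict) -> str:
    # the continuous score is cut by explicit, human-owned boundaries
    score = model_score(features)
    if score < 0.3:
        return "low"
    if score < 0.7:
        return "medium"
    return "high"

result = classify_risk({"late_payments": 5})
assert result in RISK_LEVELS               # learning stays inside the types
print(result)
```

Swapping in a better scorer changes the quality of adaptation; moving the 0.3 and 0.7 cut points changes the meaning of "risk". The two changes belong to different layers, and different owners.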
In short:
AI handles continuity.
Ontology defines discontinuity.
AI expands the world.
Ontology cuts the world.
Without this division,
all distinctions become blurred.
All decisions become probabilities.
And eventually,
the world dissolves into probability.
The Danger of Type-less Meta-Learning
What happens if we rely only on meta-learning without types?
At first glance, the system seems flexible.
It adapts to context.
It adjusts its classifications dynamically.
But underneath, something else happens.
- classification axes become context-dependent
- categories begin to drift
- identical inputs produce different meanings
- explanations become entirely post-hoc
In other words,
there is no fixed basis for judgment.
This is not advanced adaptation.
It is
the collapse of boundaries.
Without stable conceptual distinctions,
the model invents new partitions for each situation.
As a result,
classification loses consistency
and decisions cannot be reproduced.
Conclusion
Meta-learning improves the efficiency of learning.
It enables rapid adaptation and generalization.
But ontology deals with a completely different problem.
Ontology defines:
- conceptual distinctions
- boundaries of meaning
- how the world should be partitioned
Meta-learning does not learn types.
It only accelerates adaptation inside types.
Therefore,
in the era of foundation models,
what we need is not larger models
or stronger learning.
What we need is
more explicit ontology design.
AI handles continuity.
Ontology defines discontinuity.
AI expands the world.
Ontology cuts the world.
Only when this division of labor exists
can learning have meaning,
decisions have responsibility,
and systems function in society.
Without explicit boundaries,
all distinctions dissolve into probability.
And eventually,
the world loses meaning and melts into statistics.
AI approximates the world.
But deciding how the world should be divided
remains a human responsibility.