AI knows an enormous amount.
Wider than an encyclopedia.
Faster than a specialist.
And yet, at a certain moment, we feel:
“This AI has no common sense.”
This is not a lack of capability.
It is a structural issue.
What Happens When We Feel “It Has No Common Sense”?
When AI output feels “strange,”
it is rarely because:
- the grammar is broken
- the facts are incorrect
More often, it is because:
- it says something that should not be said
- it ignores a sequence that should not be violated
- it steps over an implicit assumption
In other words,
a boundary has been crossed.
Common Sense Is Not Knowledge
The word “common sense” is vague.
But we must separate it from knowledge.
“Common sense = knowing many things.”
“Common sense = knowing facts.”
Both are misunderstandings.
Common sense is:
Knowing what not to do.
Common Sense Is Not Continuous — It Is Made of Discontinuities
AI excels in continuity.
Similarity.
Proximity.
Smooth transitions.
But common sense draws cut lines within continuity.
Do not cross this point.
Do not explain that.
Not that topic, not now.
These sudden drops — these sharp edges —
are the essence of common sense.
Common Sense = A Collection of Discontinuities
Structurally speaking:
Common sense is
the collection of countless discontinuities
drawn across the world.
They are:
- not logically deduced
- not fully written down
- dependent on place and context
Yet when broken,
they produce a powerful sense of discomfort.
Why AI Cannot Learn Discontinuities
AI learns patterns:
- frequency
- similarity
- co-occurrence
But discontinuities are composed of:
- what does not happen
- what is not said
- what is deliberately avoided
In other words:
They rarely appear as data.
AI can learn the density of the world.
It cannot learn the cuts in the world.
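A minimal sketch of why the cuts rarely appear as data. The corpus, topics, and the claim about which pair is "avoided" are all hypothetical illustrations, not real training data: a pair that is deliberately avoided and a pair that is merely unobserved both show up as the same zero count.

```python
from collections import Counter

# Toy corpus of (verb, topic) pairs. Assume, for illustration, that
# ("ask", "salary") is socially avoided, while ("ask", "weather")
# simply happens not to occur here by chance.
corpus = [
    ("ask", "name"), ("ask", "name"),
    ("ask", "age"), ("ask", "directions"),
]

counts = Counter(corpus)

# Both absent pairs have count 0: frequency data cannot distinguish
# "deliberately avoided" from "merely unobserved".
print(counts[("ask", "salary")])   # 0
print(counts[("ask", "weather")])  # 0
```

The density of the corpus is fully recoverable from `counts`; the cut that separates the two kinds of absence is not in the data at all.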
Common Sense Is Ontology
Let us shift perspective.
If we treat common sense as “rules” or “manners,”
we will inevitably fail.
Common sense is:
An understanding of how the world is segmented.
That is ontology.
What counts as inside.
What counts as outside.
What is related.
What is irrelevant.
This segmentation itself is common sense.
Ontology Is a Premise of Judgment — Not Judgment Itself
This is crucial.
Evaluation functions, probabilities, optimization —
all of them operate only on top of an ontology of common sense.
If the ontology shifts,
a “correct” judgment
can appear wildly inappropriate.
When AI says something strange,
often it is not the decision that is wrong —
it is the prior segmentation of the world.
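The point that the evaluation function sits on top of the segmentation can be sketched concretely. The `score` function, topic sets, and ontologies below are all hypothetical: the judgment rule never changes, yet the same reply is scored differently once the prior segmentation of "what belongs to this conversation" shifts.

```python
def score(reply_topics, allowed_topics):
    # The evaluation function itself is fixed:
    # reward topical relevance, nothing else.
    return sum(1 for t in reply_topics if t in allowed_topics)

reply = {"price", "illness", "schedule"}

# Ontology A: a casual chat segments "illness" as inside the topic.
ontology_a = {"price", "illness", "schedule", "weather"}
# Ontology B: a sales call segments "illness" as outside it.
ontology_b = {"price", "schedule", "weather"}

print(score(reply, ontology_a))  # 3 -- judged fully appropriate
print(score(reply, ontology_b))  # 2 -- same reply, same function, now partly off-limits
```

Nothing about the decision procedure was wrong in either case; only the world-segmentation it was handed differed.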
Common Sense Can Be Explained After the Fact, But Not Fully Written in Advance
This is the difficulty.
When broken, it can be explained.
But it cannot be exhaustively specified beforehand.
That is why people say:
“That’s just common sense.”
This is not laziness.
It is structurally inevitable.
What Happens If We Try to Encode Common Sense Into AI?
If we attempt to formalize common sense as rules:
- exceptions explode
- context-dependence becomes unsolvable
- judgment becomes rigid
We end up with:
“Common sense rules” that have lost common sense.
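A sketch of the exception explosion, using an invented rule ("do not mention salary") and invented contexts. Each real situation forces another branch, and the branches soon need branches of their own; the base rule becomes almost invisible.

```python
# Hypothetical attempt to formalize one piece of common sense as code.
def may_mention_salary(setting, speaker, relationship):
    if setting == "hr_interview":
        return True                    # exception 1: HR may ask
    if setting == "negotiation" and speaker == "candidate":
        return True                    # exception 2: but only one side
    if relationship == "close_friend" and setting == "private":
        return True                    # exception 3: and so on, without end
    return False                       # the original "rule", buried at the bottom

print(may_mention_salary("hr_interview", "recruiter", "stranger"))  # True
print(may_mention_salary("dinner_party", "guest", "stranger"))      # False
```

Every branch is itself context-dependent, so the list never converges; this is the rigidity the text describes.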
Common Sense Is Not Something to Embed in AI
This is the conclusion.
Common sense is:
- not something to give to AI
- not something to train into AI
It is a foundational structure
that humans must continue to bear.
What AI can do is:
- indicate where common sense may be violated
- warn when it is approaching a boundary
That is all.
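The limited role assigned to AI here can be sketched as a boundary check. The sensitive-topic set and the `check` function are assumptions for illustration; the essential point is that the boundary list is supplied by humans, not learned by the system.

```python
# Boundaries drawn by humans; the system only flags proximity to them.
SENSITIVE = {"salary", "illness", "religion"}

def check(draft_topics):
    hits = SENSITIVE & set(draft_topics)
    if hits:
        return "warning: approaching a boundary (" + ", ".join(sorted(hits)) + ")"
    return "ok"

print(check(["weather", "schedule"]))   # passes
print(check(["schedule", "salary"]))    # flagged
```

The system warns; the decision of where the cut lines sit, and what to do at them, stays with the humans who wrote `SENSITIVE`.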
Summary
- Common sense is not knowledge.
- Common sense is a collection of discontinuities.
- Discontinuities rarely appear as data.
- Common sense is ontology itself.
- AI does not possess a prior world-segmentation.
AI can become intelligent.
But the act of deciding
how the world is to be divided —
the assumption structure itself —
can only be borne by humans.
For technical approaches to ontology and knowledge,
please refer to discussions under
“Ontology Engineering” and “Knowledge Information Processing Technologies.”