In yesterday’s article,
I started from the premise that judgment is not singular,
and discussed a multi-agent design that does not erase discontinuities,
but treats them explicitly.
Sales, legal, operations, management.
When roles differ, judgment criteria differ.
This is not noise to be adjusted away,
but a structural discontinuity that inherently exists.
Multi-agent design does not attempt to integrate these discontinuities.
On the contrary, it leaves them split as they are.
This naturally leads to the next question.
How can such discontinuities be held fixed inside a system?
AI can only handle computation.
In approximating a continuous and smooth world,
it is almost universally powerful.
However,
the moment a judgment switches,
the boundary where meaning reverses,
the line that must not be crossed—
such discontinuities cannot be preserved by computation alone.
So how should they be incorporated into a system?
One effective answer is
ontologies and DSLs (domain-specific languages).
Why logic must be moved outside the model
In many AI applications,
decision criteria themselves are pushed into the model.
Guided through prompts
Learned as weights
Embedded into scores
But this approach has a decisive weakness.
After the fact, it becomes impossible to tell where the judgment switched.
Models are smooth by nature.
Boundaries inevitably blur.
As a result, we get:
- decisions that cannot be explained
- decision-making that cannot be reproduced
- systems where responsibility cannot be assumed
What is needed instead is this:
Fixing the structure of judgment outside computation.
Ontology as a “map of meaning”
The word ontology may sound difficult,
but its essence is simple.
It is an agreement on
how the world is divided into concepts and relationships.
What counts as a “target”
What is a “condition”
What is a “constraint”
What is “prohibited,” and what is an “exception”
These are decided by humans in advance.
The key point is that ontology is not used for inference.
It is used to explicitly mark discontinuities.
Up to here, continuity is allowed.
Beyond this point, meaning changes.
That line is preserved in language.
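
To make this concrete, here is a minimal sketch of such an agreement, written as plain code.
The domain (a discount-approval case), the term names, and the Concept/Term classes are assumptions for illustration, not an existing library.

```python
from dataclasses import dataclass
from enum import Enum


class Concept(Enum):
    TARGET = "target"            # what the judgment is about
    CONDITION = "condition"      # when a rule applies
    CONSTRAINT = "constraint"    # what bounds the judgment
    PROHIBITION = "prohibition"  # what must never happen
    EXCEPTION = "exception"      # where the normal rule is suspended


@dataclass(frozen=True)
class Term:
    name: str
    concept: Concept
    note: str  # the human agreement, in plain language


# The ontology is just the agreed set of terms and the boundaries they mark.
# It is written by humans in advance, not inferred by a model.
ONTOLOGY = [
    Term("customer_order", Concept.TARGET, "the unit being judged"),
    Term("order_value_over_10k", Concept.CONDITION, "meaning changes above this line"),
    Term("discount_rate", Concept.CONSTRAINT, "continuous below 30%, prohibited above"),
    Term("discount_over_30pct", Concept.PROHIBITION, "never approved by computation alone"),
    Term("contract_renewal", Concept.EXCEPTION, "handled by humans, not the model"),
]
```

Nothing here is inferred.
Every line is a human decision, written down so the boundary can be pointed at later.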
DSL as a language for freezing judgment
If ontology is a map of meaning,
DSL is the grammar that fixes judgment on that map.
The role of a DSL is clear:
- write conditions
- write priorities
- write prohibitions
- write exceptions
In other words,
“If X, then you must do Y / must not do Z”
is written without ambiguity.
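
As one possible sketch, assuming the same discount-approval case as above, such a rule can be written as plain data.
The field names condition, must, must_not, and priority are illustrative, not an existing DSL.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Rule:
    condition: str           # X: when this holds
    must: Optional[str]      # Y: the action that is then required
    must_not: Optional[str]  # Z: the action that is then forbidden
    priority: int            # lower value wins when rules conflict


RULES = [
    # A prohibition: no score can override it.
    Rule(condition="discount_rate > 0.30",
         must=None, must_not="auto_approve", priority=0),
    # A hand-back condition: the decision returns to a human.
    Rule(condition="order_value > 10_000",
         must="escalate_to_human", must_not=None, priority=1),
]
```

The grammar stays visible.
Conflicts are resolved by priority, not by interpretation.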
What matters here is this:
A DSL does not need to be smart.
In fact, it must not be smart.
It does not interpret.
It does not supplement.
It does not infer intent.
It only enforces what is written.
Only then does a space emerge
where AI is not allowed to fabricate meaning.
A healthy division of labor between computation and logic
At this point, the roles become clear.
What computation (AI) does:
- measure similarity
- produce scores
- estimate probabilities
- gather candidates broadly
What logic (DSL) does:
- cut boundaries
- determine priority
- declare prohibitions
- define conditions for returning decisions to humans
Neither is above the other.
Computation expands. Logic cuts.
Only through this combination
can a continuous world and a discontinuous world coexist.
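
A minimal sketch of that combination, continuing the same hypothetical case: the model only ranks candidates, and the rule layer cuts, prohibits, or hands back.
All names and thresholds are illustrative assumptions.

```python
def model_score(candidate: dict) -> float:
    # Computation: any smooth signal (similarity, probability, score).
    return candidate.get("score", 0.0)


def apply_logic(candidate: dict) -> str:
    # Logic: written boundaries, checked in a fixed order, never interpreted.
    if candidate["discount_rate"] > 0.30:
        return "rejected (prohibition)"
    if candidate["order_value"] > 10_000:
        return "returned to human"
    return "accepted"


def decide(candidates: list) -> list:
    # Computation expands: gather and rank candidates broadly.
    ranked = sorted(candidates, key=model_score, reverse=True)
    # Logic cuts: every candidate passes through the same written rules.
    return [(c, apply_logic(c)) for c in ranked]


decisions = decide([
    {"score": 0.92, "discount_rate": 0.35, "order_value": 5_000},
    {"score": 0.60, "discount_rate": 0.10, "order_value": 2_000},
])
# The highest-scoring candidate is still rejected:
# a score cannot cross a declared prohibition.
```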
Natural connection to multi-agent systems
Here, we return to yesterday’s discussion.
Multi-agent systems do not seek consensus.
Each agent carries a different judgment axis.
So where are those judgment axes written?
The answer is clear.
Each agent carries a different DSL and constraint set.
A constraint agent carries “must-not-break” DSLs
An efficiency agent carries “allowed optimization” DSLs
A review agent carries “stop conditions” DSLs
When they diverge, that is not a failure.
It simply means
a conflict between DSLs has been made visible.
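
A sketch of what that looks like, again with hypothetical agents and thresholds: each agent applies only its own written rules, and the verdicts are reported side by side, never merged into a consensus.

```python
def constraint_agent(order: dict) -> str:
    # Carries the "must-not-break" rules.
    return "block" if order["discount_rate"] > 0.30 else "pass"


def efficiency_agent(order: dict) -> str:
    # Carries the "allowed optimization" rules.
    return "approve" if order["margin"] > 0.05 else "pass"


def review_agent(order: dict) -> str:
    # Carries the "stop conditions".
    return "stop" if order["order_value"] > 10_000 else "pass"


def collect_verdicts(order: dict) -> dict:
    # No consensus step: conflicting verdicts are the output, not an error.
    return {
        "constraint": constraint_agent(order),
        "efficiency": efficiency_agent(order),
        "review": review_agent(order),
    }


verdicts = collect_verdicts(
    {"discount_rate": 0.35, "margin": 0.12, "order_value": 8_000}
)
# {'constraint': 'block', 'efficiency': 'approve', 'review': 'pass'}
# The conflict between DSLs is visible in the output itself.
```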
A small conclusion
AI can only perform smooth computation.
That is precisely why
discontinuities must not be placed inside computation.
Meaning is divided by ontologies.
Judgment is frozen by DSLs.
Only then does AI become
a component that is safe to use.
AI does not think.
But it can be made into a device
that does not betray what has been thought.
That is the true reason
for introducing logic.
Related concrete designs (design notes)
The design discussed in this article—
fixing discontinuities outside computation and preserving judgment structures—
is organized as concrete design principles and structures
in the following GitHub repositories.
None of them represent a final form.
Their purpose is to preserve, in a readable structure,
- where meaning switches
- which judgments are fixed as DSLs
- where decisions are returned to humans
- ai-decision-system-map ─ A system-level map that views judgment, data, inference, logic, and visualization as a single structure (a starting point for identifying where discontinuities appear)
- decision-pipeline-reference ─ A pipeline design that fixes ontologies and DSLs as contracts, preventing judgments from dissolving into smooth computation
- multi-agent-orchestration-design ─ An orchestration design where agents carry different judgment axes (DSLs), explicitly assuming that the system may stop without consensus
- time-aware-data-for-ai ─ A bitemporal data design that preserves when a judgment was valid (fixing discontinuities along the time axis)
These are not designs for building “smart AI.”
They are designs for
preserving judgment boundaries,
recording divergence as-is,
and keeping humans able to assume responsibility.
Only under that premise
does AI become something that can truly be used.