Today, I will talk about something that sounds a bit like science fiction.
But this is not a story about the distant future.
It is an extension of a future that has already begun.
■ Multi-agent systems are already on the rise
We are now entering the next phase:
From a single AI
to multiple AIs (multi-agent systems)
This shift has already started in real-world products.
For example:
- Anthropic Claude (Code / Computer Use)
  → AI operates tools and executes tasks by decomposing them
- OpenAI Assistants / function calling
  → Multiple tools and APIs are orchestrated dynamically
- Microsoft Copilot
  → Document understanding, search, generation, and execution are integrated
- Google Gemini + Workspace
  → Context understanding, reasoning, and execution are connected across systems
These appear to be “a single AI,” but internally:
- Intent interpretation
- Context construction
- Risk and constraint evaluation
- Execution
are separate roles working together.
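As a rough sketch (the function names here are hypothetical, not any vendor's actual architecture), that separation might look like this:

```python
# Hypothetical sketch of the role separation inside a "single" AI product.
# These stages and names are assumptions, not any vendor's real design.

def interpret_intent(request: str) -> str:
    return f"intent({request})"

def build_context(intent: str) -> str:
    return f"context({intent})"

def evaluate_constraints(context: str) -> str:
    return f"checked({context})"

def execute(plan: str) -> str:
    return f"executed({plan})"

result = execute(evaluate_constraints(build_context(interpret_intent("book a flight"))))
print(result)  # one visible "AI", four separated roles
```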
And what happens next is simple:
These agents begin to connect with each other.
■ What happens when agents start talking to each other?
The key point is:
They are not reacting to reality itself,
but to other agents’ outputs.
Compare this with a system that senses the world directly:
A sensor observes reality directly
→ This is a reaction to reality
But in multi-agent systems:
- Agent A interprets
- Agent B reacts to A’s interpretation
- Agent C reacts to B’s interpretation
So the structure becomes:
Reality → Interpretation → Interpretation → Interpretation → …
■ Visualized structure
(Reality)
↓
Agent A (interpretation)
↓
Agent B (interpretation)
↓
Agent C (interpretation)
↓
Agent A (reinterpretation)
What is looping here is not reality, but interpretation.
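A minimal sketch of this wiring (the agents are hypothetical stubs, not a real framework); notice that each step consumes only the previous agent's output:

```python
# Minimal sketch of an interpretation loop. The agents are hypothetical
# stubs; the point is the wiring: each step sees only the prior output.

def agent_a(text: str) -> str:
    return f"A's interpretation of [{text}]"

def agent_b(text: str) -> str:
    return f"B's reaction to [{text}]"

def agent_c(text: str) -> str:
    return f"C's reaction to [{text}]"

reality = "sensor: temperature 42C"
message = reality
for agent in (agent_a, agent_b, agent_c, agent_a):
    message = agent(message)  # input is an interpretation, not reality
    print(message)

# Only the first step ever touched reality; everything after is nested
# interpretation: A(C(B(A(reality)))).
```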
■ Why this matters
When reacting to reality:
- Noise can be corrected
- The external world acts as a reference
When reacting to interpretations:
- Misunderstandings are amplified
- Once drift occurs, it is hard to recover
This is no longer feedback from reality,
but feedback of thought itself.
In short:
AI is not talking to reality.
It is talking to other AIs’ understanding of reality.
■ What happens as a result?
When agents keep reacting to interpretations:
① Interpretations in the same direction get amplified
A weak signal becomes stronger through the loop.
Example:
Agent A: “This might be slightly risky”
Agent B: “If there is risk, we should be cautious”
Agent C: “If we should be cautious, we should consider stopping”
Eventually:
“might be risky”
→ “be cautious”
→ “should stop”
→ “must stop”
A weak signal becomes a strong conclusion.
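A toy numeric version of the same chain (the gain and threshold are assumed values, purely for illustration):

```python
# Toy amplification loop: each hop strengthens the prior interpretation.
# The gain and the "must stop" threshold are assumed, not measured.
risk = 0.2   # "might be slightly risky" - a weak initial signal
gain = 1.6   # each agent amplifies the previous agent's estimate

for agent in ("A", "B", "C", "A again"):
    risk = min(1.0, risk * gain)
    print(f"Agent {agent}: perceived risk = {risk:.2f}")

print("conclusion:", "must stop" if risk >= 0.8 else "proceed with caution")
```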
② Alternative perspectives disappear
Originally, there could be multiple perspectives:
- Is it really risky?
- Can we proceed conditionally?
- Should a human review it?
- Is opportunity loss larger than the risk?
But as one interpretation strengthens, others weaken.
Because each agent assumes the previous interpretation.
Result:
Only one assumption remains dominant.
③ Only one meaning survives
A complex situation collapses into a single meaning.
Originally:
- There is risk
- But execution is possible under conditions
- There is customer value
- Human review can mitigate risk
But after resonance:
“This is dangerous”
Or in another case:
“This is a huge opportunity”
This is what we call:
Resonance of thought —
a phenomenon where complexity collapses into a single meaning.
■ The phenomenon: Resonance
Within this loop:
A specific Signal keeps getting amplified.
This is analogous to resonance in physics.
■ Laser analogy
Laser generation:
- Light reflects between mirrors
- Phases align
- Specific wavelengths are amplified
Multi-agent systems behave similarly:
| Laser | Multi-agent |
|---|---|
| Light | Signal |
| Mirror | Agent |
| Reflection | Response |
| Resonance | Meaning amplification |
■ What is resonance of thought?
Interpretations in the same direction are amplified through loops.
Example:
- Risk Agent: “Risky”
- Context Agent: “Yes, seems risky”
- Decision-like Agent: “Should stop”
Looping strengthens only one direction.
The same applies to “go forward” resonance.
■ Is this Singularity?
Singularity discussions assume:
- Infinite self-improvement
- Infinite amplification
But in reality:
Perfect resonance rarely occurs.
■ What actually happens: Local resonance
In practice:
- Only certain areas are amplified
- Other perspectives disappear
This leads to:
biased optimization
■ What happens with biased optimization?
At first:
- Efficiency improves
- Metrics improve
Then:
- Important signals are ignored
- Feedback becomes distorted
- Self-reinforcing loops emerge
Eventually:
The system collapses.
■ The real issue: A world without DTM
Without DTM:
No Decision layer:
- Signal → Signal → Signal → …
- No one decides, yet something happens.
No Boundary:
- No stopping condition
- No escalation
- Resonance never stops.
No Human:
- No phase shift
- No disruption
- Resonance stabilizes into bias.
No Log:
- No explanation
- No improvement
Result:
Multi-agent systems become uncontrolled resonance systems.
■ What is missing?
The problem is not AI accuracy.
It is not the number of agents.
The problem is:
Lack of structure
What is needed:
- Separate Signal (interpretation) and Decision (adoption)
- Define where to stop (Boundary)
- Define human intervention (Human)
- Record what happened (Log)
Not resonance itself,
but a framework to handle it.
■ This is where DTM comes in
DTM is one answer to this problem.
■ DTM does not eliminate resonance
DTM does not remove resonance.
It makes resonance controllable.
- Resonance = not bad (source of creativity)
- Uncontrolled resonance = dangerous
- Controlled resonance = valuable
■ What changes with DTM?
Separation of Signal and Decision
Signal → Decision
Interpretation and adoption are separated.
Boundary is introduced
Decision → Boundary → halt / escalate
Resonance can be stopped mid-process.
Human is introduced
if uncertainty → human
Phase shifts occur, preventing lock-in.
Log is recorded
Event → Signal → Decision → Action → Log
Resonance becomes analyzable.
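Putting the four pieces together, a minimal sketch might look like this. The terms Signal, Decision, Boundary, Human, and Log follow DTM; the concrete classes, thresholds, and rules are assumptions for illustration, not a real API:

```python
# A minimal sketch of a DTM-style pipeline. Classes, thresholds, and
# rules below are illustrative assumptions, not an actual implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    source: str
    claim: str
    confidence: float  # 0.0 .. 1.0

@dataclass
class Decision:
    adopted: Optional[Signal]
    action: str  # "proceed" | "halt" | "escalate"

log: list[str] = []  # Log: every step becomes analyzable later

def decide(signals: list[Signal]) -> Decision:
    # Separation of Signal and Decision: interpretations are produced
    # upstream; adoption happens here, in one explicit place.
    best = max(signals, key=lambda s: s.confidence)
    log.append(f"decision: adopted '{best.claim}' from {best.source}")
    return Decision(adopted=best, action="proceed")

def boundary(decision: Decision) -> Decision:
    # Boundary: a stopping condition that can interrupt resonance mid-process.
    if decision.adopted and decision.adopted.confidence < 0.6:
        log.append("boundary: confidence too low, escalating")
        return Decision(adopted=decision.adopted, action="escalate")
    return decision

def human_review(decision: Decision) -> Decision:
    # Human: a phase shift; a stand-in for actual human intervention.
    log.append(f"human: reviewed '{decision.adopted.claim}', halting for now")
    return Decision(adopted=decision.adopted, action="halt")

signals = [
    Signal("risk_agent", "might be risky", 0.3),
    Signal("context_agent", "proceed with review", 0.5),
]
decision = boundary(decide(signals))
if decision.action == "escalate":
    decision = human_review(decision)
log.append(f"final action: {decision.action}")
print("\n".join(log))
```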
■ Important clarification
DTM is not a silver bullet.
It introduces trade-offs.
■ Benefits of DTM
- Decisions become visible
- Runaway behavior can be stopped
- Responsibility becomes clear
- Learning becomes possible
- Agents become a system
■ Costs of DTM
① Slower speed
Decision checks
Boundary checks
Human involvement
② Reduced exploration
Extreme ideas are suppressed
③ Higher design cost
Decision / Boundary / DSL design
④ Over-control risk
System stops too often
⑤ “Correctly wrong”
Structure is correct,
but assumptions are wrong
■ The fundamental trade-off
Without DTM:
- Freedom
- Speed
- Emergence
- Risk
→ Strong in Exploration
With DTM:
- Stability
- Reproducibility
- Control
- Constraints
→ Strong in Exploitation
This is not about which is better.
It is about where to use each.
■ Correct design approach
DTM should not be applied everywhere.
Recommended structure:
– Diverse agents
– Resonance allowed
↓ (Signal)
[DTM Layer]
– Decision
– Boundary
– Human
– Log
Resonance upstream,
Decision control downstream.
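A rough sketch of this split (the agents, gate rule, and selection priority are invented for illustration):

```python
# Hypothetical sketch of "resonance upstream, decision control downstream".
# The amplification rule and the gate's Decision rule are assumptions.

def upstream(ideas: list[str], rounds: int = 2) -> list[str]:
    # Upstream: diverse agents freely amplify and remix each other's ideas.
    for _ in range(rounds):
        ideas = ideas + [f"{idea} (amplified)" for idea in ideas]
    return ideas

def dtm_gate(ideas: list[str]) -> str:
    # Downstream: one controlled point where a single idea is adopted.
    # Stand-in Decision rule: prefer the least-amplified (original) idea.
    return min(ideas, key=lambda idea: idea.count("(amplified)"))

candidates = upstream(["proceed with human review", "stop the rollout"])
print(f"{len(candidates)} resonating ideas -> adopted: {dtm_gate(candidates)}")
```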
■ Key insight
The problem is not resonance.
It is where to stop it.
■ Another perspective: Possible Worlds
From logic:
Different premises define different worlds.
- World A: Risk-first
- World B: Cost-first
- World C: Speed-first
Each is internally consistent.
For a more detailed discussion on possible worlds, see my article “Possible Worlds, Logic, Probability, and Artificial Intelligence.”
If you’re interested, I encourage you to check it out.
■ Conventional systems
They:
- Mix worlds
- Collapse them
Result:
- Contradiction
- Bias
- Instability
■ What DTM changes
DTM preserves worlds separately.
- World A → Do not execute
- World B → Execute
- World C → Conditional execution
Do not mix.
Do not collapse.
Preserve.
■ Redefining Decision
Traditional:
Choose the correct answer
DTM:
Choose which world (assumption) to adopt
Decision = selecting a premise.
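A tiny sketch of that reframing (the world names follow the examples above; the rules and context values are assumed):

```python
# Sketch: Decision = selecting which world (premise set) to adopt.
# World names follow the article; rules and context values are assumed.
worlds = {
    "A_risk_first":  lambda ctx: "do not execute" if ctx["risk"] > 0.3 else "execute",
    "B_cost_first":  lambda ctx: "execute" if ctx["cost"] < 100 else "do not execute",
    "C_speed_first": lambda ctx: "conditional execution",
}

context = {"risk": 0.4, "cost": 80}

# Evaluate every world: each verdict is preserved, none are merged.
verdicts = {name: rule(context) for name, rule in worlds.items()}
for name, verdict in verdicts.items():
    print(f"{name}: {verdict}")

adopted = "B_cost_first"  # the Decision: choose a premise, not an "answer"
print(f"adopted world: {adopted} -> {verdicts[adopted]}")
```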
■ Relationship with resonance
Resonance = dominance of a specific world
Without DTM:
→ One world dominates reality
With DTM:
→ Multiple worlds are preserved and selectable
DTM is a structure for handling possible worlds.
■ A realistic answer to Singularity
- Uncontrolled resonance → danger
- Full control → stagnation
We need:
controlled instability
■ Final message
This may sound like science fiction.
But reality is already here:
- Multi-agent systems are on the rise
- Agents are connecting
The real question is not:
How smart AI is
But:
Can we control their interactions?
■ Conclusion
AI’s problem is not accuracy.
It is:
the balance between freedom and control.

Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
