Physics and probability theory rely on an important class of functions known as transcendental functions.
These include:
- the exponential function e^x
- logarithmic functions log x
- trigonometric functions such as sin x, cos x, tan x
At the center of these transcendental functions are two special numbers:
- the natural constant e
- the circle constant π
They are irrational numbers whose decimal expansions continue forever without repeating:
e ≈ 2.718281828459045…
π ≈ 3.141592653589793…
These numbers cannot be expressed as fractions, and, more importantly, they are not roots of any polynomial equation with integer coefficients. Because of this, they are called transcendental numbers.
An algebraic equation (a polynomial equation with integer coefficients) is constructed only from:
- addition
- subtraction
- multiplication
- and a finite number of integer powers
Algebraic numbers, the roots of such equations, are therefore numbers that can be “contained within a finite set of relationships.”
In other words, they are entities that can be fully defined by a finite blueprint.
They represent a universe governed by finite rules — the worldview of classical mathematics and classical physics.
Transcendental numbers are fundamentally different.
They are entities that can never be fully captured by any finite blueprint.
And transcendental functions built upon them become functions that cannot be completely confined within finite designs.
Moreover, just as the numbers they are built on continue infinitely, transcendental functions can themselves be regarded as infinitely continuing processes.
For example, the exponential function y = e^x can be expanded as an infinite series:
$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots$
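The phrase “endlessly generating rules” can be made concrete here: each term of the series is produced from the previous one by a simple finite rule, yet only the infinite sum equals e^x. A minimal Python sketch (the function name `exp_series` is ours, not standard):

```python
import math

def exp_series(x: float, terms: int = 30) -> float:
    """Approximate e^x by summing the first `terms` terms of its Taylor series."""
    total = 0.0
    term = 1.0  # the n = 0 term: x^0 / 0!
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # generate the next term from the current one
    return total

# A few dozen terms already agree with math.exp to near machine precision.
print(abs(exp_series(1.0) - math.e))
print(abs(exp_series(2.5) - math.exp(2.5)))
```

Any finite truncation is only an approximation; the function itself lives in the limit of the infinite process.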
Intuitively, transcendental functions can be understood as:
- continuously moving blueprints
- endlessly generating rules
For a long time, mathematics believed that:
“The world can be fully described through finite ratios and equations.”
However, the discovery of transcendental numbers revealed something profound:
There exist enormous classes of numbers that cannot be fully described through finite relationships alone.
This discovery fundamentally changed the philosophy of mathematics itself.
What Do Transcendental Functions Represent?
When viewed as “continuously moving blueprints” or “rules that endlessly generate themselves,” transcendental functions fundamentally become tools for expressing:
- change
- flow
- generation
- interaction
Traditional algebraic functions primarily represented static structures such as:
- form
- boundary
- geometry
In other words, they represented existence.
Transcendental functions, by contrast, describe phenomena such as:
- temporal evolution where states continuously generate future states
- self-amplifying growth where accumulated results become causes of further growth
- wave propagation where local motion regenerates global motion
- propagation through relationships and interactions
- mutual interactions that alter future generation rules themselves
- infinite generation where rules continuously generate new rules
These structures appear universally across reality:
- Cell division and biological evolution
- Neural network learning and human knowledge acquisition
- Language evolution and cultural propagation
- Social media virality and meme diffusion
- Financial bubbles and compound economic growth
- AI self-improvement and LLM ecosystem evolution
- Multi-agent coordination and organizational intelligence
- Scientific revolutions and technological innovation chains
- Cosmological structure formation and chaos systems
- Buddhist dependent origination and process philosophy
In this sense, transcendental functions can be viewed as mathematical models of continuously evolving reality itself.
The Relationship Between AI and Transcendental Functions
Classical AI systems were largely based on:
- rule-based logic
- fixed classifications
- hardcoded if-statements
This was a highly algebraic and static world.
Modern AI is fundamentally different.
Today’s AI systems involve:
- Attention
- learning
- inference
- recursive generation
- self-improvement
- agent interaction
They behave more like continuously evolving generative systems.
LLMs and Transcendental Dynamics
Large Language Models (LLMs) do not simply output fixed answers.
Instead, the conversation history itself changes the future generation process.
For example, consider human conversation.
If someone first says:
“Today was wonderful.”
your subsequent responses differ greatly from when someone first says:
“Today was terrible.”
Past context changes future thought patterns.
LLMs work similarly.
Suppose the input is:
“It rained today. I forgot my umbrella. My clothes got soaked.”
Inside the model, a contextual state forms around:
- rain
- discomfort
- trouble
- negativity
Then when the next sentence begins with:
“Therefore I…”
the model becomes more likely to generate:
- “I was frustrated”
- “It was awful”
- “I might catch a cold”
This means:
Past context
→ changes current internal state
→ which changes future token probabilities
But even more importantly:
the generated output itself becomes new context.
If the model generates:
“Today was terrible.”
the conversation becomes even more negatively biased.
Future generation shifts accordingly.
Thus the process becomes:
Past context
→ next token generation
→ new context formation
→ future generation rule modification
→ further generation changes
This is not merely “word prediction.”
It is a structure where:
the current state continuously reshapes future generation rules.
LLMs are therefore not simple systems of:
Input A → Output B
Instead, they are systems where:
the current state continuously deforms the future possibility space.
A useful analogy is a snowy mountain path.
The first footsteps slightly shape the snow.
Future walkers are influenced by that path.
The path becomes reinforced over time.
Eventually a stable trail emerges.
LLMs behave similarly.
Generated words continuously reshape the terrain of future generation.
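This feedback loop can be illustrated with a deliberately tiny toy sampler in Python. Everything here (the word lists, the sentiment score, the two candidate continuations, the weights) is an invented illustration of the loop above, not how a real LLM works:

```python
import random

# Toy word lists for scoring context sentiment (invented for illustration).
NEGATIVE = {"rained", "forgot", "soaked", "terrible", "awful", "frustrated"}
POSITIVE = {"wonderful", "sunny", "great", "fine"}

def sentiment(context: str) -> int:
    words = context.lower().replace(".", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def next_phrase(context: str, rng: random.Random) -> str:
    """Sample a continuation whose probability shifts with the context so far."""
    s = sentiment(context)
    neg_w = max(1, 1 - s)  # negative context boosts the negative continuation
    pos_w = max(1, 1 + s)  # positive context boosts the positive one
    return rng.choices(["It was awful.", "It was a fine day."],
                       weights=[neg_w, pos_w])[0]

rng = random.Random(0)
context = "It rained today. I forgot my umbrella. My clothes got soaked."
for _ in range(3):
    phrase = next_phrase(context, rng)
    context += " " + phrase  # the output becomes new context: the loop feeds itself
print(context)
```

Each generated phrase changes the sentiment score, which changes the weights for the next draw: past output reshapes future generation rules, exactly the loop described above.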
Attention and Transcendental Structures
Transformers do not process words independently.
They calculate:
how words influence one another across the entire context.
For example:
The word “bank” in:
“I withdrew money from the bank”
has a different meaning from:
“the river bank”
Transformers determine meaning by analyzing relationships between words.
The important concepts are:
- relationships
- propagation of influence
- contextual transmission
This resembles wave dynamics more than symbolic manipulation.
Imagine dropping a stone into water:
- waves spread
- waves collide
- waves reinforce one another
- waves cancel each other
Transformers exhibit similar behavior.
The meaning of one token propagates through the surrounding semantic field.
Words such as:
- danger
- emergency
- stop
can alter the meaning landscape of an entire sentence.
Meaning becomes spatially distributed.
Transformers therefore behave less like dictionary processors and more like systems computing:
interactions within a semantic field.
This is deeply analogous to transcendental functions which exhibit:
- periodicity
- propagation
- interference
Similarly, Transformers use:
- Attention
- weight propagation
- embedding spaces
- vector interference
to form structures resembling “waves of meaning.”
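At the mechanical level, this “wave of meaning” picture corresponds to scaled dot-product attention, softmax(QK^T/√d)·V: every token’s output vector becomes a weighted mixture of all tokens’ value vectors, so influence propagates across the whole context in a single step. A minimal NumPy sketch with random toy vectors (not a trained model):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # pairwise influence between all tokens
    weights = softmax(scores, axis=-1)  # each row is a distribution over context
    return weights @ V                  # each output mixes the whole context

rng = np.random.default_rng(0)
n_tokens, d = 5, 8
Q, K, V = (rng.standard_normal((n_tokens, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # one updated vector per token, informed by every other token
```

Because each attention row sums to 1, every token’s new representation is literally a blend of the entire context, which is why a single word like “danger” can tilt the meaning of the whole sentence.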
Multi-Agent Systems and Transcendental Dynamics
In multi-agent systems, phenomena such as:
- interaction
- trust formation
- coordination
- feedback
naturally emerge.
The important thing is no longer the individual agent itself, but the relationship dynamics between agents.
This is fundamentally transcendental in nature because the system is:
- not fixed
- mutually generative
- propagative
- self-updating
AI Self-Improvement as a Transcendental Process
When AI begins generating:
- code
- data
- model improvements
the generated artifacts themselves begin changing future generation capability.
In other words:
generation rules begin updating generation rules.
This becomes an infinite generative structure closely related to:
- autopoiesis
- complex systems
- chaos
- evolutionary systems
AI Has Become Something That Cannot Be Fully Contained by Finite Rules
Transcendental numbers were entities that could not be fully captured by finite rules.
Large-scale AI systems exhibit similar properties:
- emergence
- nonlinearity
- unpredictability
- inability to fully enumerate system states
AI begins behaving beyond finite design specifications.
Designers can no longer fully enumerate:
- all behaviors
- all meanings
- all decisions
As a result, the central challenge of AI changes.
The key is no longer:
fixed design
but rather:
governance of the generative process itself.
Governing the Generative Process with DTM
This is where the Decision Trace Model (DTM) becomes important.
Traditional software design assumed:
Write specifications
→ fix the rules
→ produce predictable behavior
However, generative AI breaks this assumption because AI systems:
- change through context
- evolve through interaction
- self-update
- produce emergent outputs
- cannot have all states enumerated in advance
DTM addresses this by treating AI outputs not as decisions, but as:
Signals
This distinction is critical.
LLMs and AI agents produce:
- proposals
- hypotheses
- inferences
- candidates
but not final decisions.
In other words:
Signal ≠ Decision
DTM connects AI generation into an explicit decision process:
Event
→ Signal
→ Decision
→ Boundary
→ Human
→ Log
For example:
If AI outputs:
“This transaction may be fraudulent.”
traditional systems may automatically freeze the account.
DTM instead introduces:
AI output
→ treated as Signal
→ boundary conditions checked
→ escalation thresholds evaluated
→ human review if necessary
→ final Decision
→ execution
→ full trace logging
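The pipeline above can be sketched as a small piece of Python. The class names, thresholds, and field names are our illustrative assumptions, not the DTM specification:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Signal:
    source: str        # which model or agent produced the output
    claim: str         # e.g. "transaction may be fraudulent"
    confidence: float  # model-reported score in [0, 1]

@dataclass
class Decision:
    action: str
    decided_by: str    # "system" or "human"

TRACE: list = []  # append-only log: every step leaves a trace

def log(event: str, **data) -> None:
    TRACE.append({"ts": datetime.now(timezone.utc).isoformat(),
                  "event": event, **data})

def decide(signal: Signal,
           auto_threshold: float = 0.95,
           review_threshold: float = 0.6) -> Decision:
    """Treat the AI output as a Signal; boundaries determine who decides."""
    log("signal_received", source=signal.source, confidence=signal.confidence)
    if signal.confidence >= auto_threshold:
        # Even high-confidence signals stay within a bounded action.
        decision = Decision(action="hold_for_review", decided_by="system")
    elif signal.confidence >= review_threshold:
        decision = Decision(action="escalate_to_human", decided_by="system")
    else:
        decision = Decision(action="no_action", decided_by="system")
    log("decision_made", action=decision.action, decided_by=decision.decided_by)
    return decision

d = decide(Signal("fraud-model", "transaction may be fraudulent", 0.72))
print(d.action, len(TRACE))
```

The AI never freezes the account directly: its output is a Signal, boundary thresholds route it, and every step is logged for later accountability.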
The goal is not to fully understand AI internally.
The goal is to govern:
- how AI connects to decisions
- where boundaries exist
- where humans intervene
- what gets recorded
- how failures propagate
This resembles air traffic control.
Controllers do not manage every air molecule inside the aircraft.
But they govern:
- routes
- altitude
- boundaries
- collision avoidance
- emergency protocols
DTM similarly governs generative systems without requiring total internal determinism.
Conclusion
The AI era is not simply:
the era of intelligent prediction machines.
It is the era where humanity has begun interacting with systems that cannot be fully confined within finite rules.
Large-scale AI, multi-agent systems, and self-improving AI continuously evolve through:
- meaning
- context
- relationships
- trust
- inference
- coordination
The current state changes future generation rules.
Generated outputs change future generation capabilities.
Relationships reshape global system behavior.
These systems increasingly resemble:
- complex adaptive systems
- evolutionary systems
- self-updating ecosystems
rather than traditional software.
Therefore, the core challenge of AI is no longer:
controlling fixed rules
but rather:
governing evolving generative processes.
This is precisely why the Decision Trace Model matters.
DTM is not a prediction model.
It is:
a decision infrastructure for the age of emergent AI.
Its purpose is not to freeze generation into static certainty.
Its purpose is to make evolving generation governable through:
- boundaries
- escalation
- human oversight
- traceability
- accountability
- continuous improvement
As AI becomes increasingly transcendental, the key issue is no longer the model itself.
The true question becomes:
How do we connect generative systems to socially valid decisions?
That is the fundamental challenge of the AI era.
Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
