Judgment Cannot Be Expressed by Smooth Computation — The Problem of Discontinuity in AI

Yesterday, I wrote about why many AI projects end at the PoC stage and never take root in real operations.

Today, I want to examine those reasons by returning to a technical premise: what AI actually is, and what it is not.

Recently, watching how AI is used,
I get the sense that one fundamental premise is quietly being forgotten.

That premise is an almost trivial fact:

AI runs inside a computer.

What can a computer do?

There is only one answer.

Computation.

AI is no exception.
Inference, generation, learning, judgment: all of them reduce, internally, to numerical operations.


The only tool for computing a non-static world

The real world is not static.
It changes over time, states transition continuously, and causality becomes entangled.

To make this changing world computable, humanity chose a foundation:

differentiation and integration.

  • Differentiation captures the direction and magnitude of change

  • Integration captures outcomes as the accumulation of change

At its core, training a neural network is nothing more than
differentiating an error function with respect to the parameters and repeatedly stepping them against its gradient.
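As a minimal sketch of that loop, here is a single weight fitted by gradient descent on a squared-error loss; the data, learning rate, and iteration count are illustrative choices, not canonical values.

```python
import numpy as np

# Fit y = w * x by gradient descent on mean squared error.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])    # the true relation is y = 2x

w, lr = 0.0, 0.05
for _ in range(100):
    residual = w * x - y               # prediction error
    grad = 2 * (residual * x).mean()   # d/dw of mean((w*x - y)^2)
    w -= lr * grad                     # step against the gradient

print(round(w, 4))   # converges toward 2.0
```

Everything here is differentiation and accumulation: the derivative points the way, and a hundred small steps integrate into the answer.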

This is extraordinarily powerful.

Because as long as the world can be treated as continuous,
differentiation and integration are almost universal tools.


Why “smoothness” works in physics

There is, however, a crucial assumption behind this power.

For differentiation and integration to work,
the function being computed must be sufficiently smooth.

In physics, this assumption works astonishingly well.

  • Space is treated as continuous

  • Time is continuous

  • Energy and momentum can be approximated as continuous quantities

Even though discontinuities exist at the atomic level,
at macroscopic scales the world can be approximated as smooth without issue.

That is why physics has been so successful with:

  • Differential equations

  • Continuous-time models

  • Limit operations
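As a small illustration, consider forward Euler integration of the decay equation dx/dt = -x; because the trajectory is smooth, tiny local derivative steps reconstruct the global solution (the step size and horizon below are arbitrary).

```python
# Forward Euler integration of dx/dt = -x, from t = 0 to t = 1.
dt = 0.01
x = 1.0
for _ in range(100):
    x += dt * (-x)    # x(t + dt) ≈ x(t) + dt * dx/dt

print(round(x, 4))    # ≈ 0.366, close to the exact value exp(-1) ≈ 0.3679
```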


A fully connected world loses distinction

Problems arise when this assumption is carried over, unchanged,
into the domain of intelligence and meaning.

In a smooth world:

  • Everything is connected gradually

  • Everything changes continuously

  • Boundaries become ambiguous

What happens as a result?

Distinctions disappear.

Between black and white lies an infinite spectrum of gray.
Conceptual boundaries blur.
Statements like “this is A” or “this is B” become impossible to assert decisively.

This is precisely the world current AI excels at.

  • Roughly similar

  • Plausible

  • Contextually appropriate

By contrast, it struggles to express:

  • Where a decision actually switched

  • Why that judgment was made

  • Whether a line exists that must not be crossed

These points of discontinuity are difficult to represent.
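A small numerical sketch of the difficulty: a hard 0/1 decision has zero gradient almost everywhere, so gradient-based learning receives no signal about where the boundary sits (the threshold and sample points below are arbitrary).

```python
import numpy as np

def hard_decision(x, threshold):
    return (x > threshold).astype(float)   # exactly 0 or 1, nothing between

def grad_wrt_threshold(threshold, x, eps=1e-6):
    # numerical derivative of the decision with respect to the threshold
    up = hard_decision(x, threshold + eps)
    down = hard_decision(x, threshold - eps)
    return (up - down) / (2 * eps)

x = np.linspace(-1.0, 1.0, 5)
print(grad_wrt_threshold(0.3, x))   # all zeros: no direction to move in
```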


Attempts to force discontinuity into smooth systems

To avoid this problem, AI research has accumulated various techniques.

One representative example is softmax.

Softmax:

  • Avoids strict 0/1 decisions

  • Emphasizes the maximum value

  • Produces a pseudo-choice

It represents a compromise between smoothness and selection.
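For concreteness, here is softmax in a few lines; the scores are arbitrary, but the key property is general: the maximum is emphasized, yet no output is ever exactly 0 or 1.

```python
import numpy as np

def softmax(scores):
    z = np.asarray(scores, dtype=float)
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

print(softmax([2.0, 1.0, 0.5]))
# -> roughly [0.63, 0.23, 0.14]: a pseudo-choice, not a hard selection
```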

Other techniques include:

  • Piecewise-linear activations like ReLU

  • Loss functions designed with thresholds in mind

  • Temperature parameters to control sharpness

All of these attempt to simulate discontinuity within computable limits.
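Temperature makes the approximation visible. In this sketch (the same softmax as above, with scores divided by a temperature t), shrinking t sharpens the distribution toward one-hot, yet for any t > 0 each probability remains strictly between 0 and 1.

```python
import numpy as np

def softmax_t(scores, t):
    z = np.asarray(scores, dtype=float) / t   # temperature controls sharpness
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for t in (1.0, 0.5, 0.1):
    print(t, softmax_t([2.0, 1.0, 0.5], t).round(8))
# The outputs approach a one-hot vector but never reach it:
# the discrete jump is only ever approximated.
```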

But they remain approximations.

They do not handle real discontinuities as they exist in the world.


And still, the world cannot be fully represented

Real-world decisions have properties such as:

  • The moment a condition is met, an action becomes forbidden

  • Once a context boundary is crossed, meaning reverses

  • Rules resolve conflicts through explicit priority

These phenomena are not continuous.

They are neither differentiable nor smooth.
They are established through logical jumps.
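A hypothetical example of such a jump (the rule, the numbers, and the function name below are all invented for illustration): the answer flips at the line, not smoothly near it.

```python
def may_dispense(dose_mg: float, patient_is_minor: bool) -> bool:
    if patient_is_minor and dose_mg > 200:   # the moment the condition is met,
        return False                         # the action becomes forbidden
    return dose_mg <= 400                    # a second hard boundary

print(may_dispense(199.0, patient_is_minor=True))   # True
print(may_dispense(201.0, patient_is_minor=True))   # False: meaning flips at the line
```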

Here, an idea long discussed and repeatedly forgotten regains its relevance.


Why logic must be combined with computation

Throughout the history of AI, the question has surfaced again and again:

How should computation (numbers) and logic (symbols) be combined?

  • Rule-based AI

  • Expert systems

  • Symbolic reasoning

  • Constraint satisfaction

  • Logic programming

These approaches were once dismissed as “obsolete.”

Today, it is clear why they matter again.

Because continuous computation alone cannot represent a world with discontinuities.

Logic:

  • Defines boundaries

  • Makes priorities explicit

  • Clearly separates prohibition from permission

Computation:

  • Handles ambiguity

  • Tolerates noise

  • Performs continuous optimization

These are not opposing forces.
They simply serve different roles.
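One hedged sketch of this division of labor (every name and threshold below is an invented placeholder, not a real system): computation supplies a smooth score, and logic draws non-negotiable lines around it.

```python
def smooth_risk_score(features):
    # stands in for a trained model: continuous, noise-tolerant
    return sum(features) / len(features)

def decide(features, user_is_blocked):
    if user_is_blocked:              # logic: an explicit prohibition
        return "deny"                # no score, however low, overrides it
    score = smooth_risk_score(features)
    if score >= 0.8:                 # logic: a declared boundary on the score
        return "deny"
    return "allow" if score < 0.5 else "review"

print(decide([0.2, 0.3, 0.4], user_is_blocked=False))   # allow
print(decide([0.2, 0.3, 0.4], user_is_blocked=True))    # deny: the rule wins
```

The score can be replaced, retrained, or noisy; the prohibition cannot. That asymmetry is the point.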


What does it mean to make AI “intelligent”?

Making AI intelligent does not mean
making models larger or increasing parameter counts.

It means this:

Designing where the world should remain smooth under computation,
and where it must be decisively cut by logic.

AI has always done only one thing: computation.

Precisely because of that, humans must take responsibility for deciding:

  • What should be delegated to computation

  • What must be fixed by logic

AI does not think.
But it can become a component for thinking.

If we use it without awareness of that boundary,
the world will quietly be painted over —
in an endlessly smooth shade of gray.
