Accidents appear to happen suddenly.
But in reality, they have been prepared for long before they occur.
And their common cause is almost always the same:
No boundary was ever written down.
What is a boundary?
A boundary is the line that defines:
Here, we do not proceed further.
From this point, a human decides.
Up to this point, automation is allowed.
But more precisely, a boundary defines the conditions under which responsibility for a decision is returned to a human.
The important point is this:
a boundary is not a limitation.
A boundary is a map of responsibility.
A boundary is not a single line.
In real systems, there are multiple kinds of boundaries.
For example:
1. The Uncertainty Boundary
When the model is not sufficiently confident, it must not decide automatically.
This is the boundary that stops the system when it does not truly know.
2. The Impact Boundary
When the consequences of a decision are significant, automation must stop.
This boundary is defined by the magnitude of the outcome.
3. The Novelty Boundary
When the input is outside what the model was designed to handle, automation must stop.
This boundary defines the valid domain of the model.
4. The Observability Boundary
When required information is missing, the system must not decide automatically.
This boundary is defined by the completeness of observation.
5. The Agreement Boundary
When multiple models or agents disagree, no automatic decision is made.
This boundary protects against unstable or ambiguous conclusions.
6. The Authority Boundary
When a decision carries legal or ethical responsibility, it must not be automated.
This boundary defines where responsibility must remain human.
7. The Temporal Boundary
When the assumptions underlying the model may no longer hold, automation must stop.
This boundary defines the validity period of the model.
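The boundaries above can be made concrete as explicit guard checks that either permit automation or return the decision to a human. The following is a minimal sketch, not a prescribed implementation: the field names, the confidence threshold, and the tuple return shape are all illustrative assumptions.

```python
# A minimal sketch of boundary checks (hypothetical names and thresholds).
# Each check corresponds to one of the boundaries described above; any
# violated boundary stops automation and names the reason.
from dataclasses import dataclass

@dataclass
class DecisionContext:
    confidence: float        # model confidence for this output
    impact: str              # "low", "medium", or "high"
    in_domain: bool          # input lies inside the model's valid domain
    inputs_complete: bool    # all required observations are present

CONFIDENCE_FLOOR = 0.90      # uncertainty boundary (assumed threshold)

def may_decide_automatically(ctx: DecisionContext) -> tuple[bool, str]:
    """Return (allowed, reason). The first violated boundary wins."""
    if ctx.confidence < CONFIDENCE_FLOOR:
        return False, "uncertainty boundary: model is not confident enough"
    if ctx.impact == "high":
        return False, "impact boundary: consequences too significant"
    if not ctx.in_domain:
        return False, "novelty boundary: input outside the valid domain"
    if not ctx.inputs_complete:
        return False, "observability boundary: required information missing"
    return True, "all boundaries satisfied"

allowed, reason = may_decide_automatically(
    DecisionContext(confidence=0.97, impact="low",
                    in_domain=True, inputs_complete=True)
)
print(allowed, reason)  # prints: True all boundaries satisfied
```

Note that the function never decides anything itself; it only says whether the system is permitted to, which is exactly the distinction the boundaries draw.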
All of these share the same essential nature.
A boundary does not define when a model is allowed to decide.
It defines when a model must not decide.
And at the same time, it defines where responsibility transfers back to a human.
A boundary is not the limit of capability.
It is the starting point of responsibility.
What happens when boundaries are not defined
In systems where boundaries are unclear, the same pattern always emerges.
AI proceeds as far as it can.
Optimization does not stop.
Probabilities continue to update.
And no one says, “Stop here.”
Because there is no written reason to stop.
Implicit boundaries are always violated
In many real systems, such boundaries exist only implicitly.
“Everyone knows not to go that far.”
“Normally, it wouldn’t do that.”
“A human is assumed to review it.”
But implicit boundaries are not real boundaries.
They do not exist in code.
They do not exist in logs.
They do not define responsibility.
As a result, accidents occur where no one intended to cross the line, yet the line was crossed.
Accidents are not anomalies — they are executions of the design
After an accident, people often say:
“It was unexpected.”
“It was a special case.”
“It was bad luck.”
But if we look carefully, the truth is simpler:
If no boundary was defined, reaching that point was inevitable.
The accident was not caused by a failure of the system.
It was the result of computation proceeding correctly on top of boundaries that were never designed.
Why people avoid defining boundaries
The reason is clear.
Defining boundaries is expensive.
It requires agreement.
It makes responsibility explicit.
It invites future scrutiny.
It forces the question: “Why is the boundary here?”
To define a boundary is to accept ownership of a decision.
And so, people avoid writing them.
But the cost of not articulating boundaries is always paid later
By not defining boundaries, teams gain temporary advantages:
Faster development
Fewer difficult discussions
Conveniently ambiguous responsibility
But the cost always returns later:
Accidents
Incidents
Unexpected use
Unexplainable decisions
These are not random events.
They are the deferred cost of decisions that were never articulated.
To define a boundary is to define where the system must stop
Defining boundaries does not mean writing more detailed logic.
What matters are statements like:
- Under these conditions, automated decisions must not be made
- In this domain, outputs must be treated as hypotheses
- These cases must always be escalated to humans
- If agreement cannot be reached, no decision will be made
These are specifications, but they are not logic.
They define the flow of responsibility.
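One way to read "specifications, but not logic" is to express boundaries as declarative data that names a human owner, with only a thin evaluator to route decisions. The sketch below is illustrative: the rule format, the owner names, and the eval-based condition check are assumptions for brevity, not a recommended rule engine.

```python
# Boundary specifications as declarative data, not decision logic.
# Each rule states the condition that stops automation and the human
# who owns the decision from that point on.
BOUNDARY_SPEC = [
    {"condition": "confidence < 0.90", "action": "escalate",    "owner": "on-call reviewer"},
    {"condition": "impact == 'high'",  "action": "escalate",    "owner": "domain lead"},
    {"condition": "models_disagree",   "action": "no_decision", "owner": "review board"},
]

def route(context: dict) -> dict:
    """Return the first violated boundary's action and its human owner,
    or allow automation if no boundary is crossed."""
    for rule in BOUNDARY_SPEC:
        # eval() is used here only to keep the sketch short; a real
        # system would use a proper rule language or parser.
        if eval(rule["condition"], {"__builtins__": {}}, context):
            return {"action": rule["action"], "owner": rule["owner"]}
    return {"action": "automate", "owner": None}

decision = route({"confidence": 0.95, "impact": "high", "models_disagree": False})
print(decision)  # the impact boundary returns responsibility to the domain lead
```

The specification carries no branching of its own; what it encodes is exactly the flow of responsibility, because every rule ends in a named human rather than a computed value.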
Boundaries are not written for AI — they are written for humans
This is often misunderstood.
Boundaries are not written to constrain AI.
They are written to help humans remember where responsibility returns.
They create a place where humans can recognize:
From here, the decision is mine.
Without written boundaries, people inevitably say:
“The system decided.”
The best systems have the most boundaries
At first glance, systems with many boundaries appear inefficient.
They stop more often.
They escalate more often.
They defer more often.
But this is not inefficiency.
It is an acknowledgment that the world itself is uncertain and fragile.
Only those who define boundaries are truly designing the system
Let us be clear.
Those who write models are not necessarily the designers.
Those who optimize performance are not necessarily the designers.
Only those who define boundaries are the designers.
Those who define:
Where the system must stop
Where decisions return to humans
What lies outside the system’s authority
These are the true authors of decisions.
Summary
Designs without explicit boundaries inevitably fail.
Implicit boundaries are the root cause of accidents.
Accidents are not unexpected — they are the natural result of the design.
The cost of articulation is the cost of safety.
To define boundaries is to define responsibility.
In the age of AI, safety is not determined by performance or accuracy.
It is determined by whether the system clearly states:
“Beyond this point, the system must not act.”
That single line determines the fate of the system.
