There is a term called “Explainable AI (XAI).”
AI that is not a black box.
AI that can explain the grounds of its decisions.
At first glance, this sounds ideal.
And yet, in real-world practice,
a recurring situation emerges:
“It was explained —
but I still cannot accept it.”
This is not an implementation failure.
It is caused by a misunderstanding of the word “explanation.”
There Are Two Fundamentally Different Meanings of Explanation
Most discussions proceed without clearly separating these two.
Explanation
- How was this result produced?
- Which features contributed?
- What happened inside the model?

This concerns causality, structure, and process.
Justification
- Why was this decision adopted?
- Why is it acceptable to use it?
- Who takes responsibility for it?

This concerns values, responsibility, and choice.
These two may appear similar,
but they are fundamentally different.
What Explainable AI Provides Is Only Explanation
So-called “Explainable AI” typically provides:
- feature importance
- contribution scores
- local approximations
- rule extraction
All of these answer the question:
“Why did this output occur?”
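To make this concrete, here is a minimal sketch of what such output looks like: per-feature attributions for a single prediction from a linear model. It assumes scikit-learn, and the loan-style feature names and data are entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data for a toy approval model.
feature_names = ["income_k", "debt_ratio", "years_employed"]
X = np.array([[55.0, 0.40, 3.0],
              [82.0, 0.15, 9.0],
              [23.0, 0.65, 1.0],
              [61.0, 0.30, 5.0]])
y = np.array([1, 1, 0, 1])  # 1 = approve, 0 = deny (hypothetical labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, coefficient * feature value is a simple local attribution.
applicant = np.array([40.0, 0.55, 2.0])
contributions = model.coef_[0] * applicant
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")

# This answers "Why did this output occur?"
# It says nothing about why anyone should accept the decision.
```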
But in practice, the question that truly matters is different:
“Why is it acceptable to rely on this decision?”
This is a different question entirely.
Here lies the critical misalignment.
What People Seek Is Not a Reason — But Someone to Bear It
Consider situations where explanations are demanded:
- A rejected job candidate
- A customer denied based on a score
- A team that failed after following an AI recommendation
What they want to know is not:
“Feature A contributed 0.3.”
“The weight was configured like this.”
What they want to know is:
“Who stands behind this decision?”
This cannot be answered by Explanation alone.
The More Explanation Increases, the More Justification Disappears
Ironically, even if AI carefully explains its internal mechanisms,
- Who decided to use this logic?
- Why was this the chosen criterion?
- Can it be changed?
These questions remain unanswered.
The result is a widespread state of:
“I understand the mechanism,
but I cannot accept the decision.”
Accountability Does Not Exist Inside Logic
This is the core issue.
Accountability is a relational matter.
It exists when:
- A subject adopts a judgment
- Bears the consequences
- Responds when questioned
Inside a model, there is no:
- responsibility
- position
- commitment
Therefore, no matter how detailed the explanation,
responsibility does not emerge.
Logic and Accountability Must Be Divided
Only at this point does the proper division of roles become clear.
What AI (the logic side) should do:
- Compute judgments
- Visualize influencing factors
- Indicate uncertainty
- Present counterexamples
This belongs to Explanation.
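As a sketch of how the logic side's outputs might be packaged, consider a report that contains exactly these four items and nothing more. The class and field names below are illustrative assumptions, not an established interface.

```python
from dataclasses import dataclass

@dataclass
class ExplanationReport:
    prediction: str                    # the computed judgment
    attributions: dict[str, float]     # influencing factors made visible
    confidence: float                  # indicated uncertainty, 0.0 to 1.0
    counterexample: dict[str, float]   # a nearby input that would flip the outcome

report = ExplanationReport(
    prediction="deny",
    attributions={"debt_ratio": -0.42, "income_k": 0.18},
    confidence=0.71,
    counterexample={"debt_ratio": 0.35, "income_k": 40.0},
)
print(report)

# Note what is deliberately absent: who adopted this criterion and who answers
# for the outcome. Those belong to Justification, on the human side.
```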
What humans (the responsibility side) must do:
- Articulate why the logic was adopted
- Explain why alternative options were rejected
- Bear responsibility if the outcome fails
This belongs to Justification.
The moment this division is blurred,
explanation becomes hollow.
“Explainable AI” Is a Misleading Term
To put it bluntly:
The very term “Explainable AI” creates the illusion
that accountability can be delegated to AI.
If AI can explain itself,
then no one else needs to explain.
But this is a fantasy.
Accountability is not a property of logic.
It is a matter of relationship and position.
What We Truly Need Is Explainable Decision Design
The goal is not to make AI explainable.
The goal is to make the design of decision-making explainable.
- What is delegated to AI?
- What remains with humans?
- Why is the boundary drawn there?
Unless these are articulated,
no XAI system can provide a fundamental explanation.
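One way to make that boundary articulable is to record it explicitly, outside the model, as part of the system's design. The sketch below assumes a hypothetical loan-screening use case; the schema and field names are my own illustration, not a standard.

```python
from dataclasses import dataclass

@dataclass
class DecisionDesign:
    delegated_to_ai: list[str]     # what the model computes
    retained_by_humans: list[str]  # what people decide and answer for
    boundary_rationale: str        # why the line is drawn where it is
    accountable_owner: str         # who responds when the decision is questioned

loan_screening = DecisionDesign(
    delegated_to_ai=["risk scoring", "feature attribution", "uncertainty estimates"],
    retained_by_humans=["final approval", "threshold selection", "handling appeals"],
    boundary_rationale="Scores inform the decision; acceptance criteria are a policy choice.",
    accountable_owner="credit policy team",
)
print(loan_screening)
```

A record like this answers questions no attribution score can: who drew the boundary, and who answers for it.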
Summary
- Explanation and Justification are different.
- XAI provides Explanation only.
- What people seek is Justification.
- Accountability does not reside inside models.
- Logic and responsibility must be divided.
AI can explain.
But only humans can assign responsibility to an explanation.
For the machine learning techniques behind the notion of Explanation discussed here, I cover both the theoretical foundations and concrete implementations in the “Explainable Machine Learning” section of this blog.
The implementation approach for the Justification side is described in detail on my GitHub:
https://github.com/masao-watanabe-ai
Those who are interested are encouraged to refer to these resources as well.
