When I was working on AI solutions,
I kept hearing the same request from clients again and again:
“We want to eliminate variability caused by people.”
- Decisions differ between juniors and experts
- Responses change even in the same situation
- The organization depends heavily on a few experts
And then, inevitably, the question follows:
“Can AI standardize this?”
The Initial Approach: Accumulating Knowledge
At the time, the first idea was simple:
“If we accumulate knowledge, we can solve this.”
- Collect past cases
- Record expert decisions
- Build FAQs and rules
However, once you try it, you realize:
This does not solve the problem.
- People are too busy to refer to it
- Every case is slightly different
- Tacit knowledge is lost
- Final decisions still return to humans
Knowledge becomes material,
but variability in decisions does not disappear.
Insight: The Problem Was Not “Knowledge”
At this point, an uncomfortable realization sets in.
Knowledge is increasing.
Information is available.
Yet,
variability in decisions remains.
Why?
The reason is simple.
Knowledge standardizes “what we know.”
But the variability exists in:
“how we decide.”
The Mismatch
If we break it down:
- Knowledge → a problem of understanding
- Decision → a problem of judgment
👉 The problem we tried to solve was different from the actual problem
The Core Realization
The issue is not a lack of knowledge.
The issue is:
Decision-making remains inside individuals.
In other words:
Decision-making is not structured.
A Shift in Perspective
At this point, the question changes.
From:
👉 How do we accumulate more knowledge?
To:
👉 How do we externalize decision-making?
This Problem Has Already Been Solved
This issue of “variability”
is not unique to AI.
As long as systems depend on individuals,
this problem inevitably occurs.
And in fact,
this problem has already been solved in other domains.
Manufacturing
In the past, manufacturing looked like this:
- Only skilled workers could produce quality output
- Quality varied depending on the individual
So what did they do?
They structured the work:
- Standard operating procedures
- Work instructions
- Tolerances
- Checkpoints
Result
- Anyone can produce the same output
- Quality becomes stable
Retail
The same happened in retail.
Before:
- Each store manager operated differently
- Sales varied widely by store
After:
Operations were structured:
- Customer interaction flows
- Display rules
- Restocking timing
- Discount criteria
Result
- Differences between stores decreased
- Sales stabilized
What They Have in Common
The key point is this:
- Manufacturing structured work
- Retail structured operations
Then, What Has AI Been Doing?
Let’s return to AI.
Traditionally, AI has mainly handled two things:
- Prediction
- Recommendation
At first glance, this seems highly advanced.
Examples
Manufacturing
- “This component has a high defect probability” (Prediction)
- “There may be an issue in this process” (Recommendation)
- “Inspection is recommended” (Recommendation)
But:
- Should we stop the line?
- At what level should we respond?
- How do we balance cost?
👉 Humans decide.
Finance
- “This transaction is likely fraudulent”
- “This user is high risk”
- “Additional verification is recommended”
But:
- Should we block immediately?
- Hold temporarily?
- Balance with customer experience?
👉 Humans decide.
Customer Support
- “This inquiry indicates churn risk”
- “This response is recommended”
- “Here is a suggested reply”
But:
- Is it appropriate to send?
- Is the tone correct?
- Are exceptions needed?
👉 Humans decide.
Hiring
- “This candidate has a high probability of success”
- “This candidate is a strong match”
- “Proceed to interview”
But:
- Should we hire?
- Under what conditions?
- Compared to other candidates?
👉 Humans decide.
Healthcare
- “This patient has a high likelihood of disease”
- “This test is recommended”
- “This treatment plan is suggested”
But:
- Should we proceed with treatment?
- Is the risk acceptable?
- What about patient context?
👉 Humans decide.
At First, This Seems Reasonable
AI provides input.
Humans make final decisions.
It appears to be:
👉 A safe and practical approach
But the Problem Remains
In practice, this structure creates persistent issues:
1. Variability remains
Even with the same prediction,
decisions differ by person.
- Juniors act cautiously
- Experts take risks
👉 Decisions are inconsistent
2. Dependence on tacit knowledge
Decisions rely on:
- Experience
- Intuition
- Context
👉 Decision criteria are not externalized
3. Lack of reproducibility
- Why was that decision made?
- Can it be repeated?
👉 Decisions do not accumulate into quality
4. No scalability
- Humans become bottlenecks
- Real-time response is difficult
👉 Operations do not scale
What Is Actually Happening?
This is the most important point:
👉 AI provides materials
👉 But decision-making remains inside people
Returning to the Original Question
“We want to eliminate variability.”
“Can AI standardize this?”
As we’ve seen:
AI can:
- Predict
- Recommend
👉 It can prepare the information needed for decisions
👉 The information is aligned
But Something Is Still Not Aligned
Even then:
👉 Decisions still differ
Why?
Because:
👉 Only information is aligned
Key Breakdown
- Information (data, predictions, recommendations)
- Knowledge (experience, understanding)
- Decision (judgment)
AI has focused on:
👉 Aligning information and knowledge
But:
👉 Decision-making remains human
The Core Insight
👉 We were standardizing the wrong thing
❌ Assumption:
Align knowledge → variability disappears
⭕ Reality:
👉 Unless decision-making is aligned, variability remains
So What Should We Do?
This leads to a new question:
👉 How do we align decision-making?
Turning Decisions into Structure
We must shift from:
👉 Decisions inside people
To:
👉 Decisions that are defined
Decisions Can Be Decomposed
What seems abstract can be broken down:
- What information to use
- What conditions to evaluate
- What options to choose
- What thresholds to allow
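As a minimal sketch of this decomposition, consider the fraud example from earlier. All names, fields, and threshold values below are illustrative assumptions, not a real system: the point is only that information, conditions, options, and thresholds each become explicit.

```python
from dataclasses import dataclass

# Hypothetical decomposition of the fraud-handling decision:
# information in, explicit conditions and thresholds, a fixed
# set of options out.

@dataclass
class TransactionSignal:
    fraud_score: float      # information: model prediction (0.0-1.0)
    amount: float           # information: transaction amount
    is_new_customer: bool   # information: customer context

# The options the structure is allowed to choose from
BLOCK, HOLD, ALLOW = "block", "hold_for_review", "allow"

# Thresholds are externalized, so they can be audited and tuned
BLOCK_THRESHOLD = 0.9
HOLD_THRESHOLD = 0.6
LARGE_AMOUNT = 10_000

def decide(signal: TransactionSignal) -> str:
    """Same inputs always produce the same decision."""
    if signal.fraud_score >= BLOCK_THRESHOLD:
        return BLOCK
    if signal.fraud_score >= HOLD_THRESHOLD and (
        signal.amount >= LARGE_AMOUNT or signal.is_new_customer
    ):
        return HOLD
    return ALLOW
```

Notice that nothing here depends on who runs it: a junior and an expert calling `decide` with the same signal get the same answer, and changing the behavior means changing a named threshold, not a person's intuition.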
The Same Pattern Emerges
Manufacturing:
👉 Decompose work → define procedures
Retail:
👉 Decompose operations → define flows
AI:
👉 Decompose decisions → define structures
What Changes?
1. Reproducibility
Same conditions → same decisions
2. Transparency
Why was the decision made?
3. Optimization
What should be changed?
4. Scalability
No dependence on individuals
The Role of AI Changes
Before:
- Predict
- Recommend
Now:
👉 Operate within a decision structure
- Predictions become signals
- Recommendations become options
- Rules determine decisions
👉 AI is no longer standalone
👉 It becomes part of a structured system
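The shift above can be sketched in a few lines. Everything here is a hypothetical illustration (the model is a stub, the names are invented): the prediction enters only as a signal, the recommendations are a fixed option set, and a rule, not an individual, makes the decision.

```python
# Sketch: AI operating inside a decision structure.

def predict_churn_risk(ticket_text: str) -> float:
    """Stand-in for a real model call; returns a risk score in 0.0-1.0."""
    return 0.8 if "cancel" in ticket_text.lower() else 0.2

# Recommendations become a fixed option set
OPTIONS = ("escalate_to_retention", "standard_reply")

def route_ticket(ticket_text: str, threshold: float = 0.7) -> str:
    # The prediction is just a signal...
    risk = predict_churn_risk(ticket_text)
    # ...and an explicit rule determines the decision.
    return OPTIONS[0] if risk >= threshold else OPTIONS[1]
```

Swapping in a better model changes the quality of the signal; the decision logic, and therefore the behavior of the system, stays inspectable and stable.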
What This Means
AI does not replace humans.
👉 It externalizes human decision-making as structure
Final Answer
Returning to the original question:
“We want to eliminate variability.”
The answer is:
👉 Not aligning knowledge
👉 But aligning decision structures
Final Message
AI can align information.
But it cannot align decisions without structure.
👉 What is needed is:
👉 Designing decision-making as structure
Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
