Who Is the “Designer” in the Age of AI? — Neither Engineer nor PM, but the Author of Judgment Structure

In the age of AI, the same debate always emerges.

“Engineers are critical.”
“PMs are essential.”
“We lack data scientists.”

None of these claims are wrong.
But none of them reach the core.

As AI enters society and decision-making becomes automated,
there is another role that is truly missing.

The Author of Judgment Structure.


Why Is the Identity of the Designer Being Questioned Now?

As discussed earlier:

  • AI cannot bear responsibility.

  • Optimization does not stop on its own.

  • Probability does not replace judgment.

  • Exceptions and conflicts reveal the essence of systems.

  • Any design that fails to write its boundaries will eventually collapse.

All of these lead to one shared question:

“Then who writes it?”

Traditional roles cannot answer this.


What Engineers Write

Traditionally, the engineer’s domain was clear:

  • Designing algorithms

  • Optimizing data processing

  • Building pipelines

  • Defining implementation specifications

In other words, refining how to build.

With the rise of generative AI:

  • Code generation

  • Data transformation

  • Model construction

  • Documentation creation

Many of these tasks can now be significantly automated.

Technical implementation is rapidly becoming commoditized.

But here lies a crucial problem.

Generative AI can automate:

  • Writing text

  • Generating code

  • Producing scores

  • Classifying data

These are acts of processing and generation.

But it cannot automate:

Where to stop.

For example:

  • Should the system stop when confidence is low?

  • Should large financial amounts trigger human review?

  • Should medical or legal decisions be excluded entirely?

These are not problems of model accuracy.
They are not algorithmic questions.

They are questions of where responsibility is placed.
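What “where to stop” might look like once it is written down — a minimal sketch with hypothetical thresholds and category names (none of these values come from the article; they are placeholders for decisions a human must own):

```python
from dataclasses import dataclass

# Hypothetical boundaries. The numbers and categories are illustrative,
# not recommendations: deciding them is the act of judgment design itself.
CONFIDENCE_FLOOR = 0.80          # below this, the model must not decide alone
HUMAN_REVIEW_AMOUNT = 10_000     # amounts above this trigger human review
EXCLUDED_DOMAINS = {"medical", "legal"}  # never automated at all

@dataclass
class Decision:
    action: str   # "automate", "escalate", or "refuse"
    reason: str

def judgment_gate(domain: str, confidence: float, amount: float) -> Decision:
    """Decide whether the system may act, must defer, or must not touch the case."""
    if domain in EXCLUDED_DOMAINS:
        return Decision("refuse", f"domain '{domain}' is outside the system's contract")
    if amount > HUMAN_REVIEW_AMOUNT:
        return Decision("escalate", "amount exceeds the human-review boundary")
    if confidence < CONFIDENCE_FLOOR:
        return Decision("escalate", "confidence below the stopping threshold")
    return Decision("automate", "within all written boundaries")
```

Note that the gate never improves accuracy; it only makes the stopping rule explicit enough that someone can be its author.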

Yet in many organizations, engineers are tasked with:

  • Increasing performance

  • Accelerating processing

  • Maximizing automation rates

In other words, optimizing toward “do not stop.”

Meanwhile:

  • Under what conditions to return judgment to humans

  • Which risks must never be automated

  • What must remain outside the system

These boundary decisions often have no clear owner.

The result?

Automation advances.
But when failure occurs, no one can answer:

“Who designed the stopping rule?”

This is how responsibility becomes blurred as automation increases.

In the age of generative AI, what truly matters is not what we can build.

It is what we deliberately choose not to build.

Before implementation capability, we must design:

The stopping point of judgment.

And this is not a matter of algorithmic precision.

It is a matter of Contracts and Boundaries.


What PMs Decide

If engineers determine how to build,
PMs determine what to build.

PMs typically manage:

  • Requirements

  • Schedules

  • Priorities

  • Stakeholder coordination

They define objectives and constraints:

What must be achieved?
By when?
In what order?
For whom?

This is essential.

But there is a deeper question.

When a system makes decisions:

  • What does it maximize? (evaluation function)

  • When must it stop? (stopping conditions)

  • How are exceptions handled?

  • What happens under uncertainty?

This “structure of judgment” is rarely written explicitly in requirements.

PMs define deliverables.
They do not usually design judgment boundaries.

For example, we may set goals such as:

  • Increase CVR

  • Improve fraud detection rates

  • Raise automation ratios

But we rarely specify:

  • At what false positive rate must we stop?

  • Above what transaction amount must humans intervene?

  • Which categories must never be automated?

As a result:

Engineers optimize.
PMs pursue outcomes.
Automation expands.

But no one writes the boundaries of judgment.

When failure occurs, we ask:

“Who defined the evaluation function?”
“Who set the stopping conditions?”

This is not a failure of PM competence.

It is simply that the role was never defined at that depth.
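Defining the role at that depth could mean putting the unwritten boundaries into the requirements themselves. A sketch of what such a specification might look like — every value and field name here is invented for illustration:

```python
# A judgment-boundary specification that could sit alongside ordinary
# requirements. The point is not these particular numbers, but that each
# one is written down and has a named owner who placed it deliberately.
JUDGMENT_BOUNDARIES = {
    "evaluation": {
        "goal": "improve fraud detection rate",
        "stop_if_false_positive_rate_above": 0.02,  # halt automation past this
    },
    "escalation": {
        "human_review_above_amount": 5_000,  # humans intervene above this
    },
    "exclusions": {
        "never_automate": ["medical claims", "legal disputes"],
    },
    "owner": "jane.doe@example.com",  # hypothetical accountable author
}

def violates_boundary(false_positive_rate: float) -> bool:
    """True if the measured FPR crosses the written stopping condition."""
    limit = JUDGMENT_BOUNDARIES["evaluation"]["stop_if_false_positive_rate_above"]
    return false_positive_rate > limit
```

With a spec like this, “at what false positive rate must we stop?” has an answer before the system ships, not after the incident.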

But in the AI era, design is no longer about:

What to build.
How to build.

It is about:

How the system is allowed to judge.

And that is neither project management nor algorithm design.

It is the design of Contracts and Boundaries.


Design in the Age of AI Means Writing Structure

Engineers optimize how.
PMs decide what.

But someone must define how judgment itself is structured.

AI-era design is not:

  • Drawing interfaces

  • Defining APIs

  • Selecting models

It is writing:

  • How judgment is generated

  • Where it stops

  • When it defers

  • How conflicts are preserved

This is not a technical problem.
Not a management problem.

It is a judgment architecture problem.


Who Is the Author of Judgment Structure?

This person explicitly answers:

  • What may be optimized

  • What must never be optimized

  • Which exceptions can be automated

  • Which must escalate to humans

  • How unresolved conflicts are recorded

  • What conditions halt decision-making

These are rarely found in job descriptions.

Yet without them, autonomous systems drift.
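One way to keep such answers from drifting back into the implicit is to record them — including unresolved conflicts — as data the system carries with it. A hypothetical sketch, assuming nothing beyond the article's own vocabulary:

```python
from dataclasses import dataclass, field

@dataclass
class BoundaryRule:
    """A single written judgment boundary with an accountable author."""
    rule: str
    author: str   # the person who can say "this boundary was placed intentionally"

@dataclass
class JudgmentStructure:
    rules: list[BoundaryRule] = field(default_factory=list)
    unresolved_conflicts: list[str] = field(default_factory=list)

    def add_rule(self, rule: str, author: str) -> None:
        self.rules.append(BoundaryRule(rule, author))

    def record_conflict(self, description: str) -> None:
        # Conflicts are preserved as-is, not silently optimized away.
        self.unresolved_conflicts.append(description)

    def author_of(self, rule_text: str) -> str:
        """Answer the question: who wrote the structure of this judgment?"""
        for r in self.rules:
            if r.rule == rule_text:
                return r.author
        return "unknown"  # the failure mode the article warns about
```

When `author_of` returns `"unknown"`, that is precisely the blurred responsibility described above, made visible as a value.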


Why Is This Role Invisible?

Because:

  • Its output is not code

  • It does not directly improve KPIs

  • It is evaluated by the absence of incidents

The Author of Judgment Structure is judged by what did not happen.

Until something goes wrong.

Then the question emerges:

“Why didn’t the system stop here?”

And suddenly the absence of a stopping rule becomes visible.


The Author of Judgment Structure Accepts Structural Responsibility

This role is not defined by authority.

Not the final approver.
Not necessarily the executive.

But the one who wrote the premise of judgment.

The one who can say:

“This boundary was placed here intentionally.”

This is not approval responsibility.

It is structural responsibility.


What Happens Without This Role?

Without it:

  • Judgment drifts toward AI

  • Exceptions are pushed back to frontline staff

  • Responsibility diffuses into approval workflows

No one becomes the author.

And what emerges is:

Human-in-the-loop as a form of diluted responsibility.

A human presses the final button.
But no one designed the judgment structure.

That is not safety.
It is fragmentation.


The Author Stands Between AI and Humans

Not above engineers.
Not beside PMs.
Not merely representing executives.

They stand at the boundary between AI and humans.

Computation ends here.
Judgment begins here.
Deferral happens here.
Conflict is recorded here.

Drawing that line is the work itself.


This Is Not a New Role — It Is a Rediscovered One

Experienced designers.
Veteran operational leaders.
People who understood both technology and domain.

They implicitly wrote boundaries.

The AI era no longer allows this to remain implicit.

Boundaries must be written explicitly.


Conclusion

Engineers define how to build.
PMs define what to build.

But in the AI era, what truly matters is:

How the system is allowed to judge.

Design means writing judgment structure.

The Author of Judgment Structure writes:

  • Boundaries

  • Contracts

  • Stopping conditions

Without this role, AI inevitably loses responsibility.

A true designer is someone who can say:

“I wrote this judgment structure.”

As AI evolves, human roles do not shrink.

They become more exacting.

And the final question remains:

Who wrote the structure of this judgment?

Any AI system that cannot answer that question
will eventually fail.
