Most AI projects are dead before they are built

— Reflections on AI Pre-Mortem Reviews

Over the past few years, as I’ve listened to companies talk about their AI projects,
I’ve kept seeing the same pattern.

The PoC was built.
The demo worked.
But it was never used in real operations.
A few months later, no one talked about it anymore.

Technically, nothing failed.
But as a project, it was already dead.


Why do AI projects fail?

The reason is surprisingly simple.

No one checks whether the project should survive before building it.

In many AI projects:

  • It’s unclear what decision the AI is actually supposed to make

  • No one can explain how humans currently make that decision

  • There is no clear owner who can stop the project when it fails

  • KPIs are too simplistic to reflect real-world judgment

And yet, despite these gaps, teams say:
“Let’s just build a PoC first.”


What I’ve done so far

Over my career, I’ve reviewed more than 200 PoCs and AI initiatives.

This includes success stories, failures,
and many projects that quietly disappeared without ever being discussed publicly.

At some point, I stopped focusing on building AI
and started thinking about how decisions themselves are structured.

I worked on:

  • White papers on decision design

  • Scoring and decision-making structures

  • Frameworks for translating decisions into execution

To be honest, many of these ideas were never adopted.
They were often seen as “too early” or “too heavy.”

Only now do I fully understand why.

Organizations don’t need perfect solutions first.
They need the ability to say “stop.”


What I’ve learned at this age

When I was younger, I focused on
“How can we make this succeed?”

Now, I think differently.

Being able to say “this should not be built” is far more valuable.

With age, you may lose some stamina for implementation.
But you gain clarity about:

  • Where systems are likely to break

  • What causes projects to escalate or fail publicly

  • Which designs organizations will never truly absorb

These things become easier to see, not harder.


What is an AI Pre-Mortem Review?

Recently, I started offering a service I call
an “AI Pre-Mortem Review.”

Before building any AI system, I assess:

  • Whether the project can realistically work

  • Whether the organization can sustain it

  • Whether there is a clear way to stop it if things go wrong

This review takes one to two weeks.

There is no implementation.
No PoC is built.

Instead, I focus on:

  • Checking the decision structure

  • Identifying gaps between KPIs and real judgment

  • Clarifying organizational responsibility and stop conditions

At the end, I give a clear answer:
proceed — or do not proceed.


Why “stopping” matters

From a company’s perspective, this kind of review can:

  • Prevent tens of millions of yen in wasted investment

  • Delegate the difficult task of telling vendors “no” to an outside reviewer

  • Provide a clear rationale for internal decision-making

If spending ¥300,000–¥500,000 can answer these questions,
it’s not an expensive investment.


Final thoughts

AI is not magic.
And a PoC is not a free pass.

Too many teams regret, after building,
what they should have questioned before.

At 63, I no longer say “let’s build it” lightly.

Instead, I ask one question first:

“Should this survive at all?”

If this resonates with you,
I’d be happy to have a quiet conversation.
