Should AI Aim for “Ultimate Intelligence”? Redefining AI Design through Intelligence Fields and the Decision Trace Model

Introduction

AI has long been developed with one major goal in mind:

to create an ever more intelligent intelligence.

Higher accuracy.
Stronger reasoning ability.
Larger models.
Intelligence closer to that of humans.
And ultimately, AGI — Artificial General Intelligence.

Behind this view, there is often an implicit assumption of:

  • one subject,
  • one intelligence,
  • one understanding of the world,
  • one center.

But is the most important question really how intelligent AI itself can become?

Perhaps what will matter more from now on is this:

What kind of relational structure does AI create among humans, organizations, institutions, and society?

Once we take this perspective, the goal of AI changes.

Instead of building AI as a single, ultimate intelligence, we need to design a structure in which:

  • humans,
  • AI,
  • organizations,
  • institutions,
  • environments,
  • records,
  • responsibility,
  • and values

are mutually connected.

A structure in which they can coexist without breaking down, make decisions when necessary, and correct themselves when mistakes occur.

In other words, AI should be designed as an:

Intelligence Field

I believe this will become one of the most important directions for AI design.


0. What Is Intelligence?

The Ability to Give Meaning to Information and Adjust Action toward a Purpose

We tend to think of intelligence as an ability that resides inside a single brain.

To calculate faster.
To remember more.
To reason more accurately.
To solve more complex problems.

The higher these abilities are, the more “intelligent” something is considered to be.

Modern AI has also developed for a long time around:

  • accuracy,
  • reasoning ability,
  • benchmarks,
  • model size,
  • and processing speed.

But here we need to pause for a moment.

Is intelligence really the ability to produce correct answers on its own?

If we look at the world of life, intelligence appears in a slightly different form.

In What Is Life?, Paul Nurse describes life in relation to information.

A butterfly is not simply flying randomly through the air.

It senses light.
It senses smell.
It senses vibrations in the air.
It reads the surrounding situation and acts accordingly.

It avoids the shadow of a bird.
It approaches the scent of nectar.
It detects danger.
It chooses its next action.

What is important here is that the butterfly is not merely receiving information.

It treats information from the outside world as something meaningful for its own survival and action.

In other words, for living beings, information is not merely data.

Information becomes “meaning” in relation to purpose.

Living beings receive information from the external world.

But that information becomes meaningful because the living being has some kind of purpose or need.

For example, the same small black movement may be “food” for a frog.

But if the frog is not hungry, it may not trigger action.

Here we find a relationship among:

  • the state of the external environment,
  • the internal state,
  • needs,
  • purpose,
  • and action.

In other words, living beings are not merely processing information.

They are giving meaning to information in relation to their own state and purpose, and converting it into action.

This also connects to the idea of “means-end reasoning” discussed in Kazuhisa Todayama’s Introduction to Philosophy.

Humans can have a purpose and think about which means they should choose to achieve that purpose.

They can have goals that cannot be achieved immediately, plan toward them, choose means, and revise their actions when they fail.

This is not a mere reaction.

There is a structure in which information is interpreted in light of a purpose, and means are selected accordingly.

Intelligence appears precisely within this structure.

If we look more closely at the evolution of cognitive design in living beings, the nature of intelligence becomes even clearer.

At the simplest stage, a living being simply follows a command such as:

“Do R.”

This is an entity that always performs the same action.

At the next stage, it follows a form such as:

“If C, then do R.”

It changes action R according to condition C in the external environment.

This is a living being with conditional branching.

At a more advanced stage, the structure becomes:

“If C and D, then do R.”

Here, D represents the internal state.

In other words, action changes not only according to the external environment, but also according to internal states such as hunger, fatigue, danger perception, and need.

At this point, information begins to have a deeper meaning.

External information C is no longer just a stimulus.

It becomes meaningful for action in relation to internal state D.

At this point, we can think of:

  • symbolized external information as “meaning,”
  • internal states and needs that orient behavior as “purpose,”
  • and the relation between them as producing “action.”
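
To make these three stages concrete, here is a minimal Python sketch. The frog scenario and all names in it are illustrative assumptions, not part of any formal model:

```python
# Three stages of cognitive design, using the frog example.

def stage1_fixed() -> str:
    """'Do R': always the same action, regardless of the world."""
    return "snap"

def stage2_external(small_black_movement: bool) -> str:
    """'If C, then do R': action depends only on external condition C."""
    return "snap" if small_black_movement else "wait"

def stage3_internal(small_black_movement: bool, hungry: bool) -> str:
    """'If C and D, then do R': external condition C is read in light
    of internal state D. The same stimulus counts as 'food' only when
    the frog is hungry; otherwise it triggers no action."""
    if small_black_movement and hungry:
        return "snap"  # C acquires meaning ("food") relative to D
    return "wait"

# The same external information, two different internal states:
print(stage3_internal(small_black_movement=True, hungry=True))   # snap
print(stage3_internal(small_black_movement=True, hungry=False))  # wait
```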

From this perspective, intelligence is not simply computational ability.

Intelligence is:

the ability to receive information, form meaning, select action in light of purpose, and correct itself according to the result.

Put more simply:

intelligence is the power to transform information into meaning, meaning into action, and the result of action into self-correction.

What is important here is that intelligence does not exist only inside a closed brain.

Intelligence always exists within a cycle of:

  • external environment,
  • internal state,
  • purpose,
  • meaning,
  • action,
  • and correction.

In other words, intelligence is not a computational ability separated from the world.

It is:

a function that continues to adjust itself while relating to the world.

Once we take this perspective, the goal of AI also changes.

What AI needs is not simply the ability to process large amounts of information.

What matters is:

  • in what context information is given meaning,
  • what is treated as a purpose,
  • what action it leads to,
  • how it is corrected when mistakes occur,
  • and with whom that action is made possible.

In other words, AI intelligence must also be considered not as something closed inside a model, but as something that exists in relation to humans, society, institutions, and the environment.

No matter how high-performing AI becomes, if its output does not acquire meaning in society, is not connected to purpose, is not converted into action, and cannot be corrected when it is wrong, then intelligence is not truly functioning.

That is why what we need from now on is not to create a single ultimate intelligence.

What we need is:

a structure in which humans, AI, organizations, institutions, and environments can share information, form meaning, adjust purposes, act, and continue to correct themselves.

This is where the idea of the:

Intelligence Field

emerges.

Intelligence is not something closed inside a single brain.

Intelligence is a relational function in which information becomes meaning, meaning connects to purpose, and action and correction emerge.

And in the age of AI, what we truly need to design is not a single intelligence, but:

a field in which intelligence can continue to function in a healthy way.


1. What Is an Intelligence Field?

As we saw in the previous section, intelligence is not a capacity closed inside a single brain.

Intelligence is a relational function in which:

information becomes meaning,
meaning connects to purpose,
purpose is converted into action,
and the result is corrected.

From this perspective, intelligence no longer exists only inside a single subject.

Humans, AI, organizations, institutions, environments, records, and culture relate to one another.

Through those relationships, information is given meaning, purposes are adjusted, actions are produced, and failures are corrected.

I call this kind of field, in which intelligence arises through relationships, an:

Intelligence Field

An Intelligence Field is not simply about making an AI model more powerful.

Rather, it is about designing:

a structure in which humans, AI, and society can share information, form meaning, adjust purposes, act, and continue to correct themselves.

For example, in a company, frontline workers, managers, experts, customers, rules, culture, and past experience relate to one another and form organizational intelligence.

In medicine, patients’ complaints, nurses’ observations, doctors’ judgments, test results, hospital systems, and ethical standards are connected and support medical decisions.

In social institutions, laws, customs, trust, responsibility, exception handling, and human acceptance interact to make social judgment and correction possible.

In other words, an Intelligence Field is:

a way of understanding intelligence not as the capacity of a single subject, but as a structure that emerges through relationships.

In the age of AI, what we truly need to design is not a single “smart AI,” but:

a field in which intelligence can continue to work in a healthy way.


2. Why “Ultimate Intelligence” Is Not Enough

If we understand intelligence as:

a function that transforms information into meaning, connects meaning to purpose, acts, and continues to correct itself,

then the goal of AI also changes.

Simply building larger models.
Achieving higher accuracy.
Enabling more complex reasoning.

Of course, these things are important.

But they alone do not ensure that intelligence will function healthily within society.

That is because the intelligence required in society is not the ability to produce correct answers in isolation.

It is:

the ability of humans, AI, organizations, institutions, and environments to continue relating to one another without breaking down.

For example, even if AI produces a highly accurate prediction, that prediction does not immediately become socially effective action.

For whom does that prediction have meaning?
To what purpose is it connected?
Who will adopt it?
How far should it be automated?
Who stops it when an exception occurs?
How is it corrected when it is wrong?
Who takes responsibility for the result?

If these questions are not designed into the system, AI output cannot be used stably within society.

In other words, the issue is not only whether AI is intelligent.

What matters more is:

whether AI intelligence can have meaning, lead to action, and remain correctable within its relationship with humans and society.

If we strengthen AI without designing this connection to society, problems will emerge precisely at the point of connection.

There is an output that looks correct.
But no one can take responsibility for it.

There is convenient automation.
But no one knows how to stop it.

There is high-accuracy decision support.
But it cannot be corrected when exceptions occur.

There is massive information processing.
But it does not lead to human trust or acceptance.

In such a state, AI may be “intelligent,” yet remain unstable within society.

That is why AI does not need to become a single ultimate intelligence.

What is needed is:

a structure in which humans, AI, organizations, institutions, and environments can coexist, make decisions, and continue to correct themselves.

It is not enough to enhance AI capability.

We must design:

where that capability is used,
to what purpose it is connected,
at what boundary it stops,
and how it can be corrected.

This is why the idea of the Intelligence Field becomes necessary.


3. What Is the Goal of an Intelligence Field?

The goal of AI as an Intelligence Field is not:

for AI to surpass humanity.

Nor is it simply for AI to process more information and produce more accurate answers.

The goal of an Intelligence Field is:

to create a state in which humans, AI, organizations, institutions, and environments can coexist without breaking down, make decisions when necessary, and continue correcting themselves when mistakes occur.

AI may propose something.

But no one knows for whom it has meaning.
No one knows what purpose it is connected to.
No one knows who should receive it.
No one knows how much it should be trusted.
No one knows who should correct it when it is wrong.

In such a situation, even if AI produces “answers,” intelligence is not yet truly functioning within society.

What the Intelligence Field seeks is not merely to increase AI output.

Rather, it seeks to create a cycle in which AI output:

  • gains meaning within relationships with humans and society,
  • connects to purpose,
  • becomes action,
  • is suspended when necessary,
  • is corrected when necessary,
  • and is fed back into learning.

In other words, the goal of the Intelligence Field is not:

to create an AI that produces correct answers,

but rather:

to create a social structure in which information, meaning, purpose, action, and correction continuously circulate.

The problem is not that AI becomes powerful.

The problem is whether that power can exist in a form that remains stable within relationships among humans, organizations, institutions, and environments.

That is why, within the Intelligence Field, the following become more important than “ultimate intelligence”:

  • the ability to coexist,
  • the ability to stop,
  • the ability to correct,
  • the ability to take responsibility,
  • and the ability to relearn.

To truly utilize AI intelligence within society, we must not only strengthen intelligence itself.

We must design the conditions under which intelligence can function.


4. What an Intelligence Field Requires

An Intelligence Field requires far more than a simple AI model.

This is because the goal of the Intelligence Field is not merely to make AI itself smarter.

Its goal is to create:

a structure in which humans, AI, organizations, institutions, and environments can coexist without breaking down, make decisions, and continue correcting themselves.

What matters, therefore, is not “intelligence itself,” but:

a field in which intelligence can function in a healthy way.

For intelligence to function healthily, it is not enough for information merely to exist.

We must also consider:

  • for whom that information has meaning,
  • in what context it is interpreted,
  • who can be trusted,
  • where the system should stop,
  • how mistakes are corrected,
  • and how the process is recorded and reused.

Because of this, an Intelligence Field requires at least the following layers.


1. Relation Layer

Intelligence does not emerge only inside isolated individuals.

Information acquires meaning because it exists within relationships.

The same information changes meaning depending on:

  • who produced it,
  • who receives it,
  • and within what relationship it is shared.

The same sentence carries different weight depending on whether it comes from:

  • a manager,
  • a customer,
  • or a specialist.

The same data is treated differently depending on whether it is viewed by:

  • a frontline worker,
  • a manager,
  • or an auditor.

Therefore, the Intelligence Field must handle:

  • who relates to whom,
  • who possesses what information,
  • who trusts what information,
  • who plays which role,
  • and within what context these relationships exist.

This is not merely a network diagram.

Relationships are the foundation that:

  • gives information meaning,
  • adjusts purposes,
  • and enables action.

In this sense, the Relation Layer is:

the foundation upon which intelligence emerges.
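
As one rough illustration of what "handling relationships" could mean in software, here is a hypothetical Python sketch. The roles and numeric weights are invented for the example and would have to be defined per domain:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    role: str  # e.g. "specialist", "manager", "customer"

@dataclass
class Message:
    sender: Actor
    receiver: Actor
    text: str

# Illustrative assumption: the same sentence carries different weight
# depending on who produced it and within what relationship it is shared.
ROLE_WEIGHT = {"specialist": 0.9, "manager": 0.7, "customer": 0.5}

def weight(msg: Message) -> float:
    """Interpret a message within its relationship, not in isolation."""
    return ROLE_WEIGHT.get(msg.sender.role, 0.3)

alert = "The deadline is at risk."
for role in ("specialist", "manager", "customer"):
    msg = Message(Actor("sender", role), Actor("worker", "frontline"), alert)
    print(role, weight(msg))
```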


2. Context Layer

AI often produces locally correct answers.

But in the real world, what is locally correct is not always appropriate.

This is because the meaning of information changes according to context.

For example:

“Please respond immediately.”

In ordinary operations, this may simply be a request.

But during a critical system outage, it becomes an emergency instruction.

Likewise, the same customer statement changes meaning depending on whether:

  • it is the first inquiry,
  • or follows repeated unresolved problems.

Real-world decisions are shaped by:

  • situation,
  • time,
  • relationships,
  • history,
  • institutions,
  • culture,
  • and atmosphere.

The Intelligence Field cannot ignore context.

It must handle not only information itself, but:

  • under what situation it appeared,
  • within what flow it emerged,
  • and against what background it exists.

The Context Layer is therefore:

the layer that transforms information into meaning.
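
A minimal sketch of how a Context Layer might disambiguate the same utterance. The context fields and meaning labels are assumptions made for this illustration:

```python
from dataclasses import dataclass

@dataclass
class Context:
    system_outage: bool
    prior_unresolved_contacts: int

def interpret(text: str, ctx: Context) -> str:
    """The same utterance maps to different meanings depending on context."""
    if text == "Please respond immediately.":
        if ctx.system_outage:
            return "emergency instruction"
        if ctx.prior_unresolved_contacts > 0:
            return "escalating complaint"
        return "ordinary request"
    return "unclassified"

msg = "Please respond immediately."
print(interpret(msg, Context(system_outage=False, prior_unresolved_contacts=0)))
print(interpret(msg, Context(system_outage=True, prior_unresolved_contacts=0)))
print(interpret(msg, Context(system_outage=False, prior_unresolved_contacts=3)))
```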


3. Trust Layer

Society does not operate through information processing alone.

Humans evaluate not only information itself, but also:

  • who produced it,
  • what evidence supports it,
  • and how far it should be trusted.

Even highly accurate information becomes questionable if it comes from an untrustworthy source.

Conversely, incomplete information from trusted experts or frontline workers may become critically important.

The same applies to AI.

For AI outputs to function within society, it must be clear:

  • how trustworthy the output is,
  • under what conditions it may be used,
  • who verified it,
  • and how far automation should be allowed.

The Intelligence Field therefore must design:

  • whose information is trusted,
  • which evidence matters,
  • who approved it,
  • how far trust extends,
  • and how trust degrades.

The Trust Layer is:

the layer that makes information socially usable.
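
One way a Trust Layer could be sketched in code is as metadata attached to every output. The fields, the 30-day decay, and the 0.8 automation threshold are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TrustedOutput:
    content: str
    source: str                        # who produced it
    evidence: list = field(default_factory=list)  # what supports it
    verified_by: Optional[str] = None  # who verified it
    verified_at: Optional[datetime] = None
    base_trust: float = 0.5            # illustrative score in [0, 1]

    def trust(self, now: datetime) -> float:
        """Trust degrades with time since verification (assumption:
        linear decay to zero over 30 days; real policies will differ)."""
        if self.verified_at is None:
            return self.base_trust * 0.5  # unverified: discounted
        age = (now - self.verified_at) / timedelta(days=30)
        return max(0.0, self.base_trust * (1.0 - min(age, 1.0)))

    def may_automate(self, now: datetime, threshold: float = 0.8) -> bool:
        """How far automation is allowed depends on current trust."""
        return self.trust(now) >= threshold

out = TrustedOutput("forecast: demand +12%", source="model-A",
                    verified_by="analyst", verified_at=datetime.now(),
                    base_trust=0.9)
print(out.trust(datetime.now()), out.may_automate(datetime.now()))
```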


4. Boundary Layer

Within the Intelligence Field, we must not think only about:

“What AI can do.”

Before that, we must define:

what AI must never do.

The more powerful AI becomes, the more important boundaries become.

For example:

  • AI should not automatically exclude people.
  • AI should not finalize medical diagnoses autonomously.
  • AI should not terminate contracts autonomously.
  • AI should not make decisions without appeal rights.
  • AI should not make definitive claims under high uncertainty.
  • AI should not process critical exceptions without logging them.

Without boundaries, AI becomes dangerous within society precisely because no one knows:

  • how far it should be trusted,
  • or where it should stop.

Social trust is not built merely on what AI can do.

It is built on:

where AI stops.

The Boundary Layer is therefore:

the layer that prevents intelligence from becoming uncontrollable within society.
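
One way to imagine a Boundary Layer in code is as a declarative policy checked before any action executes. The action names and the uncertainty threshold below are hypothetical:

```python
# Hypothetical boundary policy: what the system must never do on its own.
FORBIDDEN_WITHOUT_HUMAN = {
    "exclude_person",
    "finalize_diagnosis",
    "terminate_contract",
}
MAX_UNCERTAINTY_FOR_CLAIMS = 0.3  # illustrative threshold

def check_boundary(action: str, uncertainty: float, human_approved: bool) -> str:
    """Checked before execution; the default answer at a boundary is 'stop'."""
    if action in FORBIDDEN_WITHOUT_HUMAN and not human_approved:
        return "blocked: requires a human decision"
    if action == "make_definitive_claim" and uncertainty > MAX_UNCERTAINTY_FOR_CLAIMS:
        return "blocked: uncertainty too high"
    return "allowed"

print(check_boundary("terminate_contract", 0.1, human_approved=False))  # blocked
print(check_boundary("send_report", 0.1, human_approved=False))         # allowed
```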


5. Repair Layer

The Intelligence Field must not aim merely for:

“never failing.”

Of course reducing failure matters.

But in the real world, exceptions and errors inevitably occur.

Situations change.
Data is incomplete.
Human values change.
Institutions and environments evolve.

What truly matters is:

the ability to recover and correct.

When AI produces a mistaken proposal, it should be possible to suspend it instead of automatically executing it.

When decisions become difficult, escalation to humans must be possible.

When exceptions occur, they should be recorded rather than hidden.

When rules become outdated, they should be updated.

Without such repairability, AI cannot survive long within society.

The Intelligence Field requires structures that allow systems to:

  • suspend,
  • retry,
  • escalate,
  • accept human intervention,
  • update rules,
  • and learn from failure.

The goal of the Intelligence Field is not perfect intelligence.

It is:

Correctable Intelligence

The Repair Layer is therefore:

the layer that enables intelligence to recover from failure and continuously adapt to change.
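
As a sketch, a Repair Layer could be modeled as a small set of outcomes in which suspension and escalation are always available, and exceptions are always logged. The thresholds and retry limit are illustrative assumptions:

```python
from enum import Enum

class Outcome(Enum):
    EXECUTE = "execute"
    RETRY = "retry"
    SUSPEND = "suspend"
    ESCALATE = "escalate_to_human"

exception_log: list = []  # exceptions are recorded, never silently dropped

def repair_step(confidence: float, is_exception: bool, retries: int) -> Outcome:
    """Illustrative repair logic: suspend rather than auto-execute when
    confidence is low, and escalate once retries are exhausted."""
    if is_exception:
        exception_log.append({"confidence": confidence, "retries": retries})
    if confidence >= 0.9 and not is_exception:
        return Outcome.EXECUTE
    if retries < 2:
        return Outcome.RETRY
    return Outcome.ESCALATE if is_exception else Outcome.SUSPEND

print(repair_step(confidence=0.95, is_exception=False, retries=0))  # EXECUTE
print(repair_step(confidence=0.50, is_exception=True, retries=2))   # ESCALATE
```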


6. Decision Layer

Here we finally arrive at the concept of “decision,” the core idea behind the Decision Trace Model.

Decision is extremely important within the Intelligence Field.

However, decision itself is not the Intelligence Field.

Decision emerges on top of:

  • relationships,
  • context,
  • trust,
  • boundaries,
  • and repairability.

Decision is not merely AI producing an answer.

A piece of information is:

  • interpreted within context,
  • connected to purpose,
  • validated through trustworthy evidence,
  • checked against boundaries,
  • possibly escalated to humans,
  • and transformed into action in a correctable form.

Only within this structure does decision truly emerge.

The Decision Layer is therefore:

the core component that handles decision-making within the Intelligence Field.

It handles:

  • which signals to use,
  • under what conditions to act,
  • under what conditions to suspend,
  • when to hand decisions to humans,
  • when boundaries require stopping,
  • and where decisions should be recorded.

The Decision System is therefore:

the mechanism that transforms information into socially actionable decisions.
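
Here is one hypothetical sketch of such a decision step: it checks trust and boundaries, escalates to a human when either fails, and appends every decision to a trace. The threshold and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # "act", "suspend", or "stop"
    reason: str
    escalated: bool  # whether a human was brought in

def decide(signal: str, trust: float, crosses_boundary: bool,
           trace: list) -> Decision:
    """Interpret a signal, check trust and boundaries, record everything."""
    if crosses_boundary:
        d = Decision("stop", "boundary reached", escalated=True)
    elif trust < 0.7:  # illustrative trust threshold
        d = Decision("suspend", "insufficient trust", escalated=True)
    else:
        d = Decision("act", f"acted on signal: {signal}", escalated=False)
    trace.append(d)    # decisions never vanish silently
    return d

trace: list = []
decide("demand forecast", trust=0.9, crosses_boundary=False, trace=trace)
decide("demand forecast", trust=0.4, crosses_boundary=False, trace=trace)
print([d.action for d in trace])  # ['act', 'suspend']
```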


7. Trace Layer

Within the Intelligence Field, decisions and corrections must not disappear once made.

This is because intelligence does not emerge from isolated outputs.

It develops through accumulated and reusable experience.

What matters is not merely preserving outcomes.

We must preserve:

  • why a decision was made,
  • what information was used,
  • what context influenced it,
  • who was involved,
  • what boundaries were triggered,
  • where exceptions occurred,
  • and how corrections were made.
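
A Trace Layer might be sketched as one structured record per decision, mirroring the list above. The schema is an illustrative assumption, not a fixed format:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionTrace:
    """One record per decision: not only the outcome, but the why."""
    timestamp: datetime
    outcome: str
    rationale: str                 # why the decision was made
    inputs: list                   # what information was used
    context: dict                  # what context influenced it
    participants: list             # who was involved
    boundaries_triggered: list = field(default_factory=list)
    exceptions: list = field(default_factory=list)
    corrections: list = field(default_factory=list)  # how it was corrected

log: list = []
log.append(DecisionTrace(
    timestamp=datetime.now(),
    outcome="suspended",
    rationale="trust below threshold",
    inputs=["demand forecast"],
    context={"system_outage": True},
    participants=["AI", "on-call engineer"],
))
print(log[0].rationale)  # the decision remains explainable later
```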

With such records:

  • failures become material for improvement,
  • exceptions become clues for redesign,
  • and human decisions become reusable organizational intelligence rather than isolated experience.

Within the Intelligence Field:

recorded relationships themselves become intelligence.

The Trace Layer is therefore:

the layer that transforms the Intelligence Field into a continuously learning structure.
