In recent years, the way we use LLMs has diverged into two major approaches.
One is using tools like Claude Code to generate and execute code. The other is using prompts to obtain answers directly from the model.
Both approaches are extremely powerful and are rapidly spreading in real-world applications.
However, there is one critical question we must ask:
👉 Where does the “decision” actually exist in these systems?
Prompt-Based AI — It Appears to Decide, But Nothing Exists
Consider a simple prompt: say, asking the model whether a given transaction should be flagged as fraud. The LLM produces a plausible answer.
However, at this moment:
- The decision criteria are not explicitly defined anywhere
- We cannot explain why that decision was made
- The same input may yield different results
In other words:
👉 The decision appears to exist, but structurally, it does not exist
Claude Code — It Looks Fixed, But Has the Same Problem
Now consider code generation tools like Claude Code.
If we ask:
“Write fraud detection logic”
we might get something like:

```python
if risk_score > 0.8:
    freeze_account()
```
At first glance, this looks like a clear decision rule.
But in reality:
- Why is the threshold 0.8?
- Who decided it?
- Under what conditions does it change?
This is not a true decision.
👉 It is merely a piece of text that resembles a decision, now fixed in code
The Common Problem — No Location of Decision
Prompting and code generation may seem different, but they share the same fundamental issue.
| Method | State |
|---|---|
| Prompting | Decision is implicit |
| Code Generation | Decision is arbitrarily fixed |
In both cases:
👉 There is no ownership, rationale, or history of the decision
The Solution Is Not a Tool — It Is an Architectural Concept
The key point is this:
👉 The problem is not whether we use Claude Code or refine prompts
What we need is:
👉 A design concept that defines where decisions exist (a meta-architecture)
Externalizing Decision as an Architecture
Decisions should not reside inside LLMs.
They must exist as explicit external structures.
A basic structure looks like this:
- Ontology → Defines meaning
- DSL → Defines decision conditions
- Behavior Tree → Defines execution structure
For example:
```
IF risk_score > 0.8
THEN freeze_account
ELSE allow_transaction
```
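Such a rule can live entirely outside the model. As a minimal sketch, assuming the rule reads `IF risk_score > 0.8 THEN freeze_account ELSE allow_transaction` (with `risk_score`, the 0.8 threshold, and the action names taken from the example above), a tiny parser and evaluator might look like:

```python
import re

def parse_rule(text):
    """Parse a single 'IF <var> > <num> THEN <action> ELSE <action>' rule."""
    m = re.match(
        r"IF\s+(\w+)\s*>\s*([\d.]+)\s+THEN\s+(\w+)\s+ELSE\s+(\w+)", text.strip()
    )
    if not m:
        raise ValueError(f"unparseable rule: {text!r}")
    var, threshold, then_action, else_action = m.groups()
    return {"var": var, "threshold": float(threshold),
            "then": then_action, "else": else_action}

def evaluate(rule, signals):
    """Apply the rule to externally supplied signals; return the action name."""
    value = signals[rule["var"]]
    return rule["then"] if value > rule["threshold"] else rule["else"]

rule = parse_rule("IF risk_score > 0.8 THEN freeze_account ELSE allow_transaction")
print(evaluate(rule, {"risk_score": 0.93}))  # freeze_account
print(evaluate(rule, {"risk_score": 0.41}))  # allow_transaction
```

Because the rule is data, not code, it can be diffed, reviewed, and versioned like any other artifact.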
With this structure:
- Decision criteria are explicit
- Changes are manageable
- Versioning is possible
- Auditing becomes feasible
The Proper Role of LLMs
So what is the role of LLMs?
The answer is clear:
👉 LLMs do not make decisions
👉 They support decision-making
Specifically:
- Generating signals (scores, summaries)
- Proposing ontologies
- Assisting DSL creation
- Explaining decisions
In other words:
LLM = Support Engine for Decision Structures
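A minimal sketch of this division of labor, where `llm_risk_score` is a hypothetical stand-in for a real model call: the model only produces a signal, while the decision criterion lives in explicit, reviewable code.

```python
def llm_risk_score(transaction: dict) -> float:
    """Stand-in for an LLM call that returns a risk signal in [0, 1].
    In a real system this would call a model API; here it is a fixed stub."""
    return 0.93 if transaction["amount"] > 10_000 else 0.10

# The decision criterion lives here, outside the model:
# explicit, owned, and changed via review rather than by the model.
RISK_THRESHOLD = 0.8

def decide(transaction: dict) -> str:
    score = llm_risk_score(transaction)   # LLM: generates a signal
    if score > RISK_THRESHOLD:            # structure: makes the decision
        return "freeze_account"
    return "allow_transaction"

print(decide({"amount": 25_000}))  # freeze_account
```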
The Proper Positioning of Prompting and Claude Code
Now we can clearly define their roles.
Prompting
👉 A tool for thinking about decision structures
- Articulating decision criteria
- Structuring logic
- Comparing alternatives
Claude Code
👉 A tool for implementing decision structures
- Generating DSL parsers
- Building Behavior Tree executors
- Implementing Decision Trace storage
- Creating APIs and UI
Prompt × Claude Code — Only Together Do They Work
The key insight is:
👉 These are not competing approaches — they only become meaningful when combined
Because each alone has clear limitations.
Why Each Alone Is Insufficient
Prompting Alone
You can:
- Define decision criteria
- Explain structures
But:
- It does not become executable
- It is not fixed in a system
- It is not reliably logged
👉 You can design, but nothing runs
Claude Code Alone
You can:
- Build systems
- Execute logic
But:
- Decision criteria become implicit
- Logic is embedded in code
- The rationale is unclear
👉 It runs, but no one knows why
What Happens When Combined
By combining both:
👉 Design and implementation remain separated, yet connected
This is a critical structure.
The Workflow
Step 1: Design Decision Structure via Prompting
- Define fraud detection criteria
- Establish an ontology of risk levels
- Express conditions as DSL
- Design Behavior Tree branches
Key point:
👉 Externalize decisions as language before turning them into code
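The output of this step can be plain data rather than code. A hedged illustration, with a hypothetical risk-level ontology and a rule in the style of the earlier example:

```python
# Illustrative output of the design step: the decision structure as plain data,
# reviewable and versionable before any implementation exists.
decision_spec = {
    "version": "1.0",
    "ontology": {  # risk levels with explicit meanings (names are assumptions)
        "low":    {"range": [0.0, 0.5], "meaning": "routine transaction"},
        "medium": {"range": [0.5, 0.8], "meaning": "needs monitoring"},
        "high":   {"range": [0.8, 1.0], "meaning": "suspected fraud"},
    },
    "rules": [
        "IF risk_score > 0.8 THEN freeze_account ELSE allow_transaction",
    ],
}

def classify(score: float) -> str:
    """Map a raw score onto the ontology's risk levels."""
    for level, spec in decision_spec["ontology"].items():
        lo, hi = spec["range"]
        if lo <= score <= hi:
            return level
    raise ValueError(score)

print(classify(0.93))  # high
```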
Step 2: Implement Structure via Claude Code
- Generate DSL parsers
- Build Behavior Tree runners
- Implement Decision Trace storage
- Expose as APIs
Key point:
👉 Implement without breaking the defined decision structure
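A minimal sketch of such a Behavior Tree runner, assuming simple Condition/Action leaves under Sequence/Selector composites (the node names and blackboard layout are illustrative, not a specific library):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Condition:
    check: Callable[[dict], bool]
    def tick(self, blackboard: dict) -> bool:
        return self.check(blackboard)

@dataclass
class Action:
    name: str
    def tick(self, blackboard: dict) -> bool:
        blackboard["decision"] = self.name  # record the chosen action
        return True

@dataclass
class Sequence:
    children: List
    def tick(self, blackboard: dict) -> bool:
        return all(child.tick(blackboard) for child in self.children)

@dataclass
class Selector:
    children: List
    def tick(self, blackboard: dict) -> bool:
        return any(child.tick(blackboard) for child in self.children)

# The tree mirrors the DSL rule: high risk → freeze, otherwise → allow.
tree = Selector([
    Sequence([Condition(lambda bb: bb["risk_score"] > 0.8),
              Action("freeze_account")]),
    Action("allow_transaction"),
])

bb = {"risk_score": 0.93}
tree.tick(bb)
print(bb["decision"])  # freeze_account
```

The tree structure stays faithful to the externally defined rule; the implementation only executes it.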
What This Separation Enables
1. Decisions Become Independent from Code
- Managed externally as DSL
- Easily modifiable
2. The Role of LLMs Becomes Clear
- Focus on supporting design
- No execution responsibility
3. Systems Become Explainable
- Decisions can be traced
- Stored as Decision Trace
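One possible shape for such a Decision Trace record; the field names are assumptions, not an established schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    rule_version: str  # which version of the externalized rule was applied
    inputs: dict       # the signals the decision was based on
    decision: str      # the action that was chosen
    timestamp: str     # when the decision was made

def record_decision(rule_version: str, inputs: dict, decision: str) -> DecisionTrace:
    trace = DecisionTrace(
        rule_version=rule_version,
        inputs=inputs,
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In a real system this would go to durable storage; here we just serialize.
    print(json.dumps(asdict(trace)))
    return trace

trace = record_decision("rules-v1.0", {"risk_score": 0.93}, "freeze_account")
```

With such records, every decision can be replayed against the rule version that produced it.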
The Most Important Rule
👉 Use Prompt → Claude Code in this order
If reversed:
- Claude Code generates implicit logic
- Explanations are added afterward
Result:
👉 Decisions become black boxes again
A Critical Pitfall
Even this structure has a major risk.
👉 Without a design concept, both prompting and Claude Code will re-embed decisions
Prompting
→ Decisions disappear into conversations
- Explained temporarily
- Not structurally preserved
- Not reproducible
Claude Code
→ Decisions are fixed inside code
- Appear explicit
- But lack rationale and history
- Contain hidden assumptions
Ultimately:
👉 Both approaches collapse decisions back into black boxes
What Truly Matters
The conclusion is clear.
What matters is not:
- Prompting
- Claude Code
👉 What matters is the architectural concept that externalizes decisions
Why This Matters
This is not just a technical issue.
It is about:
👉 Whether AI systems can carry responsibility
If Decisions Do Not Exist Structurally
- They cannot be reproduced
- They cannot be improved
- They cannot be audited
- Responsibility cannot be assigned
In other words:
👉 The AI may function, but the system does not truly exist
Final Conclusion
- Prompting allows us to describe decisions
- Claude Code allows us to implement decisions
But:
👉 Neither can become the decision itself
Looking Forward
AI systems will shift from:
👉 Model-centric architectures
to
👉 Decision-structure-centric architectures
And within that shift:
👉 LLMs will no longer be decision-makers, but components that support decision structures
In One Sentence
👉 What matters is not the tools, but the architectural decision of where decisions exist.

Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
