With the rise of generative AI, prompts have rapidly moved to the center of many systems.
We instruct AI using natural language—to classify, summarize, and generate proposals.
This way of working has already become common across many real-world environments.
However, once AI is put into practice, a fundamental limitation quickly appears.
- The same input produces inconsistent results
- It is difficult to explain why a decision was made
- It is unclear how far automation should go
- Conditions for escalating to humans are not defined
In other words, while AI can generate outputs, those outputs are too unstable to be treated as decisions.
This problem cannot be solved by improving model performance alone.
The core issue is not intelligence; it is the absence of an explicit decision structure.
This is where DSL becomes essential.
What Is DSL
DSL stands for Domain-Specific Language.
It is a language designed for a specific purpose within a particular domain.
In this context, DSL is not just a format for writing configurations.
It is a way to explicitly describe decisions, rules, and structures in a given domain.
General-purpose languages like Python or JavaScript allow you to write any kind of logic.
In contrast, DSL intentionally restricts flexibility in order to make decision logic:
- easier to read
- easier to validate
- easier to reuse
The differences can be summarized as follows:
| Aspect | General-purpose Language | DSL |
|---|---|---|
| Purpose | Write arbitrary logic | Express domain-specific decisions |
| Readability | Developer-oriented | Domain-oriented |
| Decision representation | Often implicit | Explicit |
| Reproducibility | Implementation-dependent | Structurally consistent |
The key point is that DSL is not just a different way of writing code.
👉 It is a framework for turning implicit judgment into explicit structure.
Why Rigor Is Necessary
When using AI in real systems, what is needed is not “plausible outputs.”
What is needed is:
- What should be done
- Why it should be done
- When the system should stop
- Who is responsible
Rigor does not mean writing things in excessive detail.
It means structuring decisions so that assumptions, conditions, exceptions, boundaries, and responsibilities are clearly defined.
There are four main reasons why rigor is necessary.
1. Reproducibility
A system that produces different decisions under similar conditions cannot be trusted in real operations.
Even when conditions are never perfectly identical, there must be a stable structure: under these conditions, this outcome follows.
Without rigor, decisions appear arbitrary—like the mood of a model or the habits of an implementer.
2. Explainability
Whenever AI is introduced, the question inevitably arises:
👉 “Why did this happen?”
This becomes critical for decisions such as:
- rejection
- escalation
- pricing changes
- reward allocation
- approvals
Explanations cannot rely on post-hoc narratives.
They require a predefined decision structure.
3. Responsibility Boundaries
Even if AI suggests something, it is not always clear:
- Should it be executed automatically?
- Should a human approve it?
- Should it be stopped under certain conditions?
If these boundaries are unclear, responsibility becomes unclear.
Rigor is a mechanism for preventing responsibility from being diffused.
4. Operational Viability
Systems are not used once—they are operated continuously.
They must be:
- maintained
- improved
- audited
- handed over
This requires decisions to be written in a verifiable form, not embedded in intuition or natural language ambiguity.
Why Prompts Lack Rigor
Prompts are powerful.
They allow complex reasoning with minimal implementation.
But when prompts are used as the core of decision systems, limitations appear.
The reason is simple:
👉 A prompt is language, not a specification.
Natural language is flexible, which makes it ideal for interaction.
But that same flexibility introduces ambiguity.
For example:
- Interpretations may vary
- Priority of conditions is unclear
- Edge cases are hard to define
- Stop conditions are buried
- Logs are difficult to structure
Prompts are excellent for extracting meaning.
But they are not suitable for fixing decisions as a system.
This does not mean prompts should be discarded.
👉 It means prompts need structure around them.
That structure is DSL.
Decision Trace Model and DSL
This becomes clearer through the Decision Trace Model.
Decision-making is structured as:
Event → Signal → Decision → Boundary → Human → Log
The key insight is:
👉 AI output is not the final decision.
- Event: What happened
- Signal: AI predictions or scores
- Decision: What to do
- Boundary: When to stop or escalate
- Human: Who takes responsibility
- Log: What is recorded
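These six stages can be sketched as a single trace record. The following is a minimal illustration in Python; the field names are chosen here for the example, not prescribed by the model itself.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionTrace:
    """One record in the Decision Trace Model (illustrative field names)."""
    event: str                    # Event: what happened
    signal: dict                  # Signal: AI predictions or scores
    decision: str                 # Decision: what to do
    boundary_triggered: bool      # Boundary: did we stop or escalate?
    human_owner: Optional[str]    # Human: who takes responsibility
    log: dict = field(default_factory=dict)  # Log: what is recorded

trace = DecisionTrace(
    event="inquiry_received",
    signal={"intent": "purchase", "score": 0.92},
    decision="send_coupon",
    boundary_triggered=False,
    human_owner=None,
    log={"reason": "score above threshold"},
)
```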
The critical distinction is:
👉 Signal and Decision are fundamentally different.
AI produces Signals.
For example:
- “High purchase intent”
- “High risk inquiry”
- “Regulatory relevance detected”
But deciding:
- whether to issue a coupon
- whether to respond automatically
- whether to escalate to a human
- whether to do nothing
belongs to the Decision layer.
DSL is the mechanism that makes this Decision and Boundary explicit.
How DSL Brings Rigor
The essence of DSL is to decompose decisions into:
- readable
- verifiable
- executable
structures.
1. Explicit Conditions
```yaml
decision:
  condition:
    - signal.intent == "purchase"
    - signal.score > 0.8
```
The decision criteria are explicitly defined as specifications, not hidden in code.
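One way to make such condition strings executable is to parse them against a signal. A minimal sketch follows, assuming a deliberately tiny grammar (only `==`, `>`, `<`) invented here for illustration:

```python
import operator
import re

# Evaluate DSL condition strings such as
#   signal.intent == "purchase"   or   signal.score > 0.8
# against a signal dict. The grammar is an assumption for illustration.
OPS = {"==": operator.eq, ">": operator.gt, "<": operator.lt}
PATTERN = re.compile(r'signal\.(\w+)\s*(==|>|<)\s*(.+)')

def check(condition: str, signal: dict) -> bool:
    field, op, raw = PATTERN.match(condition).groups()
    value = raw.strip().strip('"')   # unquote string literals
    try:
        value = float(value)         # numbers compare numerically
    except ValueError:
        pass
    return OPS[op](signal[field], value)

signal = {"intent": "purchase", "score": 0.92}
ok = all(check(c, signal)
         for c in ['signal.intent == "purchase"', 'signal.score > 0.8'])
```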
2. Separation of Decision and Action
```yaml
action:
  - send_coupon
  - log_decision
```
This separation allows flexibility in modifying behavior without changing decision logic.
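The separation can be sketched as a name-to-action lookup, so the DSL controls which actions run without touching decision code. The action names and the recording scheme here are illustrative:

```python
# Actions are registered by name; the DSL refers to them only by that name.
executed = []  # records what ran, standing in for real side effects

ACTIONS = {
    "send_coupon": lambda ctx: executed.append(("send_coupon", ctx["user"])),
    "log_decision": lambda ctx: executed.append(("log_decision", ctx["user"])),
}

def run_actions(names, ctx):
    """Execute the DSL-listed actions in order."""
    for name in names:
        ACTIONS[name](ctx)

run_actions(["send_coupon", "log_decision"], {"user": "u123"})
```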
3. Explicit Boundaries
```yaml
boundary:
  - signal.risk > 0.7: escalate_to_human
```
The system explicitly defines when it must stop or escalate.
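A boundary check of this kind can be evaluated before any automated action runs. A minimal sketch, with the predicate and threshold taken from the example above:

```python
def check_boundaries(signal, boundaries):
    """Return the escalation action for the first tripped boundary, else None."""
    for predicate, action in boundaries:
        if predicate(signal):
            return action
    return None

# Mirrors the DSL line: signal.risk > 0.7 -> escalate_to_human
boundaries = [(lambda s: s.get("risk", 0) > 0.7, "escalate_to_human")]

check_boundaries({"risk": 0.9}, boundaries)   # "escalate_to_human"
check_boundaries({"risk": 0.2}, boundaries)   # None
```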
4. Structured Logging
```yaml
log:
  - event_id
  - decision_reason
  - selected_action
```
Decision history becomes traceable and auditable.
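These log fields can be emitted as one structured record. A sketch using JSON, with a timestamp added here for illustration:

```python
import datetime
import json

def log_decision(event_id, decision_reason, selected_action):
    """Serialize one decision as a structured, auditable JSON record."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_id": event_id,
        "decision_reason": decision_reason,
        "selected_action": selected_action,
    }
    return json.dumps(record)

entry = log_decision("evt-001", "score above threshold", "send_coupon")
```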
Combining with Ontology
DSL becomes more powerful when combined with ontology.
Ontology defines the meaning of terms used in decisions.
For example:
```json
{
  "intent": ["purchase", "browse", "exit"],
  "risk_level": ["low", "medium", "high"]
}
```
This ensures that the vocabulary used in DSL remains consistent.
👉 Ontology defines meaning
👉 DSL defines decisions using that meaning
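One practical benefit is that the vocabulary used in DSL rules can be validated against the ontology before deployment, catching typos such as `"purchse"`. A minimal sketch; the error handling is invented for illustration:

```python
# Ontology from the example above: each field has a closed vocabulary.
ONTOLOGY = {
    "intent": ["purchase", "browse", "exit"],
    "risk_level": ["low", "medium", "high"],
}

def validate_term(field, value, ontology=ONTOLOGY):
    """Reject any field or value that is not part of the ontology."""
    if field not in ontology:
        raise ValueError(f"unknown field: {field}")
    if value not in ontology[field]:
        raise ValueError(f"{value!r} is not a valid {field}")
    return True

validate_term("intent", "purchase")   # True
```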
Combining with Behavior Trees
Behavior Trees define execution flow.
While DSL defines what to do, BT defines how to evaluate and execute.
Example:
```text
Selector
├── Condition: high_risk
│   └── Action: escalate_to_human
├── Condition: purchase_and_high_score
│   └── Action: send_coupon
└── Action: log_only
```
Roles:
- Ontology → meaning
- DSL → decision logic
- BT → execution structure
Together, they form a robust decision system.
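The Selector above can be sketched in a few lines. This is a toy behavior-tree evaluator written for this article, not any specific BT library:

```python
def selector(children):
    """Try children in order; return the first non-None result."""
    def run(ctx):
        for child in children:
            result = child(ctx)
            if result is not None:
                return result
        return None
    return run

def guarded(condition, action):
    """A condition node guarding an action: succeed only if the guard holds."""
    return lambda ctx: action if condition(ctx) else None

tree = selector([
    guarded(lambda c: c["risk"] > 0.7, "escalate_to_human"),
    guarded(lambda c: c["intent"] == "purchase" and c["score"] > 0.8,
            "send_coupon"),
    lambda ctx: "log_only",   # fallback: always succeeds
])

tree({"risk": 0.2, "intent": "purchase", "score": 0.9})   # "send_coupon"
```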
Implementation Example
```python
class DecisionEngine:
    """Executes rules whose conditions and actions were defined in the DSL."""

    def __init__(self, rules):
        self.rules = rules

    def evaluate(self, signal):
        # Return the action of the first rule whose conditions all hold.
        for rule in self.rules:
            if all(cond(signal) for cond in rule["conditions"]):
                return rule["action"]
        return "no_action"
```
This shows how DSL-defined rules can be executed externally.
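In practice, rule entries would be compiled from the DSL rather than written by hand. A sketch that builds condition callables from rule data mirroring the earlier YAML example; the compilation scheme and field names are invented for illustration:

```python
def compile_rule(spec):
    """Turn a DSL-style rule spec into condition callables plus an action."""
    conditions = [
        lambda s: s["intent"] == spec["intent"],
        lambda s: s["score"] > spec["min_score"],
    ]
    return {"conditions": conditions, "action": spec["action"]}

rule = compile_rule({"intent": "purchase", "min_score": 0.8,
                     "action": "send_coupon"})

def evaluate(rules, signal):
    # Same loop as the engine above: first fully-matching rule wins.
    for r in rules:
        if all(cond(signal) for cond in r["conditions"]):
            return r["action"]
    return "no_action"

evaluate([rule], {"intent": "purchase", "score": 0.92})   # "send_coupon"
```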
Do We Have to Write DSL Manually?
A natural question:
👉 “Do we have to write everything manually?”
The answer is:
👉 No—and we shouldn’t.
DSL is not about writing everything by hand.
It is about structuring decisions.
Role Separation
- DSL → defines stable structure
- AI → handles variability
👉 DSL = skeleton
👉 AI = muscle
How DSL Is Created
- Core logic is designed by humans
- Templates are reused
- AI assists DSL generation
- Rules evolve through feedback
DSL becomes a living design asset.
Why DSL Is Still Necessary
Even with AI:
👉 AI cannot produce accountable decisions.
AI generates probable outputs.
But systems require:
- explicit conditions
- reproducibility
- explainability
- responsibility
DSL acts as:
👉 the interface of decision-making
Use Cases
Retail
Optimize incentives based on user behavior and constraints.
Manufacturing
Ensure compliance decisions are structured and auditable.
Customer Support
Define when to automate and when to escalate.
Conclusion
DSL is not about writing rules.
👉 It is about structuring decisions.
AI can generate outputs.
But without structure:
- decisions are unstable
- responsibility is unclear
- systems cannot be trusted
Through:
- Ontology (meaning)
- DSL (decision)
- Behavior Tree (execution)
AI evolves into:
👉 a decision system
DSL is not a replacement for prompts.
👉 It is the foundation that makes prompts usable in real systems.
For the creation of DSLs, please also refer to the following articles:
- How to Build Ontology, DSL, and Behavior Trees Efficiently and Accurately — A Practical Method for Designing Decision Structures
- How to Design an AI Orchestrator — Implementing Decision Structures with GNNs, Ontologies, DSLs, and Behavior Trees
Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
