■ Introduction
The evolution of AI has so far been driven by improvements in compute performance.
- Parallel processing with GPUs
- Optimization of large-scale matrix operations
- Expansion of memory bandwidth
However, as discussed in
“From the LLM Era to the Multi-Agent Era: How Will Semiconductor Architecture Change?”,
the bottleneck is clearly shifting as AI evolves from LLMs to multi-agent systems.
It is no longer about computation.
It is about:
“How data is connected.”
At the center of this shift are:
Optoelectronic Convergence and Interposer Technology
■ 1. Why Optoelectronic Convergence Is Necessary
Traditional semiconductor systems have relied on electrical signals to transfer data.
However, as AI evolves, several critical challenges have emerged:
● Shift in Bottlenecks
- Latency in inter-chip communication
- Increasing power consumption
- Thermal limitations
- Bandwidth constraints
Especially in multi-agent and distributed AI systems,
inter-chip communication becomes more dominant than on-chip computation.
● Energy Characteristics: Compute vs Communication
On-chip computation has been continuously optimized through:
- Process scaling (miniaturization)
- Architectural optimization (parallelization and specialization: GPU / ASIC / AI accelerators)
As a result:
Energy per operation has steadily decreased.
In contrast, inter-chip communication is fundamentally different:
- Long-distance transmission (off-package / board-level)
- Higher driving voltage requirements
- I/O circuitry dominates power consumption
As a result:
Energy per bit for communication is orders of magnitude higher than for computation.
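This gap can be made concrete with a back-of-the-envelope comparison. The numbers below are assumed order-of-magnitude illustrations (roughly 1 pJ per arithmetic operation, ~10 pJ/bit for board-level electrical I/O, ~1 pJ/bit for an optical or on-package link), not measurements of any specific device:

```python
# Illustrative order-of-magnitude comparison (assumed, not measured values):
# moving a 32-bit word across a board-level link can cost far more energy
# than the arithmetic performed on it.

E_OP_PJ = 1.0          # assumed energy per 32-bit arithmetic op (pJ)
E_BIT_BOARD_PJ = 10.0  # assumed energy per bit, board-level electrical I/O (pJ/bit)
E_BIT_OPT_PJ = 1.0     # assumed energy per bit, optical/short-reach link (pJ/bit)

word_bits = 32
e_move_board = E_BIT_BOARD_PJ * word_bits  # energy to move one word off-package
e_move_opt = E_BIT_OPT_PJ * word_bits

print(f"compute one word:        {E_OP_PJ:7.1f} pJ")
print(f"move word (electrical):  {e_move_board:7.1f} pJ  ({e_move_board / E_OP_PJ:.0f}x compute)")
print(f"move word (optical):     {e_move_opt:7.1f} pJ   ({e_move_opt / E_OP_PJ:.0f}x compute)")
```

Even with these generic placeholder values, moving a word off-package costs hundreds of times the energy of computing on it, which is the asymmetry driving optical interconnects.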
● This Is Where Optics Comes In
Optical communication offers:
- Ultra-high speed (low latency)
- Low power consumption (especially over longer distances)
- High bandwidth
Therefore, a natural division of roles emerges:
Computation in electricity, communication in light
■ 2. What Is an Interposer?
An interposer is:
An intermediate substrate that connects multiple chips.
● Basic Structure
- Logic chips (GPU / CPU)
- Memory (HBM: High Bandwidth Memory)
- Interposer (connection layer)
This enables what is known as:
2.5D packaging
● Structural Differences from Conventional Design
In traditional systems:
- Logic and memory are connected via PCB
- Wiring is long, limiting bandwidth, latency, and power efficiency
In contrast, 2.5D packaging:
- Places logic and HBM in close proximity on the same interposer
- Uses ultra-fine wiring (micrometer scale)
- Connects vertically via TSVs (Through Silicon Vias)
● Data Flow (Operation)
This is the essence of the architecture.
① Data Request (Read)
- The GPU issues a memory access request
- The signal travels through fine interposer wiring
- It reaches the HBM stack
- Memory cells are accessed via TSVs
→ Data is retrieved
② Data Transfer
- Retrieved data is output from HBM
- Travels through TSVs to the interposer
- Moves through ultra-wide bandwidth wiring back to the GPU
→ Short-distance, high-bandwidth transfer
③ Compute
- The GPU processes the received data
- Writes results back to memory as needed
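The three steps above can be sketched as a toy latency model. All latency values here are hypothetical placeholders chosen only to show how short interposer wiring shrinks the memory round trip; they are not vendor figures:

```python
# Minimal sketch of the request -> transfer -> compute -> write-back cycle.
# Latencies are assumed placeholder values, not measurements.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    distance_mm: float  # descriptive only: typical physical scale of the path
    latency_ns: float   # assumed end-to-end wire latency per traversal

def read_modify_write(link: Link, compute_ns: float) -> float:
    """Total time for one request/transfer/compute/write-back round trip."""
    request = link.latency_ns     # (1) GPU -> interposer wiring -> HBM via TSVs
    transfer = link.latency_ns    # (2) HBM -> TSVs -> wide interposer bus -> GPU
    write_back = link.latency_ns  # (3) result written back to memory
    return request + transfer + compute_ns + write_back

pcb = Link("DDR over PCB", distance_mm=50.0, latency_ns=10.0)            # assumed
interposer = Link("HBM on interposer", distance_mm=5.0, latency_ns=1.0)  # assumed

for link in (pcb, interposer):
    print(f"{link.name}: {read_modify_write(link, compute_ns=5.0):.1f} ns per round trip")
```

The point of the sketch: when the wire latency dominates, shortening the path is worth more than speeding up the compute step.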
● Why It Is Fast (Core Principle)
The key is:
Distance and parallelism
Short Distance
- Millimeter-scale
- Significantly shorter than conventional centimeter-scale paths
→ Lower latency and lower power consumption
High Parallelism
- Buses thousands of bits wide
- HBM uses channel-based parallel architecture
→ Bandwidth increases dramatically (TB/s scale)
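The "width × rate" arithmetic behind that claim can be checked with representative HBM2-class figures (a 1024-bit interface per stack at about 2 Gb/s per pin; the stack count is an assumed example):

```python
# Bandwidth = bus width x per-pin rate: representative HBM2-class figures
# show how wide, short interposer buses reach TB/s aggregate bandwidth.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s (bits/s -> bytes/s)."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = stack_bandwidth_gbs(bus_width_bits=1024, pin_rate_gbps=2.0)
stacks = 6  # assumed: high-end GPU packages carry several stacks

print(f"per stack: {per_stack:.0f} GB/s")
print(f"{stacks} stacks: {per_stack * stacks / 1000:.3f} TB/s aggregate")
```

A 1024-bit bus is impractical over a PCB but routine over micron-pitch interposer wiring, which is why the parallelism (not just the clock rate) is the source of the bandwidth.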
● Intuitive Comparison
Traditional (DDR + PCB)
- Narrow paths over long distances
- Congestion occurs
2.5D (HBM + Interposer)
- Wide paths over very short distances
- Massive parallel data transfer
■ 3. The Essential Role of the Interposer
The interposer is not just a connection layer.
It is a system-level integration platform.
● ① Ultra-High-Density Interconnect
- Micron-scale wiring
- TSVs
- Thousands to tens of thousands of connections
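A rough pitch comparison shows where those connection counts come from. The pitches below are assumed representative values (hundreds of microns for PCB-level routing, tens of microns for interposer microbumps), used only to illustrate the scaling:

```python
# Rough sketch: why micron-scale interposer wiring yields an order of
# magnitude more connections than a PCB. Pitches are assumed values.

def escapes_per_mm(pitch_um: float) -> float:
    """Signal connections per mm of die edge at a given wiring pitch."""
    return 1000.0 / pitch_um

pcb_pitch_um = 500.0        # assumed typical board-level routing pitch
interposer_pitch_um = 40.0  # assumed microbump pitch on a silicon interposer
edge_mm = 25.0              # assumed die-edge length available for escapes

print(f"PCB:        {escapes_per_mm(pcb_pitch_um) * edge_mm:.0f} connections")
print(f"Interposer: {escapes_per_mm(interposer_pitch_um) * edge_mm:.0f} connections")
```

Multiply that density over multiple die edges and multiple routing layers, and "thousands to tens of thousands of connections" follows directly from the pitch.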
→ Foundation of the chiplet era
● ② Proximity of Memory and Logic
- Ultra-fast HBM integration
- Reduced latency
- Increased bandwidth
→ Directly impacts AI performance
● ③ Heterogeneous Integration
- Logic + memory + photonic devices
- Integration across different process technologies
→ System-level optimization
■ 4. Optoelectronic Convergence × Interposer
The key question is:
Where should optical devices be integrated?
One of the most effective answers is:
On the interposer
● Why the Interposer?
- Optimal boundary between electrical and optical domains
- Acts as a hub connecting multiple chips
- High design flexibility
● Resulting Architecture
- Logic chips (AI processing)
- Memory (HBM)
- Photonics (optical I/O)
- Interposer (integration platform)
The interposer evolves from “wiring” into a platform for integration
■ 5. Relationship with the Chiplet Era
Semiconductor design is shifting from monolithic integration to:
Chiplet architectures
● Traditional
- Everything integrated into a single chip
- High manufacturing complexity
- Scaling limitations
● Chiplets
- Functionally separated chips
- Manufactured with optimal processes
- Flexible system composition
The key challenge becomes:
How to connect chiplets
→ The answer: Interposers
■ 6. Impact on AI Architecture
This is not just a hardware evolution.
It transforms the structure of AI itself.
● Before (LLM Era)
- Compute-centric
- Single large models
- Dominated by internal processing
● After (Multi-Agent Era)
- Communication-centric
- Distributed agents
- Dominated by interconnection
A shift from “computation” to “connectivity”
■ 7. Connection to the Decision Trace Model
This shift aligns with software architecture as well.
● Multi-Agent Systems
- Multiple perspectives
- Parallel generation
- Expanded exploration space
● Decision Trace Model
- Structured final decisions
- Boundary-based control
- Human-in-the-loop
The key is:
Information flow between agents
● Hardware Mapping
- Multi-Agent → Distributed chiplets
- Communication → Optical interconnect
- Decision integration → Interposer
The interposer can be seen as a physical orchestrator
■ 8. Future Outlook
This field will continue to evolve:
● Directions
- Co-Packaged Optics (CPO)
- Silicon photonics
- 3D stacking
- Optical memory
● Fundamental Shift
Semiconductors are evolving:
From compute devices to connectivity platforms
■ Takeaway
Optoelectronic convergence and interposer technology are not just about speed.
They fundamentally change:
The structure of systems
Ultimately, what matters is:
- What information flows
- Where decisions are made
- Where control is enforced
Hardware itself is beginning to embody decision structures
If AI becomes a decision system,
then the underlying semiconductor infrastructure must evolve to support it.
This evolution is not optional.
It is inevitable.

Specialized in AI system design and decision-making architecture.
Focused on externalizing decision logic using Ontology, DSL, and Behavior Trees, and building multi-agent systems.
