Concurrency Patterns: The 5-Pattern Pipeline
Monolex uses a synergistic pipeline of five concurrency patterns to handle high-volume terminal output. Each pattern solves its own problem AND enables the next pattern to work correctly.
[Q.E.D VERIFIED] All patterns documented here are verified against production code with file:line citations.
The Complete Pipeline
PTY Master (unlimited speed)
|
v
+===============================================================+
| PATTERN 1: Unbounded Channel |
| Guarantee: NO DATA LOSS |
| Mechanism: Unlimited buffer, never drops |
+===============================================================+
|
v
+===============================================================+
| PATTERN 2: Unix Socket Kernel Buffer |
| Guarantee: NATURAL BACKPRESSURE |
| Mechanism: ~208KB kernel buffer, blocks write when full |
+===============================================================+
|
v
+===============================================================+
| PATTERN 3: Actor (SessionActor) |
| Guarantee: NO RACE CONDITIONS, ORDERED PROCESSING |
| Mechanism: Single owner, message queue (MPSC) |
+===============================================================+
|
v
+===============================================================+
| PATTERN 4: AtomicState (Coalescing Buffer) |
| Guarantee: FIXED MEMORY, LATEST STATE ONLY |
| Mechanism: Overwrite (not accumulate) |
+===============================================================+
|
v
+===============================================================+
| PATTERN 5: ACK Flow Control |
| Guarantee: SCREEN STABILITY |
| Mechanism: Wait for render confirmation before next emit |
+===============================================================+
|
v
xterm.js WebGL Render (60fps)
Data Reduction Summary
| Stage | Input | Output | Reduction |
|-------|-------|--------|-----------|
| 1. Unbounded Channel | 10,000/sec | 10,000/sec | 0% (preserves all) |
| 2. Socket Buffer | 10,000/sec | 10,000/sec | 0% (backpressure only) |
| 3. Actor | 10,000/sec | 10,000/sec | 0% (ordering only) |
| 4. AtomicState | 10,000/sec | ~500/sec | 95% (coalescing) |
| 5. ACK | ~500/sec | 60/sec | 88% (rate limiting) |
| TOTAL | 10,000/sec | 60/sec | 99.4% |
All 10,000 updates are PROCESSED by the VTE parser; only EMISSION to the frontend is reduced. This is state coalescing, not data loss.
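The processed-versus-emitted distinction can be sketched with a minimal coalescing slot (the `CoalescingSlot` type here is illustrative, not Monolex's real `AtomicState`): every update is processed, but storage is overwritten rather than queued, so a pull only ever yields the latest state.

```rust
/// Illustrative coalescing slot: every push is processed,
/// but only the newest state survives for emission.
struct CoalescingSlot {
    latest: Option<u64>, // stand-in for a full grid state
    processed: u64,
}

impl CoalescingSlot {
    fn new() -> Self {
        Self { latest: None, processed: 0 }
    }

    /// Every update is processed (counted), but storage is overwritten.
    fn push(&mut self, state: u64) {
        self.processed += 1;
        self.latest = Some(state); // overwrite, never accumulate
    }

    /// Emission hands out only the current state.
    fn pull(&mut self) -> Option<u64> {
        self.latest.take()
    }
}

fn main() {
    let mut slot = CoalescingSlot::new();
    for state in 1..=10_000u64 {
        slot.push(state);
    }
    // All 10,000 updates were processed...
    assert_eq!(slot.processed, 10_000);
    // ...but a single pull yields only the latest state.
    assert_eq!(slot.pull(), Some(10_000));
    assert_eq!(slot.pull(), None);
    println!("processed=10000, emitted=1 (latest only)");
}
```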
Pattern 1: Actor over Mutex
[Q.E.D VERIFIED] SessionActor pattern implemented in lib.rs eliminates lock contention.
The Problem
Multiple sources want terminal state simultaneously:
PTY Output Thread ------+
                        |
User Input Handler -----+------> Terminal State <------ Resize Handler
                        |            (Grid)
Renderer Timer ---------+               ^
                                        |
                                  Close Handler
Without protection: DATA RACE
Why Actor Wins
Mutex Approach
Writers block readers
Lock contention under high PTY output
Priority inversion (resize blocks PTY)
Deadlock risk with multiple resources
Actor Approach
No blocking (async send)
FIFO guaranteed
Impossible deadlock
Easy to reason about
Timeline Comparison
Mutex:
T=0ms PTY: lock() --> write --> unlock()
T=1ms Resize: lock() --> BLOCKED (50ms!)
T=51ms Resize: finally acquires lock
Actor:
T=0ms PTY: tx.send(Line1) --> completes instantly
T=0ms Resize: tx.send(Resize) --> completes instantly
T=0ms Actor: recv(Line1) --> process
T=1ms Actor: recv(Resize) --> apply NEW dimensions
T=2ms Actor: recv(Line2) --> process with NEW dimensions
Pattern Origin: Actor model from Carl Hewitt (1973), proven in Erlang telecom systems (99.9999999% uptime).
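The actor timeline above can be sketched with `std::sync::mpsc`. The message and state types here (`SessionMsg`, `SessionState`) are illustrative, not Monolex's real definitions: a single owner thread holds the state, and callers enqueue messages instead of taking a lock, so sends never block on each other and processing is strictly FIFO.

```rust
// Minimal actor sketch: one owner thread, one MPSC queue, no locks.
use std::sync::mpsc;
use std::thread;

enum SessionMsg {
    PtyBytes(Vec<u8>),
    Resize { cols: u16, rows: u16 },
    Shutdown,
}

struct SessionState {
    cols: u16,
    rows: u16,
    bytes_seen: usize,
}

fn spawn_actor(rx: mpsc::Receiver<SessionMsg>) -> thread::JoinHandle<SessionState> {
    thread::spawn(move || {
        let mut state = SessionState { cols: 80, rows: 24, bytes_seen: 0 };
        // Single loop = single owner: messages are applied in FIFO order.
        for msg in rx {
            match msg {
                SessionMsg::PtyBytes(b) => state.bytes_seen += b.len(),
                SessionMsg::Resize { cols, rows } => {
                    state.cols = cols;
                    state.rows = rows;
                }
                SessionMsg::Shutdown => break,
            }
        }
        state
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let actor = spawn_actor(rx);
    // Senders complete instantly; they only enqueue.
    tx.send(SessionMsg::PtyBytes(vec![0u8; 100])).unwrap();
    tx.send(SessionMsg::Resize { cols: 120, rows: 40 }).unwrap();
    tx.send(SessionMsg::PtyBytes(vec![0u8; 50])).unwrap();
    tx.send(SessionMsg::Shutdown).unwrap();
    let state = actor.join().unwrap();
    // The resize was applied between the two PTY writes, in order.
    assert_eq!((state.cols, state.rows), (120, 40));
    assert_eq!(state.bytes_seen, 150);
    println!("actor applied messages in order: 120x40, 150 bytes");
}
```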
Pattern 2: ACK Flow Control
[Q.E.D VERIFIED] ACK gate blocks pull() when waiting_ack is true.
atomic_state.rs:411-453
atomic-cell-injector.ts:144-165
/// Pull a GridUpdate if ready, None if not ready.
pub fn pull(&mut self) -> Option<GridUpdate> {
    // 1. Waiting for ACK?
    if self.waiting_ack {
        return None; // <-- ACK GATE: blocks emission
    }
    // ... other checks (BSU/ESU, implicit sync) ...
    // 5. Build update
    let update = self.build_update();
    // 7. Mark waiting for ACK
    self.waiting_ack = true;
    self.ack_deadline = Some(Instant::now() + Duration::from_millis(ACK_TIMEOUT_MS));
    Some(update)
}
ACK State Machine
      +------------------------------------+
      |                                    |
      |   waiting_ack = false              |
+---->|   (READY TO EMIT)                  |
|     |                                    |
|     +-----------------+------------------+
|                       |
|                       | new PTY data arrives
|                       | emit("pty-grid")
|                       | waiting_ack = true
|                       v
|     +------------------------------------+
|     |                                    |
|     |   waiting_ack = true               | <-- More PTY data arrives
|     |   (WAITING FOR ACK)                |     AtomicState absorbs, no emit
|     |                                    |
|     +-----------------+------------------+
|                       |
|                       | Frontend: invoke("grid_ack")
|                       | waiting_ack = false
|                       |
+-----------------------+  back to READY TO EMIT
Timeline Example
T=0ms PTY data arrives, waiting_ack=false -> emit("pty-grid"), waiting_ack=true
T=5ms PTY data arrives, waiting_ack=true -> AtomicState absorbs (NO emit)
T=10ms PTY data arrives, waiting_ack=true -> AtomicState absorbs (NO emit)
T=15ms PTY data arrives, waiting_ack=true -> AtomicState absorbs (NO emit)
T=16ms Frontend renders, invoke("grid_ack") -> waiting_ack=false
T=16ms New state ready -> emit("pty-grid"), waiting_ack=true
...
Result: 4 PTY updates, 2 emits, 0 frame drops, smooth 60fps
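The timeline above can be condensed into a small sketch (the `AckGate` type is illustrative; the real logic lives in `AtomicState::pull`): `pull()` emits at most one update per acknowledgement, and states arriving in between are absorbed by overwrite.

```rust
// Illustrative ACK gate: one emit per ACK, overwrite in between.
struct AckGate {
    waiting_ack: bool,
    latest: Option<u32>, // stand-in for a GridUpdate
}

impl AckGate {
    fn new() -> Self {
        Self { waiting_ack: false, latest: None }
    }

    /// New PTY-derived state: overwrite, never queue.
    fn absorb(&mut self, state: u32) {
        self.latest = Some(state);
    }

    /// Emit only when not waiting for an ACK and a state exists.
    fn pull(&mut self) -> Option<u32> {
        if self.waiting_ack {
            return None; // gate closed until the frontend ACKs
        }
        let update = self.latest.take()?;
        self.waiting_ack = true;
        Some(update)
    }

    /// Frontend rendered the frame: invoke("grid_ack") reopens the gate.
    fn ack(&mut self) {
        self.waiting_ack = false;
    }
}

fn main() {
    let mut gate = AckGate::new();
    gate.absorb(1);
    assert_eq!(gate.pull(), Some(1)); // first emit, gate closes
    gate.absorb(2);
    gate.absorb(3);
    gate.absorb(4);
    assert_eq!(gate.pull(), None); // absorbed silently, gate still closed
    gate.ack(); // frontend rendered
    assert_eq!(gate.pull(), Some(4)); // only the latest state is emitted
    println!("4 updates in, 2 emits out");
}
```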
Pattern 3: EPOCH Sync (Resize Safety)
[Q.E.D VERIFIED] Epoch validation prevents stale GridUpdates after resize.
atomic-cell-injector.ts:171-175
private inject(session: Session, term: Terminal, update: GridUpdate): boolean {
  // Epoch validation: reject stale GridUpdates from before the last resize
  if (update.epoch < session.currentEpoch) {
    return false;
  }
  // ... rest of inject logic
}
The Resize Problem
T=0ms GridUpdate (epoch=1, 80x24) in flight
T=1ms User resizes terminal to 120x40
T=2ms Frontend receives old GridUpdate (80x24)
--> WRONG DIMENSIONS! Corruption!
EPOCH Solution
T=0ms GridUpdate (epoch=1, 80x24) in flight
T=1ms User resizes: prepareResize() -> epoch=2
T=2ms Frontend receives old GridUpdate (epoch=1)
T=2ms epoch=1 < currentEpoch=2 --> REJECTED (no corruption)
T=3ms New GridUpdate (epoch=2, 120x40) arrives --> APPLIED
Frontend: prepareResize
Backend: AtomicState::resize
/**
 * Prepare for resize by incrementing epoch.
 * Call this BEFORE xterm.js resize to ensure stale GridUpdates are rejected.
 */
prepareResize(sessionId: string): number {
  const session = this.sessions.get(sessionId);
  if (session && session.injectCount > 0) {
    session.currentEpoch++;
    return session.currentEpoch;
  }
  return 0;
}
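The resize scenario can be replayed end to end in a few lines. This is a sketch in Rust with illustrative types (`GridUpdate`, `Session`), mirroring the TypeScript check above rather than reproducing Monolex's actual code:

```rust
// Illustrative epoch validation: a resize bumps the epoch,
// and any in-flight update stamped with an older epoch is rejected.
struct GridUpdate {
    epoch: u64,
    cols: u16,
    rows: u16,
}

struct Session {
    current_epoch: u64,
}

/// Returns true if the update may be applied (mirrors the frontend check).
fn inject(session: &Session, update: &GridUpdate) -> bool {
    update.epoch >= session.current_epoch
}

fn main() {
    let mut session = Session { current_epoch: 1 };

    // T=0ms: an 80x24 update stamped with epoch=1 is in flight.
    let stale = GridUpdate { epoch: 1, cols: 80, rows: 24 };

    // T=1ms: user resizes; prepareResize() increments the epoch
    // BEFORE xterm.js resizes.
    session.current_epoch += 1;

    // T=2ms: the stale 80x24 update arrives and is rejected.
    assert!(!inject(&session, &stale));

    // T=3ms: the next update carries the new epoch and dimensions.
    let fresh = GridUpdate { epoch: 2, cols: 120, rows: 40 };
    assert!(inject(&session, &fresh));
    println!("stale epoch rejected, fresh epoch applied");
}
```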
Pattern 4: AtomicState Absorber
[Q.E.D VERIFIED] AtomicState overwrites instead of queuing, guaranteeing fixed memory.
Overwrite vs Queue
Queue Approach (BAD)
Input:  S1 -> S2 -> S3 -> ... -> S10
Queue:  [S1][S2][S3]...[S10]
Memory: Grows indefinitely
User:   Sees OLD states

Overwrite Approach (GOOD)
Input:  S1 -> S2 -> S3 -> ... -> S10
State:  [S10] (only current)
Memory: Fixed (~80KB)
User:   Sees CURRENT state
Memory Calculation
Typical Terminal: 120 cols x 40 rows = 4,800 cells
Cell Size:
- char: 4 bytes (UTF-32)
- fg: 4 bytes (RGBA)
- bg: 4 bytes (RGBA)
- flags: 4 bytes
- width: 1 byte
Total: ~17 bytes/cell
Grid Memory: 4,800 x 17 = 81,600 bytes = ~80KB
With scrollback (1000 lines):
(40 + 1000) x 120 x 17 = ~2MB
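The figures above can be verified with quick arithmetic (constants taken directly from the calculation, not from production code):

```rust
// Sanity-check the memory figures from the text.
fn main() {
    // Grid: 120 cols x 40 rows at ~17 bytes/cell.
    let grid_bytes = 120 * 40 * 17;
    assert_eq!(grid_bytes, 81_600); // ~80KB

    // With 1000 lines of scrollback: (visible rows + scrollback) x cols x bytes/cell.
    let with_scrollback = (40 + 1000) * 120 * 17;
    assert_eq!(with_scrollback, 2_121_600); // ~2MB

    println!("grid: {}B (~80KB), with scrollback: {}B (~2MB)", grid_bytes, with_scrollback);
}
```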
Memory Comparison (1 minute of high output)
| Approach | Calculation | Memory |
|----------|-------------|--------|
| Queue-Based | 1,000 states/sec x 60 sec x 80KB | 4.8GB |
| AtomicState | 1 state x 80KB | 80KB |
| Difference | - | 60,000x less |
The Dual Guarantee
ONE mechanism (overwrite) provides TWO guarantees:
1. MEMORY STABILITY
Queue: [S1][S2][S3]...[S1000] -> Memory grows
Overwrite: [S1000] -> Memory fixed
2. TEMPORAL CONSISTENCY
Queue: User sees S1, S2, S3... (past states)
Overwrite: User sees S1000 (current state)
For terminals, CURRENT is what matters. Intermediate states are noise.
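Both guarantees fall out of the same overwrite, which a short sketch makes concrete (illustrative storage types, with a `usize` standing in for an ~80KB grid snapshot):

```rust
// Contrast the two storage strategies: a queue retains every state,
// the overwrite slot stays exactly one state deep.
fn main() {
    const STATE_KB: usize = 80; // ~80KB per 120x40 grid snapshot

    // Queue approach: every intermediate state is retained.
    let mut queue: Vec<usize> = Vec::new();
    for state in 1..=1_000usize {
        queue.push(state);
    }
    let queue_kb = queue.len() * STATE_KB;

    // Overwrite approach: only the latest state is retained.
    let mut slot: Option<usize> = None;
    for state in 1..=1_000usize {
        slot = Some(state); // overwrite
    }
    let slot_kb = STATE_KB; // always exactly one state

    // Guarantee 1: memory stability.
    assert_eq!(queue_kb, 80_000); // ~80MB after one second at 1000 states/sec
    assert_eq!(slot_kb, 80);      // fixed ~80KB

    // Guarantee 2: temporal consistency -- the slot holds the CURRENT state.
    assert_eq!(slot, Some(1_000));

    println!("queue: {}KB (grows), overwrite: {}KB (fixed, current)", queue_kb, slot_kb);
}
```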
Pattern 5: BSU/ESU Detection
[Q.E.D VERIFIED] Synchronized Update detection prevents mid-frame rendering.
fn detect_bsu(&self, data: &[u8]) -> bool {
    // BSU (iTerm2 DCS form): ESC P = 1 s ESC \  (7 bytes)
    // Or (DEC mode 2026):    ESC [ ? 2026 h     (8 bytes)
    data.windows(7).any(|w| w == b"\x1bP=1s\x1b\\")
        || data.windows(8).any(|w| w == b"\x1b[?2026h")
}

fn detect_esu(&self, data: &[u8]) -> bool {
    // ESU: ESC P = 2 s ESC \  (7 bytes)
    // Or:  ESC [ ? 2026 l     (8 bytes)
    data.windows(7).any(|w| w == b"\x1bP=2s\x1b\\")
        || data.windows(8).any(|w| w == b"\x1b[?2026l")
}
How It Works
Shell sends: BSU + [frame data] + ESU
T=0ms BSU detected -> syncing=true, sync_deadline=16ms
T=1ms frame data arrives -> AtomicState absorbs (no pull allowed)
T=5ms frame data arrives -> AtomicState absorbs
T=10ms ESU detected -> syncing=false
T=10ms pull() called -> returns GridUpdate with complete frame
Result: User sees complete frame, no partial renders
Timeout Safety
const BSU_ESU_TIMEOUT_MS: u64 = 16;

// In tick():
if let Some(deadline) = self.sync_deadline {
    if now >= deadline {
        self.syncing = false;
        self.sync_deadline = None;
    }
}
If ESU never arrives, timeout releases the gate after 16ms to prevent deadlock.
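Putting the gate and the timeout together, a minimal sketch looks like this (the `SyncGate` type is illustrative; in production this state lives inside `AtomicState`):

```rust
// Illustrative BSU/ESU gate: BSU closes it, ESU or a 16ms deadline reopens it.
use std::time::{Duration, Instant};

const BSU_ESU_TIMEOUT_MS: u64 = 16;

struct SyncGate {
    syncing: bool,
    sync_deadline: Option<Instant>,
}

impl SyncGate {
    fn new() -> Self {
        Self { syncing: false, sync_deadline: None }
    }

    /// BSU detected: close the gate and arm the timeout.
    fn on_bsu(&mut self, now: Instant) {
        self.syncing = true;
        self.sync_deadline = Some(now + Duration::from_millis(BSU_ESU_TIMEOUT_MS));
    }

    /// ESU detected: the frame is complete, reopen the gate.
    fn on_esu(&mut self) {
        self.syncing = false;
        self.sync_deadline = None;
    }

    /// Timeout safety: if ESU never arrives, release the gate.
    fn tick(&mut self, now: Instant) {
        if let Some(deadline) = self.sync_deadline {
            if now >= deadline {
                self.syncing = false;
                self.sync_deadline = None;
            }
        }
    }

    fn may_pull(&self) -> bool {
        !self.syncing
    }
}

fn main() {
    let t0 = Instant::now();
    let mut gate = SyncGate::new();
    gate.on_bsu(t0);
    assert!(!gate.may_pull()); // frame in progress: no partial renders
    // Simulate a shell that died before sending ESU:
    gate.tick(t0 + Duration::from_millis(17));
    assert!(gate.may_pull()); // timeout released the gate, no deadlock
    println!("gate released by {}ms timeout", BSU_ESU_TIMEOUT_MS);
}
```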
Synergy Matrix
Each pattern solves its own problem AND helps others:
| Pattern | Own Guarantee | Helps Next Pattern |
|---------|---------------|--------------------|
| 1. Unbounded Channel | No data loss | All downstream patterns receive complete data |
| 2. Socket Buffer | Natural backpressure (~208KB limit) | Unbounded channel doesn't grow infinitely |
| 3. Actor | No race conditions, ordered processing | AtomicState receives ordered updates |
| 4. AtomicState (Coalesce) | Fixed memory, latest state only | ACK doesn't need to handle thousands of updates |
| 5. ACK | Screen stability, 60fps guarantee | Frontend renders smoothly at sustainable rate |
Reverse Dependency (Efficiency Cascade)
ACK slows emit rate (60/sec max)
|
+--> AtomicState absorbs more updates per emit
|
+--> Data reduction ratio increases
|
+--> Better efficiency, less work per emit
AtomicState processes instantly (just overwrite)
|
+--> Actor's message queue stays short
|
+--> No command backlog
|
+--> Actor loop runs fast
Actor reads socket continuously (no blocking)
|
+--> Socket buffer drains quickly
|
+--> write() rarely blocks
|
+--> PTY Daemon stays fast
What If You Remove a Pattern?
Without Pattern 1 (Unbounded Channel)
Bounded channel would drop data when full
Terminal output corrupted, missing content
User sees incomplete results
Without Pattern 2 (Socket Buffer)
No kernel-level backpressure
Pattern 1’s unbounded channel could grow infinitely
Memory explosion if Tauri is slow
Without Pattern 3 (Actor)
Multiple threads accessing state simultaneously
Race conditions, data corruption
Locks required -> contention -> slowdown
Without Pattern 4 (AtomicState)
Need queue between Actor and ACK
Queue grows: 10,000 - 60 = 9,940 items/sec accumulation
Memory grows indefinitely
Temporal mismatch: user sees past, not present
Without Pattern 5 (ACK Flow Control)
Frontend receives all ~500 coalesced states/sec
xterm.js event queue grows
Frame drops, stuttering
Screen instability
Pattern Origins
These are NOT new inventions. They are PROVEN patterns combined for terminal rendering:
| Pattern | Origin | Proven In |
|---------|--------|-----------|
| Unbounded Channel | Rust std library (mpsc) | Async systems, message passing |
| Socket Buffer | Unix (1983), BSD Sockets | 40+ years of network programming |
| Actor | Carl Hewitt (1973), Erlang (1986) | Erlang telecom (99.9999999% uptime) |
| Coalescing Buffer | GUI frameworks (decades old) | React batching, game engines |
| ACK Flow Control | TCP/IP (1974), Vint Cerf | Every internet connection, 50+ years |
Monolex’s innovation: Putting these patterns together in the RIGHT LAYER (VTE parsing layer, not IPC layer).
SMPC/OFAC Applied
SMPC (Simplicity is Managed Part Chaos):
Each pattern is SIMPLE (one purpose, one guarantee)
Combined, they handle COMPLEX scenarios (high output, concurrent access)
No single “complex” solution - 5 simple solutions working together
OFAC (Order is a Feature of Accepted Chaos):
Accept: PTY output is chaotic (unlimited, unpredictable)
Accept: Renderer is slow (60fps max)
Accept: Intermediate states don’t matter
Order emerges: User sees current state, screen is stable
The architecture doesn’t FIGHT chaos. It ACCEPTS chaos and extracts ORDER.