SessionActor Pattern

The SessionActor pattern is the core of Monolex’s terminal architecture. It eliminates locks entirely by giving all session state a single owner.

The Problem: Lock Contention

Traditional terminal architectures suffer from lock contention:
┌─────────────────────────────────────────────────────────────────┐
│  TRADITIONAL ARCHITECTURE (Multiple Owners)                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Thread A (PTY Read)     Thread B (Resize)    Thread C (UI)     │
│       │                       │                    │            │
│       ▼                       ▼                    ▼            │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │              Mutex<SessionState>                        │    │
│  │              ↓ Lock contention ↓                        │    │
│  │              Potential deadlocks                        │    │
│  └─────────────────────────────────────────────────────────┘    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
Problems:
  • Lock contention under high load
  • Potential for deadlocks
  • Complex reasoning about concurrent access
  • Performance degradation
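
For contrast, a minimal sketch of the shared-Mutex shape in the diagram above (the SessionState fields here are hypothetical). Every thread must take the same lock, so any read or write serializes against all others:

use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical stand-in for per-session state.
#[derive(Default)]
struct SessionState {
    scrollback: Vec<u8>,
    cols: u16,
    rows: u16,
}

fn main() {
    // Every thread shares one lock over all sessions.
    let sessions: Arc<Mutex<HashMap<String, SessionState>>> =
        Arc::new(Mutex::new(HashMap::new()));

    // Thread A: PTY reader appends output.
    let a = Arc::clone(&sessions);
    let reader = thread::spawn(move || {
        let mut map = a.lock().unwrap(); // blocks every other thread
        map.entry("session-1".into())
            .or_default()
            .scrollback
            .extend_from_slice(b"output");
    });

    // Thread B: resize handler contends for the same lock.
    let b = Arc::clone(&sessions);
    let resizer = thread::spawn(move || {
        let mut map = b.lock().unwrap();
        if let Some(s) = map.get_mut("session-1") {
            s.cols = 120;
            s.rows = 40;
        }
    });

    reader.join().unwrap();
    resizer.join().unwrap();
}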

The Solution: Single Owner

SessionActor owns all state. No locks required.
┌─────────────────────────────────────────────────────────────────┐
│  SESSIONACTOR ARCHITECTURE (Single Owner)                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  IPC Handler      PTY Socket       Resize Event                 │
│       │               │                 │                       │
│       │     MPSC      │      MPSC       │                       │
│       └───────────────┼─────────────────┘                       │
│                       ▼                                         │
│              ┌────────────────────────────────────────────┐     │
│              │  SessionActor    ← Single owner            │     │
│              │                                            │     │
│              │  sessions: HashMap<String, SessionState>   │     │
│              │  grid_workers: HashMap<String, Sender>     │     │
│              │  pty_client: PtyClient                     │     │
│              │                                            │     │
│              │  NO LOCKS!                                 │     │
│              └────────────────────────────────────────────┘     │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
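
The fan-in works with plain tokio::sync::mpsc channels: the actor owns the single receiver, and every input source holds a clone of the sender. A self-contained sketch, using a hypothetical stand-in command:

use tokio::sync::mpsc;

// Hypothetical stand-in command for the sketch.
#[derive(Debug)]
enum Cmd {
    Resize { cols: u16, rows: u16 },
}

#[tokio::main]
async fn main() {
    // One receiver (the actor's), many cloned senders.
    let (tx, mut rx) = mpsc::unbounded_channel::<Cmd>();

    // Each input source (IPC handler, PTY socket, resize events)
    // would get its own clone of the sender.
    let resize_tx = tx.clone();
    tokio::spawn(async move {
        let _ = resize_tx.send(Cmd::Resize { cols: 120, rows: 40 });
    });
    drop(tx); // drop the original so the loop below can end

    // The single receiver serializes everything the producers send.
    while let Some(cmd) = rx.recv().await {
        println!("actor received: {:?}", cmd);
    }
}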

Implementation

Core Structure

use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use tauri::AppHandle;
use tokio::sync::mpsc::{UnboundedReceiver, UnboundedSender};

struct SessionActor {
    // All session state owned by this single actor
    sessions: HashMap<String, Arc<SessionState>>,
    grid_workers: HashMap<String, UnboundedSender<GridMessage>>,
    log_workers: HashMap<String, UnboundedSender<LogMessage>>,
    paused_sessions: HashSet<String>,

    // Communication channels: commands arrive on `rx`; `tx` is kept so
    // clones can be handed to new producers
    rx: UnboundedReceiver<SessionCommand>,
    tx: UnboundedSender<SessionCommand>,

    // External connections
    pty_client: PtyClient,
    app_handle: AppHandle,
}

Command Pattern

All operations are sent as commands through MPSC channels:
use tokio::sync::oneshot;

enum SessionCommand {
    Create { cols: u16, rows: u16, reply: oneshot::Sender<String> },
    Write { session_id: String, data: Vec<u8> },
    Resize { session_id: String, cols: u16, rows: u16, epoch: u64 },
    Close { session_id: String },
    Pause { session_id: String },
    Resume { session_id: String },
    GridAck { session_id: String },
}
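
Commands that need a response carry a tokio::sync::oneshot sender, as in Create above. A sketch of the caller side, using the enum just shown (the send_create helper is hypothetical):

use tokio::sync::{mpsc::UnboundedSender, oneshot};

// Hypothetical caller-side helper: request a new session and await
// its id through the oneshot reply channel.
async fn send_create(
    tx: &UnboundedSender<SessionCommand>,
    cols: u16,
    rows: u16,
) -> Option<String> {
    let (reply, response) = oneshot::channel();
    tx.send(SessionCommand::Create { cols, rows, reply }).ok()?;
    response.await.ok() // resolves once the actor calls reply.send(..)
}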

Event Loop

impl SessionActor {
    async fn run(mut self) {
        while let Some(cmd) = self.rx.recv().await {
            match cmd {
                SessionCommand::Create { cols, rows, reply } => {
                    let session_id = self.create_session(cols, rows).await;
                    let _ = reply.send(session_id);
                }
                SessionCommand::Write { session_id, data } => {
                    self.write_to_session(&session_id, &data).await;
                }
                SessionCommand::Resize { session_id, cols, rows, epoch } => {
                    self.resize_session(&session_id, cols, rows, epoch).await;
                }
                // ... other commands
            }
        }
    }
}
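
Startup then amounts to building the channel, moving the receiver into the actor, and spawning the loop as a single task. A sketch, where the spawn helper is hypothetical:

use std::collections::{HashMap, HashSet};
use tokio::sync::mpsc::{unbounded_channel, UnboundedSender};

impl SessionActor {
    // Hypothetical helper: construct the actor and hand back the only
    // handle through which session state can be touched.
    fn spawn(pty_client: PtyClient, app_handle: AppHandle)
        -> UnboundedSender<SessionCommand>
    {
        let (tx, rx) = unbounded_channel();
        let actor = SessionActor {
            sessions: HashMap::new(),
            grid_workers: HashMap::new(),
            log_workers: HashMap::new(),
            paused_sessions: HashSet::new(),
            rx,
            tx: tx.clone(),
            pty_client,
            app_handle,
        };
        tokio::spawn(actor.run()); // the one task that owns the state
        tx
    }
}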

Benefits

Aspect           With Locks   SessionActor
---------------  -----------  ------------
Deadlocks        Possible     Impossible
Lock contention  Under load   None
Reasoning        Complex      Simple
Performance      Degrades     Consistent
Code complexity  High         Low

GridWorker: Per-Session Processing

Each session has its own GridWorker running in a separate task:
┌─────────────────────────────────────────────────────────────────┐
│  GRIDWORKER PER SESSION                                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  SessionActor                                                   │
│       │                                                         │
│       ├── GridWorker (session-1) ──→ emit("pty-grid-session-1") │
│       │   └── Terminal Parser                                   │
│       │   └── ACK state                                         │
│       │                                                         │
│       ├── GridWorker (session-2) ──→ emit("pty-grid-session-2") │
│       │   └── Terminal Parser                                   │
│       │   └── ACK state                                         │
│       │                                                         │
│       └── GridWorker (session-N) ──→ emit("pty-grid-session-N") │
│           └── Terminal Parser                                   │
│           └── ACK state                                         │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
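
Each worker is itself a small single-owner loop over its session's channel. A sketch of its shape, where GridMessage's variants, the TerminalParser type, and Tauri v2's Emitter::emit are assumptions:

use tauri::Emitter;
use tokio::sync::mpsc::UnboundedReceiver;

// Hypothetical worker loop: it exclusively owns this session's parser
// and ACK state, so no other task ever touches them.
async fn grid_worker(
    session_id: String,
    mut rx: UnboundedReceiver<GridMessage>,
    app_handle: tauri::AppHandle,
) {
    let mut parser = TerminalParser::new(); // assumed parser type

    while let Some(msg) = rx.recv().await {
        match msg {
            GridMessage::Output(bytes) => {
                // Parse raw PTY bytes into a grid update...
                let update = parser.process(&bytes);
                // ...and emit it on this session's dedicated event.
                let _ = app_handle
                    .emit(&format!("pty-grid-{session_id}"), update);
            }
            GridMessage::Shutdown => break,
        }
    }
}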

Message Flow Example

User types 'ls' in terminal:

1. Frontend: invoke("write_to_pty", { sessionId, data: "ls\n" })
2. Tauri: Sends SessionCommand::Write to SessionActor
3. SessionActor: Forwards to PtyClient via Unix socket
4. PTY Daemon: Writes to PTY, reads output
5. PTY Daemon: Sends output back via Unix socket
6. SessionActor: Routes to GridWorker
7. GridWorker: Parses terminal data, converts cells
8. GridWorker: emit("pty-grid-{sessionId}", GridUpdate)
9. Frontend: Receives event, injects to xterm buffer
10. Frontend: invoke("grid_ack", { sessionId })
11. Repeat from step 6 if more data pending
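
Steps 8–11 form a simple one-in-flight credit scheme: the worker emits a GridUpdate, buffers any further output, and only emits again once grid_ack arrives. A sketch of that gating, with every name (AckGate, pending, awaiting_ack) hypothetical:

// Hypothetical ACK gate inside a GridWorker: at most one GridUpdate
// in flight; extra PTY output is buffered until the frontend acks.
#[derive(Default)]
struct AckGate {
    awaiting_ack: bool,
    pending: Vec<u8>,
}

impl AckGate {
    // Called on new PTY output; returns true if it may be emitted now.
    fn on_output(&mut self, bytes: &[u8]) -> bool {
        if self.awaiting_ack {
            self.pending.extend_from_slice(bytes); // hold until acked
            false
        } else {
            self.awaiting_ack = true; // this chunk goes out now
            true
        }
    }

    // Called on grid_ack; returns buffered output to emit next, if any.
    fn on_ack(&mut self) -> Option<Vec<u8>> {
        self.awaiting_ack = false;
        if self.pending.is_empty() {
            None
        } else {
            self.awaiting_ack = true; // step 11: emit the backlog
            Some(std::mem::take(&mut self.pending))
        }
    }
}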

Why Not Other Patterns?

Why not RwLock?

  • Still has contention between readers and writers
  • Deadlock potential with multiple locks
  • Complex upgrade/downgrade semantics
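
The deadlock potential is concrete with async read-write locks: tokio's RwLock hands out the lock in FIFO order, so a read request that arrives behind a waiting writer must wait too. A minimal sketch that hangs by design:

use std::sync::Arc;
use tokio::sync::RwLock;

#[tokio::main]
async fn main() {
    let state = Arc::new(RwLock::new(0u32));

    let first_read = state.read().await; // guard held across awaits

    let writer = {
        let state = Arc::clone(&state);
        tokio::spawn(async move {
            *state.write().await += 1; // queues behind `first_read`
        })
    };
    tokio::task::yield_now().await; // give the writer time to enqueue

    // This read queues behind the writer, which waits on `first_read`,
    // which this task still holds: deadlock.
    let _second_read = state.read().await;

    drop(first_read);
    writer.await.unwrap();
}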

Why not Actor per Operation?

  • Too fine-grained; message-passing overhead dominates
  • Loses locality of related state
  • Harder to reason about ordering

Why SessionActor?

  • Natural boundary: one actor per “thing” (terminal sessions)
  • All related state co-located
  • Simple event loop, easy to understand
  • Guaranteed ordering within actor

SMPC/OFAC Applied

Principle  Application
---------  -------------------------------------------
SMPC       Single owner = simple reasoning
           MPSC channels = simple communication
           No locks = no complexity
OFAC       Actor pattern emerged from need
           Per-session workers = natural parallelism
           Commands = organized chaos