When Race Control's AI generates a PortableSequence, the Director App doesn't just fire-and-forget the steps. It routes the sequence through a carefully designed three-layer execution stack: SequenceScheduler → SequenceExecutor → ExtensionHostService. Each layer has a single job, and together they handle concurrency, variable resolution, step dispatch, and failure recovery.
This post explains how sequences actually get executed on the broadcast machine.
The Three Layers
┌──────────────────────────────────────────────────┐
│ SequenceScheduler │
│ (Queue, History, Variables, Progress, Cancel) │
└──────────────────────┬───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────┐
│ SequenceExecutor │
│ (Built-in intents, Extension dispatch, │
│ Soft failure, system.executeSequence) │
└──────────────────────┬───────────────────────────┘
│
▼
┌──────────────────────────────────────────────────┐
│ ExtensionHostService │
│ (iRacing, OBS, Discord intent handlers) │
└──────────────────────────────────────────────────┘
Sequences can arrive from multiple sources — each tagged with its origin:
| Source | When | Example |
|---|---|---|
| `director-loop` | CloudPoller receives a sequence from Race Control | AI-generated battle camera sequence |
| `manual` | Operator clicks "Execute" in the Sequences panel | Pre-built "Safety Car" sequence |
| `event-mapper` | Extension event triggers a mapped sequence | Flag change triggers caution sequence |
| `stream-deck` | External Stream Deck button press | Quick replay sequence |
| `webhook` | External HTTP trigger | Chat bot command |
All sources funnel through SequenceScheduler.enqueue() — the single execution entry point.
Layer 1: SequenceScheduler
The scheduler manages what executes and when. It provides queue management, concurrency control, variable resolution, and execution history.
Enqueue and Priority
Every sequence enters via enqueue(), which returns a unique execution ID for tracking:
```typescript
async enqueue(
  sequence: PortableSequence,
  variables: Record<string, unknown> = {},
  options?: {
    source?: 'manual' | 'director-loop' | 'ai-agent' | 'stream-deck' | 'webhook' | 'event-mapper';
    priority?: boolean;
  }
): Promise<string>;
```

The `priority` flag controls execution behavior:
Default (priority: false): The sequence joins the queue. It waits for any currently executing sequence to finish before running. Sequences execute in order, one at a time.
Priority (priority: true): The scheduler cancels the currently executing sequence (if any), clears the queue, and starts the priority sequence immediately. This is the cancel-and-replace pattern — a priority sequence means "everything before this is obsolete."
Priority sequences are used for time-sensitive broadcast events. If the AI detects a crash and needs to switch to a replay immediately, it sends a priority sequence that interrupts whatever camera angle was running.
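The cancel-and-replace behavior can be sketched as a small state machine. This is an illustrative simplification, not the real scheduler: the actual class also resolves variables, emits progress events, and tracks history. The names `MiniScheduler` and `QueueItem` are invented for this sketch.

```typescript
type QueueItem = { executionId: string; priority: boolean };

// Simplified sketch of the scheduler's enqueue decision.
class MiniScheduler {
  queue: QueueItem[] = [];
  current: QueueItem | null = null;
  cancelled: string[] = [];

  enqueue(item: QueueItem): void {
    if (item.priority) {
      // Cancel-and-replace: the running sequence is cancelled,
      // the queue is cleared, and the priority item starts immediately.
      if (this.current) this.cancelled.push(this.current.executionId);
      this.queue = [];
      this.current = item;
    } else if (this.current) {
      // Default: wait in line behind the currently executing sequence.
      this.queue.push(item);
    } else {
      this.current = item;
    }
  }
}
```

Enqueuing `a`, then `b`, then a priority `c` leaves `c` running, the queue empty, and `a` cancelled.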
Variable Resolution
Before a sequence executes, the scheduler resolves all $var(name) references in step payloads. This is substitution-only — no expression evaluation, no arithmetic. A reference like $var(driverNumber) is replaced with the literal value provided.
Resolution order:
- Explicit values — Provided by the caller (user filled in the form, or AI provided the value)
- Context values — Auto-populated from live telemetry/session data
- Default values — From the variable definition in the sequence
- Unresolved — Required variable missing → execution fails immediately
Variable resolution happens in the scheduler, not the executor, so all steps receive fully resolved payloads.
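The substitution-only model and the explicit → context → default resolution order can be sketched as a single pass over payload strings. The function and type names here (`resolveVars`, `VariableDef`) are assumptions for illustration, not the real implementation.

```typescript
interface VariableDef { name: string; default?: unknown }

// Substitution-only resolution of $var(name) references.
// No expression evaluation, no arithmetic: a reference is replaced
// with the literal value found in explicit → context → default order.
function resolveVars(
  payload: Record<string, unknown>,
  defs: VariableDef[],
  explicit: Record<string, unknown>,
  context: Record<string, unknown>,
): Record<string, unknown> {
  const lookup = (name: string): unknown => {
    if (name in explicit) return explicit[name];               // 1. explicit values
    if (name in context) return context[name];                 // 2. context values
    const def = defs.find((d) => d.name === name);
    if (def && def.default !== undefined) return def.default;  // 3. defaults
    throw new Error(`Unresolved required variable: ${name}`);  // 4. fail immediately
  };
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    out[key] = typeof value === 'string'
      ? value.replace(/\$var\(([^)]+)\)/g, (_m, n) => String(lookup(n)))
      : value;
  }
  return out;
}
```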
Execution History
The scheduler maintains an in-memory ring buffer of the last 25 execution results (configurable). Each entry captures:
```typescript
interface ExecutionResult {
  executionId: string;
  sequenceId: string;
  sequenceName: string;
  status: 'completed' | 'partial' | 'failed' | 'cancelled';
  source: 'manual' | 'director-loop' | 'ai-agent' | 'stream-deck' | 'webhook' | 'event-mapper';
  priority: boolean;
  startedAt: string;
  completedAt: string;
  totalDurationMs: number;
  resolvedVariables: Record<string, unknown>;
  steps: StepResult[];
}
```

History is not persisted to disk — it resets on app restart. For a live broadcast session, 25 entries provide enough context to debug "what just happened?" without accumulating stale data.
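A bounded history like this needs only a push-and-evict helper. The sketch below is an assumed implementation detail (the function name `pushHistory` is invented): newest entry first, oldest dropped past the cap.

```typescript
const HISTORY_LIMIT = 25; // configurable cap on retained results

// Prepend the newest entry and evict anything beyond the cap.
function pushHistory<T>(history: T[], entry: T, limit = HISTORY_LIMIT): T[] {
  const next = [entry, ...history];
  return next.length > limit ? next.slice(0, limit) : next;
}
```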
Progress Events
During execution, the scheduler emits fine-grained progress events that the UI subscribes to:
```typescript
interface SequenceProgress {
  executionId: string;
  sequenceId: string;
  sequenceName: string;
  currentStep: number;
  totalSteps: number;
  stepIntent: string;
  stepStatus: 'running' | 'success' | 'skipped' | 'failed';
  log: string; // Formatted: "⏳ Step 2/5: broadcast.showLiveCam..."
}
```

Special synthetic intents mark sequence boundaries:
- `sequence.start` — Emitted before the first step
- `sequence.end` — Emitted after the last step (or on cancellation)
The orchestrator also subscribes to progress events to track execution for status reporting to the operator.
Cancellation
Two cancellation mechanisms:
- `cancelCurrent()` — Stops the currently executing sequence. The cancellation flag is checked between steps, so steps themselves are atomic.
- `cancelQueued(executionId)` — Removes a specific queued sequence. Queue positions are recalculated after removal.
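The "checked between steps" semantics can be sketched as a run loop that consults a cancellation flag before each step but never interrupts one in flight. This is an illustrative simplification; the real loop also emits progress events.

```typescript
// The cancellation flag is only consulted between steps,
// so an in-flight step always runs to completion (steps are atomic).
async function runSteps(
  steps: Array<() => Promise<void>>,
  isCancelled: () => boolean,
): Promise<'completed' | 'cancelled'> {
  for (const step of steps) {
    if (isCancelled()) return 'cancelled'; // checked before each step
    await step();                          // never interrupted mid-step
  }
  return 'completed';
}
```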
Layer 2: SequenceExecutor
The executor handles how each step runs. It is a headless, intent-driven runtime that operates purely on the PortableSequence format — it doesn't know or care how the sequence was created.
Step Dispatch
For each step, the executor routes based on the intent namespace:
| Intent Prefix | Handler | Example |
|---|---|---|
| `system.wait` | Built-in: setTimeout delay | `{ durationMs: 3000 }` |
| `system.log` | Built-in: Console log | `{ message: "Switching to leader", level: "INFO" }` |
| `system.executeSequence` | Built-in: Fetch from library and execute inline | `{ sequenceId: "caution-sequence" }` |
| `overlay.show` / `overlay.hide` | Built-in: Dispatch to OverlayBus | `{ overlayId: "leaderboard" }` |
| Everything else | Dispatched to ExtensionHostService | `broadcast.showLiveCam`, `obs.switchScene`, etc. |
The system.executeSequence intent enables sequence nesting — one sequence can reference another from the library and execute it inline. This allows composable broadcast recipes.
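The routing rule from the table above amounts to a small classification function. A minimal sketch, assuming the built-in set shown in the table (the helper name `routeIntent` is invented):

```typescript
// Intents the executor handles itself; everything else is dispatched
// to ExtensionHostService.
const BUILT_IN = new Set(['system.wait', 'system.log', 'system.executeSequence']);

function routeIntent(intent: string): 'built-in' | 'extension' {
  if (BUILT_IN.has(intent) || intent.startsWith('overlay.')) return 'built-in';
  return 'extension';
}
```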
Soft Failure Model
When the executor encounters a step it can't execute (e.g., obs.switchScene when OBS isn't connected), it does not abort the entire sequence. Instead:
- The executor checks `extensionHost.hasActiveHandler(step.intent)`
- If no handler is active, the step is skipped with a warning
- Execution continues with the next step
- The final `ExecutionResult` records the step as `skipped` with the reason
This soft failure model is critical for broadcast resilience. If OBS disconnects mid-sequence, the camera switches and announcements should still execute — only the OBS-specific steps get skipped. The result status is partial (some steps succeeded, some skipped) rather than failed.
A step is marked failed (not skipped) only if the handler throws an error during execution — meaning the extension was connected but the command itself failed.
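The skipped-versus-failed distinction fits in one small function. This is a sketch of the per-step decision, not the real executor; `runStep` and `HostLike` are invented names, while `hasActiveHandler` and `executeIntent` are the host methods the post describes.

```typescript
type StepStatus = 'success' | 'skipped' | 'failed';

interface HostLike {
  hasActiveHandler(intent: string): boolean;
  executeIntent(intent: string, payload: unknown): Promise<void>;
}

async function runStep(intent: string, payload: unknown, host: HostLike): Promise<StepStatus> {
  // Soft failure: no connected handler means skip, not abort.
  if (!host.hasActiveHandler(intent)) return 'skipped';
  try {
    await host.executeIntent(intent, payload);
    return 'success';
  } catch {
    // The extension was connected but the command itself failed.
    return 'failed';
  }
}
```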
Layer 3: ExtensionHostService
The extension host routes intents to the correct extension. Each extension registers intent handlers during activation:
```typescript
// iRacing extension
director.registerIntentHandler('broadcast.showLiveCam', async (payload) => {
  broadcastMessage(IRSDK_BROADCAST_CAM_SWITCH_NUM, payload.carNum, payload.camGroup, 0);
});

// OBS extension
director.registerIntentHandler('obs.switchScene', async (payload) => {
  await obs.call('SetCurrentProgramScene', { sceneName: payload.sceneName });
});

// Discord extension
director.registerIntentHandler('communication.announce', async (payload) => {
  await discordService.speak(payload.message);
});
```

When the executor calls `extensionHost.executeIntent('broadcast.showLiveCam', payload)`, the host looks up the registered handler and invokes it. The extension system is covered in detail in a separate post.
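At its core, the host's registry is a map from intent name to handler. A minimal sketch under that assumption (`MiniHost` is an invented name; the method names mirror those the post describes):

```typescript
type IntentHandler = (payload: any) => Promise<void>;

// Simplified intent registry: extensions register handlers during
// activation; the executor looks them up by intent name.
class MiniHost {
  private handlers = new Map<string, IntentHandler>();

  registerIntentHandler(intent: string, handler: IntentHandler): void {
    this.handlers.set(intent, handler);
  }

  hasActiveHandler(intent: string): boolean {
    return this.handlers.has(intent);
  }

  async executeIntent(intent: string, payload: any): Promise<void> {
    const handler = this.handlers.get(intent);
    if (!handler) throw new Error(`No active handler for ${intent}`);
    await handler(payload);
  }
}
```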
Worked Example: Auto-Director Sequence
Here's what happens when the AI generates a "battle camera" sequence during a race:
- CloudPoller receives a `200 OK` with a PortableSequence from Race Control
- The orchestrator's `onSequence` callback fires, calling `scheduler.enqueue(sequence, {}, { source: 'director-loop', priority: false })`
- SequenceScheduler assigns an `executionId`, resolves `$var()` references, and starts execution (or queues it if something is already running)
- The scheduler emits `progress` with `sequence.start`
- For each step:
  - Scheduler emits `progress` with status `running`
  - SequenceExecutor dispatches the step:
    - `obs.switchScene` → ExtensionHostService → OBS extension → WebSocket call to OBS
    - `broadcast.showLiveCam` → ExtensionHostService → iRacing extension → shared memory broadcast message
    - `system.wait` → built-in setTimeout
    - `communication.announce` → ExtensionHostService → Discord extension → TTS
  - Scheduler emits `progress` with `success`, `skipped`, or `failed`
- Scheduler emits `progress` with `sequence.end`
- `ExecutionResult` is pushed to the history ring buffer
- `historyChanged` event fires — the orchestrator sees the completion and calls `cloudPoller.onSequenceCompleted(sequenceId)`
- CloudPoller triggers an immediate poll for the next sequence
The entire flow, from API response to hardware command, typically completes in under a second (excluding `system.wait` delays, which are intentional timing gaps between broadcast actions).