Tags: ai-context · iracing · telemetry · director-app · architecture

From Raw Telemetry to Race Events: The Publisher Integration

How the Director app's iRacing extension was upgraded from streaming raw telemetry to publishing structured race events — and how this fundamentally changes what the AI Director knows about a race in progress.

Sim RaceCenter Team · 9 min read

The AI Director's understanding of a live race just got substantially sharper. This post describes the shift from the legacy raw telemetry pipeline to the new Publisher Event model — where driver rigs now send structured, pre-analysed race events instead of a firehose of raw sensor data. The change affects every part of the stack, from what runs on driver rigs to what the Executor model sees when it selects a sequence.

The Problem with Raw Telemetry

The previous architecture used a Python publisher_service.py prototype on each driver rig that read iRacing's shared memory and streamed approximately 150 telemetry variables at 5Hz to Race Control. This created several compounding problems:

| Problem | Impact |
| --- | --- |
| ~150 raw fields at 5Hz | 25–50 KB/s per rig, 95% of which the AI never used |
| No shared types with Director or Race Control | Separate Python codebase, no type safety |
| Manual pip install per rig | No production-grade upgrade path |
| Hard-coded config files | No discovery or dashboard visibility |
| AI received only current snapshot | No sense of what happened during the race, only what is true right now |

The last point is the most significant for AI quality. A raw telemetry snapshot tells you that Car #7 is in P3 with a 0.4s gap to P2. It does not tell you that Car #7 has passed three cars in the last two laps, is on fresh tyres after a short pit, and has been battling for position since lap 8. Events encode history.

The New Architecture: Director as Publisher

The fix is built into the Director app's existing director-iracing extension. Rather than a separate process, driver rigs now run the Director app with the iRacing extension in Publisher Mode — the same process that handles camera control and overlays on a media rig, now reading the telemetry variable buffer, detecting race events locally, and POSTing them to Race Control.

Two Distinct Rig Roles

The platform now has two clearly separated rig types:

| Role | Extensions Active | Director Loop | Network Path |
| --- | --- | --- | --- |
| Driver Rig | iRacing (publisher mode only) | Disabled — no sequence execution | Outbound to Race Control via internet |
| Media / Director Rig | iRacing, OBS, Discord, YouTube, etc. | Active — polls Race Control for sequences | Outbound to Race Control via internet |

Key constraint: Driver rigs are fire-and-forget. They POST events to Race Control and receive nothing back. There is no back-channel, no command delivery to driver rigs, and no LAN assumption — each rig connects independently over the internet. The AI Director on the media rig retrieves events from Race Control, not from rigs directly.

Driver Rig (publisher mode — outbound only)
                                       ┌─────────────────────────────────────────┐
  iRacing Shared Memory (5Hz)  ──────▶ │ TelemetryReader → EventDetector →      │
                                       │ IdentityOverride → Publisher            │
                                       └─────────────────┬───────────────────────┘
                                                         │ POST /api/telemetry/events
                                                         ▼
                                              Race Control API
                                           (raceEvents Cosmos container)
                                                         ▲
                                                         │ GET /sequences/next
                                       ┌─────────────────┴───────────────────────┐
                                       │ Director Agent → SequenceExecutor →     │
                                       │ OBS / Discord / iRacing cameras          │
                                       └─────────────────────────────────────────┘
Media / Director Rig (Director Loop — unchanged)

The two rigs never communicate with each other. Race Control is the sole intermediary.

What Gets Read — Eight Fields Instead of 150

The iRacing extension's publisher mode reads only the 8 telemetry variables the AI pipeline actually uses:

| iRacing Variable | Type | Purpose |
| --- | --- | --- |
| CarIdxPosition | int[64] | Race position per car |
| CarIdxOnPitRoad | bool[64] | Pit road detection |
| CarIdxTrackSurface | int[64] | Track / pit / out-of-world surface |
| CarIdxLastLapTime | float[64] | Last lap time per car |
| CarIdxBestLapTime | float[64] | Best lap time per car |
| CarIdxLapCompleted | int[64] | Laps completed per car |
| CarIdxClassPosition | int[64] | Class position per car |
| SessionFlags | bitfield | Yellow / red / green flag state |

The TelemetryFrame holding these fields is internal — it is never transmitted to Race Control. Only derived RaceEvent payloads leave the rig.
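As a hedged sketch of that internal type (the actual field names in the Director codebase are not published in this post, so these are assumptions mirroring the iRacing variables above):

```typescript
// Hypothetical sketch of the internal frame type. Field names are
// assumptions derived from the iRacing variable names; the real
// TelemetryFrame may differ.
interface TelemetryFrame {
  timestamp: number;              // Unix ms when the frame was read
  carIdxPosition: number[];       // CarIdxPosition, length 64
  carIdxOnPitRoad: boolean[];     // CarIdxOnPitRoad
  carIdxTrackSurface: number[];   // CarIdxTrackSurface
  carIdxLastLapTime: number[];    // CarIdxLastLapTime (seconds)
  carIdxBestLapTime: number[];    // CarIdxBestLapTime (seconds)
  carIdxLapCompleted: number[];   // CarIdxLapCompleted
  carIdxClassPosition: number[];  // CarIdxClassPosition
  sessionFlags: number;           // SessionFlags bitfield
}
```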

Event Detection on the Rig

The EventDetector compares consecutive TelemetryFrame snapshots against an in-memory SessionState to produce RaceEvent objects. Eight event types are emitted:

| Event Type | Trigger | Broadcast Relevance |
| --- | --- | --- |
| OVERTAKE | Position swap between consecutive frames, excluding pit cycles | High — immediate camera opportunity |
| BATTLE_STATE | Gap transitions: ENGAGED < 1.0s · CLOSING < 2.0s and shrinking · BROKEN > 2.0s | High — sustained narrative arc |
| PIT_ENTRY | CarIdxOnPitRoad transitions false → true | Medium — strategy story |
| PIT_EXIT | CarIdxOnPitRoad transitions true → false | Medium — rejoins and undercut completion |
| INCIDENT | Flag bitmask change + position/speed anomaly | High — safety and drama |
| LAP_COMPLETE | CarIdxLapCompleted increment with lap time captured | Low — lap counter and timing |
| POSITION_CHANGE | Position change not attributed to an overtake (e.g. pit cycle delta) | Medium — context for leaderboard |
| SECTOR_COMPLETE | Lap distance crosses sector boundary | Low — sector timing reference |
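To make the frame-comparison idea concrete, here is a minimal sketch of OVERTAKE detection. The function and the simplified pit-cycle exclusion ("neither car is on pit road") are illustrative assumptions, not the actual EventDetector code:

```typescript
// Minimal frame shape needed for this sketch; the real internal frame
// carries all eight fields listed above.
interface Frame {
  carIdxPosition: number[];    // race position per car; 0 or -1 = not in world
  carIdxOnPitRoad: boolean[];
}

interface Overtake { overtakerIdx: number; overtakenIdx: number; position: number; }

// Detect position swaps between two consecutive frames.
function detectOvertakes(prev: Frame, curr: Frame): Overtake[] {
  const events: Overtake[] = [];
  for (let car = 0; car < curr.carIdxPosition.length; car++) {
    const before = prev.carIdxPosition[car];
    const after = curr.carIdxPosition[car];
    if (before <= 0 || after <= 0 || after >= before) continue; // no position gained
    // The overtaken car previously held the gained position and is now behind.
    const overtaken = curr.carIdxPosition.findIndex(
      (p, other) => other !== car && p > after && prev.carIdxPosition[other] === after,
    );
    if (overtaken === -1) continue;
    // Simplified pit-cycle exclusion: ignore swaps involving a car on pit road.
    if (curr.carIdxOnPitRoad[car] || curr.carIdxOnPitRoad[overtaken]) continue;
    events.push({ overtakerIdx: car, overtakenIdx: overtaken, position: after });
  }
  return events;
}
```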

The RaceEvent Wire Format

Every event sent to Race Control takes this shape:

interface RaceEvent {
  id: string;                          // UUID v4 — idempotency key in Cosmos
  raceSessionId: string;               // Cosmos partition key
  type: RaceEventType;
  timestamp: number;                   // Unix ms
  lap: number;                         // Leader lap at time of event
  involvedCars: {
    carIdx: number;
    carNumber: string;
    driverName: string;                // Real-world booked name (resolved on-rig)
    position?: number;
  }[];
  payload: Record<string, unknown>;    // Event-specific data: gap, lapTime, etc.
  ttl: number;                         // 7,776,000 seconds (90 days)
}

Events are batched in-memory and flushed every 2 seconds or when 20 events accumulate. Failed POSTs are retried three times with exponential backoff; events older than 30 seconds are discarded rather than retried.
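The batching and retry behaviour can be sketched as follows. The EventBuffer class, its method names, and the post callback are illustrative assumptions; only the thresholds (20 events, three retries with exponential backoff, 30-second discard) come from the post:

```typescript
// Illustrative sketch only; not the actual Director publisher code.
type RaceEventLike = { id: string; timestamp: number };

class EventBuffer {
  private buffer: RaceEventLike[] = [];

  constructor(
    private post: (events: RaceEventLike[]) => Promise<void>,
    private maxBatch = 20,       // flush early once 20 events accumulate
    private maxAgeMs = 30_000,   // discard events older than 30 seconds
  ) {}

  add(event: RaceEventLike): Promise<void> | undefined {
    this.buffer.push(event);
    if (this.buffer.length >= this.maxBatch) return this.flush();
  }

  async flush(): Promise<void> {
    // Drop stale events rather than retrying them forever.
    const fresh = this.buffer.filter(e => Date.now() - e.timestamp < this.maxAgeMs);
    this.buffer = [];
    if (fresh.length === 0) return;
    for (let attempt = 0; attempt < 3; attempt++) {
      try {
        await this.post(fresh);
        return;
      } catch {
        // Exponential backoff: 1s, 2s, 4s between attempts.
        await new Promise(r => setTimeout(r, 1000 * 2 ** attempt));
      }
    }
    // Three failed attempts: give up and drop the batch.
  }
}
```

In the real pipeline a 2-second timer would also trigger the flush, e.g. `setInterval(() => buf.flush(), 2000)`.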

Identity Resolution at the Edge

iRacing identifies drivers by CarIdx (0–63). Race Control and human operators know drivers by their booked stage name or real name. The IdentityOverride service resolves this mapping before any data leaves the rig.

At session check-in, the Director receives the rig's booked driver assignment from Race Control. This creates a carIdx → bookedDriverName map. Every RaceEvent emitted by EventDetector has its involvedCars[].driverName replaced with the booked name before it is buffered for transmission. If no override exists, the iRacing name from session YAML is used as a fallback.
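A sketch of that mapping step, assuming a `Map<number, string>` built at check-in; the class shape and method name are hypothetical, while the fallback behaviour matches the post:

```typescript
// Shape matches the involvedCars entries in the RaceEvent wire format.
interface InvolvedCar { carIdx: number; carNumber: string; driverName: string; position?: number; }

// Hypothetical sketch of the on-rig identity resolution step.
class IdentityOverride {
  constructor(private bookedNames: Map<number, string>) {}

  // Replace iRacing session-YAML names with booked names where an
  // override exists; otherwise keep the iRacing name as a fallback.
  resolve(cars: InvolvedCar[]): InvolvedCar[] {
    return cars.map(car => ({
      ...car,
      driverName: this.bookedNames.get(car.carIdx) ?? car.driverName,
    }));
  }
}
```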

The result: the raceEvents Cosmos container always contains real-world driver identities. The AI prompt never sees CarIdx numbers.

Cloud-Synthesised Events

Some race situations require correlating data from multiple rigs simultaneously — something no single rig can detect. Race Control's event synthesiser runs post-ingestion (non-blocking) and writes additional events back to the raceEvents container with source: 'cloud'.

Cloud-synthesised event types include:

| Event Type | Trigger |
| --- | --- |
| FOCUS_VS_FOCUS_BATTLE | ≥2 publisher rigs are focused on cars with a gap < 1.0s across 2+ frames |
| STINT_HANDOFF | DRIVER_SWAP events correlated across rigs in endurance sessions |
| RIG_FAILOVER | A rig's heartbeat lapses; another rig covers the same car |
| UNDERCUT_DETECTED | PIT_ENTRY timing patterns suggest an undercut attempt vs. cars ahead |
| IN_LAP_DECLARED | Lap-time degradation pattern following a PIT_EXIT matches an in-lap |
| FOCUS_GROUP_ON_TRACK | Deduplicated group of cars currently in focus across publisher rigs |
| SESSION_LEADER_CHANGE | Overall or class leader changes, synthesised from POSITION_CHANGE events |

These events are stored alongside rig-sourced events in the same raceEvents container. The AI pipeline sees them identically.
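As one example of how a synthesiser rule might work, here is a hedged sketch of deriving SESSION_LEADER_CHANGE from a stream of POSITION_CHANGE events. The simplified event shape and the function are assumptions, not the actual Race Control code:

```typescript
// Simplified projection of a POSITION_CHANGE event for this sketch.
interface PositionChange { carNumber: string; position: number; timestamp: number; }

// Walk events in time order; each new car reaching P1 after the first
// established leader is a leader change.
function synthesiseLeaderChanges(
  events: PositionChange[],
): { carNumber: string; timestamp: number }[] {
  const changes: { carNumber: string; timestamp: number }[] = [];
  let leader: string | undefined;
  for (const e of [...events].sort((a, b) => a.timestamp - b.timestamp)) {
    if (e.position === 1 && e.carNumber !== leader) {
      // The very first leader establishes state but is not a "change".
      if (leader !== undefined) changes.push({ carNumber: e.carNumber, timestamp: e.timestamp });
      leader = e.carNumber;
    }
  }
  return changes;
}
```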

What the AI Executor Now Knows

Before this change, the Executor received a point-in-time AISnapshot containing the current leaderboard, session flags, and a short rolling window of position history. The snapshot was always about the present — it had no memory of how the race had unfolded.

After this change, the Executor's constructSessionSnapshot() function includes a recent raceEvents timeline from the Cosmos container. When reasoning about which template to select, the model now has access to:

  • OVERTAKE events from the last few laps showing which cars are gaining positions and through what mechanism (racing move vs. pit differential)
  • BATTLE_STATE history showing how long a battle has been engaged and whether gaps are growing or shrinking
  • PIT_ENTRY / PIT_EXIT events that explain why a car moved on the leaderboard without an on-track pass
  • INCIDENT markers that indicate whether damage or off-tracks have affected car pace
  • LAP_COMPLETE timing trends showing whether a car is on a hot stint or managing a degrading tyre

This shifts the Executor from "what is true right now?" to "what story has been building over the last few laps?"
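A minimal sketch of how such a timeline might be trimmed before it enters the snapshot prompt. The helper name and the window/limit parameters are assumptions, not the actual constructSessionSnapshot() internals:

```typescript
// Simplified event projection for this sketch.
interface TimelineEvent { type: string; lap: number; timestamp: number; }

// Keep only events from the last windowMs, newest first, capped at a
// limit, so the prompt stays small while covering the recent narrative.
function recentEventTimeline(
  events: TimelineEvent[],
  windowMs: number,
  limit: number,
  now: number,
): TimelineEvent[] {
  return events
    .filter(e => now - e.timestamp <= windowMs)  // drop stale events
    .sort((a, b) => b.timestamp - a.timestamp)   // newest first
    .slice(0, limit);
}
```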

Implications for Template Selection

When the Executor evaluates available templates, it should consider:

  • A broadcast.showLiveCam for a battle is most compelling when BATTLE_STATE shows ENGAGED status that has persisted for multiple laps — the battle has a narrative history, not just a current gap
  • An OVERTAKE event in the last 30 seconds is a strong trigger for a replay/highlight template, not just a live camera
  • Back-to-back PIT_EXIT events for cars that were in the same battle frame a potential undercut — worth an overlay or commentary sequence
  • A car with no recent OVERTAKE or BATTLE_STATE events but a fast LAP_COMPLETE trend may be building for a late charge — a compelling "sleeper" storyline
  • Cloud-synthesised FOCUS_VS_FOCUS_BATTLE events indicate that two publisher rigs both decided the same battle is their focal point — strong signal for the broadcast to follow
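To make the first heuristic concrete, here is a toy scoring sketch. The idea that a persistent ENGAGED battle favours a live-camera template comes from the post; the scoring function, its weighting, and the five-lap saturation point are invented for illustration:

```typescript
// Illustrative only; not the Executor's actual selection logic.
interface BattleState { status: "ENGAGED" | "CLOSING" | "BROKEN"; lap: number; }

// Score a live-battle template higher the longer the battle has been
// ENGAGED, saturating at 1.0 after five laps.
function battleCamScore(history: BattleState[], currentLap: number): number {
  const engagedSince = history
    .filter(s => s.status === "ENGAGED")
    .reduce<number | undefined>(
      (min, s) => (min === undefined || s.lap < min ? s.lap : min),
      undefined,
    );
  if (engagedSince === undefined) return 0;   // no engaged battle at all
  const lapsEngaged = currentLap - engagedSince + 1;
  return Math.min(1, lapsEngaged / 5);
}
```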

The scan_recent_events Tool

The Executor has access to the scan_recent_events AI tool, which queries the raceEvents container directly. This allows the model to retrieve a filtered event timeline during sequence generation:

// Tool call example
scan_recent_events({
  sessionId: "session-abc-123",
  eventTypes: ["OVERTAKE", "BATTLE_STATE"],
  sinceMs: Date.now() - 120_000,   // last 2 minutes
  limit: 20
})

Use this tool when the current AISnapshot leaderboard alone is insufficient to explain why a car is in a certain position, or when selecting between two similarly-ranked templates where event recency would break the tie.

Practical Effect on Broadcast Quality

The transition from raw telemetry to structured events fixes a fundamental information asymmetry: before, the AI had detailed current state but almost no history. A car sitting in P3 looked identical whether it had been there all race or had just made a three-position charge from P6.

Structured events give the AI the vocabulary to distinguish these cases. A well-timed OVERTAKE followed by a BATTLE_STATE: ENGAGED three laps later is a completely different broadcast story from a P3 car that has been there, uncontested, since lap 2.

The AI Director's job is to tell that story. The publisher gives it the words.