The Synchronicity Engine
1 — Architectural Logic
A tripartite system for digesting time.
The Engine operates across three interlocking components.
The Eye
High-resolution acquisition of the live environment. The input layer — what the machine sees in the room it inhabits.
The Brain
Real-time frame analysis using Computer Vision. Assigns Interest Weights to temporal segments — deciding which moments carry density and which do not.
The Loom
The storage and playback engine. Folds recent generations of footage into the current frame, driven by the Brain's weights. The site of the recursion.
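To pin down the division of labour between the three components, here is a minimal TypeScript sketch of their contract. Every interface and method name is an assumption for illustration, not the Engine's actual API.

```typescript
// Illustrative sketch of the three-component contract.
// All names are assumptions, not the project's real code.

/** The Eye: acquisition. Yields raw frames from the live camera. */
interface Eye {
  nextFrame(): Promise<ImageBitmap>;
}

/** The Brain: analysis. Scores a frame's interest in [0.0, 1.0] and tags it. */
interface Brain {
  interestWeight(frame: ImageBitmap): Promise<number>;
  tags(frame: ImageBitmap): Promise<string[]>; // e.g. "hand", "tool", "screen"
}

/** The Loom: storage and recursive playback. */
interface Loom {
  store(frame: ImageBitmap, weight: number, tags: string[]): void;
  composite(now: ImageBitmap, weight: number): ImageBitmap; // folds past generations into the present frame
}
```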
2 — The Temporal Zoom Heuristic
Time treated as elastic.
Unlike standard video, which treats every second as equal, the Engine assigns a value to time. Stasis compresses. Activity expands. The machine is not a neutral recorder; it is an editor that runs in real time.
Stasis Compression
When the ML layer identifies an empty room, repetitive micro-movements, or darkness, it increases the read-head speed. Uneventful time is swallowed.
Activity Dilation
Gestures of construction, human interaction, complex tool use — these slow the playback and increase the Opacity Weight of that layer. Labour becomes visually dense.
A "Time-Condensed Fossil." Ten hours of real-time might produce a twenty-minute video where periods of intense human labour are visually thick and slow — while the intervening hours are mere ghost-frames.
3 — The Modes of Playback
A director's vocabulary of time.
The Engine is not a passive loop. It is a multi-state machine. In performance the operator — or the heuristic layer — switches between specific temporal behaviours. Each mode is a different relationship between the machine and its own accumulated past.
High-fidelity, real-time passthrough. Used at the start of the build to ground the audience in the physical space before the temporal warping begins. Establishing the "now."
Time stretches and contracts based on detected activity. Lingers on moments of high information density — fine-motor assembly, complex wiring — and snaps through stasis. Playback that feels alive.
Successive generations of the past are folded into the frame at high speed. The "Time Machine" mode — collapsing hours of construction into a dense visual fossil where the past layers over the present with increasing opacity.
Disparate frames pulled from the entire 8-week buffer simultaneously. A temporal montage where multiple points in the build history occupy the screen at once — a non-linear visual record of the machine's evolution.
The engine jumps between speeds, directions, and generations without linear sequence. Breaks the audience's sense of "when" they are. Emphasises abstraction once the recorded data reaches peak density.
Playback prioritised by what is happening: only frames tagged as "Interaction", "Hand", or "Screen". The machine curates its own history to reveal a single narrative thread running through the build.
"The Engine has a specific vocabulary of time. During the build, I can shift it from a simple Linear witness into an Elastic or Compressed state. It's an automated editor that can jump into Scattered or Context-Specific modes to reveal the history of the build in ways a standard camera never could."
4 — Research Avenues · Phase 1 Roadmap
From a night's work to a production machine.
Three research trajectories are required to move the functional prototype into a reliable performance system.
The Mode Controller — state management
Building a robust state-management system that can transition between modes — fading from Linear to Compressed, for instance — without breaking the frame-capture pipeline mid-performance.
Developing the operator interface for the live show. The dashboard needs to allow real-time mode switching, with enough visual feedback to make live decisions legible under performance conditions.
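One plausible shape for that robustness, sketched under assumptions: the capture loop writes into a shared, bounded buffer on its own cadence, so a playback-side crossfade between modes never stalls acquisition. Buffer capacity and the fade mechanics are placeholders.

```typescript
// Sketch of decoupling capture from playback, so mode transitions
// cannot break the frame-capture pipeline mid-performance.

const buffer: ImageBitmap[] = []; // bounded archive: capture writes, playback reads
const MAX_FRAMES = 60 * 60 * 30;  // placeholder capacity

async function captureLoop(eye: { nextFrame(): Promise<ImageBitmap> }) {
  for (;;) {
    const frame = await eye.nextFrame(); // capture runs on its own cadence
    buffer.push(frame);
    if (buffer.length > MAX_FRAMES) buffer.shift(); // drop the oldest frame
  }
}

// Playback-side crossfade: blend the outgoing mode's frame under the incoming one.
function crossfade(ctx: CanvasRenderingContext2D,
                   fromFrame: ImageBitmap, toFrame: ImageBitmap, t: number) {
  ctx.globalAlpha = 1;
  ctx.drawImage(fromFrame, 0, 0);
  ctx.globalAlpha = t; // t ramps 0 -> 1 over the fade, e.g. Linear into Compressed
  ctx.drawImage(toFrame, 0, 0);
  ctx.globalAlpha = 1;
}
```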
Semantic Tagging & Retrieval — the ML layer
Researching how the ML layer passes semantic tags — "hand," "tool," "screen," "interaction" — to the storage database in real time, without creating a latency bottleneck in the capture pipeline.
Ensuring Context-Specific mode can instantly query the 8-week archive to surface every frame where a specific action occurred. Speed of retrieval is a performance-critical requirement.
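A sketch of one retrieval strategy that would meet that requirement: an inverted index from semantic tag to frame timestamps, built as frames arrive, so a Context-Specific query is a map lookup rather than a scan of the 8-week archive. The data structures and names are assumptions.

```typescript
// Sketch of an inverted tag index for performance-critical retrieval.

const tagIndex = new Map<string, number[]>(); // tag -> frame timestamps (ms)

// Called once per captured frame, on the capture path.
function indexFrame(timestampMs: number, tags: string[]): void {
  for (const tag of tags) {
    const hits = tagIndex.get(tag) ?? [];
    hits.push(timestampMs); // capture is chronological, so the list stays sorted
    tagIndex.set(tag, hits);
  }
}

// Called from Context-Specific mode: constant-time lookup, no archive scan.
function framesTagged(tag: string): number[] {
  return tagIndex.get(tag) ?? [];
}
```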
Temporal Zoom Implementation — the elastic algorithm
Refining the algorithm that decides the slope of the speed change in Elastic mode. How do we linger on interesting frames without the acceleration feeling jarring? The transition is as important as the destination.
Researching lightweight pre-trained models, such as MediaPipe or TSN, that can map visual interest to a numerical value (0.0–1.0) and drive a playbackRate variable live without frame drops.
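On the slope question, one standard approach is to ease video.playbackRate toward the target each animation frame rather than snapping it. A sketch, assuming a targetRate callback fed by the interest model; the smoothing constant is a placeholder.

```typescript
// Sketch of a smoothed playbackRate driver: each frame closes a fraction of the
// gap to the target speed, so lingering and acceleration never feel jarring.

const video = document.querySelector("video")!;
const SMOOTHING = 0.08; // fraction of the gap closed per frame; lower = gentler slope

function drive(targetRate: () => number) {
  function tick() {
    const target = targetRate(); // e.g. from the interest-to-speed mapping above
    const current = video.playbackRate;
    // Exponential approach; note browsers limit the usable playbackRate range.
    video.playbackRate = current + (target - current) * SMOOTHING;
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}
```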
5 — Budgetary Justification
A technical feasibility study.
The £8,250 ask funds two things: the hardware that takes the system from a browser experiment to a persistent physical archive, and the specialist consultancy that bridges standard motion detection with sophisticated temporal tagging.
6 — Conceptual Lineage
Not generative AI — interpretive AI.
The distinction matters. This machine does not create from nothing. It shapes what is already there — it digests duration. The lineage runs through artists who understood time as material.
We are building a machine that doesn't just record the history of its own making — it digests it. It discards the empty space and crystallises the action, leaving behind a visual object that carries the physical weight of the time spent building it.