Initial build: video clips, OSC control, render loop

2026-03-17

What happened

First working build. The core render loop is up: a Sokol window, an OpenGL context, and a vsync'd frame callback. Media assets are scanned from a directory at startup and indexed alphabetically by type: audio as .wav files, images as .png/.jpg, and video as either an .avi (MJPEG, mmap'd) or a directory of sequentially numbered JPEGs.

The OSC listener runs on a background thread and writes incoming commands (/vj/audio, /vj/image, /vj/video, /vj/gain, /vj/stop) into atomic fields that the render loop reads once per frame. No command is dropped even if it arrives mid-frame.

Video decode uses libjpeg-turbo: one JPEG frame per render tick, at roughly 5 ms for a 1080p frame on a Pi 4 with NEON. MJPEG AVI files are mmap'd so the OS handles paging. The waveform shader (waveform.glsl) visualizes live audio via a 1D GPU texture updated each frame from the ALSA ring buffer.

The initial test assets are a directory of solid-color JPEGs and a few WAV files: enough to verify the pipeline end-to-end without real footage.


commits: b234d74
