A timecode generator, its syncing software and a Media Asset Management workflow proof of concept

GNSS (GPS) signals are everywhere, use them to sync recordings!
And in the rare case they aren’t, just use a (cheap) signal repeater like this one.
Better, simpler, cheaper, faster: pick four
Yes, other well-established solutions exist, but this one is 8x cheaper: $25 per synced device vs $200. And it is simpler in many respects:
- no jam sync necessary: it’s GNSS based
- no frame rate settings to mess with: it’s frame rate agnostic
- time and date are punched in: no midnight rollover problem and no TC conflict for multi-day shoots
- 60 and 120 FPS compatible (did I say framerate doesn’t matter?)
- syncs both the start and end of long takes: automated audio drift correction
- subframe precision (because millis from wireless lavs add up to frames…)
- no damn phone app required
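The subframe claim above is easy to check with quick arithmetic: at common frame rates, the few milliseconds of latency from wireless lav systems are a sizable fraction of a frame. A small sketch (the 4 ms latency figure is my assumption for illustration):

```python
# Quick arithmetic behind the subframe-precision claim: a few milliseconds
# of wireless-lav latency is a large fraction of a frame at high frame rates.

def frame_ms(fps):
    """Duration of one frame, in milliseconds."""
    return 1000.0 / fps

lav_latency_ms = 4.0  # typical digital wireless lav latency (assumption)

for fps in (24, 60, 120):
    fraction = lav_latency_ms / frame_ms(fps)
    print(f"{fps} fps: one frame = {frame_ms(fps):.2f} ms, "
          f"{lav_latency_ms:.0f} ms latency = {fraction:.0%} of a frame")
```

At 120 fps a frame lasts only ~8.3 ms, so a 4 ms offset is nearly half a frame: frame-level sync alone cannot absorb it.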
Power it, plug it, shoot it, save it, sync it. (to the tune of Daft Punk's Technologic)
What is this?
TicTacSync (with caps) is a DIY timecode generator built from off-the-shelf open source hardware (OSHWA) components (see below for details; total cost: 25 USD!!!). It outputs a custom audio timecode based on the GPS signal and can be used like any TC box plugged into a camera's MIC input for dual-system sound and multicam sync.
This site proposes a cheaper and better mousetrap: a novel syncing method and apparatus that determines the time of day at which each recording starts and ends, rather than counting frames. Here are instructions for building a global navigation satellite system (GNSS) open source and open hardware dongle + software combo to:
- sync camera video with sound from a dedicated audio recorder
- sync multiple camera takes and show them aligned on your timeline
- sync multiple audio recorders
A minimal setup (a camera and a recorder) costs you only 2 x $25 (add $25 for each additional device).
Syncing of audio and video is done (before importing into your video editing program) with the accompanying open source software tictacsync. Multiple-camera syncing is done the same way; see Multicam for a DaVinci Resolve workflow example.
The Grand Plan
By building up a community of devs and users, I want to incite camera manufacturers to implement Global Navigation Satellite System (GNSS) time of day (TOD) based head-tail microsecond time stamping of their recordings. Where needed, cheap RF 1PPS repeaters could be deployed if the GNSS signal is too weak.
And I want to break up the gatekeeping that has emerged among users due to the high cost of the proprietary hardware and software ecosystem: “We’re using those expensive tools because we are pros… if you can’t afford the tools of the trade, too bad…”
Tech is now (or always has been?) used by capitalists, VC investors and private equity firms to lock in creators: let’s Seize the Means of Computation!
TicTacSync: syncing for the DIY adventurous
This cheap hardware/software combo will appeal to videographers who have run into the limitations of their editing software’s “waveform analysis syncing” and should also interest scientists who need to timestamp data recordings with sub-millisecond precision.
Want to sync dual-system sound without breaking the bank? Doing multicam? Multisound? Need clock drift correction? tictacsync does it all!
TicTacCode is a new audio timing track format that timestamps both the start and the end of a recording. It is not SMPTE LTC and does not identify individual frames (hence, by design, it is frame rate agnostic).
TicTacSync (note the case) is the hardware dongle generating TicTacCode. It fetches UTC time from GNSS signals. You need one for each recording device. Build it yourself: see below for parts (kits available soon).
For syncing media files and asset management, tictacsync, mamsync, mamconf, mamdav and mamreap are the post-production CLI programs that process recordings shot with TicTacSync dongles (see repo). tictacsync or mamsync do the syncing before putting your clips into your NLE of choice (see demo on the left). As an added bonus, they do time stretching to correct excessive clock drift between your devices, if any (see code).
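As a back-of-the-envelope illustration of the drift correction (a hypothetical sketch, not the actual tictacsync code), GNSS time of day stamped at both the head and tail of a take is enough to measure a device's clock drift and derive the resampling factor:

```python
# Hypothetical sketch, NOT the tictacsync implementation: with GNSS
# time-of-day known at both the head and tail of a take, a device clock's
# drift against true time is directly measurable, and the correction is
# a simple time-stretch (resampling) ratio.

def stretch_ratio(true_start, true_end, clock_start, clock_end):
    """Factor by which the recording must be time-stretched so its
    duration matches the GNSS-true duration (~1.0 for a good clock)."""
    return (true_end - true_start) / (clock_end - clock_start)

# Example: a recorder running 50 ppm fast accumulates 30 ms of drift
# over a 10-minute take -- nearly two frames at 60 fps.
ratio = stretch_ratio(0.0, 600.0, 0.0, 600.030)  # slightly below 1.0
```

The same head-and-tail stamps also explain why long takes stay in sync: the correction is anchored at both ends instead of relying on a single start point.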
This won’t be a commercial product (not from me anyway). I’m sharing information and code to build your own devices: some assembly required. The post processing software is CLI only (Command-line Interface)…
Here’s what you’ll need:
- a GNSS module with PPS output (1 Pulse Per Second), 5$
- a SAMD21 based board , 4$
- a charger module, 5$
- a lipo battery, 7$
- a cable to plug into your audio recorder or camera
Prices as of May 2025, from those typical retailers: AliExpress; Seeed Studio and Adafruit.
Flashed Sale
For the inexperienced, flashing the SAMD21 board is the most adventurous task, so I’m willing to do this part for you: a pre-flashed (and tested) board is available on Tindie.
Soon I’ll offer hand-assembled prototypes too… hoping the project gets some traction. Maybe some Shenzhen hacker will be pleased by the idea and whip up a one-board PCB, driving the cost down even more.
MAM workflow proof of concept
While the synchronization program combines the audio files with the videos, it simultaneously cuts and assembles the ISO files for further processing. These files are then stored according to a naming convention that allows their automatic retrieval.
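To make the retrieval idea concrete, here is a hypothetical sketch (the filename fields shown are my illustration, not the project's actual convention): if each ISO file encodes its take and device in the name, fetching every ISO belonging to a take is a single pattern match.

```python
# Hypothetical naming-convention sketch (NOT the project's actual scheme):
# suppose each ISO is named <take>_<device>.<ext>; retrieving all media
# for one take is then just filename pattern matching.
import fnmatch

files = [
    "take003_camA.mov",
    "take003_recZoom.wav",
    "take004_camA.mov",
]

def isos_for_take(take, names):
    """Return all media files whose name starts with the given take id."""
    return sorted(fnmatch.filter(names, f"{take}_*"))

print(isos_for_take("take003", files))
# -> ['take003_camA.mov', 'take003_recZoom.wav']
```

Whatever the real convention is, the point is the same: deterministic names turn asset retrieval into a lookup instead of a manual search.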
Simplified teamwork, avoid conforming
The commands mamsync, mamconf, mamdav and mamreap form a software suite for exploring concurrent audio and picture editing: when satisfied with a rough cut in Resolve, submit some jobs to your Reaper sound editor and hear the changes back in Resolve! mam stands for Media Asset Management, dav for DaVinci Resolve and reap for Reaper. I opted for DaVinci Resolve and Reaper for their scripting capabilities.
You’re editing video and need some FX or FOL on top of field recordings? Do you want to add BG soundscapes? Here are the steps to follow:
- from Resolve, export the timeline EDL in the otio format, say cut29.otio
- optionally render the timeline to cut29.mov to guide the sound editor
- run mamreap cut29.otio cut29.mov to build the Reaper script
- in Reaper, the sound editor calls the action Load Movie Audio
- Bam: the tracks and video appear neatly checkered, with identifying colors (see below)
- the sound editor does his magic (with ISOs and added tracks) and saves a new mix
- in Resolve, you run Load New Soundscript
- Voilà: new sound for each clip on the timeline and no pricey field recorder PT workflow.
N.B.: In this screen grab, everything seen (colors, item names, markers) appears as is when calling Load Movie Sound, a Python-generated Lua script. And, yes, each ISO has handles.

For teamwork, the sound editor could access the ISO files through file-sharing software like Syncthing. Mechanisms for re-incorporating sound work into later re-edits are on the drawing board (dealing with tightly bound vs loosely bound tracks and using automated Reaper sub-project aggregation). Sadly, the Resolve script overwrites TimelineItem and Takes properties and settings, at least until someone joins the project…
Calling it a day
I’ve spent way more time than I needed on the software side for this to be useful to me… I learned Reaper and Resolve Lua scripting just to implement the ideas I had, and now I want to go back to hardware design and production. A TC slate? A circled-take logger? (aka NanoLockit Logger) We’ll see!