opening remarks

Stating the obvious (if you landed on this page by chance): tictacsync's main function is to synchronize audio files with video files when shooting with a double-system sound setup.

The intended workflow when using the tictacsync command is:

  • copy cards from cameras and audio recorder(s) onto your computer storage
  • run tictacsync on the raw files
  • ingest the synced clips into your NLE software, not the raw footage
  • enjoy a high-quality guide track while editing.

One of the planned use cases is multi-day documentary or movie shooting, so its I/O structure follows what I found on the internet as best practices for file management (YMMV). The ambitious goal of this software is to be useful to DITs, picture editors, sound editors and sound recordists on indie productions.

Installation

This uses the Python interpreter and multiple packages (so you need Python 3 and pip). You also need to install two non-Python command-line executables: ffmpeg and sox. Make sure both are accessible through your PATH environment variable, then pip install the syncing program:

> pip install tictacsync

This should install the Python dependencies and the tictacsync command.
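Before syncing anything, you may want to confirm that both external tools are actually reachable. A minimal sketch, not part of tictacsync, using only the Python standard library (the `missing_tools` helper is hypothetical):

```python
import shutil

def missing_tools(tools=("ffmpeg", "sox")):
    """Return the subset of required command-line tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools()
    if missing:
        print("Please install and add to PATH:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```

`shutil.which` performs the same PATH lookup your shell does, so a `None` result here means tictacsync would fail to launch that tool too.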

Usage

Call the program with a folder as its argument:

> tictacsync clipFolderWithCards

tictacsync input structure

The tictacsync command expects at least one folder as its argument: the program searches for video and audio files that share common timecode segments and joins them. The timecode must have been recorded from accompanying TicTacSync timecode generators. NB: by design, this is not the standard SMPTE Longitudinal Time Code (see the project motivation for the reasons why).

The set of video and audio files to be synced can be anywhere under the folder passed as argument: the program recursively scans it and, before any syncing occurs, determines whether you're using a structured input or a loose input:

structured input
A structured input is detected when media files from a given device (camera or audio recorder) are grouped under their own dedicated folder, one per device; the folder name is then interpreted as the device name (e.g., CAM_A/, CAM_B/, SD788/). All the files in a given folder must come from the same device (ffprobe is used to distinguish device fingerprints, when file metadata permits). NB: an intermediary level of folders for memory cards can be used and the folders identifying cameras will still be detected; e.g., DAY001/CAM_B/ROLL01/DSC0034.MOV
loose input
When the conditions for structured input aren't met.

Both kinds of input are allowed, but structured input is mandatory if you want A) to sync multicam shots, or B) isolated sound tracks (ISOs) to be produced (see the tictacsync options description for more info).
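As a rough illustration of the distinction (and only that: the real detection also fingerprints files with ffprobe), the rule above could be approximated as follows; `device_of` and `classify_input` are hypothetical helper names, not tictacsync's API:

```python
from pathlib import PurePosixPath

def device_of(path):
    """Device folder = first path component below the scanned root.
    Intermediate card folders (e.g. ROLL01/) are allowed below it."""
    parts = PurePosixPath(path).parts
    return parts[0] if len(parts) > 1 else None  # None: file sits at root

def classify_input(media_paths):
    """'structured' if every media file lives under some device folder,
    'loose' otherwise (a simplification of tictacsync's real check)."""
    devices = {device_of(p) for p in media_paths}
    return "loose" if None in devices else "structured"
```

For example, `["CAM_A/ROLL01/DSC0034.MOV", "SD788/S01T01.WAV"]` would classify as structured, while a clip sitting directly at the scanned root would force loose mode.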

tictacsync output structure

When running tictacsync, synced videos can be written to different locations, depending on the user's choice:

  • synced videos are written alongside the originals (i.e., alongside the non-synced videos with AUX TC) in a neighboring folder named SyncedMedia, or
  • they are written in a mirrored tree folder structure (the way DaVinci Resolve stores its proxies): the whole file and folder hierarchy is copied, along with any non-media files (shooting logs, Script Supervisor Notes, etc.), starting at a top-level directory: the destination folder for synced dailies. I concede that this mode exhibits some feature creep and that tictacsync starts to become a media asset management (MAM) program, but that's the point.

When using the preferred second output mode, it is suggested that the destination folder reside on a different physical drive than the original raw files. Those should be kept as-is for archiving until the post-production process is finished: never renamed, never edited.
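Since the mirrored mode reproduces the source hierarchy under a new top-level directory, the path mapping itself is simple. A sketch of how it could look (this mode is not yet implemented, so the `mirrored_destination` helper is purely illustrative):

```python
from pathlib import PurePosixPath

def mirrored_destination(src_file, raw_root, synced_root):
    """Map a raw clip path into the mirrored tree: only the top-level
    directory changes; everything below it is reproduced verbatim."""
    rel = PurePosixPath(src_file).relative_to(raw_root)
    return str(PurePosixPath(synced_root) / rel)
```

For instance, `SuperMovieIV_RAW_files/day001/CAM_A/ROLL_001/DSC0123.MOV` would map to the same relative path under `SuperMovieIV_Synced_files`.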

“alongside mode” output example (default mode)

Folder structure before syncing:

MyBigMovie/
└── day01
    ├── Zoom_H4n
    │   ├── 4CH000X.wav
    │   └── tracks.txt
    └── CANONCAM
        ├── card01
        │   ├── canon24fps01.MOV
        │   └── canon24fps02.MOV
        └── card02
            └── canon24fps03.MOV

After running the tictacsync command:

MyBigMovie/
└── day01
    ├── Zoom_H4n
    │   ├── 4CH000X.wav
    │   └── tracks.txt
    └── CANONCAM
        ├── card01
        │   ├── SyncedMedia <= new folder
        │   │   ├── canon24fps01.MOV <= synced
        │   │   └── canon24fps02.MOV <= synced
        │   ├── canon24fps01.MOV <= untouched
        │   └── canon24fps02.MOV <= untouched
        └── card02
            ├── SyncedMedia
            │   └── canon24fps03.MOV <= synced
            └── canon24fps03.MOV <= untouched

The synced clips keep the same names and reside in a folder named SyncedMedia placed at the same level as the original clips, which are still present and unmodified (hence still carrying TC).

“mirrored tree mode” output example

Note below the absence of newly created SyncedMedia folders: all the clips shown are synced ones; the originals reside in another file structure (not shown here, as it is identical except for the top-level name: SuperMovieIV_RAW_files rather than SuperMovieIV_Synced_files). RAW suggests unprocessed files, not the video codec!

└── SuperMovieIV_Synced_files
    ├── day001
    │   ├── CAM_A
    │   │   ├── ROLL_001
    │   │   │   ├── DSC0123.MOV
    │   │   │   └── DSC0124.MOV
    │   │   └── ROLL_002
    │   │       ├── DSC0123.MOV
    │   │       ├── DSC0124.MOV
    │   │       └── DSC0125.MOV
    │   ├── CAM_B
    │   │   ├── ROLL_001
    │   │   │   ├── IMG0123.MOV
    │   │   │   ├── IMG0124.MOV
    │   │   │   └── IMG0125.MOV
    │   │   └── ROLL_002
    │   │       ├── IMG0123.MOV
    │   │       ├── IMG0124.MOV
    │   │       └── IMG0125.MOV
    │   ├── SD788T
    │   │   └── tape01
    │   │       ├── S01T01.WAV
    │   │       ├── S01T02.WAV
    │   │       └── S01T03.WAV
    │   └── script_logs.txt
    └── day002
        ├── CAM_A
        │   ├── ROLL_001
        │   │   ├── DSC0124.MOV
        │   │   └── DSC0133.MOV
        │   └── ROLL_002
        │       ├── DSC0123.MOV
        │       └── DSC0124.MOV
        ├── CAM_B
        │   ├── ROLL_001
        │   │   ├── IMG0123.MOV
        │   │   ├── IMG0124.MOV
        │   │   └── IMG0125.MOV
        │   └── ROLL_002
        │       ├── IMG0123.MOV
        │       ├── IMG0124.MOV
        │       └── IMG0125.MOV
        ├── SD788T
        │   └── tape001
        │       ├── S01T01.WAV
        │       ├── S01T02.WAV
        │       └── S01T03.WAV
        └── script_logs.txt

Note that, as of September 2025, the mirrored output mode is not yet implemented (tictacsync version 0.98a0).

Multicam output structure

A clarification

Almost always, when the term timecode is used, the correct word would rather be timestamp. Take the expression timecode embedded in metadata: it is incorrect; that is a timestamp:

timecode
a separate track on which time references are continually recorded in digital form as an aid to editing (dictionary.com).
timestamp
an indication of the date and time recorded as part of a digital signal or file (such as an email, digital photograph, radio broadcast, or text message) indicating the time of creation, transmission, etc. (Merriam Webster)

A timecode is recorded for the whole duration of the signal that needs timing (here, on audio tracks of both the camera and the audio field recorder); a timestamp is a unique value, typically indicating the start of the recording.

Multicam clips synchronization

Apart from synchronizing sound to video, tictacsync can group and synchronize multicam clips together: it reads and decodes the timecode and, on output, writes a timestamp into the file metadata. This information is then used by the editing program to present the multiple camera angles simultaneously, and in sync, on playback.
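For illustration, a decoded start time can be rendered as the HH:MM:SS:FF string commonly stored in a clip's timecode metadata tag (e.g., what ffmpeg's `-timecode` option accepts). This non-drop-frame sketch, with a hypothetical `start_timestamp` helper, glosses over drop-frame rates:

```python
def start_timestamp(offset_s, fps=24):
    """Format a clip's decoded start time (in seconds) as HH:MM:SS:FF,
    non-drop-frame; a simplification of what a real tool must handle."""
    total_frames = round(offset_s * fps)
    ff = total_frames % fps                  # frame count within the second
    total_s = total_frames // fps
    hh, rem = divmod(total_s, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

A clip decoded as starting 3661.5 s into the day at 24 fps would be stamped 01:01:01:12.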

Multicam clips are automatically detected and, on output, written to a dedicated SyncedMulticamClips folder, keeping their camera identification via a subfolder:

MyBigMovie
└── day01
    ├── SyncedMulticamClips
    │   ├── leftCAM
    │   │   ├── canon24fps01.MOV
    │   │   ├── canon24fps02.MOV
    │   │   └── canon24fps03.MOV
    │   └── rightCAM
    │       ├── DSC_8063.MOV
    │       ├── DSC_8064.MOV
    │       └── DSC_8065.MOV
    ├── Zoom_H4n
    │   ├── 4CH000X.wav
    │   └── tracks.txt
    ├── leftCAM
    │   ├── -84ms.txt
    │   ├── card01
    │   │   ├── canon24fps01.MOV
    │   │   └── canon24fps02.MOV
    │   └── card02
    │       └── canon24fps03.MOV
    └── rightCAM
        ├── ROLL01
        │   ├── DSC_8063.MOV
        │   └── DSC_8064.MOV
        └── ROLL02
            └── DSC_8065.MOV

Storing the camera ID in a folder name allows DaVinci Resolve to assign a reel name to each clip in order to "detect clips from the same camera" when creating a multicam clip (see the screen grabs in the Multicam section). Similar workflows should be possible in other NLEs.
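An ingest or renaming script could recover that camera ID from the output path itself; a sketch assuming the SyncedMulticamClips layout shown above (the `camera_id` helper is hypothetical, not part of tictacsync):

```python
from pathlib import PurePosixPath

def camera_id(clip_path, multicam_root="SyncedMulticamClips"):
    """The camera ID of a synced multicam clip is the subfolder
    directly under the SyncedMulticamClips directory."""
    parts = PurePosixPath(clip_path).parts
    return parts[parts.index(multicam_root) + 1]
```

This mirrors what Resolve's "reel name from folder" pattern matching extracts when grouping angles.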

[TODO] will detail and explain:

  • command line options
  • calibration file of camera <delay>.txt
  • audio metadata file tracks.txt
  • usage of ISO files (update remergemix!)