- `%path` expands to the path of the photo/video
- `%keywords` expands to the IPTC keywords of the photo
- `{format}` expands to the photo's EXIF date, formatted with the given pattern, e.g. `{YYYY MM}`
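As an illustration, here is a minimal sketch of how such tokens could be expanded, assuming a hypothetical `expand(template, file)` helper and a `{ path, date, keywords }` file shape (the real implementation may differ):

```js
const moment = require('moment')

// hypothetical file shape: { path, date, keywords }
function expand (template, file) {
  return template
    // %path and %keywords are plain string substitutions
    .replace(/%path/g, file.path)
    .replace(/%keywords/g, file.keywords.join(','))
    // {...} tokens are treated as a date pattern applied to the EXIF date
    .replace(/\{(.+?)\}/g, (match, pattern) => moment(file.date).format(pattern))
}

// expand('{YYYY MM}/%path', { path: 'IMG_001.jpg', date: new Date(2017, 5, 10), keywords: ['beach'] })
// => '2017 06/IMG_001.jpg'
```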
The `debug` package reads the `DEBUG` environment variable on first require to configure itself.
So far `log.js` is the first file to require it, so this works; but if `yargs` started using `debug`,
then setting `process.env.DEBUG` here would do nothing.
Instead we can use the `debug.enable()` method to set the filter dynamically.
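For example, a minimal sketch of enabling the filter programmatically (the `thumbsup:*` namespace is an assumption):

```js
// log.js
const debug = require('debug')

// enable namespaces programmatically instead of relying on process.env.DEBUG,
// which is only read the first time `debug` is required
debug.enable('thumbsup:*') // 'thumbsup:*' is an assumed namespace pattern

module.exports = debug('thumbsup:log')
```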
This will help us understand usage patterns and decide what to focus on, e.g.
- are many people using thumbsup on Windows?
- are there many galleries with > 10,000 photos?
The current code doesn't create an output structure for invalid files, so we don't create thumbnails for them.
This is good, since thumbnail generation would most likely fail anyway.
However, we still try to render their thumbnails in the themes.
The themes could be smart enough to skip invalid files, but it's easier to ignore them from the start.
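A minimal sketch of filtering unsupported files up-front, assuming a hypothetical extension whitelist (the real list of supported formats may differ):

```js
const path = require('path')

// assumed whitelist of supported extensions
const SUPPORTED = new Set(['.jpg', '.jpeg', '.png', '.gif', '.mp4', '.mov'])

// drop invalid/unsupported files before building the output structure,
// so the themes never receive entries without thumbnails
function supportedFiles (files) {
  return files.filter(f => SUPPORTED.has(path.extname(f).toLowerCase()))
}
```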
- This is the simplest way to ensure all dependencies are there
- Also much faster than installing them on every build
- Especially since Travis runs Ubuntu Trusty, which doesn’t have ffmpeg in its repositories (must be compiled from source)
- The recursive call within Handlebars was causing huge memory spikes
- It seems it deep-cloned (?) the entire Album objects on every recursive call
- This alone could bloat to several hundred MB of RAM for very large galleries
Replacing it with simple breadcrumb navigation for now.
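A minimal sketch of a breadcrumb helper, assuming albums expose `title`, `url` and `parent` properties (the actual Album model may differ):

```js
const handlebars = require('handlebars')

// walk up the parent chain instead of recursing down the whole album tree
handlebars.registerHelper('breadcrumbs', function (album) {
  const crumbs = []
  for (let current = album; current; current = current.parent) {
    const title = handlebars.escapeExpression(current.title)
    crumbs.unshift(`<a href="${current.url}">${title}</a>`)
  }
  return new handlebars.SafeString(crumbs.join(' &raquo; '))
})
```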
1. Move from a JSON index to a SQLite database.
- This allows the indexing to be interrupted & resumed
- Updating the index consumes less RAM than loading / saving an entire JSON object
- Loading the index consumes less RAM since it can be streamed, only extracting the properties we need each time (instead of loading all EXIF data in memory, only to discard most of it later)
- These make a big difference when processing 10,000+ photos (see the SQLite sketch after this list)
2. Switch from `glob` to a manual `readdir`
- `glob` would use several hundred MB (sometimes GBs) of RAM when asked to find several thousand files
- A manual approach with the `micromatch` library does the same thing in a fraction of the time and memory (see the sketch after this list)
3. Exiftool optimisations
- Run 1 exiftool process per CPU, still in batch mode (divide all files to be read into 1 bucket per CPU)
- Stream the exiftool output instead of buffering it in memory (sketched below)
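For item 1, a minimal sketch of the streaming index, using `better-sqlite3` as an example driver (the actual library, schema and file names are assumptions):

```js
const Database = require('better-sqlite3')

const db = new Database('thumbsup.db') // hypothetical database file

// illustrative schema: one row per file
db.exec('CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, timestamp INTEGER, metadata TEXT)')

// updating one row at a time keeps memory flat,
// unlike loading and re-saving an entire JSON index
const upsert = db.prepare('INSERT OR REPLACE INTO files (path, timestamp, metadata) VALUES (?, ?, ?)')
upsert.run('holiday/IMG_001.jpg', 1496361600, JSON.stringify({ keywords: ['beach'] }))

// .iterate() streams rows one by one, so only the selected columns are held in memory
for (const row of db.prepare('SELECT path, timestamp FROM files').iterate()) {
  console.log(row.path, row.timestamp)
}
```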
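For item 2, a sketch of the manual traversal, assuming a recursive `readdir` walk filtered with `micromatch` (function names are illustrative):

```js
const fs = require('fs')
const path = require('path')
const micromatch = require('micromatch')

// recursive readdir: collect every file path relative to the root
function walk (root, current = root, results = []) {
  for (const entry of fs.readdirSync(current, { withFileTypes: true })) {
    const full = path.join(current, entry.name)
    if (entry.isDirectory()) walk(root, full, results)
    else results.push(path.relative(root, full))
  }
  return results
}

// filter the flat list with micromatch instead of letting glob do the traversal
function listFiles (root, patterns) {
  return micromatch(walk(root), patterns)
}

// e.g. listFiles('/photos', ['**/*.{jpg,jpeg,mp4}'])
```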
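For item 3, a sketch of the exiftool strategy, assuming `JSONStream` for parsing the streamed JSON output (the bucketing and callbacks are illustrative, not the actual implementation):

```js
const { spawn } = require('child_process')
const os = require('os')
const JSONStream = require('JSONStream')

// divide all files to be read into one bucket per CPU
function buckets (files, count) {
  const out = Array.from({ length: count }, () => [])
  files.forEach((f, i) => out[i % count].push(f))
  return out.filter(bucket => bucket.length > 0)
}

// run one exiftool process per bucket and stream its JSON output,
// emitting one metadata entry at a time instead of buffering everything
function readMetadata (files, onEntry, onDone) {
  const groups = buckets(files, os.cpus().length)
  let pending = groups.length
  groups.forEach(group => {
    const child = spawn('exiftool', ['-json', ...group])
    child.stdout
      .pipe(JSONStream.parse('*')) // one object per file in the JSON array
      .on('data', onEntry)
      .on('end', () => { if (--pending === 0) onDone() })
  })
}
```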