For minimal system requirements, the Tube Archivist stack needs around 2GB of available memory for a small testing setup and around 4GB of available memory for a mid to large sized installation. At minimum a dual core CPU with 4 threads; a quad core or better is recommended.
This project requires docker. Ensure it is installed and running on your system.
Note for **arm64**: Tube Archivist is a multi-arch container, same for redis. For Elasticsearch use the official image for arm64 support. Other architectures are not supported.
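As a rough sketch, the relevant service entries on arm64 could look like the following. The image names and tags here are illustrative assumptions; check the project's `docker-compose.yml` for the authoritative values:

```yaml
# Illustrative arm64 service entries -- image names/tags are assumptions,
# verify against the project's docker-compose.yml.
services:
  archivist-redis:
    image: redis          # multi-arch image
  archivist-es:
    # official Elasticsearch image provides arm64 builds; pin a version tag
    image: docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```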
Save the [docker-compose.yml](./docker-compose.yml) file from this repository somewhere permanent on your system, keeping it named `docker-compose.yml`. You'll need to refer to it whenever starting this application.
Edit the following values from that file:
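For orientation, the values to edit are environment variables on the Tube Archivist service. The names and example values below are assumptions for illustration; compare against your copy of the compose file:

```yaml
# Illustrative environment section -- names/values are assumptions,
# verify against your docker-compose.yml.
environment:
  - ES_URL=http://archivist-es:9200   # Elasticsearch connection
  - REDIS_HOST=archivist-redis        # Redis connection
  - HOST_UID=1000                     # host user owning the media volume
  - HOST_GID=1000
  - TA_USERNAME=tubearchivist         # initial admin user
  - TA_PASSWORD=verysecret            # initial admin password
  - ELASTIC_PASSWORD=verysecret       # must match the Elasticsearch container
  - TZ=America/New_York               # set your timezone
```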
Use `bbilly1/tubearchivist-es` to automatically get the recommended version, or use the official image with the version tag in the docker-compose file.
Use the official Elasticsearch image for **arm64**.
Stores video metadata and makes everything searchable. Also keeps track of the download queue.
- Needs to be accessible over the default port `9200`
- Needs a volume at **/usr/share/elasticsearch/data** to store data
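Put together, the Elasticsearch service in the compose file would look roughly like this. The version tag, password, and heap size are placeholders, not authoritative values:

```yaml
# Sketch of the Elasticsearch service -- values are placeholders,
# verify against the project's docker-compose.yml.
archivist-es:
  image: bbilly1/tubearchivist-es      # or official image with a pinned version tag
  environment:
    - "ELASTIC_PASSWORD=verysecret"    # placeholder, match the Tube Archivist container
    - "discovery.type=single-node"
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # illustrative heap size
  volumes:
    - es:/usr/share/elasticsearch/data # persistent index data
  expose:
    - "9200"                           # default port, reachable by the other containers
```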
## Thumbnail check
This will check if all expected thumbnails are there and will delete any orphaned artwork that has no matching video.
## ZIP file index backup
Create a zip file of the metadata and set **Max auto backups to keep** to automatically delete old backups created by this task. For data consistency, make sure no other tasks that modify the index are running during the backup. Zip backups are very slow, particularly for large archives; use snapshots instead.
# Actions
## Embed thumbnails into media file
This will write or overwrite all thumbnails in the media files using the downloaded thumbnails. This is only necessary if you didn't download the files with the *Embed Thumbnail* option enabled, or if you want to make sure all media files get the newest thumbnail. Follow the docker-compose logs to monitor progress.
## ZIP file index backup
This will back up your metadata into a zip file stored at *cache/backup*; it contains the **nd-json** files needed to restore the Elasticsearch index. For data consistency, make sure no other tasks that modify the index are running during the backup. This is very slow, particularly for large archives.
BE AWARE: This will **not** back up any media files, just the metadata from Elasticsearch.
<p><i>Zip file backups are very slow for large archives and consistency is not guaranteed, use snapshots instead. Make sure no other tasks are running when creating a Zip file backup.</i></p>
<p>Current index backup schedule: <span class="settings-current">
{% if config.scheduler.run_backup %}
{% for key, value in config.scheduler.run_backup.items %}
</div>
</div>
<div class="settings-group">
<h2>ZIP file index backup</h2>
<p>Export your database to a zip file stored at <span class="settings-current">cache/backup</span>.</p>
<p><i>Zip file backups are very slow for large archives and consistency is not guaranteed, use snapshots instead. Make sure no other tasks are running when creating a Zip file backup.</i></p>