
Zammad in docker

guide-by-example

logo

Purpose & Overview

Ticketing system.

Zammad is a modern, open source, good looking, web based helpdesk/customer support system.
Written in Ruby. This deployment uses PostgreSQL as the database, elasticsearch to power the search, and nginx as the web server.

Requirements

Elasticsearch requires a higher limit of the maximum virtual address space for memory mapping.
To check the current limit:

sysctl vm.max_map_count

The default is likely ~65k.

To set it permanently to ~260k as required by elasticsearch:

  • For arch linux, it means creating elasticsearch.conf in /usr/lib/sysctl.d/, containing the line that sets the desired max map count:

    /usr/lib/sysctl.d/elasticsearch.conf

    vm.max_map_count=262144
    

    This is done automatically if one installs the elasticsearch package on the docker host.

  • For debian based distros, put the line in /etc/sysctl.conf

This is done on the docker host.
Afterwards, reboot and check again with sysctl vm.max_map_count
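
To apply the value right away, without waiting for a reboot, it can also be set at runtime. A minimal sketch, run as root on the docker host:

# set the value for the running kernel, lost on reboot unless the config file above exists
sysctl -w vm.max_map_count=262144

# reload all sysctl configuration files, picking up the new elasticsearch.conf
sysctl --system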

Files and directory structure

/home/
└── ~/
    └── docker/
        └── zammad-docker-compose/
            ├── ..lots of files and folders..
            ├── .env
            └── docker-compose.override.yml
  • .env - a file containing environment variables for docker compose
  • docker-compose.override.yml - an override of the default compose file

Use git clone to download the latest zammad-docker-compose repo from github.

git clone https://github.com/zammad/zammad-docker-compose.git

This brings in a lot of files and folders, with everything pre-prepared.
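
The rest of this guide assumes the commands are run from inside the cloned directory; the path below follows the directory structure shown above:

cd ~/docker/zammad-docker-compose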

docker-compose

We are not touching the compose file.
Changes are done only to the docker-compose.override.yml and the .env file.

I prefer to store backups as bind mounts rather than volumes, as volumes can get destroyed by a simple docker-compose down -v, which can pop up in terminal history if used before and be run by accident.

So here we override the backup location, set two env variables to fix the reverse proxy issue, and join the stack to the caddy reverse proxy network.

docker-compose.override.yml

version: '2'
services:

  zammad-nginx:
    ports:
      - "8080:8080"
    environment:
      - NGINX_SERVER_SCHEME=https
      - RAILS_TRUSTED_PROXIES=['127.0.0.1', '::1', 'caddy']

  zammad-backup:
    volumes:
      - ./zammad-backup:/var/tmp/zammad
      - ./zammad-data:/opt/zammad

networks:
  default:
    external:
      name: $DOCKER_MY_NETWORK

.env

# GENERAL
MY_DOMAIN=example.com
DOCKER_MY_NETWORK=caddy_net
TZ=Europe/Bratislava

IMAGE_REPO=zammad/zammad-docker-compose
POSTGRES_PASS=zammad
POSTGRES_USER=zammad
RESTART=always
# don't forget to add the minus before the version
VERSION=-4.0.0-25

All containers must be on the same network, which is named in the .env file.
If one does not exist yet: docker network create caddy_net

Reverse proxy

Caddy v2 is used, details here.

Caddyfile

ticket.{$MY_DOMAIN} {
    reverse_proxy zammad-nginx:8080
}

Part of running zammad behind a reverse proxy is accounting for a security measure where zammad's nginx server does not accept cookies over a plain http connection. The secure TLS connection ends at caddy, and the communication between caddy and zammad's nginx server is just plain http.
This causes a CSRF token verification failed error when trying to log in to zammad.

The issue is solved by adding two environment variables to the compose override file, under the nginx container.
These tell zammad's nginx server to use the https scheme for the X-Forwarded-Proto header, and to trust the proxy server with the hostname caddy.

environment:
  - NGINX_SERVER_SCHEME=https
  - RAILS_TRUSTED_PROXIES=['127.0.0.1', '::1', 'caddy']

This is just an explanation, no need to do anything, the lines are already included in the override file.
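
With the caddy network, the override, the .env file and the Caddyfile in place, the stack can be brought up. A minimal sketch, assuming it is run from the zammad-docker-compose directory:

# start all containers in the background, images get pulled on the first run
docker-compose up -d

# follow the logs of the nginx service until it reports "starting nginx..."
docker-compose logs -f zammad-nginx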

First run

  • Setup admin email and password.
  • Organization name and domain - System URL.
  • Email notifications, using smtp. It needs a domain set, won't work with just localhost.
  • Setup email channel.
    This should be an email address where any email received creates an unassigned ticket in zammad and the sender gets added to users.
    But even if it is not planned to be in use, it is required for sending email notifications using triggers.

interface-pic

Basic setup and use

  • Zammad does not really like to show dropdown menus. Whenever you are filling in a text field where entries should pop up, like a list of organizations, you need to type the first two characters for something to show up.
  • Create an organization.
  • Create a user as a member of this org. Give them an email.
  • Check if there is a group in groups and if it has an assigned email.
  • Test if creating a ticket will send notifications as expected.
  • Check triggers for a lot of relevant goodies.

Update

While Watchtower might work for the containers of the stack, it might be preferable to just do a backup and restore into a new git clone.

Backup and restore

Backup

Out of the box, a container doing daily backups is running, creating two files - a backup of the database and a backup of the zammad files. By default these are saved to a docker volume, but in the override it has been changed to a bind mount in the zammad-docker-compose directory.
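
The bind mounted ./zammad-backup directory then contains timestamped pairs of files, something like:

ls ./zammad-backup
20210605053326_zammad_db.psql.gz
20210605053326_zammad_files.tar.gz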

Additionally, using borg to make a daily snapshot of the entire ~/docker directory will keep the backups safe.
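
A minimal sketch of such a borg backup, assuming a borg repository has already been initialized at the made up path /mnt/backup/borg_repo:

# create a new archive, named by the current date, containing the whole docker directory
borg create --stats /mnt/backup/borg_repo::zammad-{now:%Y-%m-%d} ~/docker

# keep only the last 30 daily archives
borg prune --keep-daily 30 /mnt/backup/borg_repo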

Restore

  • shut down the stack and remove all named volumes - docker-compose down -v. Warning: by default the backups are in one of these volumes, so be sure you have them somewhere safe before using -v
  • delete the entire zammad-docker-compose directory containing the compose file and the rest of the files
  • git clone the repo,
    edit the override file and env file for your setup
  • start it all up - docker-compose up -d - and wait a few minutes till everything finishes,
    in ctop, select the nginx container, arrow left shows the log,
    it should be at - "starting nginx..."
  • stop it all - docker-compose down
  • extract the database backup
    gzip -dk 20210605053326_zammad_db.psql.gz
    rename it to something shorter like backup_db.psql
    place it into /var/lib/docker/volumes/zammad-docker-compose_zammad-postgresql-data/_data/
    I use nnn file manager as root.
  • start zammad-postgresql container docker-compose up -d zammad-postgresql
  • exec into it - docker exec -it zammad-docker-compose_zammad-postgresql_1 bash
    test that you can connect to the database - psql -U zammad
    quit - \q
    back in bash, drop the existing database - dropdb zammad_production -U zammad
    create a new empty database - createdb zammad_production -U zammad
    restore the data from the backup into it
    psql zammad_production < /var/lib/postgresql/data/backup_db.psql -U zammad
    if you get errors about things already existing, you forgot to drop the database
  • exit and down the container docker-compose down
  • on the docker host, navigate to /var/lib/docker/volumes/zammad-docker-compose_zammad-data/_data/ and delete everything there
  • extract zammad data somewhere
    tar -xvpf 20210605053326_zammad_files.tar.gz
    copy the content of opt/zammad/, containing the directories - app, bin, config,...
    into /var/lib/... where you previously deleted the content
    again, I use nnn file manager as root.
  • start everything
    docker-compose up -d
  • exec into the rake container and run rake searchindex:rebuild to get search working again; a sketch of the command is shown after this list
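
A sketch of that last command, assuming the rails server service is named zammad-railsserver as in the upstream compose file; check docker-compose ps if the name differs:

# rebuild the elasticsearch search index, run from the zammad-docker-compose directory
docker-compose exec zammad-railsserver rake searchindex:rebuild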

In case something is not working right, check the nginx logs. Depending on how you copied the files, there could be an ownership issue. In the nginx container, check /opt/zammad and its content with ls -al to see if it's owned by the zammad user. If it's owned by root, use chown -R zammad:zammad /opt/zammad and down and up the stack.