
Selfhosted-Apps-Docker

guide-by-example



You can also just check the directories listed at the top for work in progress.

Check also StarWhiz / docker_deployment_notes
The repo documents self hosted apps in a similar format and also uses Caddy for the reverse proxy.



Core concepts

  • docker-compose.yml does not need any editing to get something up, changes are to be done in the .env file (see the sketch after this list).
  • Not using the ports directive if there's only web traffic for a container.
    There's an expectation of running a reverse proxy, which makes mapping ports to the docker host unnecessary. Instead expose is used, which is basically just documentation.
  • For persistent storage a bind mount ./whatever_data is used. No named volumes, no static path somewhere... just a relative path next to the compose file.
  • No version declaration at the beginning of the compose file, as the practice was deprecated.
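
To make the conventions above concrete, here is a minimal sketch of a compose file. It's only an illustration, whatever stands in for some hypothetical app image, not something from this repo.

# docker-compose.yml - no version: line at the top
services:
  whatever:
    image: whatever                # hypothetical app image
    container_name: whatever
    restart: unless-stopped
    env_file: .env                 # variables come from the .env file next to this compose file
    expose:
      - "80"                       # documentation only, the reverse proxy reaches it over the docker network
    volumes:
      - ./whatever_data:/data      # bind mount, relative path next to the compose file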

Requirements

Basic linux and basic docker-compose knowledge. The shit here is pretty hand-holding and detailed, but it still should not be your first time running a docker container.


Caddy reverse proxy

Kinda the heart of the setup is the Caddy reverse proxy.
It's described in the most detail, and every guide has a reverse proxy section with a Caddyfile config specific to it.
Caddy is really great at simplifying the mess of https certificates, where you don't really have to deal with anything, while having one simple, readable config file.

But no problem if using traefik or nginx proxy manager. You just have to deal with the proxy settings on your own, and 90% of the time it's just sending traffic to port 80 and nothing else.
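
For a rough idea, here is a sketch of what the Caddy container itself can look like in compose, assuming the official caddy image and its default paths; a site entry in the Caddyfile is then usually just a domain with reverse_proxy whatever:80 inside and nothing else.

services:
  caddy:
    image: caddy
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"                               # the one container that does publish ports on the host
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro   # the one simple, readable config file
      - ./caddy_data:/data                    # certificates end up here

networks:
  default:
    name: caddy_net
    external: true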


Docker network

You really want to create a custom named docker network and use it.

docker network create caddy_net

It can be named whatever, but what it gives you over the default bridge network is automatic DNS resolution between containers. Meaning one can exec into a container and ping another container by its hostname.
This makes config files simpler and cleaner.
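
One way to use it in a compose file is to override the default network, so every service in that file joins it automatically. Just a sketch, the name matches the command above.

networks:
  default:
    name: caddy_net
    external: true               # the network already exists, compose should not create it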


.env

Often the .env file is used as an env_file, which can be a bit of a difficult concept at first glance.

env_file: .env

  • .env - the actual name of a file that is used only by compose.
    It is picked up automatically just by being in the directory with the docker-compose.yml.
    Variables in it are available for substitution in the compose file itself, but unless named in the environment: option, they are not available inside the running container.
  • env_file - an option in compose that points at an existing external file.
    Variables in this file will be available inside the running container, but not for substitution in the compose file.

So a compose file having env_file: .env mixes these two together.
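
A small made-up illustration of what that looks like; the variable names are hypothetical, not from any particular guide here. The same file feeds compose substitution and the container's environment at once.

# .env - hypothetical example
MY_DOMAIN=example.com
DOCKER_MY_NETWORK=caddy_net
WHATEVER_ADMIN_PASSWORD=changeme

# docker-compose.yml - the same variables used in both ways
services:
  whatever:
    image: whatever
    env_file: .env                     # all three variables end up inside the container

networks:
  default:
    name: $DOCKER_MY_NETWORK           # substituted by compose itself, straight from .env
    external: true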

The benefit is that you do not need to make changes in multiple places. Adding variables or changing a name in .env does not require you to also go into the compose file to add/change it there... also the compose file looks much cleaner, less cramped.

The only issue is that all variables from the .env file are available in all containers that use this env_file: .env method.
That can lead to potential issues if a container picks up an environment variable that is intended for a different container in the stack.

In the setups here it works and is tested, but if you start to use this everywhere without understanding it, you can encounter issues. So one of the troubleshooting steps might be abandoning .env and writing out the variables directly in the compose file, only under the containers that want them, as shown below.
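
Something along these lines, each variable written only under the container that needs it; the names and values are made up.

services:
  whatever:
    image: whatever
    environment:
      - WHATEVER_ADMIN_PASSWORD=changeme    # visible only to this container
  whatever-db:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=changeme          # and this one only to the database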


Docker images latest tag

Most of the time the images here are without any tag, which means the latest tag gets used.
This is frowned upon, and you should pin the current tags once things are up and running. It makes updates easier when you know you can go back to a working version, with backups and a known image version.
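
In a compose file that is a one line change; the version number here is just an example.

services:
  whatever:
    # image: whatever              # floating latest tag, fine to get started
    image: whatever:1.2.3          # pinned tag, a known working version to fall back to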


Cloudflare

For managing DNS records. The free tier provides a lot of management options and benefits. Like a proxy between your domain and your server, so no one can get your public IP just from your domain name. Or 5 firewall rules that allow you to geoblock the whole world except your country.

How to move to cloudflare.


ctop

official site


htop-like utility for quick container management.

It is absofuckinglutely amazing in how simple yet effective it is.

  • hardware use overview, so you know which container uses how much cpu, ram, bandwidth, IO,...
  • detailed info on a container, its IP, published and exposed ports, when it was created,...
  • quick management, quick exec into a container, check logs, stop it,...

Written in Go, so it's super fast, and installation is trivial since it is a single binary.
Download the linux-amd64 version; make it executable with chmod +x; move it to /usr/bin/; now you can ctop anywhere.


Sendinblue

Services often need the ability to send emails, for notifications, registration, password resets and such... Sendinblue is free, offers 300 mails a day and is easy to set up.

EMAIL_HOST=smtp-relay.sendinblue.com
EMAIL_PORT=587
EMAIL_HOST_USER=whoever_example@gmail.com
EMAIL_HOST_PASSWORD=xcmpwik-c31d9eykwewf2342df2fwfj04-FKLzpHgMjGqP23
EMAIL_USE_TLS=1

Archlinux as a docker host

My go-to is archlinux as I know it the best. Usually in a virtual machine with snapshots before updates.

For the Arch installation I had these notes on how to install and what to do afterwards.
But after the archinstall script started to be included with the arch ISO I switched to that.
For the after-install setup I created the Ansible-Arch repo that gets shit done in a few minutes without the danger of forgetting something.
Ansible is really easy to use and its playbooks are very easy to read and understand, so it might be worth the time to check out the concept and set up your own ansible scripts.

The best aspect of having such a repo is that it is a dedicated place where one can write down solutions to issues encountered, or enable a freshly discovered feature for all future deployments.



For docker noobs

First, docker is easy. Like really.

Second, there are two main uses.

  • A developer who works on apps daily, where docker eases everything about it, from setting up the environment, to testing and deployment.
  • A hosting approach, where you are not that concerned with the details of what works and how inside the container that developers prepared for you, you just want it running.

This whole repo is obviously about the second use. So be careful that you won't spend time on resources meant to educate developers. Sure, if you get through them you will know docker better, but there's always the danger that after sinking 4 hours into learning, one still can't get a plain nginx web server up and working and loses motivation.
And my personal preference in learning is getting something up as fast as possible and then tinkering with it and trying to understand how it works.

So when googling for guides, look for docker compose rather than just docker tutorials, and notice whether they are talking about core fundamentals or about deployment.

  • This one is pretty good. That entire channel has good stuff.

Will add shit I encounter and like.