I’m a big fan of running small things in containers for personal use.
I was not happy relying on possibly outdated (and in places missing, though formerly extant) documentation for running Mastodon under Docker. There were plenty of stale breadcrumbs, but I wasn’t comfortable with what I saw.
Another person and I were lamenting this on the #server-owners channel in the Mastodon Discord (open only to Patreon supporters; please support them at https://www.patreon.com/mastodon !).
I’ve started a private GitHub repo that I’ll make public soon, once I think it’s fully debugged. In the meantime, I’ll post what I have so far here; you can reach me on Mastodon at https://bvp.me/@bplein/ (aka @bplein@bvp.me) with feedback.
One of my first complaints about the existing Mastodon-on-Docker docs/breadcrumbs was that they were developer-centric, not end-user-centric. For example, the Mastodon images were built locally rather than simply pulled from the prebuilt images on hub.docker.com. That lets developers tweak the local code and retest, but it’s onerous for end users and requires the full Mastodon GitHub repo, not just a minimal set of configuration and YAML files.
There’s also the convention of using an env file, but in this case it was .env.production, with references to that name scattered through the YAML and elsewhere. We don’t need that; a single .env file will do.
Finally, a lot of customization went directly into docker-compose.yml instead of living in the .env file. I wanted to correct that.
This initial work ASSUMES you have a DOCKERIZED nginx reverse proxy and are using either the acme-companion container (also from the nginx-proxy project) or something similar to handle certificates.
I have just posted my nginx-proxy+acme-companion docker-compose settings at https://github.com/bplein/nginx-proxy-acme-docker-compose. I haven’t rigorously tested it in many environments but it works on my systems.
After you have nginx running correctly, follow this for Mastodon:
- Create a home for your docker-compose code to run your site.
- Add the following .env and docker-compose.yml files
.env (edit passwords, email addresses, etc., but do not edit lines like “DB_HOST=${COMPOSE_PROJECT_NAME}-db-1”; the COMPOSE_PROJECT_NAME variable handles all of that and makes internal name resolution work correctly every time):
# Change as needed
#
# Do not change ${COMPOSE_PROJECT_NAME} references.
# This is used to declaratively set the internal DNS names
DOCKER_IMAGE_TAG=v4.0.2
EXTERNAL_NETWORK=nginx-proxy
LETSENCRYPT_EMAIL=changeme@example.com
TZ=America/Chicago
# Fixed project name - must be unique on host - determines names of running containers
# avoid dots, as in test.example.com, use other delimiters instead.
COMPOSE_PROJECT_NAME=test-example-com
# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
LOCAL_DOMAIN=test.example.com
# Redis
# -----
REDIS_HOST=${COMPOSE_PROJECT_NAME}-redis-1
REDIS_PORT=6379
# PostgreSQL
# ----------
DB_HOST=${COMPOSE_PROJECT_NAME}-db-1
DB_USER=mastodon
DB_NAME=mastodon_production
DB_PASS=0123456789abcdef
DB_PORT=5432
# Elasticsearch (optional)
# ------------------------
ES_ENABLED=true
ES_HOST=${COMPOSE_PROJECT_NAME}-es-1
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=changeme
# Secrets
# -------
# Make sure to use `docker-compose run --rm web bundle exec rake secret` to generate secrets
# -------
SECRET_KEY_BASE=generate_me
OTP_SECRET=generate_me
# Web Push
# --------
# Generate with `docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key`
# --------
VAPID_PRIVATE_KEY=generate_me
VAPID_PUBLIC_KEY=generate_me
# Sending mail
# ------------
SMTP_SERVER=mailhost.example.com
SMTP_PORT=587
SMTP_LOGIN=changeme@example.com
SMTP_PASSWORD=changeme
SMTP_FROM_ADDRESS=notifications@test.example.com
# File storage (optional, backblaze example given)
# -----------------------
S3_ENABLED=false
S3_BUCKET=my-bucket-name
AWS_ACCESS_KEY_ID=<my access key id>
AWS_SECRET_ACCESS_KEY=<my secret access key>
S3_ENDPOINT=https://s3.us-west-001.backblazeb2.com
S3_ALIAS_HOST=my-bucket-name.s3.us-west-001.backblazeb2.com
# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
docker-compose.yml
version: '3'

services:
  db:
    restart: always
    image: postgres:14-alpine
    shm_size: 256mb
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'mastodon', '-d', 'mastodon_production']
    volumes:
      - ./postgres14:/var/lib/postgresql/data
      - ./postgres-backup:/var/lib/postgres/backup
    environment:
      # - 'POSTGRES_HOST_AUTH_METHOD=trust'
      - 'POSTGRES_USER=${DB_USER}'
      - 'POSTGRES_PASSWORD=${DB_PASS}'
      - 'POSTGRES_DB=${DB_NAME}'

  redis:
    restart: always
    image: redis:7-alpine
    networks:
      - internal_network
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - ./redis:/data

  es:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
      - "xpack.license.self_generated.type=basic"
      - "xpack.security.enabled=false"
      - "xpack.watcher.enabled=false"
      - "xpack.graph.enabled=false"
      - "xpack.ml.enabled=false"
      - "bootstrap.memory_lock=true"
      - "cluster.name=es-mastodon"
      - "discovery.type=single-node"
      - "thread_pool.write.queue_size=1000"
    networks:
      - internal_network
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail es:9200/_cluster/health || exit 1"]
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
      - ./elasticsearch/logs:/usr/share/elasticsearch/logs
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

  web:
    image: "tootsuite/mastodon:${DOCKER_IMAGE_TAG}"
    restart: always
    env_file: .env
    environment:
      NGINX_HOST: ${LOCAL_DOMAIN}
      NGINX_PORT: 80
      TZ: ${TZ}
      VIRTUAL_HOST: ${LOCAL_DOMAIN}
      VIRTUAL_PATH: "/"
      LETSENCRYPT_HOST: ${LOCAL_DOMAIN}
      LETSENCRYPT_EMAIL: ${LETSENCRYPT_EMAIL}
    command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 80"
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off web:80/health || exit 1']
    depends_on:
      - db
      - redis
      - es
    volumes:
      - ./public/system:/mastodon/public/system

  streaming:
    image: "tootsuite/mastodon:${DOCKER_IMAGE_TAG}"
    restart: always
    env_file: .env
    environment:
      VIRTUAL_HOST: ${LOCAL_DOMAIN}
      VIRTUAL_PATH: "/api/v1/streaming"
      VIRTUAL_PORT: 4000
    command: node ./streaming
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off streaming:4000/api/v1/streaming/health || exit 1']
    depends_on:
      - db
      - redis

  sidekiq:
    image: "tootsuite/mastodon:${DOCKER_IMAGE_TAG}"
    restart: always
    env_file: .env
    command: bundle exec sidekiq
    depends_on:
      - db
      - redis
    networks:
      - internal_network
      - external_network
    volumes:
      - ./public/system:/mastodon/public/system
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]

networks:
  external_network:
    name: ${EXTERNAL_NETWORK}
    external: true
  internal_network:
    internal: true
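With both files saved, the generate_me placeholders in the .env can be filled in using the rake tasks the comments above point to. Run these from the directory containing your docker-compose.yml and paste the output into the matching .env entries:

```shell
# Generate a value for SECRET_KEY_BASE, then run again for OTP_SECRET:
docker-compose run --rm web bundle exec rake secret

# Generate the VAPID key pair for Web Push
# (outputs both VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY):
docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
```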
If you are running a dockerized nginx-proxy on ${EXTERNAL_NETWORK}, you do not need to edit the docker-compose.yml file at all.
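From there, a first run looks roughly like this. This is a sketch, not tested in every environment; the admin username and email are placeholders, and the tootctl flags are from the standard Mastodon CLI:

```shell
# Initialize the database schema on first run:
docker-compose run --rm web bundle exec rake db:setup

# Start the whole stack in the background:
docker-compose up -d

# Create your first account (username/email are placeholders):
docker-compose run --rm web bin/tootctl accounts create \
  admin --email admin@test.example.com --confirmed --role Owner
```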
Note that the web server, streaming server, etc. DO NOT expose a port. They don’t have to: the nginx proxy sits on both the external network and the internal (Mastodon) network, so it can talk to them without binding ports on the host.
If you use a manually installed (non-container) nginx proxy, you’ll have to add the port exposures back in. That is NOT YET COVERED by the private repo, but I may support it eventually.
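If you want to try that yourself in the meantime, the general idea is to publish the web and streaming ports to the host so a non-containerized nginx can reach them. This fragment is a sketch only (untested here; the host-side port numbers are assumptions):

```yaml
# Added under the web service:
    ports:
      - '127.0.0.1:3000:80'    # host port 3000 -> the container's port 80

# Added under the streaming service:
    ports:
      - '127.0.0.1:4000:4000'
```

Binding to 127.0.0.1 keeps the services off the public interface, so only the local nginx can reach them.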