yala
June 14, 2024, 1:00pm
After some struggles with setting up the current AppFlowy-Cloud environment for a self-hosted community, I have recorded some notes about how the steps could be canonicalised for installations behind a Træfik reverse proxy.
Try it out, and let's use this as a starting point to improve how self-hosting AppFlowy-Cloud is documented.
opened 04:12AM - 14 Jun 24 UTC
**main use cases of the proposed feature**
As a platform engineer, I need to … be able to spin up conventional and minimally reconfigured appflowy instances quickly, in order to provision many communities with it.
As a distributed systems engineer, I need to be able to isolate the side-effects of an application deployment cleanly, in order to to provide for consistency of an environment.
**what types of users can benefit from using your proposed feature**
Families, homelab engineers, civic society (groups, communities, initiatives, associations, not-for-profits, non-governmental organisations, foundations (where there is money)), Internet Service Providers, AppFlowy developers
**Additional context**
The current `docker-compose.yml` and the `deploy.env` example present a very opinionated setup, which is tightly coupled with the code (https://github.com/AppFlowy-IO/AppFlowy-Cloud/issues/228#issuecomment-2167072296) and makes assumptions about the target system. While the Nginx configuration contains an example for binding its ports to the host system, a deployment behind a reverse proxy is far more likely in common production environments.
Additionally, the database initialisation logic in the `before` migrations hard-codes the database name and the password for the supabase/auth `gotrue` user. Since `POSTGRES_USER` is part of the `superuser` role, these values can also be provided at run time through dynamically created initialisation logic.
https://github.com/AppFlowy-IO/AppFlowy-Cloud/blob/430e3e15c9a1dc6aba2a9599d17d946a61ac7cae/migrations/before/20230312043000_supabase_auth.sql#L29
https://github.com/AppFlowy-IO/AppFlowy-Cloud/blob/430e3e15c9a1dc6aba2a9599d17d946a61ac7cae/migrations/before/20230312043000_supabase_auth.sql#L35
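Such run-time initialisation could look roughly like the following sketch: a hypothetical `docker-entrypoint-initdb.d` shell script (the file name and function name are illustrative, not part of the upstream repository) that emits the two hard-coded statements with values taken from the environment instead.

```shell
# Hypothetical 00_gotrue_user.sh for docker-entrypoint-initdb.d:
# emit the CREATE USER / GRANT statements with values taken from the
# environment rather than the hard-coded literals in the migration.
generate_gotrue_init_sql() {
  cat <<EOF
CREATE USER ${GOTRUE_POSTGRES_USER} BYPASSRLS NOINHERIT CREATEROLE LOGIN NOREPLICATION PASSWORD '${GOTRUE_POSTGRES_PASSWORD}';
GRANT CREATE ON DATABASE ${POSTGRES_DB} TO ${GOTRUE_POSTGRES_USER};
EOF
}
```

The official postgres image sources `.sh` files in `docker-entrypoint-initdb.d` at first start, so the generated SQL could be piped straight into `psql` there.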
This issue documents the outcome of the modifications that were applied to run this behind a Træfik reverse proxy acting as edge router and load balancer.
Furthermore, no Docker volumes are used for capturing database state; instead, the databases write to mountpoints of ZFS datasets, which are managed externally from the Docker daemon.
.env
```sh
FQDN=appflowy.example.com
API_EXTERNAL_URL=https://${FQDN}
POSTGRES_USER=com_example_appflowy
POSTGRES_PASSWORD=<openssl rand -hex 32>
POSTGRES_DB=com_example_appflowy
POSTGRES_HOST=postgres
REDIS_HOST=redis
GOTRUE_HOST=gotrue
GOTRUE_POSTGRES_USER=supabase_auth_admin
GOTRUE_POSTGRES_PASSWORD=<openssl rand -hex 32>
GOTRUE_DATABASE_URL=postgres://${GOTRUE_POSTGRES_USER}:${GOTRUE_POSTGRES_PASSWORD}@${POSTGRES_HOST}:5432/${POSTGRES_DB}
APPFLOWY_GOTRUE_BASE_URL=http://${GOTRUE_HOST}:9999
APPFLOWY_DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:5432/${POSTGRES_DB}
APPFLOWY_REDIS_URI=redis://${REDIS_HOST}:6379
APPFLOWY_ACCESS_CONTROL=true
APPFLOWY_WEBSOCKET_MAILBOX_SIZE=6000
APPFLOWY_DATABASE_MAX_CONNECTIONS=40
ADMIN_FRONTEND_REDIS_URL=redis://${REDIS_HOST}:6379
ADMIN_FRONTEND_GOTRUE_URL=http://${GOTRUE_HOST}:9999
GOTRUE_JWT_SECRET=<openssl rand -hex 32>
GOTRUE_JWT_EXP=7200
GOTRUE_MAILER_AUTOCONFIRM=false
GOTRUE_RATE_LIMIT_EMAIL_SENT=100
GOTRUE_SMTP_HOST=
GOTRUE_SMTP_PORT=465
GOTRUE_SMTP_USER=
GOTRUE_SMTP_PASS=<openssl rand -hex 32>
GOTRUE_SMTP_ADMIN_EMAIL=
GOTRUE_ADMIN_EMAIL=
GOTRUE_ADMIN_PASSWORD=<openssl rand -hex 32>
GOTRUE_EXTERNAL_GOOGLE_ENABLED=false
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID=
GOTRUE_EXTERNAL_GOOGLE_SECRET=
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI=${API_EXTERNAL_URL}/gotrue/callback
GOTRUE_EXTERNAL_GITHUB_ENABLED=true
GOTRUE_EXTERNAL_GITHUB_CLIENT_ID=
GOTRUE_EXTERNAL_GITHUB_SECRET=
GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI=${API_EXTERNAL_URL}/gotrue/callback
GOTRUE_EXTERNAL_DISCORD_ENABLED=false
GOTRUE_EXTERNAL_DISCORD_CLIENT_ID=
GOTRUE_EXTERNAL_DISCORD_SECRET=
GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI=${API_EXTERNAL_URL}/gotrue/callback
APPFLOWY_S3_USE_MINIO=true
APPFLOWY_S3_MINIO_URL=https://minio.example.com
APPFLOWY_S3_ACCESS_KEY=
APPFLOWY_S3_SECRET_KEY=
APPFLOWY_S3_BUCKET=com.example.appflowy
APPFLOWY_S3_REGION=eu-central-1
APPFLOWY_MAILER_SMTP_HOST=${GOTRUE_SMTP_HOST}
APPFLOWY_MAILER_SMTP_PORT=465
APPFLOWY_MAILER_SMTP_USERNAME=${GOTRUE_SMTP_USER}
APPFLOWY_MAILER_SMTP_PASSWORD=${GOTRUE_SMTP_PASS}
RUST_LOG=info
APPFLOWY_HISTORY_URL=http://localhost:50051
APPFLOWY_HISTORY_REDIS_URL=redis://${REDIS_HOST}:6379
APPFLOWY_HISTORY_DATABASE_NAME=${POSTGRES_DB}
APPFLOWY_HISTORY_DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:5432/${POSTGRES_DB}
POSTGRES_DATA=/data/bases/postgres/com.example.appflowy
REDIS_DATA=/data/bases/redis/com.example.appflowy
```
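The `<openssl rand -hex 32>` placeholders above stand for freshly generated secrets. One way to produce them (one distinct value per variable, never reused):

```shell
# Each secret is 32 random bytes, rendered as 64 lowercase hex characters.
POSTGRES_PASSWORD=$(openssl rand -hex 32)
GOTRUE_POSTGRES_PASSWORD=$(openssl rand -hex 32)
GOTRUE_JWT_SECRET=$(openssl rand -hex 32)
GOTRUE_ADMIN_PASSWORD=$(openssl rand -hex 32)
```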
nginx.conf
```nginx
events {
    worker_connections 1024;
}

http {
    # Docker DNS resolver
    resolver 127.0.0.11 valid=10s;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    server {
        listen 8080;

        # https://github.com/nginxinc/nginx-prometheus-exporter
        location = /stub_status {
            stub_status;
        }
    }

    server {
        listen 80;
        client_max_body_size 10M;
        underscores_in_headers on;

        # GoTrue
        location /gotrue/ {
            set $gotrue gotrue;
            proxy_pass http://$gotrue:9999;
            rewrite ^/gotrue(/.*)$ $1 break;

            # Allow headers like redirect_to to be handed over to GoTrue
            # for correct redirecting
            proxy_set_header Host $http_host;
            proxy_pass_request_headers on;
        }

        # WebSocket
        location /ws {
            set $appflowy_cloud appflowy_cloud;
            proxy_pass http://$appflowy_cloud:8000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_read_timeout 86400;
        }

        # AppFlowy-Cloud
        # A separate location block handles CORS preflight (OPTIONS) requests
        # specifically for the /api endpoint.
        location = /api/options {
            if ($http_origin ~* (http://127.0.0.1:8000)) {
                add_header 'Access-Control-Allow-Origin' $http_origin;
            }
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, PATCH';
            add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization, Accept, Client-Version';
            add_header 'Access-Control-Max-Age' 3600;
            add_header 'Content-Type' 'text/plain; charset=utf-8';
            add_header 'Content-Length' 0;
            return 204;
        }

        location /api/chat {
            set $appflowy_cloud appflowy_cloud;
            proxy_pass http://$appflowy_cloud:8000;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            chunked_transfer_encoding on;
            proxy_buffering off;
            proxy_cache off;
            proxy_read_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_send_timeout 600s;
        }

        location /api {
            set $appflowy_cloud appflowy_cloud;
            proxy_pass http://$appflowy_cloud:8000;
            proxy_set_header X-Request-Id $request_id;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Set CORS headers for other requests
            if ($http_origin ~* (http://127.0.0.1:8000)) {
                add_header 'Access-Control-Allow-Origin' $http_origin always;
            }
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, PATCH' always;
            add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization, Accept, Client-Version' always;
            add_header 'Access-Control-Max-Age' 3600 always;
        }

        # Admin Frontend
        # Optional module; comment out this section if you did not deploy
        # admin_frontend in docker-compose.yml.
        location / {
            set $admin_frontend admin_frontend;
            proxy_pass http://$admin_frontend:3000;
            proxy_set_header X-Scheme $scheme;
            proxy_set_header Host $host;
        }
    }
}
```
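The `rewrite ^/gotrue(/.*)$ $1 break;` rule strips the `/gotrue` prefix before proxying, so GoTrue sees the bare path. The same transformation expressed as a small shell function, purely for illustration:

```shell
# /gotrue/verify is proxied to gotrue:9999 as /verify,
# /gotrue/callback as /callback, and so on.
strip_gotrue_prefix() {
  printf '%s\n' "${1#/gotrue}"
}
```

The trimmed configuration can be syntax-checked before deploying, e.g. with `docker run --rm -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx:alpine nginx -t`.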
20230312043000_supabase_auth.sql
```diff
29c29
< ) THEN CREATE USER supabase_auth_admin BYPASSRLS NOINHERIT CREATEROLE LOGIN NOREPLICATION PASSWORD 'root';
---
> ) THEN CREATE USER supabase_auth_admin BYPASSRLS NOINHERIT CREATEROLE LOGIN NOREPLICATION PASSWORD '<openssl rand -hex 32>';
35c35
< GRANT CREATE ON DATABASE postgres TO supabase_auth_admin;
---
> GRANT CREATE ON DATABASE com_example_appflowy TO supabase_auth_admin;
```
compose.yml
```yaml
networks:
  internal:
  web:
    external: true

services:
  nginx:
    restart: on-failure
    image: nginx:alpine
    healthcheck:
      test: ['CMD-SHELL', 'ash -c "[[ $$(curl -s -o /dev/null -w \"%{http_code}\" http://localhost:80) == \"308\" ]]" || exit 1']
      start_period: 10s
      interval: 10s
      retries: 5
      timeout: 3s
    depends_on:
      gotrue:
        condition: service_started
      appflowy_cloud:
        condition: service_started
      admin_frontend:
        condition: service_started
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks: ["internal", "web"]
    labels:
      traefik.enable: true
      traefik.http.routers.io-allmende-appflowy-web.entrypoints: web
      traefik.http.routers.io-allmende-appflowy-web.rule: Host(`${FQDN}`)
      traefik.http.routers.io-allmende-appflowy-web.middlewares: http-to-https
      traefik.http.middlewares.http-to-https.redirectscheme.scheme: https
      traefik.http.middlewares.http-to-https.redirectscheme.permanent: true
      traefik.http.routers.io-allmende-appflowy-webs.entrypoints: webs
      traefik.http.routers.io-allmende-appflowy-webs.rule: Host(`${FQDN}`)
      traefik.http.routers.io-allmende-appflowy-webs.tls: true
      traefik.http.routers.io-allmende-appflowy-webs.tls.certresolver: le

  postgres:
    build:
      context: ./context/postgres
    restart: on-failure
    environment:
      - POSTGRES_USER
      - POSTGRES_DB
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
    volumes:
      - ./migrations/before:/docker-entrypoint-initdb.d
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
    networks: ["internal"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s

  redis:
    restart: on-failure
    image: redis:alpine
    networks: ["internal"]
    command: --save 60 1 --loglevel warning
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 10s
      interval: 10s
      retries: 5
      timeout: 3s
    volumes:
      - ${REDIS_DATA}:/data

  gotrue:
    depends_on:
      postgres:
        condition: service_healthy
    restart: on-failure
    image: appflowyinc/gotrue:${GOTRUE_VERSION:-latest}
    networks: ["internal"]
    environment:
      - GOTRUE_SITE_URL=appflowy-flutter://
      - URI_ALLOW_LIST=*
      - GOTRUE_JWT_SECRET
      - GOTRUE_JWT_EXP
      - GOTRUE_DB_DRIVER=postgres
      - API_EXTERNAL_URL
      - DATABASE_URL=${GOTRUE_DATABASE_URL}
      - PORT=9999
      - GOTRUE_SMTP_HOST
      - GOTRUE_SMTP_PORT
      - GOTRUE_SMTP_USER
      - GOTRUE_SMTP_PASS
      - GOTRUE_MAILER_URLPATHS_CONFIRMATION=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_INVITE=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_RECOVERY=/gotrue/verify
      - GOTRUE_MAILER_URLPATHS_EMAIL_CHANGE=/gotrue/verify
      - GOTRUE_SMTP_ADMIN_EMAIL
      - GOTRUE_SMTP_MAX_FREQUENCY
      - GOTRUE_RATE_LIMIT_EMAIL_SENT=${GOTRUE_RATE_LIMIT_EMAIL_SENT:-100}
      - GOTRUE_MAILER_AUTOCONFIRM=${GOTRUE_MAILER_AUTOCONFIRM:-false}
      # Google OAuth config
      - GOTRUE_EXTERNAL_GOOGLE_ENABLED
      - GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID
      - GOTRUE_EXTERNAL_GOOGLE_SECRET
      - GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI
      # GitHub OAuth config
      - GOTRUE_EXTERNAL_GITHUB_ENABLED
      - GOTRUE_EXTERNAL_GITHUB_CLIENT_ID
      - GOTRUE_EXTERNAL_GITHUB_SECRET
      - GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI
      # Discord OAuth config
      - GOTRUE_EXTERNAL_DISCORD_ENABLED
      - GOTRUE_EXTERNAL_DISCORD_CLIENT_ID
      - GOTRUE_EXTERNAL_DISCORD_SECRET
      - GOTRUE_EXTERNAL_DISCORD_REDIRECT_URI

  appflowy_cloud:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: on-failure
    image: appflowyinc/appflowy_cloud:${APPFLOWY_CLOUD_VERSION:-latest}
    networks: ["internal"]
    environment:
      - RUST_LOG=${RUST_LOG:-info}
      - APPFLOWY_ENVIRONMENT=production
      - APPFLOWY_DATABASE_URL
      - APPFLOWY_REDIS_URI
      - APPFLOWY_GOTRUE_JWT_SECRET=${GOTRUE_JWT_SECRET}
      - APPFLOWY_GOTRUE_JWT_EXP=${GOTRUE_JWT_EXP}
      - APPFLOWY_GOTRUE_BASE_URL
      - APPFLOWY_GOTRUE_EXT_URL=${API_EXTERNAL_URL}
      - APPFLOWY_GOTRUE_ADMIN_EMAIL=${GOTRUE_ADMIN_EMAIL}
      - APPFLOWY_GOTRUE_ADMIN_PASSWORD=${GOTRUE_ADMIN_PASSWORD}
      - APPFLOWY_S3_USE_MINIO
      - APPFLOWY_S3_MINIO_URL
      - APPFLOWY_S3_ACCESS_KEY
      - APPFLOWY_S3_SECRET_KEY
      - APPFLOWY_S3_BUCKET
      - APPFLOWY_S3_REGION
      - APPFLOWY_ACCESS_CONTROL
      - APPFLOWY_DATABASE_MAX_CONNECTIONS

  admin_frontend:
    depends_on:
      appflowy_cloud:
        condition: service_started
      gotrue:
        condition: service_started
      redis:
        condition: service_healthy
    restart: on-failure
    image: appflowyinc/admin_frontend:${APPFLOWY_ADMIN_FRONTEND_VERSION:-latest}
    networks: ["internal"]
    environment:
      - RUST_LOG=${RUST_LOG:-info}
      - ADMIN_FRONTEND_REDIS_URL
      - ADMIN_FRONTEND_GOTRUE_URL
      - ADMIN_FRONTEND_APPFLOWY_CLOUD_URL=${ADMIN_FRONTEND_APPFLOWY_CLOUD_URL:-http://appflowy_cloud:8000}

  appflowy_history:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: on-failure
    image: appflowyinc/appflowy_history:${APPFLOWY_HISTORY_VERSION:-latest}
    networks: ["internal"]
    environment:
      - RUST_LOG=${RUST_LOG:-info}
      - APPFLOWY_HISTORY_REDIS_URL
      - APPFLOWY_HISTORY_ENVIRONMENT=production
      - APPFLOWY_HISTORY_DATABASE_NAME
      - APPFLOWY_HISTORY_DATABASE_URL
```
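Before bringing the stack up, a small pre-flight check (hypothetical, not part of the upstream repository) can guard against starting while secret placeholders are still unresolved in the env file:

```shell
# Refuse to start while "<openssl rand -hex 32>" placeholders remain
# in the given env file; returns non-zero if any placeholder survives.
check_env() {
  ! grep -q '<openssl rand -hex 32>' "$1"
}

# usage:
# check_env .env && docker compose --env-file .env up -d
```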
This setup favours convention over configuration, except for secrets and custom namespaces.
It also adds a first round of healthchecks and persistent state for Redis.
Two obstacles present themselves when setting this up, wherever application runtime state is applied:
- `nginx.conf`, where only certain routes are needed and the conventional DNS name of the `appflowy_cloud` container reappears
- `20230312043000_supabase_auth.sql`, which might be replaced by a small `migrate` container that knows how to replace certain values in the SQL with variables. This could also be achieved with an `init.d` shell script that receives the required variables from the container environment.
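The second obstacle amounts to a substitution step at container start. A minimal sketch, assuming the hard-coded literals from the upstream migration (the function name and invocation are illustrative):

```shell
# Rewrite the hard-coded gotrue password and database name in the
# before-migration with values taken from the environment.
render_migration() {
  sed -e "s/PASSWORD 'root'/PASSWORD '${GOTRUE_POSTGRES_PASSWORD}'/" \
      -e "s/ON DATABASE postgres/ON DATABASE ${POSTGRES_DB}/" \
      "$1"
}

# usage:
# render_migration migrations/before/20230312043000_supabase_auth.sql | psql ...
```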
Apart from that, AppFlowy Cloud appears quite stable.