
Deployment for Production

The following instructions are needed to set up Field-TM for production on your own cloud server.

Set up Field-TM on a cloud server

Set up a server and domain name

  • Get a cloud server (tested with Ubuntu 22.04).
  • Set up a domain name, and point the DNS to your cloud server.
  • SSH into your server. Set up a user with sudo privileges called svcftm; any basic server-setup guide that covers user creation will walk you through this.

Run the install script

curl -L https://get.field.hotosm.org -o install.sh
bash install.sh

# Then follow the prompts

Additional Environment Variables

Variables are set in .env. Some can be updated manually, as required.

S3_ACCESS_KEY & S3_SECRET_KEY

In most circumstances these variables should be provided to authenticate with your S3 provider. However, some providers (such as AWS) allow an instance profile with the required permissions to be attached to your server. Connections made from an EC2 instance with an attached instance profile are then authenticated automatically; in that case, set S3_ACCESS_KEY and S3_SECRET_KEY to empty strings (="").
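For the instance-profile case described above, the relevant lines of your .env would look like this (a sketch; only the two variable names come from this document):

```shell
# .env -- leave the keys blank so the S3 client falls back to
# the credentials provided by the attached instance profile
S3_ACCESS_KEY=""
S3_SECRET_KEY=""
```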

ODK_ Variables

These can point to an externally hosted instance of ODK Central.

Alternatively, ODK Central can be started as part of the Field-TM docker compose stack, with the variables set accordingly.
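As an illustration, pointing at an external ODK Central typically means a URL plus admin credentials in .env. The exact variable names below are assumptions; check your generated .env for the authoritative list:

```shell
# .env -- hypothetical snippet for an externally hosted ODK Central
# (variable names are assumptions; values are placeholders)
ODK_CENTRAL_URL="https://odk.example.org"
ODK_CENTRAL_USER="admin@example.org"
ODK_CENTRAL_PASSWD="changeme"
```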

Other Domains

If you run Field-TM with ODK or QField included, then the domains will default to:

${FTM_DOMAIN} --> Field-TM (backend serves everything)
odk.${FTM_DOMAIN} --> ODK Central
qfield.${FTM_DOMAIN} --> Garage S3

The domain defaults can be overridden with:

FTM_ODK_DOMAIN
FTM_QFIELD_DOMAIN
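For example, to serve ODK Central and QField sync from custom subdomains, the overrides in .env would look like this (domains are illustrative; the variable names come from the list above):

```shell
# .env -- hypothetical domain overrides
FTM_ODK_DOMAIN="odk.mapping.example.org"
FTM_QFIELD_DOMAIN="qfield.mapping.example.org"
```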

Connecting to a remote database

  • A database may be located on a headless Linux server in the cloud.
  • To access the database via a GUI tool such as pgAdmin, use SSH port tunneling.
ssh username@server.domain -N -f -L {local_port}:localhost:{remote_port}

# Example
ssh root@field.hotosm.org -N -f -L 5430:localhost:5433

This will map port 5433 on the remote machine to port 5430 on your local machine.
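Before pointing pgAdmin at the tunnel, you can confirm the local end is accepting connections. A minimal sketch using bash's /dev/tcp feature against the local port from the example above:

```shell
# Check whether anything is listening on the local tunnel port (5430 here)
if (exec 3<>/dev/tcp/127.0.0.1/5430) 2>/dev/null; then
  echo "tunnel up"
else
  echo "tunnel down"
fi
```

Once it reports "tunnel up", connect pgAdmin (or psql) to localhost:5430.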

Backup Process

  • Backup Field-TM database:

    GIT_BRANCH=dev
    backup_filename="field-tm-db-${GIT_BRANCH}-$(date +'%Y-%m-%d').sql.gz"
    echo $backup_filename
    
    docker exec -i -e PGPASSWORD=PASSWORD_HERE \
    field-tm-${GIT_BRANCH}-fieldtm-db-1 \
    pg_dump --verbose --encoding utf8 --format c -U fieldtm fieldtm \
    | gzip -9 > "$backup_filename"
    

Note: if you are restoring into a pre-existing database, you should also include the --clean flag. As these are custom-format dumps, --clean must be passed to pg_restore rather than pg_dump (pg_dump only honours it for plain-text output).

This will drop the existing tables prior to import, and should prevent conflicts.

  • Backup ODK Central database:

    GIT_BRANCH=dev
    backup_filename="field-tm-odk-db-${GIT_BRANCH}-$(date +'%Y-%m-%d').sql.gz"
    echo $backup_filename
    
    docker exec -i -e PGPASSWORD=PASSWORD_HERE \
    field-tm-${GIT_BRANCH}-central-db-1 \
    pg_dump --verbose --encoding utf8 --format c -U odk odk | \
    gzip -9 > "$backup_filename"
    
  • Backup the S3 data:
GIT_BRANCH=dev
backup_filename="field-tm-s3-${GIT_BRANCH}-$(date +'%Y-%m-%d').tar.gz"
echo $backup_filename

docker run --rm -i --entrypoint=tar \
-u 0:0 \
-v "$PWD":/backups \
-v field-tm-s3-data-${GIT_BRANCH}:/mnt/data \
ghcr.io/hotosm/field-tm:${GIT_BRANCH} \
-cvzf "/backups/$backup_filename" /mnt/data/
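Whichever backup you take, it is worth sanity-checking the archive and pruning old ones. A minimal sketch (the filename follows the convention above; the demo data is created only so the commands have something to act on):

```shell
# Create a throwaway archive mimicking the S3 backup naming convention
backup_filename="field-tm-s3-dev-$(date +'%Y-%m-%d').tar.gz"
mkdir -p demo-data && echo "sample" > demo-data/file.txt
tar -czf "$backup_filename" demo-data/

# Listing the archive doubles as an integrity check: tar fails on a corrupt file
tar -tzf "$backup_filename" > /dev/null && echo "archive OK"

# Prune backups older than 30 days (drop -delete first to preview the matches)
find . -maxdepth 1 -name 'field-tm-*.tar.gz' -mtime +30 -delete
```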

Manual Database Restores

The restore should be as easy as:

# On a different machine (else change the container name)
GIT_BRANCH=dev
backup_filename=field-tm-db-${GIT_BRANCH}-XXXX-XX-XX.sql.gz

cat "$backup_filename" | gunzip | \
docker exec -i -e PGPASSWORD=NEW_PASSWORD_HERE \
field-tm-${GIT_BRANCH}-fieldtm-db-1 \
pg_restore --verbose -U fieldtm -d fieldtm

# For ODK
backup_filename=field-tm-odk-db-${GIT_BRANCH}-XXXX-XX-XX.sql.gz
cat "$backup_filename" | gunzip | \
docker exec -i -e PGPASSWORD=NEW_PASSWORD_HERE \
field-tm-${GIT_BRANCH}-central-db-1 \
pg_restore --verbose -U odk -d odk

# For S3 (with the backup file in current dir)
backup_filename=field-tm-s3-${GIT_BRANCH}-XXXX-XX-XX.tar.gz
docker run --rm -i --entrypoint=tar \
-u 0:0 --workdir=/ \
-v "$PWD/$backup_filename":/"$backup_filename" \
-v field-tm-s3-data-${GIT_BRANCH}:/mnt/data \
ghcr.io/hotosm/field-tm:${GIT_BRANCH} \
-xvzf "/$backup_filename"

However, in some cases you may have existing data in the database (e.g. if you started the docker compose stack and the API ran the migrations!).

In this case you can restore into a fresh database before starting the full Field-TM stack:

export GIT_BRANCH=dev

# Shut down the running database & delete the data
docker compose -f deploy/compose.$GIT_BRANCH.yaml down -v

# First, ensure you have a suitable .env with database vars
# Start the databases only
docker compose -f deploy/compose.$GIT_BRANCH.yaml up -d fieldtm-db central-db

# (Optional) restore odk central from the backup
backup_filename=field-tm-odk-db-${GIT_BRANCH}-XXXX-XX-XX.sql.gz

cat "$backup_filename" | gunzip | \
docker exec -i \
field-tm-${GIT_BRANCH}-central-db-1 \
pg_restore --verbose -U odk -d odk

# Restore field-tm from the backup
backup_filename=field-tm-db-${GIT_BRANCH}-XXXX-XX-XX.sql.gz

cat "$backup_filename" | gunzip | \
docker exec -i \
field-tm-${GIT_BRANCH}-fieldtm-db-1 \
pg_restore --verbose -U fieldtm -d fieldtm

# Run the entire docker compose stack
docker compose -f deploy/compose.$GIT_BRANCH.yaml up -d

Help! Field-TM Prod Is Broken 😨

Debugging

  • Log into the production server (field.hotosm.org) and view the container logs:

    docker logs field-tm-main-backend-1
    docker logs field-tm-main-backend-2
    docker logs field-tm-main-backend-3
    docker logs field-tm-main-backend-4
    

    Note there are four replica containers running, and any one of them could have handled the request. You should check them all.

    They often provide useful traceback information, including timestamps.

  • Reproduce the error on your local machine!
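The four docker logs calls above can be wrapped in a single loop. In this sketch the docker logs line is commented out so the snippet runs anywhere; uncomment it on the server (container names follow the pattern above, but verify yours with docker ps):

```shell
# Iterate over all four backend replicas, since any of them
# may have handled the failing request
check_backends() {
  for i in 1 2 3 4; do
    echo "=== field-tm-main-backend-$i ==="
    # docker logs --tail 100 "field-tm-main-backend-$i" 2>&1 | grep -i error
  done
}
check_backends
```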

Making a hotfix

  • Sometimes fixes just can't wait to go through the dev --> staging --> production cycle. We need the fix now!
  • In this case, a hotfix can be made directly to the main branch:
    • Create a branch hotfix/something-i-fixed, add your code and test it works locally.
    • Push your branch, then create a PR against the main branch in GitHub.
    • Merge in the PR and wait for the deployment.
    • Later the code can be pulled back into develop / staging.

The prod server is broken, but dev / stage work?

  • We have been here a few times before...
  • Always keep in mind that if you recently pushed to prod, then the code is likely the same across instances, so the cause is probably an external factor:
    1. Database migrations - some mismatch between database fields or data inconsistency.
    2. The server that it's hosted on - this may be different between environments.
    3. The deployment configuration - be it docker or Kubernetes, this may vary slightly between environments.
    4. The logging / telemetry config - I have been caught out here before! OpenTelemetry may only be configured on production and could potentially have bugs.