Docker Deployment🔗
This guide covers deploying the Apheris Hub using Docker with an automated deployment script. For Kubernetes deployments using Helm, see the Kubernetes Deployment Guide.
Prerequisites🔗
Before you begin, ensure you have the following:
Hardware Requirements🔗
The ApherisFold application, particularly OpenFold3, has the following resource requirements:
- Modern GPU with at least 48GB of GPU memory and CUDA 11+ support (e.g. NVIDIA A100, H100, L40S, or RTX 6000). On AWS, the G6e instance family is an example of a cost-effective option that supports OpenFold3.
- At least 300GB of disk space
- Docker environment with Nvidia GPU drivers and the Nvidia Container Toolkit installed
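To confirm that Docker can actually reach the GPU before deploying, a quick check like the following can help. This is only a sketch: the CUDA image tag is an example, and any CUDA-enabled image that ships nvidia-smi works equally well.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
If the command prints the GPU table, the NVIDIA drivers and the NVIDIA Container Toolkit are set up correctly.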
Software Requirements🔗
- Docker (Docker Desktop or Docker Engine) with the daemon running
- bash (requires GNU Bash ≥ 4.0)
- base64 (encoding credentials to pull Docker images)
- PostgreSQL (optional) - The deployment script can automatically deploy PostgreSQL as a Docker container. For external PostgreSQL (recommended for production), see Database Configuration.
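A quick way to confirm the software prerequisites on the host is to check that the Docker daemon responds and that your Bash is version 4.0 or newer:
docker info
bash --version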
Apheris Hub API Key🔗
You'll need an API Key to pull the ApherisFold Docker images. Request access at https://www.apheris.com/apherisfold or contact support@apheris.com.
Download Deployment Files🔗
Download the deployment files and place them in the same directory:
If you'd like to review the deployment script before downloading, you can view its contents here:
deploy_apherisfold
#!/usr/bin/env bash
readonly DEBUG=${DEBUG:-false}
[[ "$DEBUG" == "true" || "$DEBUG" == "1" ]] && set -o verbose
set -o errexit
set -o nounset
set -o pipefail
[[ ${BASH_SOURCE[0]} != "${0}" ]] && (
echo "do not source ${BASH_SOURCE[0]}" >&2
exit 36
)
# ---------------------------
# Exit codes
# ---------------------------
readonly ERR_CONFIG_MISSING=10
readonly ERR_DOCKER_NOT_RUNNING=11
readonly ERR_INVALID_PULLSECRET=13
readonly ERR_INVALID_CONFIG=14
readonly ERR_WAIT_FOR_TIMEOUT=15
readonly EXIT_SUCCESS=0
# ---------------------------
# Defaults for config values
# ---------------------------
# Model
readonly DEFAULT_MODELS_REPOSITORY="quay.io/apheris/hub-apps"
readonly DEFAULT_MOCK_ENABLED="true"
readonly DEFAULT_MOCK_TAG="0.28.0-mock-by-file"
readonly DEFAULT_MOCK_PORT=""
readonly DEFAULT_OF3_ENABLED="false"
readonly DEFAULT_OF3_TAG="0.28.0-openfold3-by-file"
readonly DEFAULT_OF3_PORT=""
readonly DEFAULT_B2_ENABLED="false"
readonly DEFAULT_B2_TAG="0.28.0-boltz2-by-file"
readonly DEFAULT_B2_PORT=""
readonly DEFAULT_GPU="all"
readonly DEFAULT_SHM_SIZE="8G"
# Hub
readonly DEFAULT_HUB_ENABLED="true"
readonly DEFAULT_HUB_REPOSITORY="quay.io/apheris/hub"
readonly DEFAULT_HUB_TAG="1.1.0"
readonly DEFAULT_HUB_PORT=""
# Auth
readonly DEFAULT_AUTH_ENABLED="false"
readonly DEFAULT_AUTH_DOMAIN="https://apheris-ai-dev.eu.auth0.com/"
readonly DEFAULT_AUTH_AUDIENCE="https://hub.fold.apheris.com/api"
readonly DEFAULT_AUTH_CLIENT_ID="pndJvZpK2u1uAbZ6DHkNFVnhak4j9Xwi"
readonly DEFAULT_AUTH_BROWSER_URL=""
# Storage defaults (already effectively used, just making them explicit)
readonly DEFAULT_INPUT_DIR="\$HOME/apheris-hub/input"
readonly DEFAULT_OUTPUT_DIR="\$HOME/apheris-hub/output"
# DB defaults
readonly DEFAULT_DB_DEPLOY="true" # if you want local postgres by default
readonly DEFAULT_DB_PORT="" # empty = random port
readonly DEFAULT_DB_DSN="" # must be provided by config if external
# Registry and MSA secrets
readonly DEFAULT_PULL_SECRET="" # must be provided by config
readonly DEFAULT_MSA_SECRET="" # optional; defaults to PULL_SECRET if not set
# ---------------------------
# Constants
# ---------------------------
readonly DEFAULT_CONFIG_FILE="./config.yaml"
readonly LABEL="apheris.hub=true"
readonly DEFAULT_NETWORK="apheris-hub"
readonly HUB_CONTAINER="apheris-hub"
readonly POSTGRES_CONTAINER="apheris-hub-postgres"
readonly POSTGRES_VOLUME="apheris-hub-postgres-data"
# Address to bind random ports to when config does not specify one.
# For local-only access, keep 127.0.0.1.
# If you need the service accessible from other hosts, change to 0.0.0.0.
readonly RANDOM_BIND_ADDR="127.0.0.1"
# Docker command (can be overridden by environment: DOCKER_CMD="sudo docker" ...)
readonly DOCKER_CMD="${DOCKER_CMD:-docker}"
# ---------------------------
# Minimal YAML parsing
# ---------------------------
declare -A YAML_VARS=()
parse_yaml() {
local file="$1"
local -a key_stack=()
local line raw trimmed indent value rest key path
local level i
while IFS= read -r line || [[ -n "$line" ]]; do
raw="$line"
# Strip comments (simple, but works for our config)
raw="${raw%%#*}"
# Skip empty / whitespace-only lines
[[ "$raw" =~ ^[[:space:]]*$ ]] && continue
# Compute indentation (assumes 2 spaces per level)
trimmed="${raw#"${raw%%[![:space:]]*}"}"
indent="${raw%"$trimmed"}"
level=$((${#indent} / 2))
line="$trimmed"
# Split key and value on first colon
key="${line%%:*}"
rest="${line#*:}"
# Normalize key
key="${key%%[[:space:]]*}"
key="${key%$'\r'}"
# Update key stack at this level
key_stack[level]="$key"
i=$((level + 1))
while ((i < ${#key_stack[@]})); do
unset "key_stack[i]"
((i++))
done
# Trim value
rest="${rest#$' '}"
rest="${rest%$'\r'}"
# Trim trailing spaces
while [[ "$rest" =~ [[:space:]]$ ]]; do
rest="${rest%[[:space:]]}"
done
value="$rest"
# No value => just a parent mapping (e.g. "hub:"), nothing to store
[[ -z "$value" ]] && continue
# Strip surrounding quotes if present
if [[ "$value" == \"*\" && "$value" == *\" ]]; then
value="${value:1:${#value}-2}"
elif [[ "$value" == \'*\' && "$value" == *\' ]]; then
value="${value:1:${#value}-2}"
fi
# Build full dotted path: e.g. hub.auth.enabled
path="${key_stack[0]}"
for ((i = 1; i <= level; i++)); do
[[ -n "${key_stack[$i]-}" ]] || continue
path+=".${key_stack[$i]}"
done
YAML_VARS["$path"]="$value"
done <"$file"
}
yaml_get() {
local key="$1"
local default="${2-}"
if [[ ${YAML_VARS[$key]+_} ]]; then
echo "${YAML_VARS[$key]}"
else
echo "$default"
fi
}
yaml_get_or_default_if_empty() {
local key="$1"
local default="${2-}"
local value
value="$(yaml_get "$key" "$default")"
if [[ -z "$value" ]]; then
echo "$default"
else
echo "$value"
fi
}
# ---------------------------
# Small helpers
# ---------------------------
die() {
local msg="$1"
local code="${2:-1}"
echo "Error (code ${code}): ${msg}" >&2
exit "$code"
}
silence() {
# Usage: silence <cmd> [args...]
# Runs the command, captures all output, and:
# - prints it only on error (or if DEBUG=true/1)
# - returns the original exit code
local cmd="$1"
shift
local output exit_code
set +o errexit
output=$("$cmd" "$@" 2>&1)
exit_code=$?
set -o errexit
if [[ $exit_code -ne 0 || "$DEBUG" == "true" || "$DEBUG" == "1" ]]; then
echo "[command failed: $cmd $*] (exit code: $exit_code)" >&2
echo "$output" >&2
fi
return $exit_code
}
wait_for() {
local cmd=$1
local retry_count=0
local max_retries=40
while [[ $retry_count -lt $max_retries ]] && ! (eval "$cmd" &>/dev/null); do
sleep 0.5
retry_count=$((retry_count + 1))
done
if [[ $retry_count -ge $max_retries ]]; then
die "Timed out waiting for: $cmd" "$ERR_WAIT_FOR_TIMEOUT"
fi
}
needs_cmd() {
command -v "$1" >/dev/null 2>&1 || die "Required command not found: $1" "$2"
}
bool() {
[[ "${1,,}" == "true" ]] && echo "true" || echo "false"
}
expand_path() {
local p="$1"
case "$p" in
~) p="$HOME" ;;
~/*) p="$HOME/${p:2}" ;;
esac
p="${p//\$\{HOME\}/$HOME}"
p="${p//\$HOME/$HOME}"
echo "$p"
}
# ---------------------------
# Args / usage
# ---------------------------
CONFIG_FILE="$DEFAULT_CONFIG_FILE"
CMD="run"
usage() {
cat <<EOF
Usage: ${0##*/} [command] [options]
Commands:
run Start hub + models as per config.
cleanup Stop/remove all labeled containers (hub + models) and network (if not used by other containers).
cleanup-postgres Remove the local Postgres container (if managed by this script). WARNING: This deletes persisted DB state.
cleanup-storage Remove storage folders as per config.
diagnose Print current configuration and status of relevant docker resources.
Options:
-c, --config PATH Config file (default: $DEFAULT_CONFIG_FILE)
-h, --help Show this help
EOF
}
parse_args() {
if [[ $# -gt 0 && "$1" != "-"* ]]; then
CMD="$1"
shift
fi
while [[ $# -gt 0 ]]; do
case "$1" in
-c | --config)
[[ -n "${2-}" && "${2:0:1}" != "-" ]] || die "Option $1 requires a path" "$ERR_INVALID_CONFIG"
CONFIG_FILE="$2"
shift
;;
-c=* | --config=*)
CONFIG_FILE="${1#*=}"
;;
-h | --help)
usage
exit $EXIT_SUCCESS
;;
*)
die "Unknown option: $1. Please use --help for usage information." "$ERR_INVALID_CONFIG"
;;
esac
shift
done
}
validate_args() {
[[ -f "$CONFIG_FILE" ]] || die "Config file not found: $CONFIG_FILE" "$ERR_CONFIG_MISSING"
needs_cmd "$DOCKER_CMD" "$ERR_DOCKER_NOT_RUNNING"
# Check Docker daemon reachability and permissions
local output exit_code
set +o errexit
output=$("$DOCKER_CMD" info 2>&1)
exit_code=$?
set -o errexit
if [[ $exit_code -ne 0 ]]; then
echo "Error: Failed to talk to Docker daemon using: $DOCKER_CMD" >&2
echo "---- docker info output ----" >&2
echo "$output" >&2
echo "----------------------------" >&2
if grep -qi "permission denied" <<<"$output"; then
cat >&2 <<EOF
It looks like Docker is running, but this user/command does not have permission
to access the Docker daemon socket.
Possible fixes:
- Ensure the user running this script is in the 'docker' group (and re-login), OR
- Run this script with a different Docker command, for example:
DOCKER_CMD="sudo docker" $0 "$@"
Current user: $USER
Current DOCKER_CMD: $DOCKER_CMD
EOF
else
cat >&2 <<EOF
Docker daemon is not running or not accessible.
Please:
- Make sure the Docker service is running (e.g. 'systemctl status docker'), AND
- Ensure '$DOCKER_CMD info' works from your shell before running this script.
EOF
fi
exit "$ERR_DOCKER_NOT_RUNNING"
fi
}
# ---------------------------
# Config loading
# ---------------------------
load_config() {
parse_yaml "$CONFIG_FILE"
AUTH_ENABLED="$(bool "$(yaml_get 'hub.auth.enabled' "$DEFAULT_AUTH_ENABLED")")"
AUTH_DOMAIN="$(yaml_get_or_default_if_empty 'hub.auth.domain' "$DEFAULT_AUTH_DOMAIN")"
AUTH_AUDIENCE="$(yaml_get_or_default_if_empty 'hub.auth.audience' "$DEFAULT_AUTH_AUDIENCE")"
AUTH_CLIENT_ID="$(yaml_get_or_default_if_empty 'hub.auth.clientId' "$DEFAULT_AUTH_CLIENT_ID")"
AUTH_BROWSER_URL="$(yaml_get_or_default_if_empty 'hub.auth.browserUrl' "$DEFAULT_AUTH_BROWSER_URL")"
INPUT_DIR="$(expand_path "$(yaml_get 'storage.input' "$DEFAULT_INPUT_DIR")")"
OUTPUT_DIR="$(expand_path "$(yaml_get 'storage.output' "$DEFAULT_OUTPUT_DIR")")"
NETWORK_NAME="$(yaml_get 'network' "$DEFAULT_NETWORK")"
HUB_ENABLED="$(bool "$(yaml_get 'hub.enabled' "$DEFAULT_HUB_ENABLED")")"
HUB_REPOSITORY="$(yaml_get 'hub.repository' "$DEFAULT_HUB_REPOSITORY")"
HUB_TAG="$(yaml_get 'hub.tag' "$DEFAULT_HUB_TAG")"
HUB_PORT="$(yaml_get 'hub.port' "$DEFAULT_HUB_PORT")"
DB_DEPLOY="$(bool "$(yaml_get 'db.deploy' "$DEFAULT_DB_DEPLOY")")"
DB_DSN="$(yaml_get 'db.dsn' "$DEFAULT_DB_DSN")"
DB_PORT="$(yaml_get 'db.port' "$DEFAULT_DB_PORT")"
PULL_SECRET="$(yaml_get 'pullSecret' "$DEFAULT_PULL_SECRET")"
MSA_SECRET="$(yaml_get 'msaSecret' "$DEFAULT_MSA_SECRET")"
[[ -z "$MSA_SECRET" || "$MSA_SECRET" == "..." ]] && MSA_SECRET="$PULL_SECRET"
MOCK_ENABLED="$(bool "$(yaml_get 'models.mock.enabled' "$DEFAULT_MOCK_ENABLED")")"
MOCK_REPOSITORY="$(yaml_get 'models.mock.repository' "$DEFAULT_MODELS_REPOSITORY")"
MOCK_TAG="$(yaml_get 'models.mock.tag' "$DEFAULT_MOCK_TAG")"
MOCK_PORT="$(yaml_get 'models.mock.port' "$DEFAULT_MOCK_PORT")"
OF3_ENABLED="$(bool "$(yaml_get 'models.openfold3.enabled' "$DEFAULT_OF3_ENABLED")")"
OF3_REPOSITORY="$(yaml_get 'models.openfold3.repository' "$DEFAULT_MODELS_REPOSITORY")"
OF3_TAG="$(yaml_get 'models.openfold3.tag' "$DEFAULT_OF3_TAG")"
OF3_GPUS="$(yaml_get 'models.openfold3.gpus' "$DEFAULT_GPU")"
OF3_SHM="$(yaml_get 'models.openfold3.shm_size' "$DEFAULT_SHM_SIZE")"
OF3_PORT="$(yaml_get 'models.openfold3.port' "$DEFAULT_OF3_PORT")"
B2_ENABLED="$(bool "$(yaml_get 'models.boltz2.enabled' "$DEFAULT_B2_ENABLED")")"
B2_REPOSITORY="$(yaml_get 'models.boltz2.repository' "$DEFAULT_MODELS_REPOSITORY")"
B2_TAG="$(yaml_get 'models.boltz2.tag' "$DEFAULT_B2_TAG")"
B2_GPUS="$(yaml_get 'models.boltz2.gpus' "$DEFAULT_GPU")"
B2_SHM="$(yaml_get 'models.boltz2.shm_size' "$DEFAULT_SHM_SIZE")"
B2_PORT="$(yaml_get 'models.boltz2.port' "$DEFAULT_B2_PORT")"
validate_config
}
validate_model() {
local key="$1"
local enabled="$2"
local repository="$3"
local tag="$4"
local port="${5-}"
[[ "$enabled" == "true" ]] || return 0
[[ -n "$repository" ]] || die "models.${key}.repository must be set" "$ERR_INVALID_CONFIG"
[[ -n "$tag" ]] || die "models.${key}.tag must be set" "$ERR_INVALID_CONFIG"
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for '${key}' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
}
validate_hub() {
local enabled="$1"
local repository="$2"
local tag="$3"
local port="${4-}"
[[ "$enabled" == "true" ]] || return 0
[[ -n "$repository" ]] || die "hub.repository must be set" "$ERR_INVALID_CONFIG"
[[ -n "$tag" ]] || die "hub.tag must be set" "$ERR_INVALID_CONFIG"
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for 'hub' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
}
validate_db() {
local deploy="$1"
local dsn="$2"
local port="${3-}"
if [[ "$deploy" == "false" && -z "$dsn" ]]; then
die "db.deploy=false requires db.dsn" "$ERR_INVALID_CONFIG"
fi
if [[ "$deploy" == "true" ]]; then
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for 'db' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
fi
}
validate_config() {
validate_hub "$HUB_ENABLED" "$HUB_REPOSITORY" "$HUB_TAG" "${HUB_PORT-}"
validate_db "$DB_DEPLOY" "$DB_DSN" "${DB_PORT-}"
validate_model "mock" "$MOCK_ENABLED" "$MOCK_REPOSITORY" "$MOCK_TAG" "${MOCK_PORT-}"
validate_model "openfold3" "$OF3_ENABLED" "$OF3_REPOSITORY" "$OF3_TAG" "${OF3_PORT-}"
validate_model "boltz2" "$B2_ENABLED" "$B2_REPOSITORY" "$B2_TAG" "${B2_PORT-}"
}
# ---------------------------
# Core actions
# ---------------------------
# Helper: run docker command, capture output, and handle errors
# Returns successfully if docker command succeeds, otherwise processes error and dies
run_docker_with_error_handling() {
local name="$1"
local component="$2"
local image="$3"
local port_display="$4" # e.g., "port 7777" or "with a random host port"
shift 4
# Remaining args are the full docker run command arguments
local output
# Use || true to prevent errexit from stopping execution on failure
output=$("$DOCKER_CMD" run "$@" "$image" 2>&1) || {
local exit_code=$?
echo "[command failed: docker run ... $image] (exit code: $exit_code)" >&2
echo "$output" >&2
silence "$DOCKER_CMD" rm -f "$name" >/dev/null 2>&1 || true
# Check for specific error types
if echo "$output" | grep -qiE "(unauthorized|authentication required|access.*not authorized|pull access denied)"; then
die "Failed to pull image '${image}': unauthorized. Please configure a valid 'pullSecret' in your config file." "$ERR_INVALID_PULLSECRET"
elif echo "$output" | grep -qiE "(address already in use|bind.*failed)"; then
die "Failed to start ${component} ${port_display} is already in use" "$ERR_INVALID_CONFIG"
else
die "Failed to start ${component} ${port_display}" "$ERR_INVALID_CONFIG"
fi
}
}
# Helper: start container with 2 modes:
# - If host_port is non-empty (configured in config), try to bind exactly that port once.
# On failure, exit with clear error (no fallback).
# - If host_port is empty, bind RANDOM_BIND_ADDR:0:inner_port (random free port),
# then discover and return the assigned host port.
start_container_with_port_choice() {
local name="$1"
local component="$2"
local host_port="$3"
local inner_port="$4"
local image="$5"
shift 5
local extra_opts=("$@")
# Ensure no stale container with this name
silence "$DOCKER_CMD" rm -f "$name" >/dev/null 2>&1 || true
if [[ -n "$host_port" ]]; then
run_docker_with_error_handling "$name" "$component" "$image" "on configured port: port ${host_port}" \
-d --name "$name" "${extra_opts[@]}" -p "${host_port}:${inner_port}"
else
run_docker_with_error_handling "$name" "$component" "$image" "with a random host port:" \
-d --name "$name" "${extra_opts[@]}" -p "${RANDOM_BIND_ADDR}:0:${inner_port}"
local mapped
mapped="$("$DOCKER_CMD" port "$name" "$inner_port" 2>/dev/null | tail -n1 || true)"
if [[ -z "$mapped" ]]; then
die "Could not determine mapped host port for ${component}" "$ERR_INVALID_CONFIG"
fi
host_port="${mapped##*:}"
fi
echo "$host_port"
}
docker_login_if_needed() {
if [[ -z "$PULL_SECRET" || "$PULL_SECRET" == "..." ]]; then
echo "pullSecret not set — skipping docker login." >&2
return 0
fi
local decoded
if ! decoded="$(echo "$PULL_SECRET" | base64 --decode 2>/dev/null)"; then
die "pullSecret is not valid base64" "$ERR_INVALID_PULLSECRET"
fi
local user pass
user="${decoded%%:*}"
pass="${decoded#*:}"
if [[ -z "$user" || -z "$pass" || "$user" == "$pass" ]]; then
die "pullSecret must decode to 'user:token'" "$ERR_INVALID_PULLSECRET"
fi
echo "Logging in to quay.io as robot user '${user}'..."
echo -n "$pass" | silence "$DOCKER_CMD" login quay.io -u "$user" --password-stdin \
|| die "docker login failed" "$ERR_INVALID_PULLSECRET"
}
ensure_folders() {
echo "Ensuring storage folders..."
mkdir -p "$INPUT_DIR" "$OUTPUT_DIR"
chmod 777 "$INPUT_DIR" "$OUTPUT_DIR" || true
}
ensure_network() {
echo "Ensuring docker network '$NETWORK_NAME'..."
if silence "$DOCKER_CMD" network inspect "$NETWORK_NAME" >/dev/null; then
return 0
fi
echo "Network '$NETWORK_NAME' does not exist, trying to create it..." >&2
if ! silence "$DOCKER_CMD" network create --driver bridge "$NETWORK_NAME" >/dev/null; then
die "Failed to ensure docker network '$NETWORK_NAME'. Check Docker daemon and permissions." "$ERR_DOCKER_NOT_RUNNING"
fi
}
stop_labeled_containers_except_postgres() {
echo "Stopping labeled ApherisFold containers (models + hub)..."
local ids
ids="$("$DOCKER_CMD" ps -a --filter "label=$LABEL" --quiet || true)"
if [[ -z "$ids" ]]; then
echo "No labeled ApherisFold containers found."
return 0
fi
while IFS= read -r id; do
[[ -n "$id" ]] || continue
local name
name="$("$DOCKER_CMD" inspect --format '{{.Name}}' "$id" 2>/dev/null || echo "")"
name="${name#/}"
if [[ "$name" == "$POSTGRES_CONTAINER" ]]; then
continue
fi
echo "Stopping container '$name' (ID: $id)..."
silence "$DOCKER_CMD" stop "$id" >/dev/null || true
echo "Removing container '$name' (ID: $id)..."
silence "$DOCKER_CMD" rm "$id" >/dev/null || true
done <<<"$ids"
echo "Removed all labeled ApherisFold containers (except Postgres)."
}
cleanup_postgres() {
# Explicit destructive action only.
if [[ "$DB_DEPLOY" == "true" ]]; then
echo "Removing local Postgres container '${POSTGRES_CONTAINER}' (this deletes persisted DB state)..."
silence "$DOCKER_CMD" rm -f "$POSTGRES_CONTAINER" >/dev/null 2>&1 || true
echo "Removing local Postgres data volume '$POSTGRES_VOLUME'..."
silence "$DOCKER_CMD" volume rm "$POSTGRES_VOLUME" >/dev/null 2>&1 || true
echo "Local Postgres and its data volume removed."
else
echo "No local Postgres to remove (db.deploy=false means using external database)."
fi
}
cleanup_storage() {
# Explicit destructive action only: removes the storage folders defined in the config,
# deleting all persisted inputs and outputs.
echo "Removing storage folders '${INPUT_DIR}' and '${OUTPUT_DIR}'..."
rm -rf "$INPUT_DIR" "$OUTPUT_DIR"
echo "Storage folders removed."
}
start_postgres_if_needed() {
if [[ "$DB_DEPLOY" != "true" ]]; then
echo "Using external Postgres (dsn from config)."
return 0
fi
local image="postgres:15-alpine"
echo "Ensuring Postgres data volume '$POSTGRES_VOLUME'..."
silence "$DOCKER_CMD" volume create "$POSTGRES_VOLUME" >/dev/null 2>&1 || true
local opts=(
--restart unless-stopped
--network "$NETWORK_NAME"
--volume "${POSTGRES_VOLUME}:/var/lib/postgresql/data"
-e POSTGRES_USER=postgres
-e POSTGRES_PASSWORD=postgres
-e POSTGRES_DB=hub
)
echo "Starting local Postgres container '${POSTGRES_CONTAINER}'..."
DB_PORT="$(start_container_with_port_choice "$POSTGRES_CONTAINER" "Postgres" "${DB_PORT-}" 5432 "$image" "${opts[@]}")"
echo "Waiting for Postgres to be ready..."
wait_for "$DOCKER_CMD exec $POSTGRES_CONTAINER pg_isready -U postgres"
DB_DSN="postgres://postgres:postgres@${POSTGRES_CONTAINER}:5432/hub?sslmode=disable"
echo "Local Postgres is ready on port ${DB_PORT}"
}
run_model_mock() {
if [[ "$MOCK_ENABLED" != "true" ]]; then
return 0
fi
local image="${MOCK_REPOSITORY}:${MOCK_TAG}"
local opts=(
--restart unless-stopped
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e BASE_INPUT_DIRECTORY=/input
-e BASE_OUTPUT_DIRECTORY=/output
)
echo "Starting mock model container..."
MOCK_PORT="$(start_container_with_port_choice "mock" "mock model" "${MOCK_PORT-}" 8000 "$image" "${opts[@]}")"
echo "Mock model container port: $MOCK_PORT"
}
run_model_openfold3() {
if [[ "$OF3_ENABLED" != "true" ]]; then
return 0
fi
local image="${OF3_REPOSITORY}:${OF3_TAG}"
local opts=(
--restart unless-stopped
--gpus "$OF3_GPUS"
--shm-size="$OF3_SHM"
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e BASE_INPUT_DIRECTORY=/input
-e BASE_OUTPUT_DIRECTORY=/output
)
echo "Starting openfold3 model container..."
OF3_PORT="$(start_container_with_port_choice "openfold3" "openfold3 model" "${OF3_PORT-}" 8000 "$image" "${opts[@]}")"
echo "Openfold3 model container port: $OF3_PORT"
}
run_model_boltz2() {
if [[ "$B2_ENABLED" != "true" ]]; then
return 0
fi
local image="${B2_REPOSITORY}:${B2_TAG}"
local opts=(
--restart unless-stopped
--gpus "$B2_GPUS"
--shm-size="$B2_SHM"
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e BASE_INPUT_DIRECTORY=/input
-e BASE_OUTPUT_DIRECTORY=/output
)
echo "Starting boltz2 model container..."
B2_PORT="$(start_container_with_port_choice "boltz2" "boltz2 model" "${B2_PORT-}" 8000 "$image" "${opts[@]}")"
echo "Boltz2 model container port: $B2_PORT"
}
run_hub() {
if [[ "$HUB_ENABLED" != "true" ]]; then
return 0
fi
local image="${HUB_REPOSITORY}:${HUB_TAG}"
local opts=(
--restart unless-stopped
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e APH_HUB_INPUT_ROOT_DIRECTORY=/input
-e APH_HUB_OUTPUT_ROOT_DIRECTORY=/output
-e APH_HUB_DATABASE_DSN="$DB_DSN"
-e APH_HUB_AUTH_ENABLED="$AUTH_ENABLED"
-e APH_HUB_AUTH_DOMAIN="$AUTH_DOMAIN"
-e APH_HUB_AUTH_CLIENT_ID="$AUTH_CLIENT_ID"
-e APH_HUB_AUTH_AUDIENCE="$AUTH_AUDIENCE"
-e APH_HUB_MSA_API_KEY="$MSA_SECRET"
)
if [[ -n "$AUTH_BROWSER_URL" ]]; then
opts+=(-e APH_HUB_AUTH_BROWSER_URL="$AUTH_BROWSER_URL")
fi
echo "Starting ApherisFold Hub container..."
HUB_PORT="$(start_container_with_port_choice "$HUB_CONTAINER" "hub" "${HUB_PORT-}" 8080 "$image" "${opts[@]}")"
echo "ApherisFold Hub container port: $HUB_PORT"
}
run_all() {
docker_login_if_needed
ensure_network
ensure_folders
stop_labeled_containers_except_postgres
start_postgres_if_needed
run_model_mock
run_model_openfold3
run_model_boltz2
run_hub
echo ""
echo "Running containers:"
"$DOCKER_CMD" ps --filter "label=$LABEL"
if [[ "$HUB_ENABLED" == "true" ]]; then
if [[ -n "$HUB_PORT" ]]; then
echo ""
echo "ApherisFold Hub UI is running and accessible at:"
echo " http://localhost:${HUB_PORT}"
echo ""
else
echo "Hub is enabled but no host port was determined."
echo "Check 'docker ps' output above."
fi
fi
}
network_has_containers() {
local net="$1"
# If network doesn't exist, treat as "no containers"
if ! silence "$DOCKER_CMD" network inspect "$net" >/dev/null 2>&1; then
return 1
fi
local count
count="$("$DOCKER_CMD" network inspect "$net" \
--format '{{ len .Containers }}' 2>/dev/null || echo "0")"
[[ "$count" -gt 0 ]]
}
cleanup_all() {
stop_labeled_containers_except_postgres
if network_has_containers "$NETWORK_NAME"; then
echo "Keeping network '$NETWORK_NAME' because containers are still attached."
echo "Attached containers:"
"$DOCKER_CMD" network inspect "$NETWORK_NAME" --format '{{ range .Containers }}- {{ .Name }}{{"\n"}}{{ end }}' \
2>/dev/null || true
else
echo "Removing docker network '$NETWORK_NAME' (no containers attached)..."
silence "$DOCKER_CMD" network rm "$NETWORK_NAME" >/dev/null 2>&1 || true
fi
echo "Cleanup complete."
}
diagnose() {
echo "== diagnose =="
echo "config file: ${CONFIG_FILE}"
echo ""
echo "-- hub --"
echo "hub.enabled: ${HUB_ENABLED}"
echo "hub.repository: ${HUB_REPOSITORY}"
echo "hub.tag: ${HUB_TAG}"
echo "hub.port: $([[ -n "${HUB_PORT}" ]] && echo "${HUB_PORT}" || echo \(empty\))"
echo ""
echo "hub.auth.enabled: ${AUTH_ENABLED}"
echo "hub.auth.domain: ${AUTH_DOMAIN}"
echo "hub.auth.audience: ${AUTH_AUDIENCE}"
echo "hub.auth.clientId: ${AUTH_CLIENT_ID}"
echo "hub.auth.browserUrl: $([[ -n "${AUTH_BROWSER_URL}" ]] && echo "${AUTH_BROWSER_URL}" || echo \(empty\))"
echo ""
echo "-- storage --"
echo "storage.input: ${INPUT_DIR}"
echo "storage.output: ${OUTPUT_DIR}"
echo ""
echo "-- db --"
echo "db.deploy: ${DB_DEPLOY}"
echo "db.dsn: $([[ -n "${DB_DSN}" ]] && echo \(set\) || echo \(empty\))"
echo "db.port: $([[ -n "${DB_PORT}" ]] && echo "${DB_PORT}" || echo \(empty\))"
echo ""
echo "-- models --"
echo "models.mock.enabled: ${MOCK_ENABLED}"
echo "models.mock.repository: ${MOCK_REPOSITORY}"
echo "models.mock.tag: ${MOCK_TAG}"
echo "models.mock.port: $([[ -n "${MOCK_PORT}" ]] && echo "${MOCK_PORT}" || echo \(empty\))"
echo ""
echo "models.openfold3.enabled: ${OF3_ENABLED}"
echo "models.openfold3.repository: ${OF3_REPOSITORY}"
echo "models.openfold3.tag: ${OF3_TAG}"
echo "models.openfold3.port: $([[ -n "${OF3_PORT}" ]] && echo "${OF3_PORT}" || echo \(empty\))"
echo ""
echo "models.boltz2.enabled: ${B2_ENABLED}"
echo "models.boltz2.repository: ${B2_REPOSITORY}"
echo "models.boltz2.tag: ${B2_TAG}"
echo "models.boltz2.port: $([[ -n "${B2_PORT}" ]] && echo "${B2_PORT}" || echo \(empty\))"
echo ""
echo "-- secrets --"
echo "pullSecret (registry): $([[ -z "${PULL_SECRET}" || "${PULL_SECRET}" == "..." ]] && echo \(empty\) || echo \(set\))"
if [[ -z "${MSA_SECRET}" || "${MSA_SECRET}" == "..." ]]; then
echo "msaSecret (Foldify MSA): (empty)"
elif [[ "${MSA_SECRET}" == "${PULL_SECRET}" ]]; then
echo "msaSecret (Foldify MSA): (set as pullSecret)"
else
echo "msaSecret (Foldify MSA): (set)"
fi
echo ""
echo "-- network --"
echo "network: ${NETWORK_NAME}"
echo ""
echo "-- docker version --"
"$DOCKER_CMD" version || true
echo ""
echo "-- labeled containers --"
"$DOCKER_CMD" ps -a --filter "label=${LABEL}" || true
echo ""
echo "-- network inspect --"
"$DOCKER_CMD" network inspect "${NETWORK_NAME}" || true
echo ""
}
# ---------------------------
# Main
# ---------------------------
parse_args "$@"
validate_args
load_config
case "$CMD" in
run) run_all ;;
cleanup) cleanup_all ;;
cleanup-postgres) cleanup_postgres ;;
cleanup-storage) cleanup_storage ;;
diagnose) diagnose ;;
*)
die "Unknown command: $CMD. Please use --help for usage information." "$ERR_INVALID_CONFIG"
;;
esac
After downloading, make the script executable (Linux/macOS only):
chmod +x deploy_apherisfold
Your directory structure should look like this:
/path/to/apherisfold/
├─ deploy_apherisfold
└─ config.yaml
Configure Your Deployment🔗
Edit the config.yaml file to customize your deployment. Here is the complete configuration file:
hub:
enabled: true
port: 8080
auth:
# Set enabled: false for single-user mode (no authentication)
# Set enabled: true and provide values from your identity provider (Auth0, Okta, etc.) - recommended
# Set enabled: true with empty values to use Apheris Demo Auth0 (localhost only)
enabled: false
domain: ""
audience: ""
clientId: ""
models:
# Set enabled=true to pull the model image and run its container
# Set enabled=false to skip the model (it will not be pulled or started)
# Boltz2 is disabled by default
mock:
enabled: true
port: 7771
openfold3:
enabled: true
port: 7779
boltz2:
enabled: false
port: 7773
db:
# true = deploy local Postgres in Docker
# false = use external Postgres and set dsn below
deploy: true
# Leave empty when deploy=true (auto-generated)
# For external DBs (e.g. RDS), specify a DSN explicitly:
# dsn: "postgres://user:pass@host:5432/dbname?sslmode=require"
# Note: Special characters in username/password must be percent-encoded
dsn: ""
# Apheris-provided base64-encoded "user:token" for pulling model images from a container registry
pullSecret: "..."
# Optional: Base64-encoded API key for Apheris-hosted Foldify MSA server (msa.foldify.apheris.net)
# Only set this if you use a private registry and change the Apheris-provided pullSecret
# If not specified, pullSecret will be used for both registry authentication and MSA access
msaSecret: ""
Pull Secret🔗
The Apheris Hub API Key serves two purposes:
- Docker Registry Authentication: Authenticates with Quay.io to pull Docker images for the Hub and models
- MSA Server Authentication: Authenticates with the Apheris-hosted Foldify MSA server when using that service
Request your Apheris Hub API Key from https://www.apheris.com/apherisfold or contact support@apheris.com.
Warning
Keep your API Key or pull secret secure. Do not commit it to version control or share it publicly.
Add the API Key directly to your config.yaml:
pullSecret: "your-apheris-api-key"
The same API Key will automatically be used for both registry authentication and MSA server access, unless you specify a separate msaSecret (see below).
Using a Private Registry🔗
If you're hosting the Apheris Hub images in your own private registry, you need to provide a base64-encoded pull secret in the format username:password.
Encode your registry credentials:
echo -n "username:password" | base64
Configure in config.yaml:
pullSecret: "base64-encoded-private-registry-credentials"
Additionally, update the image repositories in your configuration to point to your private registry:
hub:
repository: "registry.example.com/apheris/hub"
models:
openfold3:
repository: "registry.example.com/apheris/hub-apps"
MSA Secret for Private Registry Deployments🔗
When using a private container registry, you need to specify the msaSecret configuration to maintain access to the Apheris-hosted Foldify MSA server.
Why is this needed?
The Apheris Hub API Key serves dual purposes:
- Authenticates with Quay.io to pull Docker images
- Authenticates with the Apheris-hosted Foldify MSA server (msa.foldify.apheris.net)
When you host images in your own private registry, you'll use different credentials for pulling images (your private registry credentials), but you still need the original Apheris Hub API Key to access the MSA server:
# Your private registry credentials for pulling images
pullSecret: "base64-encoded-private-registry-credentials"
# Original Apheris Hub API Key for MSA server access
msaSecret: "your-apheris-api-key"
When to use msaSecret
Most users do NOT need to set msaSecret. Only configure it if you're using a private container registry. If you're pulling images directly from Apheris's Quay.io repositories, the pullSecret will be used for both registry authentication and MSA server access automatically.
If you're switching to a private registry and need to reference your original Apheris Hub API Key for the msaSecret field, you can find it in your initial deployment configuration. Alternatively, request access at https://www.apheris.com/apherisfold or contact support@apheris.com.
Database🔗
The Apheris Hub requires a PostgreSQL database to store job metadata, user information, and application state. You can choose between two options:
- Use the built-in PostgreSQL container (default)
- Connect to an external PostgreSQL instance (recommended for production)
Using the Built-in PostgreSQL Container🔗
By default, the deployment script creates and manages a local PostgreSQL container automatically. This is suitable for development, testing, and small production deployments.
db:
deploy: true
The built-in database:
- Persists data in a Docker volume
- Runs on the same Docker network as the Hub
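If you need to inspect the built-in database, for example while troubleshooting, you can open a psql shell inside the container that the script manages. The container name, user, and database below are the defaults from the deployment script:
docker exec -it apheris-hub-postgres psql -U postgres -d hub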
Using an External PostgreSQL Database🔗
For production deployments, or when using managed database services such as AWS RDS, Azure Database for PostgreSQL, or Google Cloud SQL, connect to an external PostgreSQL instance.
To use an external database, set db.deploy=false and provide a connection string in the db.dsn field, for example:
db:
deploy: false
dsn: "postgres://username:password@hostname:5432/database?sslmode=require"
Percent-Encoding Required for Special Characters
If your database username or password contains special characters (like @, :, /, #, ?, &, =), you must percent-encode them in the DSN.
Example: With password p@ss:word/123, the DSN becomes:
postgres://user:p%40ss%3Aword%2F123@host:5432/db?sslmode=require
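If you prefer not to encode credentials by hand, a small helper can produce the percent-encoded form. This sketch assumes Python 3 is available on the host; it prints the encoded value for the example password above:
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss:word/123'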
Make sure to set the SSL mode appropriate for your database provider. For AWS RDS and most managed PostgreSQL services, use sslmode=require, which enforces SSL but does not verify the server certificate.
Database Requirements🔗
Your external PostgreSQL database must:
- Have a database created for the Apheris Hub (e.g., apheris_hub)
- Grant the configured user full access to the database with permissions: CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER
- Be accessible from the Docker host (verify network connectivity and firewall rules)
The Hub will automatically create the required tables and schema on first startup.
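As an illustration only, the following psql session creates a database and user that satisfy these requirements. The host, user, and password are placeholders; adapt them to your environment or use your provider's management tooling instead.
psql "host=your-db-host user=postgres" <<'SQL'
CREATE DATABASE apheris_hub;
CREATE USER hub_user WITH PASSWORD 'change-me';
GRANT ALL PRIVILEGES ON DATABASE apheris_hub TO hub_user;
-- On PostgreSQL 15 and newer, also grant schema-level access so the Hub can create tables:
\c apheris_hub
GRANT ALL ON SCHEMA public TO hub_user;
SQL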
Models🔗
Toggle each model under models.<name>.enabled. Change default ports if needed. To enable just OpenFold3, set:
models:
mock:
enabled: false
openfold3:
enabled: true
boltz2:
enabled: false
Storage🔗
The Apheris Hub uses two storage directories for job inputs and outputs. These directories are mounted as Docker volumes into all Hub components (the Hub itself and all enabled models), allowing them to share data.
Default Configuration🔗
By default, storage directories are created in your home directory:
storage:
input: "$HOME/apheris-hub/input"
output: "$HOME/apheris-hub/output"
Custom Storage Locations🔗
You can configure custom paths for input and output directories. This is useful when:
- You want to use a specific directory for organizing your data
- You need to use a mounted network storage or external drive
- You want separate storage for different deployments
Add the following to your config.yaml, replacing the paths with your desired locations:
storage:
input: "/mnt/data/apheris/input"
output: "/mnt/data/apheris/output"
Storage Usage🔗
Input directory🔗
The Hub places job input files in the input directory where models can access them.
If working directly with a model, place your job input files here (e.g., protein sequences, configuration files) so the models will read from this location.
Output directory🔗
The models write job results here. Each job creates a subdirectory with its results that the Hub can read and display.
Important Notes
- The deployment script automatically creates storage directories if they don't exist
- Ensure the directories are writable by the user running the deployment script
- Both directories are mounted as Docker volumes with read-write access
Hub Settings🔗
You can redefine the Hub's port and enable or disable authentication settings:
hub:
port: 8080
auth:
enabled: false
See Authentication Setup for detailed authentication configuration.
Authentication Setup🔗
The Apheris Hub supports OAuth 2.0 / OpenID Connect (OIDC) authentication for securing access and enabling multi-user environments. When enabled, the Hub performs JWT token validation and enforces user segregation based on the email claim in the token.
Authentication Features🔗
- Protocol: OAuth 2.0 with OIDC for token-based authentication
- Token Validation: The Hub validates JWTs against the configured identity provider's JWKS endpoint
- User Segregation: Jobs and data are isolated per user based on the email claim in the JWT token. Each user can only access their own jobs and outputs.
Required Configuration Parameters🔗
To enable authentication, configure the following parameters in config.yaml:
hub:
auth:
enabled: true
domain: "https://your-auth-domain.auth0.com/"
audience: "https://your-api-audience.com"
clientId: "your-client-id"
Domain🔗
The issuer URL of your identity provider (e.g., Auth0, Okta, Dex). This is used to discover the JWKS endpoint for token validation. Must include the trailing slash.
Audience🔗
The API identifier that tokens must be issued for. This ensures tokens are intended for the Apheris Hub API.
Client ID🔗
The OAuth 2.0 client ID used by the Hub's frontend to authenticate users.
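As a sanity check for the domain value, most OIDC-compliant providers (including Auth0 and Okta) publish a discovery document under the issuer URL. If the following request succeeds, the Hub should be able to locate the JWKS endpoint; replace the domain with your own:
curl https://your-auth-domain.auth0.com/.well-known/openid-configuration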
Authentication Scenarios🔗
The Apheris Hub supports three authentication scenarios to accommodate different use cases:
Scenario 1: Single-User Mode (No Authentication)🔗
Best for: Personal use, testing, or environments where all users can share data
To disable authentication and run the Hub in single-user mode without access control, set:
hub:
auth:
enabled: false
Scenario 2: Multi-User Mode with Your Own Identity Provider (Recommended)🔗
Best for: Production deployments, organizations with existing identity management
Why recommended: Full control over user management, compliance with your security policies, and production-grade reliability
To enable multi-user mode with your own identity provider, configure:
hub:
auth:
enabled: true
domain: "https://your-auth-domain.auth0.com/"
audience: "https://your-api-audience.com"
clientId: "your-client-id"
Replace the values with your identity provider credentials. See the Authentication Setup guide for Identity Provider Requirements and detailed configuration steps.
Scenario 3: Try Multi-User Mode with Apheris Demo Auth0🔗
Best for: Quickly evaluating authentication features before setting up your own identity provider
Localhost Only
This option uses a shared Apheris-managed Auth0 tenant for demonstration purposes. It is configured only for localhost deployments (ports 8080 or 8081) and is intended for evaluation purposes.
If you need to serve the Hub on a custom domain or endpoint and require authentication configuration assistance, please contact support@apheris.com.
To enable multi-user mode using Apheris demo Auth0 credentials, set:
hub:
auth:
enabled: true
domain: ""
audience: ""
clientId: ""
When you set enabled: true and leave the other values empty, the deployment script automatically uses Apheris demo Auth0 credentials. This allows you to experience the authentication flow and user segregation features without setting up your own identity provider.
After evaluating, we strongly recommend switching to your own identity provider (Scenario 2) for any real usage. See the Authentication Setup guide for Identity Provider Requirements and detailed configuration steps.
Start the Deployment🔗
Run the deployment script:
./deploy_apherisfold
The script automatically creates the storage directories, the Docker network, and a local PostgreSQL container (if enabled), then starts all enabled models and the Hub.
After starting, the Hub will be available at the configured port (default: localhost:8080).
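To confirm the Hub is reachable once the script finishes, a simple request against the configured port is enough. The command below is a sketch and should print an HTTP status code; adjust the port if you changed it in config.yaml:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080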
Available Commands🔗
The script supports several commands for managing your deployment, listed below.
Run🔗
Info
Running ./deploy_apherisfold without any arguments defaults to the run command.
Start or restart the deployment:
./deploy_apherisfold run
Or simply:
./deploy_apherisfold
Ensures the network and folders exist, stops any previously labeled containers, starts PostgreSQL (if enabled), then the enabled models, and finally the hub.
Cleanup🔗
Stop and remove all containers:
./deploy_apherisfold cleanup
Stops and removes all hub/model containers and the network (unless other containers still use it). Database data persists.
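To verify that cleanup removed everything the script manages, you can list the containers carrying the script's label (the label value comes from the script itself):
docker ps -a --filter "label=apheris.hub=true"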
Cleanup Storage🔗
Warning
This permanently deletes all input files and job results. After running this command, the Hub will not be able to display previous job results or access any stored data. Only use this when you need to completely wipe the state and start fresh.
Remove storage folders:
./deploy_apherisfold cleanup-storage
Removes the storage folders defined in the config file, deleting all persisted inputs and outputs.
Cleanup PostgreSQL🔗
Warning
This permanently deletes the PostgreSQL database, including all job metadata, job history, and user information. After running this command, the Hub will have no record of previously run jobs. Only use this when you need to completely wipe the state and start fresh.
Remove PostgreSQL container and data:
./deploy_apherisfold cleanup-postgres
Removes the local PostgreSQL container and destroys the persisted database state (only when db.deploy=true).
Diagnose🔗
View deployment configuration and status:
./deploy_apherisfold diagnose
Prints the parsed configuration, the Docker version, the status of labeled containers, and the network layout to help troubleshoot issues.
To save the output to a file for sharing with support, please run:
./deploy_apherisfold diagnose > diagnose.txt
Help🔗
View help information on the commands and options usage:
./deploy_apherisfold --help
Common Workflows🔗
Redeploy while keeping state🔗
Use the following command to redeploy the Apheris Hub while retaining all existing data, configurations, and job history:
./deploy_apherisfold
Redeploy while entirely removing state🔗
Warning
This workflow permanently deletes all data, including job inputs, outputs, database records, and job history. The Hub will have no record of previous jobs and cannot display any past results. Only use this when you need to completely wipe the state and start fresh.
Use the following sequence of commands to completely remove all existing data, configurations, and job history before redeploying:
./deploy_apherisfold cleanup
./deploy_apherisfold cleanup-storage
./deploy_apherisfold cleanup-postgres
./deploy_apherisfold
Use diagnose whenever you need to confirm what ports and configurations the script is applying.
Support🔗
When requesting support, it's helpful to provide diagnostic information about your deployment. Generate a diagnostic file before contacting support:
./deploy_apherisfold diagnose > diagnose.txt
This creates a diagnose.txt file containing your deployment configuration, Docker status, and network layout.
To report issues, request assistance, or inquire about advanced deployment options, contact support@apheris.com and attach the diagnose.txt file to help the support team troubleshoot issues more efficiently.