Docker Deployment
This guide covers deploying the Apheris Hub using Docker with an automated deployment script. For Kubernetes deployments using Helm, see the Kubernetes Deployment Guide.
Prerequisites
Before you begin, ensure you have the following:
Hardware Requirements
The ApherisFold application, particularly OpenFold3, has the following resource requirements:
- Modern GPU with at least 48GB of GPU memory and CUDA 13+ support (e.g., NVIDIA A100, H100, L40S, or RTX 6000). In AWS, the G6e instance family is a cost-effective option that supports OpenFold3.
- At least 300GB of disk space
- Docker environment with NVIDIA GPU drivers and the NVIDIA Container Toolkit installed
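To confirm up front that Docker can reach the GPU, you can run NVIDIA's usual sample workload. The CUDA image tag below is illustrative; pick one that matches your installed driver:

```shell
# Sanity check: can a container see the GPU through the NVIDIA Container Toolkit?
# Requires an NVIDIA driver on the host; the CUDA image tag is illustrative.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If this prints the familiar `nvidia-smi` table, the container runtime is GPU-ready.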
Software Requirements
- Docker (Docker Desktop or Docker Engine) with the daemon running
- GNU Bash ≥ 4.0 (the deployment script uses Bash 4 features such as associative arrays)
- base64 (used to decode the registry credentials when pulling Docker images)
- PostgreSQL (optional) - The deployment script can automatically deploy PostgreSQL as a Docker container. For external PostgreSQL (recommended for production), see Database Configuration.
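For reference, the database settings live under a `db` section of the deployment config: `deploy` controls whether the script starts a local Postgres container, and `dsn` points at an external database. The hostname and credentials below are placeholders, not real endpoints:

```yaml
db:
  deploy: false   # "true" = let the script run a local Postgres container
  dsn: "postgres://hub_user:hub_password@db.example.internal:5432/hub?sslmode=require"  # placeholder DSN
  # port: 5432    # host port for the local Postgres (only used when deploy is true);
                  # leave empty for a random port
```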
Apheris Hub API Key
You'll need an API Key to pull the ApherisFold Docker images. Request access at https://www.apheris.com/applications/apherisfold or contact support@apheris.com.
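The API key is supplied as the `pullSecret` value in the deployment config. The script expects the base64 encoding of `user:token`, which it decodes before running `docker login`. A sketch with made-up credentials:

```shell
# Encode hypothetical credentials ('myrobot' / 'secrettoken' are placeholders)
# in the user:token form the deployment script decodes.
printf '%s' 'myrobot:secrettoken' | base64
```

Use `printf '%s'` rather than `echo` so no trailing newline ends up inside the encoded value.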
Download Deployment Files
Download the deployment files and place them in the same directory:
If you'd like to review the deployment script before downloading, you can view its contents here:
deploy_apherisfold
#!/usr/bin/env bash
readonly DEBUG=${DEBUG:-false}
[[ "$DEBUG" == "true" || "$DEBUG" == "1" ]] && set -o verbose
set -o errexit
set -o nounset
set -o pipefail
[[ ${BASH_SOURCE[0]} != "${0}" ]] && {
echo "do not source ${BASH_SOURCE[0]}" >&2
return 36
}
# ---------------------------
# Exit codes
# ---------------------------
readonly ERR_CONFIG_MISSING=10
readonly ERR_DOCKER_NOT_RUNNING=11
readonly ERR_INVALID_PULLSECRET=13
readonly ERR_INVALID_CONFIG=14
readonly ERR_WAIT_FOR_TIMEOUT=15
readonly EXIT_SUCCESS=0
# ---------------------------
# Defaults for config values
# ---------------------------
# Model
readonly DEFAULT_MODELS_REPOSITORY="quay.io/apheris/hub-apps"
readonly DEFAULT_MOCK_ENABLED="true"
readonly DEFAULT_MOCK_TAG="0.49.0-mock-by-file"
readonly DEFAULT_MOCK_PORT=""
readonly DEFAULT_MOCK_CAPABILITIES="inference,finetuning"
readonly DEFAULT_OF3_ENABLED="false"
readonly DEFAULT_OF3_TAG="0.49.0-openfold3-by-file"
readonly DEFAULT_OF3_PORT=""
readonly DEFAULT_OF3_CAPABILITIES="inference,finetuning"
readonly DEFAULT_B2_ENABLED="false"
readonly DEFAULT_B2_TAG="0.49.0-boltz2-by-file"
readonly DEFAULT_B2_PORT=""
readonly DEFAULT_B2_CAPABILITIES="inference"
readonly DEFAULT_PROTENIX_ENABLED="false"
readonly DEFAULT_PROTENIX_TAG="0.49.0-protenix-by-file"
readonly DEFAULT_PROTENIX_PORT=""
readonly DEFAULT_PROTENIX_CAPABILITIES="inference"
readonly DEFAULT_GPU="all"
readonly DEFAULT_SHM_SIZE="8G"
# Hub
readonly DEFAULT_HUB_ENABLED="true"
readonly DEFAULT_HUB_REPOSITORY="quay.io/apheris/hub"
readonly DEFAULT_HUB_TAG="1.3.1"
readonly DEFAULT_HUB_PORT=""
readonly DEFAULT_HUB_MSA_ENABLED="false"
# Auth
readonly DEFAULT_AUTH_ENABLED="false"
readonly DEFAULT_AUTH_DOMAIN="https://apheris-ai-dev.eu.auth0.com/"
readonly DEFAULT_AUTH_AUDIENCE="https://hub.fold.apheris.com/api"
readonly DEFAULT_AUTH_ISSUER=""
readonly DEFAULT_AUTH_EXTRA_SCOPES=""
readonly DEFAULT_AUTH_CLIENT_ID="pndJvZpK2u1uAbZ6DHkNFVnhak4j9Xwi"
readonly DEFAULT_AUTH_PROVIDER_TYPE=""
readonly DEFAULT_AUTH_BROWSER_URL=""
# CA certificate path
readonly DEFAULT_HUB_CA_CERT=""
readonly DEFAULT_HUB_FINETUNING_HEARTBEAT_TIMEOUT="5m"
# Storage defaults (already effectively used, just making them explicit)
readonly DEFAULT_INPUT_DIR="\$HOME/apheris-hub/input"
readonly DEFAULT_OUTPUT_DIR="\$HOME/apheris-hub/output"
# DB defaults
readonly DEFAULT_DB_DEPLOY="true" # if you want local postgres by default
readonly DEFAULT_DB_PORT="" # empty = random port
readonly DEFAULT_DB_DSN="" # must be provided by config if external
# Registry secret
readonly DEFAULT_PULL_SECRET="" # must be provided by config
# ---------------------------
# Constants
# ---------------------------
readonly DEFAULT_CONFIG_FILE="./config.yaml"
readonly LABEL="apheris.hub=true"
readonly DEFAULT_NETWORK="apheris-hub"
readonly HUB_CONTAINER="apheris-hub"
readonly POSTGRES_CONTAINER="apheris-hub-postgres"
readonly POSTGRES_VOLUME="apheris-hub-postgres-data"
# Address to bind random ports to when config does not specify one.
# For local-only access, keep 127.0.0.1.
# If you need the service accessible from other hosts, change to 0.0.0.0.
readonly RANDOM_BIND_ADDR="127.0.0.1"
# Docker command (can be overridden by environment: DOCKER_CMD="sudo docker" ...)
readonly DOCKER_CMD="${DOCKER_CMD:-docker}"
# ---------------------------
# Minimal YAML parsing
# ---------------------------
declare -A YAML_VARS=()
parse_yaml() {
local file="$1"
local -a key_stack=()
local line raw trimmed indent value rest key path
local level i
while IFS= read -r line || [[ -n "$line" ]]; do
raw="$line"
# Strip comments (simple, but works for our config)
raw="${raw%%#*}"
# Skip empty / whitespace-only lines
[[ "$raw" =~ ^[[:space:]]*$ ]] && continue
# Compute indentation (assumes 2 spaces per level)
trimmed="${raw#"${raw%%[![:space:]]*}"}"
indent="${raw%"$trimmed"}"
level=$((${#indent} / 2))
line="$trimmed"
# Split key and value on first colon
key="${line%%:*}"
rest="${line#*:}"
# Normalize key
key="${key%%[[:space:]]*}"
key="${key%$'\r'}"
# Update key stack at this level
key_stack[level]="$key"
i=$((level + 1))
while ((i < ${#key_stack[@]})); do
unset "key_stack[i]"
((i++))
done
# Trim value
rest="${rest#$' '}"
rest="${rest%$'\r'}"
# Trim trailing spaces
while [[ "$rest" =~ [[:space:]]$ ]]; do
rest="${rest%[[:space:]]}"
done
value="$rest"
# No value => just a parent mapping (e.g. "hub:"), nothing to store
[[ -z "$value" ]] && continue
# Strip surrounding quotes if present
if [[ "$value" == \"*\" && "$value" == *\" ]]; then
value="${value:1:${#value}-2}"
elif [[ "$value" == \'*\' && "$value" == *\' ]]; then
value="${value:1:${#value}-2}"
fi
# Build full dotted path: e.g. hub.auth.enabled
path="${key_stack[0]}"
for ((i = 1; i <= level; i++)); do
[[ -n "${key_stack[$i]-}" ]] || continue
path+=".${key_stack[$i]}"
done
YAML_VARS["$path"]="$value"
done <"$file"
}
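# Example: given a config containing
#   hub:
#     auth:
#       enabled: "true"
# parse_yaml stores YAML_VARS["hub.auth.enabled"]="true", so values are
# addressable by their dotted path via yaml_get below.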
yaml_get() {
local key="$1"
local default="${2-}"
if [[ ${YAML_VARS[$key]+_} ]]; then
echo "${YAML_VARS[$key]}"
else
echo "$default"
fi
}
yaml_get_or_default_if_empty() {
local key="$1"
local default="${2-}"
local value
value="$(yaml_get "$key" "$default")"
if [[ -z "$value" ]]; then
echo "$default"
else
echo "$value"
fi
}
yaml_get_list() {
local file="$1"
local target="$2"
local default="${3-}"
local -a key_stack=()
local line raw trimmed indent rest key path value
local level i
local target_level=-1
local result=""
while IFS= read -r line || [[ -n "$line" ]]; do
raw="$line"
raw="${raw%%#*}"
[[ "$raw" =~ ^[[:space:]]*$ ]] && continue
trimmed="${raw#"${raw%%[![:space:]]*}"}"
indent="${raw%"$trimmed"}"
level=$((${#indent} / 2))
line="$trimmed"
if [[ "$line" == "- "* ]]; then
if (( target_level >= 0 && level == target_level + 1 )); then
value="${line#- }"
value="${value%$'\r'}"
while [[ "$value" =~ [[:space:]]$ ]]; do
value="${value%[[:space:]]}"
done
if [[ "$value" != *:* ]]; then
if [[ "$value" == \"*\" && "$value" == *\" ]]; then
value="${value:1:${#value}-2}"
elif [[ "$value" == \'*\' && "$value" == *\' ]]; then
value="${value:1:${#value}-2}"
fi
if [[ -n "$result" ]]; then
result+=$'\n'
fi
result+="$value"
fi
fi
continue
fi
if (( target_level >= 0 && level <= target_level )); then
break
fi
key="${line%%:*}"
rest="${line#*:}"
key="${key%%[[:space:]]*}"
key="${key%$'\r'}"
key_stack[level]="$key"
i=$((level + 1))
while ((i < ${#key_stack[@]})); do
unset "key_stack[i]"
((i++))
done
path="${key_stack[0]}"
for ((i = 1; i <= level; i++)); do
[[ -n "${key_stack[$i]-}" ]] || continue
path+=".${key_stack[$i]}"
done
rest="${rest#$' '}"
rest="${rest%$'\r'}"
if [[ "$path" == "$target" && -z "$rest" ]]; then
target_level=$level
fi
done <"$file"
if [[ -n "$result" ]]; then
echo "$result"
else
echo "$default"
fi
}
# ---------------------------
# Small helpers
# ---------------------------
die() {
local msg="$1"
local code="${2:-1}"
echo "Error (code ${code}): ${msg}" >&2
exit "$code"
}
silence() {
# Usage: silence <cmd> [args...]
# Runs the command, captures all output, and:
# - prints it only on error (or if DEBUG=true/1)
# - returns the original exit code
local cmd="$1"
shift
local output exit_code
set +o errexit
output=$("$cmd" "$@" 2>&1)
exit_code=$?
set -o errexit
if [[ $exit_code -ne 0 || "$DEBUG" == "true" || "$DEBUG" == "1" ]]; then
echo "[command failed: $cmd $*] (exit code: $exit_code)" >&2
echo "$output" >&2
fi
return $exit_code
}
wait_for() {
local cmd=$1
local retry_count=0
local max_retries=40
while [[ $retry_count -lt $max_retries ]] && ! (eval "$cmd" &>/dev/null); do
sleep 0.5
retry_count=$((retry_count + 1))
done
if [[ $retry_count -ge $max_retries ]]; then
die "Timed out waiting for: $cmd" "$ERR_WAIT_FOR_TIMEOUT"
fi
}
needs_cmd() {
command -v "$1" >/dev/null 2>&1 || die "Required command not found: $1" "$2"
}
bool() {
[[ "${1,,}" == "true" ]] && echo "true" || echo "false"
}
join_lines_with_commas() {
local value="$1"
local result=""
local line
while IFS= read -r line; do
[[ -n "$line" ]] || continue
if [[ -n "$result" ]]; then
result+=","
fi
result+="$line"
done <<<"$value"
printf '%s' "$result"
}
normalize_list() {
local value="$1"
local result=""
local item
local -a items=()
value="${value//$'\n'/,}"
IFS=',' read -r -a items <<<"$value"
for item in "${items[@]}"; do
item="${item#"${item%%[![:space:]]*}"}"
item="${item%"${item##*[![:space:]]}"}"
[[ -n "$item" ]] || continue
if [[ -n "$result" ]]; then
result+=$'\n'
fi
result+="$item"
done
printf '%s' "$result"
}
expand_path() {
local p="$1"
# Trim leading/trailing whitespace to avoid YAML parsing quirks
p="${p#"${p%%[![:space:]]*}"}"
p="${p%"${p##*[![:space:]]}"}"
case "$p" in
\~) p="$HOME" ;;
\~/*) p="$HOME/${p#~/}" ;;
"\$HOME") p="$HOME" ;;
"\$HOME/"*) p="$HOME/${p#\$HOME/}" ;;
"\${HOME}") p="$HOME" ;;
"\${HOME}/"*) p="$HOME/${p#\$\{HOME\}/}" ;;
esac
if [[ "$p" == "$HOME/~" ]]; then
p="$HOME"
elif [[ "$p" == "$HOME/~/"* ]]; then
p="$HOME/${p#"$HOME/~/"}"
fi
printf '%s' "$p"
}
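# Example: expand_path "~/weights" and expand_path "\$HOME/weights" both
# resolve to "$HOME/weights" for the current user; relative paths such as
# "./weights" are returned unchanged and later rejected by require_abs_path.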
require_abs_path() {
local p="$1"
local label="$2"
[[ -z "$p" || "$p" == /* ]] || die "$label must be an absolute path: $p" "$ERR_INVALID_CONFIG"
}
# ---------------------------
# Args / usage
# ---------------------------
CONFIG_FILE="$DEFAULT_CONFIG_FILE"
CMD="run"
usage() {
cat <<EOF
Usage: ${0##*/} [command] [options]
Commands:
run Start hub + models as per config.
cleanup Stop/remove all labeled containers (hub + models) and network (if not used by other containers).
cleanup-postgres Remove the local Postgres container (if managed by this script). WARNING: This deletes persisted DB state.
cleanup-storage Remove storage folders as per config.
diagnose Print current configuration and status of relevant docker resources.
Options:
-c, --config PATH Config file (default: $DEFAULT_CONFIG_FILE)
-h, --help Show this help
EOF
}
parse_args() {
if [[ $# -gt 0 && "$1" != "-"* ]]; then
CMD="$1"
shift
fi
while [[ $# -gt 0 ]]; do
case "$1" in
-c | --config)
[[ -n "${2-}" && "${2:0:1}" != "-" ]] || die "Option $1 requires a path" "$ERR_INVALID_CONFIG"
CONFIG_FILE="$2"
shift
;;
-c=* | --config=*)
CONFIG_FILE="${1#*=}"
;;
-h | --help)
usage
exit $EXIT_SUCCESS
;;
*)
die "Unknown option: $1. Please use --help for usage information." "$ERR_INVALID_CONFIG"
;;
esac
shift
done
}
validate_args() {
[[ -f "$CONFIG_FILE" ]] || die "Config file not found: $CONFIG_FILE" "$ERR_CONFIG_MISSING"
CONFIG_FILE="$(cd "$(dirname "$CONFIG_FILE")" && pwd)/$(basename "$CONFIG_FILE")"
[[ -f "$CONFIG_FILE" ]] || die "Config file not found after path resolution: $CONFIG_FILE" "$ERR_CONFIG_MISSING"
needs_cmd "$DOCKER_CMD" "$ERR_DOCKER_NOT_RUNNING"
# Check Docker daemon reachability and permissions
local output exit_code
set +o errexit
output=$("$DOCKER_CMD" info 2>&1)
exit_code=$?
set -o errexit
if [[ $exit_code -ne 0 ]]; then
echo "Error: Failed to talk to Docker daemon using: $DOCKER_CMD" >&2
echo "---- docker info output ----" >&2
echo "$output" >&2
echo "----------------------------" >&2
if grep -qi "permission denied" <<<"$output"; then
cat >&2 <<EOF
It looks like Docker is running, but this user/command does not have permission
to access the Docker daemon socket.
Possible fixes:
- Ensure the user running this script is in the 'docker' group (and re-login), OR
- Run this script with a different Docker command, for example:
DOCKER_CMD="sudo docker" $0 "$@"
Current user: $USER
Current DOCKER_CMD: $DOCKER_CMD
EOF
else
cat >&2 <<EOF
Docker daemon is not running or not accessible.
Please:
- Make sure the Docker service is running (e.g. 'systemctl status docker'), AND
- Ensure '$DOCKER_CMD info' works from your shell before running this script.
EOF
fi
exit "$ERR_DOCKER_NOT_RUNNING"
fi
}
# ---------------------------
# Config loading
# ---------------------------
load_config() {
parse_yaml "$CONFIG_FILE"
AUTH_ENABLED="$(bool "$(yaml_get 'hub.auth.enabled' "$DEFAULT_AUTH_ENABLED")")"
AUTH_DOMAIN="$(yaml_get_or_default_if_empty 'hub.auth.domain' "$DEFAULT_AUTH_DOMAIN")"
AUTH_AUDIENCE="$(yaml_get_or_default_if_empty 'hub.auth.audience' "$DEFAULT_AUTH_AUDIENCE")"
AUTH_ISSUER="$(yaml_get_or_default_if_empty 'hub.auth.issuer' "$DEFAULT_AUTH_ISSUER")"
AUTH_EXTRA_SCOPES="$(yaml_get_or_default_if_empty 'hub.auth.extraScopes' "$DEFAULT_AUTH_EXTRA_SCOPES")"
AUTH_CLIENT_ID="$(yaml_get_or_default_if_empty 'hub.auth.clientId' "$DEFAULT_AUTH_CLIENT_ID")"
AUTH_PROVIDER_TYPE="$(yaml_get_or_default_if_empty 'hub.auth.providerType' "$DEFAULT_AUTH_PROVIDER_TYPE")"
AUTH_BROWSER_URL="$(yaml_get_or_default_if_empty 'hub.auth.browserUrl' "$DEFAULT_AUTH_BROWSER_URL")"
HUB_CA_CERT="$(yaml_get 'hub.caCert' "$DEFAULT_HUB_CA_CERT")"
[[ -n "$HUB_CA_CERT" ]] && HUB_CA_CERT="$(expand_path "$HUB_CA_CERT")"
require_abs_path "$HUB_CA_CERT" "hub.caCert"
INPUT_DIR="$(expand_path "$(yaml_get 'storage.input' "$DEFAULT_INPUT_DIR")")"
OUTPUT_DIR="$(expand_path "$(yaml_get 'storage.output' "$DEFAULT_OUTPUT_DIR")")"
NETWORK_NAME="$(yaml_get 'network' "$DEFAULT_NETWORK")"
HUB_ENABLED="$(bool "$(yaml_get 'hub.enabled' "$DEFAULT_HUB_ENABLED")")"
HUB_REPOSITORY="$(yaml_get 'hub.repository' "$DEFAULT_HUB_REPOSITORY")"
HUB_TAG="$(yaml_get 'hub.tag' "$DEFAULT_HUB_TAG")"
HUB_PORT="$(yaml_get 'hub.port' "$DEFAULT_HUB_PORT")"
HUB_MSA_ENABLED="$(bool "$(yaml_get 'hub.msa.enabled' "$DEFAULT_HUB_MSA_ENABLED")")"
HUB_FINETUNING_HEARTBEAT_TIMEOUT="$(yaml_get_or_default_if_empty 'hub.finetuningHeartbeatTimeout' "$DEFAULT_HUB_FINETUNING_HEARTBEAT_TIMEOUT")"
DB_DEPLOY="$(bool "$(yaml_get 'db.deploy' "$DEFAULT_DB_DEPLOY")")"
DB_DSN="$(yaml_get 'db.dsn' "$DEFAULT_DB_DSN")"
DB_PORT="$(yaml_get 'db.port' "$DEFAULT_DB_PORT")"
PULL_SECRET="$(yaml_get 'pullSecret' "$DEFAULT_PULL_SECRET")"
MOCK_ENABLED="$(bool "$(yaml_get 'models.mock.enabled' "$DEFAULT_MOCK_ENABLED")")"
MOCK_REPOSITORY="$(yaml_get 'models.mock.repository' "$DEFAULT_MODELS_REPOSITORY")"
MOCK_TAG="$(yaml_get 'models.mock.tag' "$DEFAULT_MOCK_TAG")"
MOCK_PORT="$(yaml_get 'models.mock.port' "$DEFAULT_MOCK_PORT")"
MOCK_CAPABILITIES="$(normalize_list "$(yaml_get_list "$CONFIG_FILE" 'models.mock.capabilities' "$DEFAULT_MOCK_CAPABILITIES")")"
MOCK_WEIGHTS_DIR="$(yaml_get 'models.mock.weightsDir' "")"
[[ -n "$MOCK_WEIGHTS_DIR" ]] && MOCK_WEIGHTS_DIR="$(expand_path "$MOCK_WEIGHTS_DIR")"
MOCK_WEIGHTS_CONFIG="$(yaml_get 'models.mock.weightsConfigFile' "")"
[[ -n "$MOCK_WEIGHTS_CONFIG" ]] && MOCK_WEIGHTS_CONFIG="$(expand_path "$MOCK_WEIGHTS_CONFIG")"
MOCK_WEIGHTS_ENV="$(yaml_get 'models.mock.weightsEnv' "")"
OF3_ENABLED="$(bool "$(yaml_get 'models.openfold3.enabled' "$DEFAULT_OF3_ENABLED")")"
OF3_REPOSITORY="$(yaml_get 'models.openfold3.repository' "$DEFAULT_MODELS_REPOSITORY")"
OF3_TAG="$(yaml_get 'models.openfold3.tag' "$DEFAULT_OF3_TAG")"
OF3_GPUS="$(yaml_get 'models.openfold3.gpus' "$DEFAULT_GPU")"
OF3_SHM="$(yaml_get 'models.openfold3.shm_size' "$DEFAULT_SHM_SIZE")"
OF3_PORT="$(yaml_get 'models.openfold3.port' "$DEFAULT_OF3_PORT")"
OF3_CAPABILITIES="$(normalize_list "$(yaml_get_list "$CONFIG_FILE" 'models.openfold3.capabilities' "$DEFAULT_OF3_CAPABILITIES")")"
OF3_WEIGHTS_DIR="$(yaml_get 'models.openfold3.weightsDir' "")"
[[ -n "$OF3_WEIGHTS_DIR" ]] && OF3_WEIGHTS_DIR="$(expand_path "$OF3_WEIGHTS_DIR")"
OF3_WEIGHTS_CONFIG="$(yaml_get 'models.openfold3.weightsConfigFile' "")"
[[ -n "$OF3_WEIGHTS_CONFIG" ]] && OF3_WEIGHTS_CONFIG="$(expand_path "$OF3_WEIGHTS_CONFIG")"
OF3_WEIGHTS_ENV="$(yaml_get 'models.openfold3.weightsEnv' "")"
B2_ENABLED="$(bool "$(yaml_get 'models.boltz2.enabled' "$DEFAULT_B2_ENABLED")")"
B2_REPOSITORY="$(yaml_get 'models.boltz2.repository' "$DEFAULT_MODELS_REPOSITORY")"
B2_TAG="$(yaml_get 'models.boltz2.tag' "$DEFAULT_B2_TAG")"
B2_GPUS="$(yaml_get 'models.boltz2.gpus' "$DEFAULT_GPU")"
B2_SHM="$(yaml_get 'models.boltz2.shm_size' "$DEFAULT_SHM_SIZE")"
B2_PORT="$(yaml_get 'models.boltz2.port' "$DEFAULT_B2_PORT")"
B2_CAPABILITIES="$(normalize_list "$(yaml_get_list "$CONFIG_FILE" 'models.boltz2.capabilities' "$DEFAULT_B2_CAPABILITIES")")"
B2_WEIGHTS_DIR="$(yaml_get 'models.boltz2.weightsDir' "")"
[[ -n "$B2_WEIGHTS_DIR" ]] && B2_WEIGHTS_DIR="$(expand_path "$B2_WEIGHTS_DIR")"
B2_WEIGHTS_CONFIG="$(yaml_get 'models.boltz2.weightsConfigFile' "")"
[[ -n "$B2_WEIGHTS_CONFIG" ]] && B2_WEIGHTS_CONFIG="$(expand_path "$B2_WEIGHTS_CONFIG")"
B2_WEIGHTS_ENV="$(yaml_get 'models.boltz2.weightsEnv' "")"
PROTENIX_ENABLED="$(bool "$(yaml_get 'models.protenix.enabled' "$DEFAULT_PROTENIX_ENABLED")")"
PROTENIX_REPOSITORY="$(yaml_get 'models.protenix.repository' "$DEFAULT_MODELS_REPOSITORY")"
PROTENIX_TAG="$(yaml_get 'models.protenix.tag' "$DEFAULT_PROTENIX_TAG")"
PROTENIX_GPUS="$(yaml_get 'models.protenix.gpus' "$DEFAULT_GPU")"
PROTENIX_SHM="$(yaml_get 'models.protenix.shm_size' "$DEFAULT_SHM_SIZE")"
PROTENIX_PORT="$(yaml_get 'models.protenix.port' "$DEFAULT_PROTENIX_PORT")"
PROTENIX_CAPABILITIES="$(normalize_list "$(yaml_get_list "$CONFIG_FILE" 'models.protenix.capabilities' "$DEFAULT_PROTENIX_CAPABILITIES")")"
validate_config
}
validate_model() {
local key="$1"
local enabled="$2"
local repository="$3"
local tag="$4"
local port="${5-}"
[[ "$enabled" == "true" ]] || return 0
[[ -n "$repository" ]] || die "models.${key}.repository must be set" "$ERR_INVALID_CONFIG"
[[ -n "$tag" ]] || die "models.${key}.tag must be set" "$ERR_INVALID_CONFIG"
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for '${key}' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
}
validate_hub() {
local enabled="$1"
local repository="$2"
local tag="$3"
local port="${4-}"
[[ "$enabled" == "true" ]] || return 0
[[ -n "$repository" ]] || die "hub.repository must be set" "$ERR_INVALID_CONFIG"
[[ -n "$tag" ]] || die "hub.tag must be set" "$ERR_INVALID_CONFIG"
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for 'hub' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
}
validate_db() {
local deploy="$1"
local dsn="$2"
local port="${3-}"
if [[ "$deploy" == "false" && -z "$dsn" ]]; then
die "db.deploy=false requires db.dsn" "$ERR_INVALID_CONFIG"
fi
if [[ "$deploy" == "true" ]]; then
if [[ -n "$port" ]]; then
[[ "$port" =~ ^[0-9]+$ ]] || die "configured port for 'db' is invalid: '$port'" "$ERR_INVALID_CONFIG"
fi
fi
}
validate_model_weights() {
local key="$1"
local enabled="$2"
local weights_dir="$3"
local weights_config="$4"
local weights_env="$5"
[[ "$enabled" == "true" ]] || return 0
# If no custom weights config, nothing to validate
[[ -z "$weights_dir" && -z "$weights_config" && -z "$weights_env" ]] && return 0
# With inline configuration, validate weightsDir when provided because it may still be mounted.
if [[ -n "$weights_env" ]]; then
if [[ -n "$weights_dir" ]]; then
if [[ ! -d "$weights_dir" ]]; then
die "models.${key}.weightsDir does not exist: $weights_dir" "$ERR_INVALID_CONFIG"
fi
require_abs_path "$weights_dir" "models.${key}.weightsDir"
fi
return 0
fi
# If using file-based configuration, validate both weightsDir and weightsConfigFile
if [[ -n "$weights_dir" || -n "$weights_config" ]]; then
if [[ -z "$weights_dir" ]]; then
die "models.${key}.weightsDir must be set when using weightsConfigFile" "$ERR_INVALID_CONFIG"
fi
if [[ -z "$weights_config" ]]; then
die "models.${key}.weightsConfigFile must be set when using weightsDir" "$ERR_INVALID_CONFIG"
fi
# Validate paths exist
if [[ ! -d "$weights_dir" ]]; then
die "models.${key}.weightsDir does not exist: $weights_dir" "$ERR_INVALID_CONFIG"
fi
if [[ ! -f "$weights_config" ]]; then
die "models.${key}.weightsConfigFile does not exist: $weights_config" "$ERR_INVALID_CONFIG"
fi
# Validate paths are absolute
require_abs_path "$weights_dir" "models.${key}.weightsDir"
require_abs_path "$weights_config" "models.${key}.weightsConfigFile"
fi
}
validate_capabilities() {
local key="$1"
local capabilities="$2"
local supported_capabilities="$3"
local cap
while IFS= read -r cap; do
[[ -n "$cap" ]] || continue
if ! grep -Fxq "$cap" <<<"$supported_capabilities"; then
die "models.${key}.capabilities contains unsupported capability: ${cap}. Supported values: ${supported_capabilities//$'\n'/, }" "$ERR_INVALID_CONFIG"
fi
done <<<"$capabilities"
}
validate_config() {
validate_hub "$HUB_ENABLED" "$HUB_REPOSITORY" "$HUB_TAG" "${HUB_PORT-}"
validate_db "$DB_DEPLOY" "$DB_DSN" "${DB_PORT-}"
validate_model "mock" "$MOCK_ENABLED" "$MOCK_REPOSITORY" "$MOCK_TAG" "${MOCK_PORT-}"
validate_model "openfold3" "$OF3_ENABLED" "$OF3_REPOSITORY" "$OF3_TAG" "${OF3_PORT-}"
validate_model "boltz2" "$B2_ENABLED" "$B2_REPOSITORY" "$B2_TAG" "${B2_PORT-}"
validate_model "protenix" "$PROTENIX_ENABLED" "$PROTENIX_REPOSITORY" "$PROTENIX_TAG" "${PROTENIX_PORT-}"
validate_model_weights "mock" "$MOCK_ENABLED" "$MOCK_WEIGHTS_DIR" "$MOCK_WEIGHTS_CONFIG" "$MOCK_WEIGHTS_ENV"
validate_model_weights "openfold3" "$OF3_ENABLED" "$OF3_WEIGHTS_DIR" "$OF3_WEIGHTS_CONFIG" "$OF3_WEIGHTS_ENV"
validate_model_weights "boltz2" "$B2_ENABLED" "$B2_WEIGHTS_DIR" "$B2_WEIGHTS_CONFIG" "$B2_WEIGHTS_ENV"
validate_capabilities "mock" "$MOCK_CAPABILITIES" "$(normalize_list "inference,finetuning")"
validate_capabilities "openfold3" "$OF3_CAPABILITIES" "$(normalize_list "inference,finetuning")"
validate_capabilities "boltz2" "$B2_CAPABILITIES" "$(normalize_list "inference")"
validate_capabilities "protenix" "$PROTENIX_CAPABILITIES" "$(normalize_list "inference")"
}
# ---------------------------
# Core actions
# ---------------------------
# Helper: add custom weights configuration to docker run options
# Parameters: weights_env, weights_dir, weights_config
add_custom_weights_if_present() {
local weights_env="$1"
local weights_dir="$2"
local weights_config="$3"
CUSTOM_WEIGHTS_OPTS=()
# weightsEnv takes precedence over file-based configuration
if [[ -n "$weights_env" ]]; then
CUSTOM_WEIGHTS_OPTS+=(-e "APH_AVAILABLE_WEIGHTS=${weights_env}")
if [[ -n "$weights_dir" ]]; then
CUSTOM_WEIGHTS_OPTS+=(--volume "${weights_dir}:/weights:ro")
fi
elif [[ -n "$weights_dir" && -n "$weights_config" ]]; then
CUSTOM_WEIGHTS_OPTS+=(--volume "${weights_dir}:/weights:ro")
CUSTOM_WEIGHTS_OPTS+=(--volume "${weights_config}:/config/weights.json:ro")
CUSTOM_WEIGHTS_OPTS+=(-e "APH_WEIGHTS_CONFIG_FILE=/config/weights.json")
fi
}
# Helper: run docker command, capture output, and handle errors
# Returns successfully if docker command succeeds, otherwise processes error and dies
run_docker_with_error_handling() {
local name="$1"
local component="$2"
local image="$3"
local port_display="$4" # e.g., "port 7777" or "with a random host port"
shift 4
# Remaining args are the full docker run command arguments
local output
# Use || true to prevent errexit from stopping execution on failure
output=$("$DOCKER_CMD" run "$@" "$image" 2>&1) || {
local exit_code=$?
echo "[command failed: docker run ... $image] (exit code: $exit_code)" >&2
echo "$output" >&2
silence "$DOCKER_CMD" rm -f "$name" >/dev/null 2>&1 || true
# Check for specific error types
if echo "$output" | grep -qiE "(unauthorized|authentication required|access.*not authorized|pull access denied)"; then
die "Failed to pull image '${image}': unauthorized. Please configure a valid 'pullSecret' in your config file." "$ERR_INVALID_PULLSECRET"
elif echo "$output" | grep -qiE "(address already in use|bind.*failed)"; then
die "Failed to start ${component} ${port_display} is already in use" "$ERR_INVALID_CONFIG"
else
die "Failed to start ${component} ${port_display}" "$ERR_INVALID_CONFIG"
fi
}
}
# Helper: start container with 2 modes:
# - If host_port is non-empty (configured in config), try to bind exactly that port once.
# On failure, exit with clear error (no fallback).
# - If host_port is empty, bind RANDOM_BIND_ADDR:0:inner_port (random free port),
# then discover and return the assigned host port.
start_container_with_port_choice() {
local name="$1"
local component="$2"
local host_port="$3"
local inner_port="$4"
local image="$5"
shift 5
local extra_opts=("$@")
# Ensure no stale container with this name
silence "$DOCKER_CMD" rm -f "$name" >/dev/null 2>&1 || true
if [[ -n "$host_port" ]]; then
run_docker_with_error_handling "$name" "$component" "$image" "on configured port: port ${host_port}" \
-d --name "$name" "${extra_opts[@]}" -p "${host_port}:${inner_port}"
else
run_docker_with_error_handling "$name" "$component" "$image" "with a random host port:" \
-d --name "$name" "${extra_opts[@]}" -p "${RANDOM_BIND_ADDR}:0:${inner_port}"
local mapped
mapped="$("$DOCKER_CMD" port "$name" "$inner_port" 2>/dev/null | tail -n1 || true)"
if [[ -z "$mapped" ]]; then
die "Could not determine mapped host port for ${component}" "$ERR_INVALID_CONFIG"
fi
host_port="${mapped##*:}"
fi
echo "$host_port"
}
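# Example: with models.mock.port left empty in the config, the mock container
# is started with "-p 127.0.0.1:0:8000"; Docker assigns a free host port,
# which "docker port mock 8000" reports (e.g. "127.0.0.1:49153") and the
# "${mapped##*:}" expansion reduces to "49153".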
docker_login_if_needed() {
if [[ -z "$PULL_SECRET" || "$PULL_SECRET" == "..." ]]; then
echo "pullSecret not set — skipping docker login." >&2
return 0
fi
local decoded
if ! decoded="$(echo "$PULL_SECRET" | base64 --decode 2>/dev/null)"; then
die "pullSecret is not valid base64" "$ERR_INVALID_PULLSECRET"
fi
local user pass
user="${decoded%%:*}"
pass="${decoded#*:}"
if [[ -z "$user" || -z "$pass" || "$user" == "$pass" ]]; then
die "pullSecret must decode to 'user:token'" "$ERR_INVALID_PULLSECRET"
fi
echo "Logging in to quay.io as robot user '${user}'..."
echo -n "$pass" | silence "$DOCKER_CMD" login quay.io -u "$user" --password-stdin \
|| die "docker login failed" "$ERR_INVALID_PULLSECRET"
}
ensure_folders() {
echo "Ensuring storage folders..."
mkdir -p "$INPUT_DIR" "$OUTPUT_DIR"
chmod 777 "$INPUT_DIR" "$OUTPUT_DIR" || true
}
ensure_network() {
echo "Ensuring docker network '$NETWORK_NAME'..."
if silence "$DOCKER_CMD" network inspect "$NETWORK_NAME" >/dev/null; then
return 0
fi
echo "Network '$NETWORK_NAME' does not exist, trying to create it..." >&2
if ! silence "$DOCKER_CMD" network create --driver bridge "$NETWORK_NAME" >/dev/null; then
die "Failed to ensure docker network '$NETWORK_NAME'. Check Docker daemon and permissions." "$ERR_DOCKER_NOT_RUNNING"
fi
}
stop_labeled_containers_except_postgres() {
echo "Stopping labeled ApherisFold containers (models + hub)..."
local ids
ids="$("$DOCKER_CMD" ps -a --filter "label=$LABEL" --quiet || true)"
if [[ -z "$ids" ]]; then
echo "No labeled ApherisFold containers found."
return 0
fi
while IFS= read -r id; do
[[ -n "$id" ]] || continue
local name
name="$("$DOCKER_CMD" inspect --format '{{.Name}}' "$id" 2>/dev/null || echo "")"
name="${name#/}"
if [[ "$name" == "$POSTGRES_CONTAINER" ]]; then
continue
fi
echo "Stopping container '$name' (ID: $id)..."
silence "$DOCKER_CMD" stop "$id" >/dev/null || true
echo "Removing container '$name' (ID: $id)..."
silence "$DOCKER_CMD" rm "$id" >/dev/null || true
done <<<"$ids"
echo "Removed all labeled ApherisFold containers (except Postgres)."
}
cleanup_postgres() {
# Explicit destructive action only.
if [[ "$DB_DEPLOY" == "true" ]]; then
echo "Removing local Postgres container '${POSTGRES_CONTAINER}' (this deletes persisted DB state)..."
silence "$DOCKER_CMD" rm -f "$POSTGRES_CONTAINER" >/dev/null 2>&1 || true
echo "Removing local Postgres data volume '$POSTGRES_VOLUME'..."
silence "$DOCKER_CMD" volume rm "$POSTGRES_VOLUME" >/dev/null 2>&1 || true
echo "Local Postgres and its data volume removed."
else
echo "No local Postgres to remove (db.deploy=false means using external database)."
fi
}
cleanup_storage() {
echo "Removing storage folders (this deletes all input/output data)..."
if [[ -d "$INPUT_DIR" ]]; then
echo "Removing input directory '$INPUT_DIR'..."
rm -rf "$INPUT_DIR" || true
else
echo "Input directory does not exist: $INPUT_DIR"
fi
if [[ -d "$OUTPUT_DIR" ]]; then
echo "Removing output directory '$OUTPUT_DIR'..."
rm -rf "$OUTPUT_DIR" || true
else
echo "Output directory does not exist: $OUTPUT_DIR"
fi
echo "Storage folders removed."
}
start_postgres_if_needed() {
if [[ "$DB_DEPLOY" != "true" ]]; then
echo "Using external Postgres (dsn from config)."
return 0
fi
local image="postgres:15-alpine"
echo "Ensuring Postgres data volume '$POSTGRES_VOLUME'..."
silence "$DOCKER_CMD" volume create "$POSTGRES_VOLUME" >/dev/null 2>&1 || true
local opts=(
--restart unless-stopped
--network "$NETWORK_NAME"
--volume "${POSTGRES_VOLUME}:/var/lib/postgresql/data"
-e POSTGRES_USER=postgres
-e POSTGRES_PASSWORD=postgres
-e POSTGRES_DB=hub
)
echo "Starting local Postgres container '${POSTGRES_CONTAINER}'..."
DB_PORT="$(start_container_with_port_choice "$POSTGRES_CONTAINER" "Postgres" "${DB_PORT-}" 5432 "$image" "${opts[@]}")"
echo "Waiting for Postgres to be ready..."
wait_for "$DOCKER_CMD exec $POSTGRES_CONTAINER pg_isready -U postgres"
DB_DSN="postgres://postgres:postgres@${POSTGRES_CONTAINER}:5432/hub?sslmode=disable"
echo "Local Postgres is ready on port ${DB_PORT}"
}
run_model_mock() {
if [[ "$MOCK_ENABLED" != "true" ]]; then
return 0
fi
local image="${MOCK_REPOSITORY}:${MOCK_TAG}"
local opts=(
--restart unless-stopped
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e BASE_INPUT_DIRECTORY=/input
-e BASE_OUTPUT_DIRECTORY=/output
-e APH_MODEL_SCOPE="$(join_lines_with_commas "$MOCK_CAPABILITIES")"
)
add_custom_weights_if_present "$MOCK_WEIGHTS_ENV" "$MOCK_WEIGHTS_DIR" "$MOCK_WEIGHTS_CONFIG"
opts+=("${CUSTOM_WEIGHTS_OPTS[@]}")
echo "Starting mock model container..."
MOCK_PORT="$(start_container_with_port_choice "mock" "mock model" "${MOCK_PORT-}" 8000 "$image" "${opts[@]}")"
echo "Mock model container port: $MOCK_PORT"
}
run_model_openfold3() {
if [[ "$OF3_ENABLED" != "true" ]]; then
return 0
fi
local image="${OF3_REPOSITORY}:${OF3_TAG}"
local opts=(
--restart unless-stopped
--gpus "$OF3_GPUS"
--shm-size="$OF3_SHM"
--network "$NETWORK_NAME"
--volume "${INPUT_DIR}:/input"
--volume "${OUTPUT_DIR}:/output"
-e BASE_INPUT_DIRECTORY=/input
-e BASE_OUTPUT_DIRECTORY=/output
-e APH_MODEL_SCOPE="$(join_lines_with_commas "$OF3_CAPABILITIES")"
)
add_custom_weights_if_present "$OF3_WEIGHTS_ENV" "$OF3_WEIGHTS_DIR" "$OF3_WEIGHTS_CONFIG"
opts+=("${CUSTOM_WEIGHTS_OPTS[@]}")
echo "Starting openfold3 model container..."
OF3_PORT="$(start_container_with_port_choice "openfold3" "openfold3 model" "${OF3_PORT-}" 8000 "$image" "${opts[@]}")"
echo "Openfold3 model container port: $OF3_PORT"
}
run_model_boltz2() {
    if [[ "$B2_ENABLED" != "true" ]]; then
        return 0
    fi
    local image="${B2_REPOSITORY}:${B2_TAG}"
    local opts=(
        --restart unless-stopped
        --gpus "$B2_GPUS"
        --shm-size="$B2_SHM"
        --network "$NETWORK_NAME"
        --volume "${INPUT_DIR}:/input"
        --volume "${OUTPUT_DIR}:/output"
        -e BASE_INPUT_DIRECTORY=/input
        -e BASE_OUTPUT_DIRECTORY=/output
        -e APH_MODEL_SCOPE="$(join_lines_with_commas "$B2_CAPABILITIES")"
    )
    add_custom_weights_if_present "$B2_WEIGHTS_ENV" "$B2_WEIGHTS_DIR" "$B2_WEIGHTS_CONFIG"
    opts+=("${CUSTOM_WEIGHTS_OPTS[@]}")
    echo "Starting boltz2 model container..."
    B2_PORT="$(start_container_with_port_choice "boltz2" "boltz2 model" "${B2_PORT-}" 8000 "$image" "${opts[@]}")"
    echo "Boltz2 model container port: $B2_PORT"
}
run_model_protenix() {
    if [[ "$PROTENIX_ENABLED" != "true" ]]; then
        return 0
    fi
    local image="${PROTENIX_REPOSITORY}:${PROTENIX_TAG}"
    local opts=(
        --restart unless-stopped
        --gpus "$PROTENIX_GPUS"
        --shm-size="$PROTENIX_SHM"
        --network "$NETWORK_NAME"
        --volume "${INPUT_DIR}:/input"
        --volume "${OUTPUT_DIR}:/output"
        -e BASE_INPUT_DIRECTORY=/input
        -e BASE_OUTPUT_DIRECTORY=/output
        -e APH_MODEL_SCOPE="$(join_lines_with_commas "$PROTENIX_CAPABILITIES")"
    )
    echo "Starting protenix model container..."
    PROTENIX_PORT="$(start_container_with_port_choice "protenix" "protenix model" "${PROTENIX_PORT-}" 8000 "$image" "${opts[@]}")"
    echo "Protenix model container port: $PROTENIX_PORT"
}
run_hub() {
    if [[ "$HUB_ENABLED" != "true" ]]; then
        return 0
    fi
    local image="${HUB_REPOSITORY}:${HUB_TAG}"
    local opts=(
        --restart unless-stopped
        --network "$NETWORK_NAME"
        --volume "${INPUT_DIR}:/input"
        --volume "${OUTPUT_DIR}:/output"
        --volume "${CONFIG_FILE}:/apheris/hub-config.yaml:ro"
        -e APH_HUB_INPUT_ROOT_DIRECTORY=/input
        -e APH_HUB_OUTPUT_ROOT_DIRECTORY=/output
        -e APH_HUB_DATABASE_DSN="$DB_DSN"
        -e APH_HUB_CONFIG_FILE=/apheris/hub-config.yaml
        -e APH_HUB_AUTH_ENABLED="$AUTH_ENABLED"
        -e APH_HUB_AUTH_DOMAIN="$AUTH_DOMAIN"
        -e APH_HUB_AUTH_CLIENT_ID="$AUTH_CLIENT_ID"
        -e APH_HUB_AUTH_AUDIENCE="$AUTH_AUDIENCE"
        -e APH_HUB_AUTH_ISSUER="$AUTH_ISSUER"
        -e APH_HUB_AUTH_EXTRA_SCOPES="$AUTH_EXTRA_SCOPES"
        -e APH_HUB_MSA_ENABLED="$HUB_MSA_ENABLED"
    )
    if [[ -n "$AUTH_BROWSER_URL" ]]; then
        opts+=(-e APH_HUB_AUTH_BROWSER_URL="$AUTH_BROWSER_URL")
    fi
    if [[ -n "$AUTH_PROVIDER_TYPE" ]]; then
        opts+=(-e APH_HUB_AUTH_PROVIDER_TYPE="$AUTH_PROVIDER_TYPE")
    fi
    if [[ -n "$HUB_FINETUNING_HEARTBEAT_TIMEOUT" ]]; then
        opts+=(-e APH_HUB_FINETUNING_HEARTBEAT_TIMEOUT="$HUB_FINETUNING_HEARTBEAT_TIMEOUT")
    fi
    # Mount CA certificate if configured
    if [[ -n "$HUB_CA_CERT" ]]; then
        if [[ ! -f "$HUB_CA_CERT" ]]; then
            die "CA certificate file not found: $HUB_CA_CERT" "$ERR_INVALID_CONFIG"
        fi
        opts+=(--volume "${HUB_CA_CERT}:/etc/ssl/certs/custom-ca.crt:ro")
    fi
    # Pass through selected APH_HUB_* environment variables, for example to configure timeouts:
    #   APH_HUB_PENDING_REQUEST_TIMEOUT=6h ./deploy_apherisfold
    for var_name in \
        APH_HUB_PENDING_REQUEST_TIMEOUT \
        APH_HUB_ACCEPTED_REQUEST_TIMEOUT \
        APH_HUB_WEBSOCKET_ENABLE_CORS \
        APH_HUB_MSA_POLL_INTERVAL \
        APH_HUB_MSA_REQUEST_TIMEOUT \
        APH_HUB_MSA_SERVERS \
        APH_HUB_MSA_WORKER_POLL_INTERVAL \
        APH_HUB_MSA_WORKER_COUNT \
        APH_HUB_MSA_LEASE_DURATION \
        APH_HUB_MSA_MAX_ATTEMPTS \
        APH_HUB_MSA_RETRY_BASE_DELAY \
        APH_HUB_MSA_RETRY_MAX_DELAY; do
        local value="${!var_name-}"
        if [[ -n "$value" ]]; then
            opts+=(-e "${var_name}=${value}")
        fi
    done
    echo "Starting ApherisFold Hub container..."
    HUB_PORT="$(start_container_with_port_choice "$HUB_CONTAINER" "hub" "${HUB_PORT-}" 8080 "$image" "${opts[@]}")"
    echo "ApherisFold Hub container port: $HUB_PORT"
}
run_all() {
    docker_login_if_needed
    ensure_network
    ensure_folders
    stop_labeled_containers_except_postgres
    start_postgres_if_needed
    run_model_mock
    run_model_openfold3
    run_model_boltz2
    run_model_protenix
    run_hub
    echo ""
    echo "Running containers:"
    "$DOCKER_CMD" ps --filter "label=$LABEL"
    if [[ "$HUB_ENABLED" == "true" ]]; then
        if [[ -n "$HUB_PORT" ]]; then
            echo ""
            echo "ApherisFold Hub UI is running and accessible at:"
            echo "  http://localhost:${HUB_PORT}"
            echo ""
        else
            echo "Hub is enabled but no host port was determined."
            echo "Check 'docker ps' output above."
        fi
    fi
}
network_has_containers() {
    local net="$1"
    # If network doesn't exist, treat as "no containers"
    if ! silence "$DOCKER_CMD" network inspect "$net" >/dev/null 2>&1; then
        return 1
    fi
    local count
    count="$("$DOCKER_CMD" network inspect "$net" \
        --format '{{ len .Containers }}' 2>/dev/null || echo "0")"
    [[ "$count" -gt 0 ]]
}
cleanup_all() {
    stop_labeled_containers_except_postgres
    if network_has_containers "$NETWORK_NAME"; then
        echo "Keeping network '$NETWORK_NAME' because containers are still attached."
        echo "Attached containers:"
        "$DOCKER_CMD" network inspect "$NETWORK_NAME" --format '{{ range .Containers }}- {{ .Name }}{{"\n"}}{{ end }}' \
            2>/dev/null || true
    else
        echo "Removing docker network '$NETWORK_NAME' (no containers attached)..."
        silence "$DOCKER_CMD" network rm "$NETWORK_NAME" >/dev/null 2>&1 || true
    fi
    echo "Cleanup complete."
}
diagnose() {
    echo "== diagnose =="
    echo "config file: ${CONFIG_FILE}"
    echo ""
    echo "-- hub --"
    echo "hub.enabled: ${HUB_ENABLED}"
    echo "hub.repository: ${HUB_REPOSITORY}"
    echo "hub.tag: ${HUB_TAG}"
    echo "hub.port: $([[ -n "${HUB_PORT}" ]] && echo "${HUB_PORT}" || echo \(empty\))"
    echo "hub.finetuningHeartbeatTimeout: $([[ -n "${HUB_FINETUNING_HEARTBEAT_TIMEOUT}" ]] && echo "${HUB_FINETUNING_HEARTBEAT_TIMEOUT}" || echo \(empty\))"
    echo "hub.caCert: $([[ -n "${HUB_CA_CERT}" ]] && echo "${HUB_CA_CERT}" || echo \(empty\))"
    echo ""
    echo "-- hub auth --"
    echo "hub.auth.enabled: ${AUTH_ENABLED}"
    echo "hub.auth.domain: ${AUTH_DOMAIN}"
    echo "hub.auth.audience: ${AUTH_AUDIENCE}"
    echo "hub.auth.issuer: $([[ -n "${AUTH_ISSUER}" ]] && echo "${AUTH_ISSUER}" || echo \(empty\))"
    echo "hub.auth.providerType: $([[ -n "${AUTH_PROVIDER_TYPE}" ]] && echo "${AUTH_PROVIDER_TYPE}" || echo \(empty\))"
    echo "hub.auth.extraScopes: $([[ -n "${AUTH_EXTRA_SCOPES}" ]] && echo "${AUTH_EXTRA_SCOPES}" || echo \(empty\))"
    echo "hub.auth.clientId: ${AUTH_CLIENT_ID}"
    echo "hub.auth.browserUrl: $([[ -n "${AUTH_BROWSER_URL}" ]] && echo "${AUTH_BROWSER_URL}" || echo \(empty\))"
    echo ""
    echo "-- hub msa --"
    echo "hub.msa.enabled: ${HUB_MSA_ENABLED}"
    echo ""
    echo "-- storage --"
    echo "storage.input: ${INPUT_DIR}"
    echo "storage.output: ${OUTPUT_DIR}"
    echo ""
    echo "-- db --"
    echo "db.deploy: ${DB_DEPLOY}"
    echo "db.dsn: $([[ -n "${DB_DSN}" ]] && echo \(set\) || echo \(empty\))"
    echo "db.port: $([[ -n "${DB_PORT}" ]] && echo "${DB_PORT}" || echo \(empty\))"
    echo ""
    echo "-- models --"
    echo "models.mock.enabled: ${MOCK_ENABLED}"
    echo "models.mock.repository: ${MOCK_REPOSITORY}"
    echo "models.mock.tag: ${MOCK_TAG}"
    echo "models.mock.port: $([[ -n "${MOCK_PORT}" ]] && echo "${MOCK_PORT}" || echo \(empty\))"
    echo "models.mock.capabilities: ${MOCK_CAPABILITIES//$'\n'/, }"
    echo "models.mock.weightsDir: $([[ -n "${MOCK_WEIGHTS_DIR}" ]] && echo "${MOCK_WEIGHTS_DIR}" || echo \(empty\))"
    echo "models.mock.weightsConfigFile: $([[ -n "${MOCK_WEIGHTS_CONFIG}" ]] && echo "${MOCK_WEIGHTS_CONFIG}" || echo \(empty\))"
    echo "models.mock.weightsEnv: $([[ -n "${MOCK_WEIGHTS_ENV}" ]] && echo \(set\) || echo \(empty\))"
    echo ""
    echo "models.openfold3.enabled: ${OF3_ENABLED}"
    echo "models.openfold3.repository: ${OF3_REPOSITORY}"
    echo "models.openfold3.tag: ${OF3_TAG}"
    echo "models.openfold3.port: $([[ -n "${OF3_PORT}" ]] && echo "${OF3_PORT}" || echo \(empty\))"
    echo "models.openfold3.capabilities: ${OF3_CAPABILITIES//$'\n'/, }"
    echo "models.openfold3.weightsDir: $([[ -n "${OF3_WEIGHTS_DIR}" ]] && echo "${OF3_WEIGHTS_DIR}" || echo \(empty\))"
    echo "models.openfold3.weightsConfigFile: $([[ -n "${OF3_WEIGHTS_CONFIG}" ]] && echo "${OF3_WEIGHTS_CONFIG}" || echo \(empty\))"
    echo "models.openfold3.weightsEnv: $([[ -n "${OF3_WEIGHTS_ENV}" ]] && echo \(set\) || echo \(empty\))"
    echo ""
    echo "models.boltz2.enabled: ${B2_ENABLED}"
    echo "models.boltz2.repository: ${B2_REPOSITORY}"
    echo "models.boltz2.tag: ${B2_TAG}"
    echo "models.boltz2.port: $([[ -n "${B2_PORT}" ]] && echo "${B2_PORT}" || echo \(empty\))"
    echo "models.boltz2.capabilities: ${B2_CAPABILITIES//$'\n'/, }"
    echo "models.boltz2.weightsDir: $([[ -n "${B2_WEIGHTS_DIR}" ]] && echo "${B2_WEIGHTS_DIR}" || echo \(empty\))"
    echo "models.boltz2.weightsConfigFile: $([[ -n "${B2_WEIGHTS_CONFIG}" ]] && echo "${B2_WEIGHTS_CONFIG}" || echo \(empty\))"
    echo "models.boltz2.weightsEnv: $([[ -n "${B2_WEIGHTS_ENV}" ]] && echo \(set\) || echo \(empty\))"
    echo ""
    echo "models.protenix.enabled: ${PROTENIX_ENABLED}"
    echo "models.protenix.repository: ${PROTENIX_REPOSITORY}"
    echo "models.protenix.tag: ${PROTENIX_TAG}"
    echo "models.protenix.port: $([[ -n "${PROTENIX_PORT}" ]] && echo "${PROTENIX_PORT}" || echo \(empty\))"
    echo "models.protenix.capabilities: ${PROTENIX_CAPABILITIES//$'\n'/, }"
    echo ""
    echo "-- secrets --"
    echo "pullSecret (registry): $([[ -z "${PULL_SECRET}" || "${PULL_SECRET}" == "..." ]] && echo \(empty\) || echo \(set\))"
    echo ""
    echo "-- network --"
    echo "network: ${NETWORK_NAME}"
    echo ""
    echo "-- docker version --"
    "$DOCKER_CMD" version || true
    echo ""
    echo "-- labeled containers --"
    "$DOCKER_CMD" ps -a --filter "label=${LABEL}" || true
    echo ""
    echo "-- network inspect --"
    "$DOCKER_CMD" network inspect "${NETWORK_NAME}" || true
    echo ""
}
# ---------------------------
# Main
# ---------------------------
parse_args "$@"
validate_args
load_config
case "$CMD" in
    run) run_all ;;
    cleanup) cleanup_all ;;
    cleanup-postgres) cleanup_postgres ;;
    cleanup-storage) cleanup_storage ;;
    diagnose) diagnose ;;
    *)
        die "Unknown command: $CMD. Please use --help for usage information." "$ERR_INVALID_CONFIG"
        ;;
esac
After downloading, make the script executable (Linux/macOS only):
chmod +x deploy_apherisfold
Your directory structure should look like this:
/path/to/apherisfold/
├─ deploy_apherisfold
└─ config.yaml
Configure Your Deployment🔗
Edit the config.yaml file to customize your deployment. Here is the complete configuration file:
hub:
  enabled: true
  port: 8080
  auth:
    # Set enabled: false for single-user mode (no authentication)
    # Set enabled: true and provide values from your identity provider (Auth0, Okta, etc.) - recommended
    # Set enabled: true with empty values to use Apheris Demo Auth0 (localhost only)
    enabled: false
    domain: ""
    issuer: ""
    browserUrl: ""
    clientId: ""
    audience: ""
    extraScopes: ""
    providerType: ""
  # Optional: CA certificate file path (e.g., for custom identity providers)
  # The file will be mounted to /etc/ssl/certs/custom-ca.crt in the container
  # caCert: "/path/to/your/ca.crt"
  # Optional: Finetuning heartbeat timeout
  # finetuningHeartbeatTimeout: "5m"
  msa:
    # Enable global MSA server configuration for the Hub.
    enabled: false
    # Polling interval for asynchronous MSA providers (e.g. colabfold).
    pollInterval: "5s"
    # Per-request timeout for MSA provider calls.
    requestTimeout: "5m"
    # Global MSA server definitions.
    # Required when enabled=true and must contain at least one server.
    servers: []
    # - name: "Public ColabFold"
    #   type: "colabfold"
    #   url: "https://api.colabfold.com"
    #   defaultActive: true
    #   config: {}
    # - name: "NVIDIA ColabFold"
    #   type: "nvidia-colabfold"
    #   url: "https://api.nim.example.com"
    #   defaultActive: false
    #   config:
    #     numberOfSequences: "500"
    #     eValue: "0.0001"
    #     databases:
    #       - "Uniref30_2302"
    #   headers:
    #     - name: "X-Source"
    #       value: "hub"
models:
  # Set enabled=true to pull the model image and run its container
  # Set enabled=false to skip the model (it will not be pulled or started)
  # Boltz2 and Protenix are disabled by default
  mock:
    enabled: true
    port: 7771
  openfold3:
    enabled: true
    port: 7779
  boltz2:
    enabled: false
    port: 7773
  protenix:
    enabled: false
    port: 7775
  # Capabilities (optional, supported values: "inference", "finetuning"):
  # If not specified, the model wrapper uses defaults based on the model type:
  #   - mock, openfold3: inference + finetuning
  #   - boltz2, protenix: inference
  # Override models.<name>.capabilities only when you need the wrapper to advertise different capabilities.
  #
  # Example:
  #   openfold3:
  #     capabilities:
  #       - inference
  #       - finetuning
  # Custom Weights (optional, supported for OpenFold3 and Boltz-2):
  #   Method 1: weightsDir + weightsConfigFile (for JSON config file)
  #   Method 2: weightsDir + weightsEnv (for inline JSON)
  # See README.md or https://www.apheris.com/docs/hub/apherisfold-application.html#customizing-model-weights
db:
  # true = deploy local Postgres in Docker
  # false = use external Postgres and set dsn below
  deploy: true
  # Leave empty when deploy=true (auto-generated)
  # For external DBs (e.g. RDS), specify a DSN explicitly:
  #   dsn: "postgres://user:pass@host:5432/dbname?sslmode=require"
  # Note: Special characters in username/password must be percent-encoded
  dsn: ""
# Apheris-provided base64-encoded "user:token" for pulling model images from a container registry
pullSecret: "..."
Pull Secret🔗
The Apheris Hub API Key is used for authenticating with Quay.io to pull Docker images for the Hub and models.
Request your Apheris Hub API Key from https://www.apheris.com/applications/apherisfold or contact support@apheris.com.
Warning
Keep your API Key or pull secret secure. Do not commit it to version control or share it publicly.
Add the API Key directly to your config.yaml:
pullSecret: "your-apheris-api-key"
Using a Private Registry🔗
If you're hosting the Apheris Hub images in your own private registry, you need to provide a base64-encoded pull secret in the format username:password.
Encode your registry credentials:
echo -n "username:password" | base64
Configure in config.yaml:
pullSecret: "base64-encoded-private-registry-credentials"
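To sanity-check the value before pasting it into config.yaml, you can round-trip it through base64. The credentials below are placeholders:

```shell
# Placeholder credentials; use your real registry user and password/token.
SECRET="$(printf '%s' 'alice:s3cret-token' | base64)"
echo "$SECRET"

# Decode to confirm no stray newline or truncation crept in:
printf '%s' "$SECRET" | base64 -d; echo   # alice:s3cret-token
```

Using printf '%s' (rather than a bare echo) guarantees no trailing newline is encoded into the secret.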
Additionally, update the image repositories in your configuration to point to your private registry:
hub:
  repository: "registry.example.com/apheris/hub"
models:
  openfold3:
    repository: "registry.example.com/apheris/hub-apps"
Database🔗
The Apheris Hub requires a PostgreSQL database to store job metadata, user information, and application state. You can choose between two options:
- Use the built-in PostgreSQL container (default)
- Connect to an external database instance (recommended for production)
Using the Built-in PostgreSQL Container🔗
By default, the deployment script creates and manages a local PostgreSQL container automatically. This is suitable for development, testing, and small production deployments.
db:
  deploy: true
The built-in database:
- Persists data in a Docker volume
- Runs on the same Docker network as the Hub
Using an External PostgreSQL Database🔗
For production deployments, or when using managed database services such as AWS RDS, Azure Database for PostgreSQL, or Google Cloud SQL, connect to an external PostgreSQL instance.
To use an external database, set db.deploy=false and provide a connection string in the db.dsn field, for example:
db:
  deploy: false
  dsn: "postgres://username:password@hostname:5432/database?sslmode=require"
Percent-Encoding Required for Special Characters
If your database username or password contains special characters (like @, :, /, #, ?, &, =), you must percent-encode them in the DSN.
Example: With password p@ss:word/123, the DSN becomes:
postgres://user:p%40ss%3Aword%2F123@host:5432/db?sslmode=require
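Rather than encoding by hand, you can generate the encoding with a small helper. The function below is our own sketch (bash, not part of the deployment script); it passes RFC 3986 unreserved characters through and percent-encodes everything else:

```shell
# Percent-encode a string for use inside a DSN. Unreserved characters
# (letters, digits, . ~ _ -) pass through; everything else becomes %XX.
urlencode() {
    local s="$1" out="" c i
    for (( i = 0; i < ${#s}; i++ )); do
        c="${s:i:1}"
        case "$c" in
            [a-zA-Z0-9.~_-]) out+="$c" ;;
            *) printf -v c '%%%02X' "'$c"; out+="$c" ;;
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'p@ss:word/123'   # p%40ss%3Aword%2F123
```

Note this sketch handles ASCII passwords; multi-byte characters would need byte-wise encoding.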
Make sure to set the SSL mode required by your database provider. For AWS RDS and most managed PostgreSQL services, use sslmode=require, which enforces SSL but does not verify the server certificate.
Database Requirements🔗
Your external PostgreSQL database must:
- Have a database created for the Apheris Hub (e.g., apheris_hub)
- Grant the configured user full access to the database with permissions: CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER
- Be accessible from the Docker host (verify network connectivity and firewall rules)
The Hub will automatically create the required tables and schema on first startup.
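The requirements above could be satisfied with statements along these lines, run as a superuser against your instance. This is a sketch only: the database, role, and password names are placeholders, and your provider may prefer its own user-management tooling.

```sql
-- Placeholder names; adjust to your environment and rotate the password.
CREATE DATABASE apheris_hub;
CREATE USER hub_user WITH PASSWORD 'change-me';
GRANT ALL PRIVILEGES ON DATABASE apheris_hub TO hub_user;
```

On PostgreSQL 15+, the user may additionally need CREATE on the public schema (GRANT CREATE ON SCHEMA public TO hub_user;) so the Hub can create its tables.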
Models🔗
Toggle each model under models.<name>.enabled. Change default ports if needed. To enable just OpenFold3, set:
models:
  mock:
    enabled: false
  openfold3:
    enabled: true
  boltz2:
    enabled: false
Capabilities and scopes🔗
models.<name>.capabilities sets the scopes available for that model deployment.
Supported values are inference, which covers prediction and benchmarking, and finetuning.
OpenFold3 supports both scopes, while Boltz-2 and Protenix support inference only.
For custom weights, set model_scope on each weight entry so the Hub can determine whether that weight supports inference (prediction and benchmarking), finetuning, or both.
Example:
models:
  openfold3:
    enabled: true
    capabilities:
      - inference
      - finetuning
  boltz2:
    enabled: true
    capabilities:
      - inference
Note
Deploying multiple instances of a model with different scopes is currently not supported with Docker. Please use the Kubernetes deployment for that.
Storage🔗
The Apheris Hub uses two storage directories for job inputs and outputs. These directories are mounted as Docker volumes into all Hub components (the Hub itself and all enabled models), allowing them to share data.
Default Configuration🔗
By default, storage directories are created in your home directory:
storage:
  input: "$HOME/apheris-hub/input"
  output: "$HOME/apheris-hub/output"
Custom Storage Locations🔗
You can configure custom paths for input and output directories. This is useful when:
- You want to use a specific directory for organizing your data
- You need to use a mounted network storage or external drive
- You want separate storage for different deployments
Please add the following to your config.yaml, replacing the paths with your desired locations:
storage:
  input: "/mnt/data/apheris/input"
  output: "/mnt/data/apheris/output"
Storage Usage🔗
Input directory🔗
The Hub places job input files in the input directory where models can access them.
If working directly with a model, place your job input files here (e.g., protein sequences, configuration files) so the models will read from this location.
Output directory🔗
The models write job results here. Each job creates a subdirectory with its results that the Hub can read and display.
Important Notes
- The deployment script automatically creates storage directories if they don't exist
- Ensure the directories are writable by the user running the deployment script
- Both directories are mounted as Docker volumes with read-write access
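The writability note above can be checked from the host before deploying. This snippet uses the default paths; adjust them if you configured custom locations:

```shell
# Create the default storage directories (a no-op if they already exist)
# and confirm the current user can write to them.
for d in "$HOME/apheris-hub/input" "$HOME/apheris-hub/output"; do
    mkdir -p "$d"
    if [ -w "$d" ]; then
        echo "writable: $d"
    else
        echo "NOT writable: $d"
    fi
done
```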
Hub Settings🔗
You can redefine the Hub's port and enable or disable authentication settings:
hub:
  port: 8080
  auth:
    enabled: false
See Authentication Setup for detailed authentication configuration.
Fine-tuning Settings🔗
Configure fine-tuning heartbeat timeout under hub:
hub:
  finetuningHeartbeatTimeout: "10m"
Leave it empty to use the Hub default (5m).
Fine-tuning is currently supported for OpenFold3 deployments. Boltz-2 and Protenix support inference only.
For custom weights, set model_scope on each weight entry so the Hub can determine whether that weight supports inference (prediction and benchmarking), finetuning, or both.
Example:
models:
  openfold3:
    enabled: true
    capabilities:
      - inference
      - finetuning
    weightsDir: "/path/to/weights"
    weightsEnv: '[{"model_type":"openfold3","version":"3.0.0-custom","description":"Custom OpenFold3 weights","model_scope":["inference","finetuning"],"mounted_path":"/weights/openfold3/custom"}]'
MSA Server Configuration🔗
MSA servers are deployment-managed and global. Administrators define them in config.yaml, and users can only select one of the configured servers (or opt out and upload .a3m files manually).
Supported MSA server types:
| Provider | Type identifier | Notes |
|---|---|---|
| ColabFold | colabfold | Supports self-hosted deployments and public servers |
| NVIDIA NIM ColabFold | nvidia-colabfold | Requires a deployed NVIDIA NIM MSA Search service |
Configure global MSA behavior under hub.msa:
hub:
  msa:
    enabled: true
    pollInterval: "10s"
    requestTimeout: "10m"
    servers:
      - name: "Public ColabFold"
        type: "colabfold"
        url: "https://api.colabfold.com"
        defaultActive: true
        config: {}
      - name: "NVIDIA ColabFold"
        type: "nvidia-colabfold"
        url: "https://api.nim.example.com"
        defaultActive: false
        config:
          numberOfSequences: "500"
          eValue: "0.0001"
          databases:
            - "Uniref30_2302"
        headers:
          - name: "X-Api-Key"
            value: "replace-me"
When hub.msa.enabled=true, set at least one entry in hub.msa.servers.
Use defaultActive: true on exactly one server to define the deployment-level fallback server for new users and for users whose stored active selection no longer resolves.
If a user explicitly disabled MSA usage, fallback is not applied for that user.
MSA Server Headers🔗
Use servers[].headers to send provider-specific headers (for example API keys or tenant IDs) with every request to that server.
hub:
  msa:
    enabled: true
    servers:
      - name: "NVIDIA ColabFold"
        type: "nvidia-colabfold"
        url: "https://api.nim.example.com"
        config:
          numberOfSequences: "500"
        headers:
          - name: "X-Api-Key"
            value: "replace-me"
          - name: "X-Client-Id"
            value: "my-client-id"
For sensitive values, prefer injecting the full server list at runtime via APH_HUB_MSA_SERVERS.
The deploy_apherisfold script also supports one-off environment variable overrides for MSA settings:
- APH_HUB_MSA_ENABLED
- APH_HUB_MSA_POLL_INTERVAL
- APH_HUB_MSA_REQUEST_TIMEOUT
- APH_HUB_MSA_SERVERS
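For example, to keep an API key out of config.yaml, export the server list in the shell that runs the deployment script. The JSON below mirrors the hub.msa.servers YAML entries; treating the mapping as one-to-one is our assumption, and all names, the URL, and the key are placeholders:

```shell
# Placeholder server definition; assumed to mirror hub.msa.servers 1:1.
export APH_HUB_MSA_ENABLED=true
export APH_HUB_MSA_SERVERS='[{"name":"NVIDIA ColabFold","type":"nvidia-colabfold","url":"https://api.nim.example.com","defaultActive":true,"headers":[{"name":"X-Api-Key","value":"replace-me"}]}]'
```

Then run ./deploy_apherisfold as usual; the script forwards these variables into the Hub container.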
SSL/TLS Termination with nginx (Reverse Proxy)🔗
The Apheris Hub Docker container does not have built-in SSL/TLS support. If you need HTTPS access, you should place an nginx reverse proxy in front of the Hub container to handle SSL termination.
Important: File Upload Configuration
When using nginx as a reverse proxy, you must configure client_max_body_size to support large file uploads (MSA files can be several GB). By default, nginx limits request body size to 1 MB, which will cause 413 Payload Too Large errors.
Required nginx Configuration🔗
Add these directives to your nginx server block that proxies to the Hub:
# Allow unlimited file uploads for scientific data (MSA files can be several GB)
# Set to a specific size like "500m" if you need to impose limits
client_max_body_size 0;
# Timeouts for large file uploads (5 minutes)
client_body_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
where:
- client_max_body_size 0; allows unlimited uploads. The Hub API already has a 34-minute timeout for ~5GB files over 20 Mbps networks.
- The timeout values of 300 seconds (5 minutes) accommodate most large file transfers.
After updating your nginx configuration, test configuration syntax and reload nginx to apply changes:
nginx -t
nginx -s reload
For complete nginx reverse proxy setup including SSL certificates and additional security headers, see the nginx reverse proxy documentation.
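Putting the directives together, a minimal HTTPS server block might look like the sketch below. The hostname, certificate paths, and upstream port are placeholders, and the WebSocket upgrade headers are included on the assumption that the Hub UI uses WebSockets (the deployment script exposes APH_HUB_WEBSOCKET_ENABLE_CORS):

```nginx
server {
    listen 443 ssl;
    server_name hub.example.com;                   # placeholder

    ssl_certificate     /etc/nginx/certs/hub.crt; # placeholder
    ssl_certificate_key /etc/nginx/certs/hub.key; # placeholder

    # Upload settings from above
    client_max_body_size 0;
    client_body_timeout 300s;

    location / {
        proxy_pass http://localhost:8080;          # Hub default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        proxy_read_timeout 300s;
        # WebSocket upgrade (assumption: needed for live job updates)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```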
Custom CA Certificates🔗
If your identity provider or external services use TLS certificates signed by a custom Certificate Authority, set hub.caCert to an absolute path:
hub:
  caCert: "/absolute/path/to/your-ca.crt"
The file will be mounted to /etc/ssl/certs/custom-ca.crt in the Hub container and automatically trusted alongside system CAs.
After updating your config, re-run the deployment script with ./deploy_apherisfold so the container picks up the new mount (see Start the Deployment).
Verify the mount🔗
The Hub Docker image is based on scratch, so it has no shell. Please perform these host-side checks to verify that the CA certificate is correctly mounted:
# Confirm the mount exists
docker inspect apheris-hub --format json | jq -r '.[] | .Mounts[] | select(.Destination == "/etc/ssl/certs/custom-ca.crt") | "\(.Destination) -> \(.Source)"'
# Copy out the file and inspect it
docker cp apheris-hub:/etc/ssl/certs/custom-ca.crt /tmp/custom-ca.crt
openssl x509 -in /tmp/custom-ca.crt -noout -subject -issuer
Authentication Setup🔗
The Apheris Hub supports OAuth 2.0 / OpenID Connect (OIDC) authentication to validate JWTs, secure access, and isolate jobs per user by the email claim. Use the Authentication Setup guide for identity provider requirements and provider-specific guides for Auth0, Microsoft Entra, ForgeRock, and Dex.
Authentication Scenarios🔗
Scenario 1: Single-User Mode (No Authentication)🔗
Best for: Personal use, testing, or environments where all users can share data
To disable authentication and run the Hub in single-user mode without access control, set:
hub:
  auth:
    enabled: false
Scenario 2: Multi-User Mode with Your Own Identity Provider (Recommended)🔗
Best for: Production deployments, organizations with existing identity management
Why recommended: Full control over user management, compliance with your security policies, and production-grade reliability
To enable multi-user mode with your own identity provider:
- Configure the provider itself by following the appropriate guide.
- Take the values you obtain there (domain, audience, clientId, and, if required, issuer, extraScopes, browserUrl) and set them under hub.auth in your config.yaml, as shown in the examples in the Authentication Setup guide.
This keeps the Docker deployment, the Hub backend, and your identity provider in sync.
Scenario 3: Try Multi-User Mode with Apheris Demo Auth0🔗
Best for: Quickly evaluating authentication features before setting up your own identity provider
Localhost Only
This option uses a shared Apheris-managed Auth0 tenant for demonstration purposes. It is only configured for localhost deployment (ports 8080 or 8081) and is intended for evaluation only.
If you need to serve the Hub on a custom domain or endpoint and require authentication configuration assistance, please contact support@apheris.com.
To enable multi-user mode using Apheris demo Auth0 credentials, set:
hub:
  auth:
    enabled: true
    domain: ""
    audience: ""
    clientId: ""
When you set enabled: true and leave the other values empty, the deployment script automatically uses Apheris demo Auth0 credentials. This allows you to experience the authentication flow and user segregation features without setting up your own identity provider.
After evaluating, we strongly recommend switching to your own identity provider (Scenario 2) for any real usage. See the Authentication Setup guide for Identity Provider Requirements and detailed configuration steps.
Start the Deployment🔗
Run the deployment script:
./deploy_apherisfold
The script automatically creates the storage directories, the Docker network, and a local PostgreSQL container (if enabled), then starts all enabled models and the Hub.
After starting, the Hub will be available at the configured port (default: localhost:8080).
Available Commands🔗
The script supports several commands for managing your deployment that are listed below.
Run🔗
Info
Running ./deploy_apherisfold without any arguments defaults to the run command.
Start or restart the deployment:
./deploy_apherisfold run
Or simply:
./deploy_apherisfold
Ensures the network and folders exist, stops any previously labeled containers, starts PostgreSQL (if enabled), then the enabled models, and finally the Hub.
Cleanup🔗
Stop and remove all containers:
./deploy_apherisfold cleanup
Stops and removes all hub/model containers and the network (unless other containers still use it). Database data persists.
Cleanup Storage🔗
Warning
This permanently deletes all input files and job results. After running this command, the Hub will not be able to display previous job results or access any stored data. Only use this when you need to completely wipe the state and start fresh.
Remove storage folders:
./deploy_apherisfold cleanup-storage
Removes the storage folders defined in the config file, deleting all persisted inputs and outputs.
Cleanup PostgreSQL🔗
Warning
This permanently deletes the PostgreSQL database, including all job metadata, job history, and user information. After running this command, the Hub will have no record of previously run jobs. Only use this when you need to completely wipe the state and start fresh.
Remove PostgreSQL container and data:
./deploy_apherisfold cleanup-postgres
Removes the local PostgreSQL container and destroys the persisted database state (only when db.deploy=true).
Diagnose🔗
View deployment configuration and status:
./deploy_apherisfold diagnose
Prints the parsed configuration, Docker status for relevant images, and the current network/container layout to help troubleshoot issues.
To save the output to a file for sharing with support, please run:
./deploy_apherisfold diagnose > diagnose.txt
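diagnose already masks the DSN and pull secret (it prints (set) or (empty) rather than the values), but if you are unsure whether other sensitive strings ended up in the output, a quick redaction pass before sharing doesn't hurt. The helper below is our own sketch, not part of the script, and its patterns are illustrative (GNU sed):

```shell
# Mask the value after any key that looks like a credential (illustrative
# patterns only; review the redacted file manually before sharing).
redact_secrets() {
    sed -E 's/((password|token|secret)[=:][[:space:]]*)[^[:space:]]+/\1REDACTED/Ig'
}

# Usage: ./deploy_apherisfold diagnose | redact_secrets > diagnose.txt
```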
Help🔗
View help information on the commands and options usage:
./deploy_apherisfold --help
Common Workflows🔗
Redeploy while keeping state🔗
Use the following command to redeploy the Apheris Hub while retaining all existing data, configurations, and job history:
./deploy_apherisfold
Redeploy while entirely removing state🔗
Warning
This workflow permanently deletes all data, including job inputs, outputs, database records, and job history. The Hub will have no record of previous jobs and cannot display any past results. Only use this when you need to completely wipe the state and start fresh.
Use the following sequence of commands to completely remove all existing data, configurations, and job history before redeploying:
./deploy_apherisfold cleanup
./deploy_apherisfold cleanup-storage
./deploy_apherisfold cleanup-postgres
./deploy_apherisfold
Use diagnose whenever you need to confirm what ports and configurations the script is applying.
Support🔗
When requesting support, it's helpful to provide diagnostic information about your deployment. Generate a diagnostic file before contacting support:
./deploy_apherisfold diagnose > diagnose.txt
This creates a diagnose.txt file containing your deployment configuration, Docker status, and network layout.
To report issues, request assistance, or ask about advanced deployment options, contact support@apheris.com and attach the diagnose.txt file so the support team can troubleshoot more efficiently.