Apache NiFi 2.0.0 with Vault TLS Certificate Management

An end-to-end guide for installing Apache NiFi 2.0.0 on Ubuntu 22.04 with automated TLS via HashiCorp Vault.

Hey everyone!

I've been using Apache NiFi for over a decade, and it's one of the most incredible tools I've ever used. The ability to build complex data pipelines and workflows is impressive, and the community is fantastic. Since the 2.0.0 release, I've been doing some exciting work with Python processors in NiFi, and I think it belongs in every data engineer's toolkit.

As I don't ever like to do something halfway, I set up an enterprise-level environment in my homelab for various projects. This post is my effort to document my steps to set up a production-ready environment for Apache NiFi 2.0.0.

Another reason I wanted to document this is that I had to pull from many different sources to get this working and had to go through a bunch of trial and error, and I'm hoping this post will save you some time. Please note that this is a work in progress, and I will update it as I go. Feel free to contact me on X with any questions or feedback.

If you follow the steps below, you will have a production-ready environment for Apache NiFi 2.0.0 with automated TLS certificate management via Vault, proxy via Traefik, and monitoring via Uptime Kuma.

This should give you a good starting point for your own environment. If this post is helpful, I'd be happy to write follow-up posts diving deeper into any part of this setup, like integrating with a GitHub registry service or some of the more interesting things I'm doing with NiFi.

Selfishly, I'm preparing to use this environment for a large Neo4j data pipeline for knowledge graphs and GenAI. Documenting as I go will help me remember what I did to get this working and to communicate with others about the project.

Best of luck with your setup!


Below is a comprehensive, end-to-end guide for setting up your Apache NiFi 2.0.0 instance with automated TLS certificate management via Vault on a fresh Ubuntu 22.04 server.

This document covers every step: system updates and dependency installation, configuring SDKMAN with OpenJDK 21, establishing directory structures and permissions, configuring Vault (including AppRole and PKI roles), setting up the Vault Agent (with all required scripts and systemd services), installing and tuning NiFi, and finally setting up monitoring via Uptime Kuma.

All internal hostnames, credentials, and sensitive values are represented by placeholders so that you can customize them to your environment.

Note: This guide assumes that you already have a running Vault server with a PKI secrets engine mounted (we'll use the placeholder pki_int) and that you can create the required AppRole and PKI roles. The guide uses generic domain names (e.g. vault.internal-domain.com, nifi.internal-domain.com, uptime.internal-domain.com) rather than your internal URLs. The monitoring section uses "push" scripts that work with Uptime Kuma; replace the PUSH_URL placeholders with your actual Uptime Kuma push monitor URLs. This guide also documents where secret values (passwords, secret IDs, etc.) must be placed, with clear instructions on which file each setting belongs in.
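Before diving in, it's worth sanity-checking those prerequisites from any machine with the vault CLI. A quick check, using the placeholder address and mount names from this guide (adjust to your environment):

```shell
# Confirm Vault is reachable and unsealed, and that the expected
# mounts exist (placeholder address and mount names).
export VAULT_ADDR="https://vault.internal-domain.com:8200"

vault status                          # should report Sealed: false
vault secrets list | grep pki_int     # PKI engine mounted at pki_int
vault auth list | grep approle        # AppRole auth method enabled
```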

1. System Setup on Ubuntu 22.04

  1. Download Ubuntu 22.04 LTS Visit ubuntu.com/download and download the ISO.

  2. Create a Bootable USB Use a tool like Etcher to write the ISO to a USB drive.

  3. Install Ubuntu Boot from the USB drive and follow the installation wizard. Choose your preferred language, set up your username (we'll assume serveradmin), configure partitions, etc.

  4. Reboot and log in.

2. Install Dependencies & Update the System

After installation, open a terminal and run:

sudo apt update && sudo apt upgrade -y
sudo apt autoremove -y

Then install essential packages:

sudo apt install -y build-essential tree unzip zip python3 python3-pip python3-venv lvm2 libssl-dev libffi-dev curl jq

3. Install SDKMAN & OpenJDK 21

Note: Although SDKMAN is not the very first step, it must be installed before Java-dependent software (like NiFi) is run.

  1. Install SDKMAN:

    curl -s "https://get.sdkman.io" | bash
    source "$HOME/.sdkman/bin/sdkman-init.sh"
    
  2. Install Temurin OpenJDK 21.0.5:

    sdk install java 21.0.5-tem
    
  3. Set as Default:

    sdk default java 21.0.5-tem
    
  4. Configure Your Shell (the commands below target ~/.zshrc; substitute ~/.bashrc if you use Bash):

    echo 'export JAVA_HOME="$HOME/.sdkman/candidates/java/current"' >> ~/.zshrc
    echo 'export PATH="$JAVA_HOME/bin:$PATH"' >> ~/.zshrc
    source ~/.zshrc
    
  5. Verify Installation:

    echo $JAVA_HOME
    java -version
    

4. Vault Agent Directory Structure & Permissions

I chose HashiCorp Vault to manage the TLS certificates for NiFi, and I use it to manage all of the secrets for this project. It's likely overkill for a homelab, but I wanted to tinker with it and document what many teams do in their production environments. The payoff is a rock-solid system for managing my certificates, keeping them up to date, and rotating them as needed.

Create your working directories:

mkdir -p ~/projects/nifi
mkdir -p ~/projects/vault-agent/{certs,conf,logs,monitoring,systemd,vault-templates}
mkdir -p ~/projects/vault-agent/certs/{scripts,nifi-trust-store/certs}

Your vault-agent directory should resemble the following:

/home/serveradmin/projects/vault-agent/
├── certs
│   ├── nifi.p12           # Generated keystore
│   ├── nifi-trust-store
│   │   ├── certs
│   │   │   ├── ca_chain.crt
│   │   │   ├── ca.crt
│   │   │   ├── nifi.crt
│   │   │   └── nifi.key
│   │   └── nifiTrustStore.jks
│   └── scripts
│       ├── bootstrap.sh
│       ├── combined-import.sh
│       ├── monitor.sh        (optional)
│       ├── wrapper-import.sh
│       └── tests/...
├── conf
│   ├── vault-agent.hcl
│   ├── vault_agent_role_id
│   └── vault_agent_secret_id
├── logs
│   ├── bootstrap.log
│   ├── certificate_monitor.log
│   ├── combined_import.log
│   ├── log_monitor.log
│   ├── monitor.log
│   ├── monitor_state.json
│   ├── service_monitor.log
│   ├── vault-agent.error.log
│   └── vault-agent.log
├── monitoring
│   ├── check_certificates.sh
│   ├── check_logs.sh
│   └── check_services.sh
├── systemd
│   ├── vault-agent-monitor.service
│   ├── vault-agent.service
│   └── vault-agent-wrapper.service
├── vault-agent.pid
└── vault-templates
    ├── ca_chain.tpl
    ├── ca.tpl
    ├── cert.tpl
    └── key.tpl

Set permissions as follows:

sudo chown -R serveradmin:serveradmin ~/projects
sudo chmod 755 ~/projects
sudo chmod -R 700 ~/projects/vault-agent/certs

5. Vault Server Prerequisites

Ensure your Vault server (at a placeholder URL like https://vault.internal-domain.com:8200) is configured with:

5.1 PKI Engine

  • Mount the PKI engine (if not already mounted) at pki_int.

5.2 PKI Role

Create a PKI role for issuing certificates to NiFi:

vault write pki_int/roles/internal-domain-dot-com \
    allow_bare_domains=true \
    allow_ip_sans=true \
    allow_localhost=true \
    allow_subdomains=true \
    allow_wildcard_certificates=true \
    allowed_domains="internal-domain.com" \
    client_flag=true \
    key_bits=2048 \
    key_type="rsa" \
    key_usage="DigitalSignature,KeyAgreement,KeyEncipherment" \
    max_ttl=336h \
    ttl=168h \
    require_cn=true \
    use_csr_common_name=true \
    use_csr_sans=true

(Replace internal-domain.com with your domain—in our example, it is used for NiFi certificates.)
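Once the role exists, issuing a throwaway certificate is an easy smoke test. A sketch using the placeholder role and domain (the cert is short-lived and simply discarded; requires jq and openssl):

```shell
# Issue a 1-hour cert against the new role and print its subject
# and validity window.
vault write -format=json pki_int/issue/internal-domain-dot-com \
    common_name="nifi.internal-domain.com" ttl=1h \
  | jq -r '.data.certificate' \
  | openssl x509 -noout -subject -dates
```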

5.3 AppRole

Create an AppRole for the Vault Agent:

vault write auth/approle/role/internal-domain-dot-com \
    bind_secret_id=true \
    secret_id_bound_cidrs= \
    secret_id_num_uses=0 \
    secret_id_ttl=0 \
    token_bound_cidrs= \
    token_explicit_max_ttl=0 \
    token_max_ttl=3600 \
    token_no_default_policy=false \
    token_num_uses=0 \
    token_period=3600 \
    token_policies="certs" \
    token_ttl=1800 \
    token_type="default"

Verify with:

vault read auth/approle/role/internal-domain-dot-com -format=json
vault read pki_int/roles/internal-domain-dot-com
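Also note the role's role_id now. It is a UUID generated by Vault (not the role name), and it's the value you'll place on the NiFi server in Section 6.3:

```shell
# Print the UUID role_id for the AppRole (placeholder role name).
vault read -field=role_id auth/approle/role/internal-domain-dot-com/role-id
```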

6. Vault Agent Configuration on the NiFi Server

6.1 Directory Layout & Key Files

Your Vault Agent directory should be set up as detailed in Section 4. Confirm the following:

  • Configuration files in ~/projects/vault-agent/conf/
  • Template files in ~/projects/vault-agent/vault-templates/
  • Scripts in ~/projects/vault-agent/certs/scripts/

6.2 Vault Agent Configuration File & Templates

File: ~/projects/vault-agent/conf/vault-agent.hcl

Important: Replace any instance of generic placeholders such as vault.internal-domain.com or nifi.internal-domain.com with your actual domain names. Secret passwords (e.g., keystore/truststore passwords) are placeholders (e.g., <YOUR_KEYSTORE_PASS>) and should be replaced by your secure values.

pid_file = "/home/serveradmin/projects/vault-agent/vault-agent.pid"

vault {
  address        = "https://vault.internal-domain.com:8200"
  tls_skip_verify = true   # prefer false once this host trusts your Vault CA
}

auto_auth {
  method {
    type = "approle"
    config = {
      role_id_file_path   = "/home/serveradmin/projects/vault-agent/conf/vault_agent_role_id"
      secret_id_file_path = "/home/serveradmin/projects/vault-agent/conf/vault_agent_secret_id"
    }
  }
  sink {
    type = "file"
    config = {
      path = "/home/serveradmin/projects/vault-agent/conf/vault_agent_token"
    }
  }
}

template {
  source      = "/home/serveradmin/projects/vault-agent/vault-templates/cert.tpl"
  destination = "/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/certs/nifi.crt"
}
template {
  source      = "/home/serveradmin/projects/vault-agent/vault-templates/key.tpl"
  destination = "/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/certs/nifi.key"
}
template {
  source      = "/home/serveradmin/projects/vault-agent/vault-templates/ca.tpl"
  destination = "/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/certs/ca.crt"
}
template {
  source      = "/home/serveradmin/projects/vault-agent/vault-templates/ca_chain.tpl"
  destination = "/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/certs/ca_chain.crt"
}
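Before wiring this into systemd (Section 6.5), you can run the agent once in the foreground to confirm that authentication and template rendering work. A one-off debug run, assuming the paths above:

```shell
# Foreground debug run; watch for a successful AppRole login and four
# rendered templates, then Ctrl-C to stop.
VAULT_ADDR="https://vault.internal-domain.com:8200" VAULT_SKIP_VERIFY=true \
  vault agent -config="$HOME/projects/vault-agent/conf/vault-agent.hcl" -log-level=debug
```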

Templates:

  • cert.tpl
{{- with secret "pki_int/issue/internal-domain-dot-com" "common_name=nifi.internal-domain.com" "alt_names=nifi.internal-domain.com" -}}
{{ .Data.certificate }}
{{- end -}}
  • key.tpl
{{- with secret "pki_int/issue/internal-domain-dot-com" "common_name=nifi.internal-domain.com" "alt_names=nifi.internal-domain.com" -}}
{{ .Data.private_key }}
{{- end -}}
  • ca.tpl
{{ with secret "pki_int/issue/internal-domain-dot-com" "common_name=nifi.internal-domain.com" }}
{{ .Data.issuing_ca }}
{{ end }}
  • ca_chain.tpl
{{- with secret "pki_int/issue/internal-domain-dot-com" "common_name=nifi.internal-domain.com" -}}
{{- range .Data.ca_chain -}}
{{ . }}
{{ "\n" }}
{{- end -}}
{{- end -}}

6.3 AppRole Credentials and PKI Role Settings

Place your AppRole credentials on the NiFi server:

vault read -field=role_id auth/approle/role/internal-domain-dot-com/role-id > ~/projects/vault-agent/conf/vault_agent_role_id
echo "<YOUR_SECRET_ID>" > ~/projects/vault-agent/conf/vault_agent_secret_id
sudo chmod 600 ~/projects/vault-agent/conf/vault_agent_*
sudo chown serveradmin:serveradmin ~/projects/vault-agent/conf/vault_agent_*

(Replace <YOUR_SECRET_ID> with the actual secret ID obtained from Vault. Note that the role ID is a UUID read from Vault, not the role name. We will not include real credentials in public documentation.)

6.4 Automation Scripts

These scripts handle various tasks—from verifying Vault connectivity and rotating AppRole credentials to creating keystores/truststores and monitoring for certificate changes.

Place the following scripts under ~/projects/vault-agent/certs/scripts/.

bootstrap.sh

This script bootstraps the Vault Agent environment. It validates the connection to Vault, checks and rotates the AppRole secret if needed, sets up necessary directories, cleans up old backups and logs, and finally launches the Vault Agent. It also supports a mode to generate certificates only when run with the --generate-certs flag.

#!/bin/bash
# File: certs/scripts/bootstrap.sh
set -eo pipefail

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/bootstrap.log"
VAULT_CONFIG_PATH="${BASE_DIR}/conf/vault-agent.hcl"
ROLE_ID_PATH="${BASE_DIR}/conf/vault_agent_role_id"
SECRET_ID_PATH="${BASE_DIR}/conf/vault_agent_secret_id"
APPROLE_NAME="internal-domain-dot-com"

export VAULT_ADDR="https://vault.internal-domain.com:8200"
export VAULT_SKIP_VERIFY=true

mkdir -p "$(dirname "$LOG_FILE")"
exec 1> >(tee -a "$LOG_FILE") 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }
error() { log "ERROR: $1"; exit 1; }

validate_vault_connection() {
    log "Validating Vault connection..."
    response=$(curl -k --silent --show-error "${VAULT_ADDR}/v1/sys/health")
    if [[ $? -ne 0 || -z "$response" ]]; then
        error "Cannot connect to Vault at ${VAULT_ADDR}"
    fi
    if ! echo "$response" | grep -q '"sealed":false'; then
        error "Vault is sealed"
    fi
}

validate_role_id() {
    log "Validating Role ID..."
    if [[ ! -f "$ROLE_ID_PATH" || ! -s "$ROLE_ID_PATH" ]]; then
        error "Role ID file missing or empty at $ROLE_ID_PATH"
    fi
    if ! vault read "auth/approle/role/${APPROLE_NAME}" >/dev/null 2>&1; then
        error "Role ${APPROLE_NAME} does not exist in Vault"
    fi
}

generate_secret_id() {
    log "Generating new Secret ID..."
    temp_file=$(mktemp)
    chmod 600 "$temp_file"
    retries=3
    success=false
    delay=5
    for ((i=1; i<=retries; i++)); do
        if vault write -format=json -f "auth/approle/role/$APPROLE_NAME/secret-id" > "$temp_file" 2>/dev/null; then
            success=true
            break
        else
            log "Attempt $i failed. Retrying in $delay seconds..."
            sleep $delay
            delay=$(( delay * 2 ))
        fi
    done
    if ! $success; then
        rm -f "$temp_file"
        error "Failed to generate secret_id after $retries attempts"
    fi
    secret_id=$(jq -r '.data.secret_id' "$temp_file")
    rm -f "$temp_file"
    if [[ -z "$secret_id" || "$secret_id" == "null" ]]; then
        error "Failed to extract valid secret_id"
    fi
    [[ -f "$SECRET_ID_PATH" ]] && cp "$SECRET_ID_PATH" "${SECRET_ID_PATH}.$(date +%Y%m%d_%H%M%S).bak" || true
    echo "$secret_id" > "$SECRET_ID_PATH"
    chmod 600 "$SECRET_ID_PATH"
    chown serveradmin:serveradmin "$SECRET_ID_PATH"
    log "New Secret ID generated and saved"
}

check_secret_id() {
    log "Checking Secret ID..."
    if [[ ! -f "$SECRET_ID_PATH" || ! -s "$SECRET_ID_PATH" || $(find "$SECRET_ID_PATH" -mmin +576 -print) ]]; then  # rotate if older than ~0.4 days
        generate_secret_id
    else
        secret_id=$(cat "$SECRET_ID_PATH")
        role_id=$(cat "$ROLE_ID_PATH")
        auth_response=$(curl -k --silent --show-error --request POST --data "{\"role_id\":\"$role_id\",\"secret_id\":\"$secret_id\"}" "${VAULT_ADDR}/v1/auth/approle/login")
        if ! echo "$auth_response" | grep -q "auth"; then
            log "Existing Secret ID is invalid; generating new one"
            generate_secret_id
        else
            log "Existing Secret ID is valid"
        fi
    fi
}

validate_vault_config() {
    log "Validating Vault configuration file..."
    if [[ ! -f "$VAULT_CONFIG_PATH" ]]; then
        error "Vault configuration file not found at $VAULT_CONFIG_PATH"
    fi
    if ! grep -q 'pid_file\|vault\|auto_auth' "$VAULT_CONFIG_PATH"; then
        error "Vault configuration file appears to be invalid"
    fi
}

cleanup_old_files() {
    log "Cleaning up old backups and logs..."
    find "${BASE_DIR}" -name "*.bak" -type f -mtime +7 -delete
    find "${BASE_DIR}/logs" -name "*.log" -type f -mtime +30 -delete
}

ensure_cert_directories() {
    log "Ensuring certificate directories exist..."
    mkdir -p "${BASE_DIR}/certs/nifi-trust-store/certs"
    chmod 750 "${BASE_DIR}/certs/nifi-trust-store/certs"
}

main() {
    log "Starting Vault Agent bootstrap process..."
    validate_vault_connection
    validate_role_id
    check_secret_id
    validate_vault_config
    cleanup_old_files
    ensure_cert_directories
    if [[ "$1" == "--generate-secret" ]]; then
        log "Secret ID generation completed"
        exit 0
    elif [[ "$1" == "--generate-certs" ]]; then
        log "Generating certificates only..."
        timeout 30s vault agent -config="$VAULT_CONFIG_PATH"
        exit 0
    fi
    log "Launching Vault Agent..."
    exec vault agent -config="$VAULT_CONFIG_PATH"
}

main "$@"
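Don't forget to make the scripts executable. bootstrap.sh can then be exercised directly before any systemd wiring; the flags below are the ones the script itself defines:

```shell
# Make all of the Vault Agent scripts executable in one pass.
chmod +x ~/projects/vault-agent/certs/scripts/*.sh

# Rotate/validate the AppRole secret only:
~/projects/vault-agent/certs/scripts/bootstrap.sh --generate-secret

# Render certificates once (a 30s bounded agent run), then exit:
~/projects/vault-agent/certs/scripts/bootstrap.sh --generate-certs
```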

combined-import.sh

This script consolidates the certificate import process. It backs up existing keystores, verifies that the certificate and key pair match, creates the PKCS12 keystore for NiFi, builds a Java truststore from the CA chain, and cleans up old backups. Its purpose is to ensure the certificates are imported correctly and securely.

#!/bin/bash
# File: certs/scripts/combined-import.sh
set -eo pipefail

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/combined_import.log"
CERT_PATH="${BASE_DIR}/certs"
BACKUP_DIR="${BASE_DIR}/backups/$(date +%Y%m%d_%H%M%S)"

mkdir -p "$(dirname "$LOG_FILE")"
exec 1> >(tee -a "$LOG_FILE") 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }
error() { log "ERROR: $1"; exit 1; }

load_env() {
  log "Loading environment variables..."
  if [[ ! -f "${CERT_PATH}/.env" ]]; then
    error "Environment file not found: ${CERT_PATH}/.env"
  fi
  source "${CERT_PATH}/.env"
  local required_vars=(NIFI_KEYSTORE_PASSWORD NIFI_TRUSTSTORE_PASSWORD NIFI_TRUSTSTORE_PATH)
  for var in "${required_vars[@]}"; do
    if [[ -z "${!var}" ]]; then
      error "Missing required environment var: $var"
    fi
  done
}

backup_stores() {
  log "Backing up existing keystores/truststores..."
  mkdir -p "${BACKUP_DIR}/keystores"
  mkdir -p "${BACKUP_DIR}/nifi/certs"
  for store in "${NIFI_TRUSTSTORE_PATH}" "${CERT_PATH}/nifi.p12"; do
    if [[ -f "$store" ]]; then
      cp -f "$store" "${BACKUP_DIR}/keystores/$(basename "$store")"
    fi
  done
  if [[ -d "${CERT_PATH}/nifi-trust-store/certs" ]]; then
    cp -f "${CERT_PATH}/nifi-trust-store/certs"/* "${BACKUP_DIR}/nifi/certs/"
  fi
  log "Backup created in $BACKUP_DIR"
}

verify_cert_key_pair() {
    local cert="$1"
    local key="$2"
    local name="$3"
    log "Verifying ${name} certificate and key pair..."
    local cert_modulus
    local key_modulus
    cert_modulus=$(openssl x509 -noout -modulus -in "$cert" | md5sum)
    key_modulus=$(openssl rsa -noout -modulus -in "$key" | md5sum)
    if [[ "$cert_modulus" != "$key_modulus" ]]; then
        error "${name} certificate and key do not match!"
    fi
    log "${name} certificate and key pair match successfully"
}

verify_cert_chain() {
    local cert="$1"
    local ca_chain="$2"
    local name="$3"
    log "Verifying ${name} certificate chain..."
    if ! openssl verify -CAfile "$ca_chain" "$cert"; then
        error "${name} certificate chain verification failed"
    fi
    log "${name} certificate chain verified successfully"
}

create_keystore() {
    verify_cert_key_pair "${CERT_PATH}/nifi-trust-store/certs/nifi.crt" "${CERT_PATH}/nifi-trust-store/certs/nifi.key" "NiFi"
    cat "${CERT_PATH}/nifi-trust-store/certs/nifi.crt" "${CERT_PATH}/nifi-trust-store/certs/ca.crt" > "${CERT_PATH}/nifi-trust-store/certs/complete_chain.crt"
    verify_cert_chain "${CERT_PATH}/nifi-trust-store/certs/complete_chain.crt" "${CERT_PATH}/nifi-trust-store/certs/ca_chain.crt" "NiFi"
    log "Creating NiFi keystore (PKCS12)..."
    openssl pkcs12 -export \
        -in "${CERT_PATH}/nifi-trust-store/certs/nifi.crt" \
        -inkey "${CERT_PATH}/nifi-trust-store/certs/nifi.key" \
        -out "${CERT_PATH}/nifi.p12" \
        -name nifi -passout pass:"$NIFI_KEYSTORE_PASSWORD"
    [[ ! -f "${CERT_PATH}/nifi.p12" ]] && error "Failed to create NiFi keystore"
    rm -f "${CERT_PATH}/nifi-trust-store/certs/complete_chain.crt"
    log "NiFi keystore created successfully"
}

create_truststore() {
    log "Creating truststore..."
    rm -f "$NIFI_TRUSTSTORE_PATH"
    retry 3 keytool -importcert \
        -alias ca_chain \
        -file "${CERT_PATH}/nifi-trust-store/certs/ca_chain.crt" \
        -keystore "$NIFI_TRUSTSTORE_PATH" \
        -storepass "$NIFI_TRUSTSTORE_PASSWORD" \
        -noprompt -storetype JKS
}

verify_stores() {
    log "Verifying NiFi truststore..."
    retry 3 keytool -list -keystore "$NIFI_TRUSTSTORE_PATH" -storepass "$NIFI_TRUSTSTORE_PASSWORD" >/dev/null
}

cleanup_old_backups() {
    log "Cleaning up old backups..."
    find "${BASE_DIR}/backups" -mindepth 1 -type d -mtime +7 -exec rm -rf {} +
}

retry() {
    local retries=$1
    shift
    local count=0
    local wait=5
    local max_wait=60
    until "$@" || [[ $count -eq $retries ]]; do
        count=$((count + 1))
        if [[ $count -eq $retries ]]; then
            error "Command failed after $retries attempts: $*"
        fi
        log "Attempt $count/$retries failed. Retrying in $wait seconds..."
        sleep $wait
        wait=$(( wait * 2 < max_wait ? wait * 2 : max_wait ))
    done
}

main() {
    log "=== Starting combined import process ==="
    load_env
    backup_stores
    create_keystore
    create_truststore
    verify_stores
    cleanup_old_backups
    log "=== Combined import finished successfully ==="
}

main "$@"
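Note that combined-import.sh sources an environment file that hasn't been shown yet: ${CERT_PATH}/.env. A minimal version might look like the following (placeholder passwords; the truststore path matches the layout from Section 4):

```shell
# File: certs/.env  (sourced by combined-import.sh)
NIFI_KEYSTORE_PASSWORD="<YOUR_KEYSTORE_PASS>"
NIFI_TRUSTSTORE_PASSWORD="<YOUR_TRUSTSTORE_PASS>"
NIFI_TRUSTSTORE_PATH="/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/nifiTrustStore.jks"
```

Lock it down like the other secrets: chmod 600 ~/projects/vault-agent/certs/.env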

wrapper-import.sh

This lightweight wrapper script provides a clean entry point for triggering the combined certificate import process from a systemd service. It simply calls combined-import.sh and logs the outcome.

#!/bin/bash
# File: certs/scripts/wrapper-import.sh
set -eo pipefail

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/wrapper_import.log"

mkdir -p "$(dirname "$LOG_FILE")"
exec 1> >(tee -a "$LOG_FILE") 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }

log "Wrapper import process started..."
"$BASE_DIR/certs/scripts/combined-import.sh"
log "Wrapper import process completed successfully."

monitor.sh

This optional script continuously monitors the certificate and key files for changes by comparing file checksums. If any changes are detected, it automatically triggers the certificate import process to update the keystore and truststore. This ensures that your system stays in sync with any certificate rotations or updates.

#!/bin/bash
# File: certs/scripts/monitor.sh
set -eo pipefail

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/monitor.log"
CERT_DIR="${BASE_DIR}/certs"
STATE_FILE="${BASE_DIR}/logs/monitor_state.json"
SLEEP_SECONDS=300

mkdir -p "$(dirname "$LOG_FILE")"
exec 1> >(tee -a "$LOG_FILE") 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }
error() { log "ERROR: $1"; return 1; }
save_state() { echo "$1" > "$STATE_FILE"; }
load_state() { [[ -f "$STATE_FILE" ]] && cat "$STATE_FILE"; }

while true; do
    current_checksum=$(find "${CERT_DIR}" -type f \( -name "*.crt" -o -name "*.key" \) -exec md5sum {} + | sort | md5sum)
    previous_checksum=$(load_state)
    if [[ "$current_checksum" != "$previous_checksum" ]]; then
        log "Certificate changes detected - triggering import"
        if "${BASE_DIR}/certs/scripts/combined-import.sh"; then
            save_state "$current_checksum"
            log "Import completed successfully"
        else
            error "Import failed"
        fi
    fi
    sleep "$SLEEP_SECONDS"
done
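The change-detection logic above can be exercised in isolation, without Vault. A tiny self-contained sketch on a scratch directory, which prints "change detected":

```shell
#!/bin/bash
# Demonstrate checksum-based change detection on throwaway files.
dir=$(mktemp -d)
echo "cert-v1" > "$dir/nifi.crt"
sum1=$(find "$dir" -type f \( -name "*.crt" -o -name "*.key" \) -exec md5sum {} + | sort | md5sum)
echo "cert-v2" > "$dir/nifi.crt"   # simulate a certificate rotation
sum2=$(find "$dir" -type f \( -name "*.crt" -o -name "*.key" \) -exec md5sum {} + | sort | md5sum)
[[ "$sum1" != "$sum2" ]] && echo "change detected"
rm -rf "$dir"
```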

6.5 Systemd Service Files for Vault Agent

Place these files in /etc/systemd/system/.

vault-agent.service

This systemd service launches the Vault Agent using the bootstrap.sh script. It ensures the Vault Agent is running, manages secret rotations, and triggers certificate generation as needed—all while logging activity for troubleshooting.

[Unit]
Description=HashiCorp Vault Agent
After=network-online.target
Wants=network-online.target
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
Type=simple
User=serveradmin
Group=serveradmin
Environment=VAULT_ADDR=https://vault.internal-domain.com:8200
Environment=VAULT_SKIP_VERIFY=true
ExecStart=/home/serveradmin/projects/vault-agent/certs/scripts/bootstrap.sh
Restart=always
RestartSec=5
WorkingDirectory=/home/serveradmin/projects/vault-agent
StandardOutput=append:/home/serveradmin/projects/vault-agent/logs/vault-agent.log
StandardError=append:/home/serveradmin/projects/vault-agent/logs/vault-agent.error.log

[Install]
WantedBy=multi-user.target

vault-agent-wrapper.service

This systemd service wraps the certificate import process. It invokes wrapper-import.sh to execute the combined certificate import and keystore/truststore generation, keeping NiFi’s certificates up-to-date.

[Unit]
Description=Vault Agent Certificate Import Wrapper
After=vault-agent.service
Requires=vault-agent.service
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
Type=oneshot
User=serveradmin
Group=serveradmin
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ExecStart=/home/serveradmin/projects/vault-agent/certs/scripts/wrapper-import.sh
WorkingDirectory=/home/serveradmin/projects/vault-agent
StandardOutput=append:/home/serveradmin/projects/vault-agent/logs/wrapper.log
StandardError=append:/home/serveradmin/projects/vault-agent/logs/wrapper.error.log

[Install]
WantedBy=multi-user.target

vault-agent-monitor.service (Optional)

This systemd service runs the monitor.sh script to continuously check for changes in certificate files. When changes are detected, it triggers the import process automatically, ensuring that any certificate updates are promptly applied.

[Unit]
Description=Vault Agent Certificate Monitor
After=vault-agent.service
Requires=vault-agent.service
StartLimitIntervalSec=60
StartLimitBurst=3

[Service]
Type=simple
User=serveradmin
Group=serveradmin
ExecStart=/home/serveradmin/projects/vault-agent/certs/scripts/monitor.sh
Restart=always
RestartSec=5
WorkingDirectory=/home/serveradmin/projects/vault-agent
StandardOutput=append:/home/serveradmin/projects/vault-agent/logs/monitor.log
StandardError=append:/home/serveradmin/projects/vault-agent/logs/monitor.error.log

[Install]
WantedBy=multi-user.target

6.6 Starting and Verifying Vault Agent

Reload systemd and enable the services:

sudo systemctl daemon-reload
sudo systemctl enable vault-agent.service
sudo systemctl enable vault-agent-wrapper.service
sudo systemctl enable vault-agent-monitor.service   # Optional

# Start the services in order:
sudo systemctl start vault-agent.service
sleep 10  # Allow time for certificates to be fetched
sudo systemctl start vault-agent-wrapper.service
sudo systemctl start vault-agent-monitor.service    # Optional

Verify by checking logs:

tail -f ~/projects/vault-agent/logs/vault-agent.log
tail -f ~/projects/vault-agent/logs/wrapper.log
tail -f ~/projects/vault-agent/logs/monitor.log  # if using monitor

Also, verify that the certificate files and keystores are in place:

ls -la ~/projects/vault-agent/certs/nifi-trust-store/certs/
keytool -list -v -keystore ~/projects/vault-agent/certs/nifi-trust-store/nifiTrustStore.jks -storepass <YOUR_TRUSTSTORE_PASS>

(Replace <YOUR_TRUSTSTORE_PASS> with your actual desired truststore password.)
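You can also inspect the rendered leaf certificate directly and confirm it chains to the CA (paths per Section 4):

```shell
CERTS=~/projects/vault-agent/certs/nifi-trust-store/certs

# Subject, issuer, and validity window of the Vault-issued cert:
openssl x509 -in "$CERTS/nifi.crt" -noout -subject -issuer -dates

# Verify the leaf against the CA chain; expect "nifi.crt: OK":
openssl verify -CAfile "$CERTS/ca_chain.crt" "$CERTS/nifi.crt"
```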

7. NiFi 2.0.0 Installation & Configuration

7.1 Download & Unpack NiFi 2.0.0

cd ~/projects/nifi
wget https://downloads.apache.org/nifi/2.0.0/nifi-2.0.0-bin.tar.gz
tar -zxvf nifi-2.0.0-bin.tar.gz

7.2 Key Configuration Files

Important configuration files are located in ~/projects/nifi/nifi-2.0.0/conf/:

  • nifi.properties
  • authorizations.xml, authorizers.xml, users.xml
  • bootstrap.conf
  • logback.xml
  • zookeeper.properties (if used)

Sometimes it helps to grab the content of all my configuration files so that I can feed them to an LLM for help with upgrades and other suggestions. I do that like this:

cd ~/projects/nifi/nifi-2.0.0/conf
for f in *.xml *.conf *.properties; do echo "=== $f ==="; cat "$f"; echo -e "\n"; done > all-nifi-configs.txt

7.3 nifi.properties – TLS, Data Directories, OIDC, etc

In this section, you’ll update the NiFi configuration file (nifi.properties) to enable secure communication and set up data management and authentication. The settings below configure HTTPS, point NiFi to the keystore and truststore generated by Vault Agent, define repository directories (useful when mounting separate storage), and enable OpenID Connect (OIDC) for authentication (with Google as an example). Replace all placeholder values (such as <YOUR_KEYSTORE_PASS>, <YOUR_GOOGLE_CLIENT_ID>, etc.) with your actual credentials and environment-specific details.

Edit ~/projects/nifi/nifi-2.0.0/conf/nifi.properties to include the following (use placeholders for secrets):

# HTTPS configuration
nifi.web.https.host=nifi.internal-domain.com
nifi.web.https.port=8443

# Keystore/Truststore settings generated by Vault Agent
nifi.security.keystore=/home/serveradmin/projects/vault-agent/certs/nifi.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=<YOUR_KEYSTORE_PASS>
nifi.security.keyPasswd=<YOUR_KEYSTORE_PASS>

nifi.security.truststore=/home/serveradmin/projects/vault-agent/certs/nifi-trust-store/nifiTrustStore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=<YOUR_TRUSTSTORE_PASS>

# Web proxy (if using a reverse proxy)
nifi.web.proxy.host=nifi.internal-domain.com

# Repository directories (if using a dedicated data mount, see Section 8)
nifi.flowfile.repository.directory=/data/nifi/data/flowfile_repository
nifi.content.repository.directory.default=/data/nifi/data/content_repository
nifi.provenance.repository.directory.default=/data/nifi/data/provenance_repository
nifi.database.directory=/data/nifi/data/database_repository
nifi.status.repository.questdb.persist.location=/data/nifi/data/status_repository

# OpenID Connect (Google)
nifi.security.user.oidc.discovery.url=https://accounts.google.com/.well-known/openid-configuration
nifi.security.user.oidc.client.id=<YOUR_GOOGLE_CLIENT_ID>
nifi.security.user.oidc.client.secret=<YOUR_GOOGLE_CLIENT_SECRET>
nifi.security.user.oidc.additional.scopes=openid,profile,email
nifi.security.user.oidc.claim.identifying.user=email
nifi.security.user.oidc.token.refresh.window=5 mins

# Other properties remain as needed.

Explanation of Key Sections:

  • HTTPS Configuration: Sets the NiFi host and port to enable secure (HTTPS) access.
  • Keystore/Truststore Settings: Points NiFi to the keystore (nifi.p12) and truststore (nifiTrustStore.jks) generated by the Vault Agent. The passwords for these files are specified here, so ensure you replace <YOUR_KEYSTORE_PASS> and <YOUR_TRUSTSTORE_PASS> with your actual secure passwords.
  • Web Proxy Configuration: Optionally defines the proxy host if you’re using a reverse proxy setup.
  • Repository Directories: Specifies locations for various NiFi repositories (flowfile, content, provenance, etc.). This is particularly useful if you have a dedicated data mount or separate storage for NiFi’s operational data.
  • OpenID Connect (OIDC) Settings: Configures NiFi for user authentication via OIDC using Google as the identity provider. Replace <YOUR_GOOGLE_CLIENT_ID> and <YOUR_GOOGLE_CLIENT_SECRET> with your credentials, and adjust any additional scopes or claims as required by your authentication setup.

7.4 Starting NiFi

Start NiFi with:

cd ~/projects/nifi/nifi-2.0.0
bin/nifi.sh start
bin/nifi.sh status

Visit the NiFi UI at: https://nifi.internal-domain.com:8443/nifi/ (Ensure DNS or /etc/hosts maps "nifi.internal-domain.com" to your server's IP. In my case, bind9 manages my DNS records, which point to the Traefik proxy, which in turn forwards to NiFi.)
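A quick way to confirm NiFi is serving TLS with the Vault-issued certificate (replace the hostname; -k skips trust validation for this first check):

```shell
# Print the certificate NiFi presents on 8443:
openssl s_client -connect nifi.internal-domain.com:8443 \
    -servername nifi.internal-domain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates

# HTTP status of the UI; expect 200 (or a 3xx redirect to the OIDC
# login) once NiFi has fully started:
curl -sk -o /dev/null -w '%{http_code}\n' https://nifi.internal-domain.com:8443/nifi/
```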

Traefik Configuration

Traefik acts as a modern reverse proxy and load balancer for your NiFi deployment. The configuration snippet below defines a dynamic Traefik setup that routes HTTPS traffic to NiFi while enforcing strong security headers. Customize the placeholder values (e.g., <your-internal-ip>, nifi.internal-domain.com) to match your environment.

http:
  routers:
    nifi:
      entryPoints:
        - "https"
      rule: "Host(`nifi.internal-domain.com`)"
      middlewares:
        - nifi-add-x-proxy-host
      tls: {}
      service: nifi

  services:
    nifi:
      loadBalancer:
        servers:
          - url: "https://<your-internal-ip>:8443"  # Replace with your NiFi server's internal IP/port
        passHostHeader: true
        sticky:
          cookie: {}

  middlewares:
    nifi-add-x-proxy-host:
      headers:
        frameDeny: true
        sslRedirect: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-ProxyHost: "nifi.internal-domain.com"
          X-ProxyScheme: "https"
          X-Forwarded-Host: "nifi.internal-domain.com"
          X-Forwarded-Proto: "https"
          X-Forwarded-Port: "443"

Explanation:

  • Router Configuration: The nifi router listens on the https entrypoint and routes requests matching the host nifi.internal-domain.com. It applies the nifi-add-x-proxy-host middleware to inject additional security and proxy headers. The empty tls: {} block enables TLS termination for this router using Traefik's default certificate settings.

  • Service Definition: The nifi service specifies a load balancer that directs incoming requests to your NiFi instance. Replace <your-internal-ip>:8443 with your server's internal IP and port. The passHostHeader option preserves the original host header, and the sticky session configuration helps maintain session affinity using cookies.

  • Middleware Setup: The nifi-add-x-proxy-host middleware enforces a range of security headers:

    • Security Headers: Settings like frameDeny, sslRedirect, browserXssFilter, and contentTypeNosniff help secure the application.
    • HSTS Configuration: Options such as forceSTSHeader, stsIncludeSubdomains, stsPreload, and stsSeconds enforce HTTP Strict Transport Security.
    • Custom Proxy Headers: Custom headers (X-ProxyHost, X-ProxyScheme, X-Forwarded-Host, X-Forwarded-Proto, and X-Forwarded-Port) ensure the proper handling of requests through the proxy.

This configuration provides a robust foundation for securely exposing your NiFi interface via Traefik. Adjust the parameters to best fit your operational environment.


8. Optional: Data Mount & SMB Share Setup

Data Mount for NiFi Repositories

I usually store my NiFi data repositories (content, flowfile, provenance, etc.) on a separate mounted drive. Using a dedicated drive for data storage not only isolates your operational data from the OS disk but also helps improve performance and simplifies backups. Follow these steps to prepare and mount your drive, and then update your NiFi configuration to point to the new data directories.

  1. Identify and prepare your drive:

    lsblk
    sudo wipefs -a /dev/<device>
    sudo parted /dev/<device> mklabel gpt
    sudo parted /dev/<device> mkpart primary xfs 0% 100%
    sudo mkfs.xfs -f /dev/<device-partition>
    
  2. Mount the drive:

    sudo mkdir -p /data
    UUID=$(sudo blkid -s UUID -o value /dev/<device-partition>)
    echo "UUID=$UUID /data xfs defaults,noatime 0 2" | sudo tee -a /etc/fstab
    sudo mount -a
    sudo chown serveradmin:serveradmin /data
    sudo chmod 755 /data
    
  3. Update nifi.properties: After mounting, update your nifi.properties (see Section 7.3) so that NiFi's repository directories point to paths under /data/nifi/data/.
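
With /data mounted, the repository entries in nifi.properties would then look something like the following. Only the property names are fixed by NiFi; the /data/nifi/data layout is an example, so match the paths to your own mount:

```properties
nifi.database.directory=/data/nifi/data/database_repository
nifi.flowfile.repository.directory=/data/nifi/data/flowfile_repository
nifi.content.repository.directory.default=/data/nifi/data/content_repository
nifi.provenance.repository.directory.default=/data/nifi/data/provenance_repository
```

Create these directories (and give your NiFi user ownership of them) before restarting NiFi, or repository initialization will fail.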


SMB Share (Optional)

If you require an SMB share (for example, for centralized file storage or backups), you can configure one on your NAS or TrueNAS device and mount it on Ubuntu. The instructions below outline the basic steps:

  1. On your NAS/TrueNAS: Configure an SMB share with a generic name like "shared_data" and create a user (e.g., smb_user).

  2. On Ubuntu:

    sudo apt install cifs-utils
    sudo mkdir -p /mnt/shared_data
    sudo bash -c 'cat > /root/.smbcredentials <<EOF
    username=smb_user
    password=<YOUR_SMB_PASSWORD>
    EOF'
    sudo chmod 600 /root/.smbcredentials
    echo "//smb.internal-domain.com/shared_data /mnt/shared_data cifs credentials=/root/.smbcredentials,vers=3.0,uid=1000,gid=1000,dir_mode=0777,file_mode=0777,noperm 0 0" | sudo tee -a /etc/fstab
    sudo mount -a
    

  3. Verify the mount:

    ls -la /mnt/shared_data

These steps mount the SMB share on your Ubuntu server, enabling centralized storage or backups that can be accessed by NiFi or other services as needed.

9. Monitoring with Uptime Kuma

I use Uptime Kuma to monitor both HashiCorp Vault and the NiFi UI, as well as to receive push notifications from custom monitoring scripts that check certificate validity, service status, and log contents. The following sections provide the scripts for these tasks and explain how to configure Uptime Kuma to receive these updates.

9.1 Prepare the Monitoring Directory

Create a dedicated directory for the monitoring scripts and set appropriate permissions:

mkdir -p ~/projects/vault-agent/monitoring
chmod 750 ~/projects/vault-agent/monitoring

9.2 Certificate Monitoring Script

This script checks the expiration date of your NiFi certificate and pushes a status update to Uptime Kuma using a push URL.

File: ~/projects/vault-agent/monitoring/check_certificates.sh

#!/bin/bash
# monitoring/check_certificates.sh

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/certificate_monitor.log"
# Replace the URL below with your Uptime Kuma push URL for certificate monitoring
PUSH_URL="<YOUR-CERT-MONITOR-PUSH-URL>"  # e.g. https://uptime.internal-domain.com/api/push/AbCd1234

mkdir -p "$(dirname "$LOG_FILE")"
exec 1>> "$LOG_FILE" 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }

check_certificate() {
    local cert_file="$1"
    local cert_name="$2"
    if [[ ! -f "$cert_file" ]]; then
        log "Certificate file not found: $cert_file"
        curl -m 10 "${PUSH_URL}?status=down&msg=${cert_name}+certificate+not+found" || true
        return 1
    fi
    local expiry_date
    expiry_date=$(openssl x509 -enddate -noout -in "$cert_file" | cut -d= -f2)
    local expiry_epoch
    expiry_epoch=$(date -d "${expiry_date}" +%s)
    local now_epoch
    now_epoch=$(date +%s)
    local days_left=$(( (expiry_epoch - now_epoch) / 86400 ))
    log "Certificate $cert_name has $days_left days remaining."
    if (( days_left < 7 )); then
        curl -m 10 "${PUSH_URL}?status=down&msg=${cert_name}+expires+in+${days_left}+days" || true
        return 1
    else
        curl -m 10 "${PUSH_URL}?status=up&msg=${cert_name}+valid+for+${days_left}+days" || true
        return 0
    fi
}

check_certificate "${BASE_DIR}/certs/nifi-trust-store/certs/nifi.crt" "NiFi"
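
To sanity-check the expiry arithmetic before wiring in a real push URL, you can run the same openssl/date pipeline against a throwaway certificate (the /tmp paths are only for this demo):

```shell
# Generate a self-signed certificate valid for 10 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 10 -subj "/CN=demo" 2>/dev/null

# Same computation as check_certificate(): parse notAfter, diff against now
expiry_date=$(openssl x509 -enddate -noout -in /tmp/demo.crt | cut -d= -f2)
expiry_epoch=$(date -d "${expiry_date}" +%s)
now_epoch=$(date +%s)
days_left=$(( (expiry_epoch - now_epoch) / 86400 ))
echo "days_left=$days_left"
```

A 10-day certificate should report 9 or 10 days depending on rounding; anything under the script's 7-day threshold would trigger a down push.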

9.3 Service Monitoring Script

This script checks whether the Vault Agent service is running and pushes the service status to Uptime Kuma.

File: ~/projects/vault-agent/monitoring/check_services.sh

#!/bin/bash
# monitoring/check_services.sh

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/service_monitor.log"
# Replace the URL below with your Uptime Kuma push URL for service monitoring
PUSH_URL="<YOUR-SERVICE-MONITOR-PUSH-URL>"  # e.g. https://uptime.internal-domain.com/api/push/AbCd1234

mkdir -p "$(dirname "$LOG_FILE")"
exec 1>> "$LOG_FILE" 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }

check_service() {
    local service="$1"
    local service_name="$2"
    if systemctl is-active --quiet "$service"; then
        log "$service_name is running"
        curl -m 10 "${PUSH_URL}?status=up&msg=${service_name}+running" || true
        return 0
    else
        log "$service_name is not running"
        curl -m 10 "${PUSH_URL}?status=down&msg=${service_name}+not+running" || true
        return 1
    fi
}

# Check Vault Agent service
check_service "vault-agent" "Vault-Agent"

9.4 Log Monitoring Script

This script scans log files for errors (e.g., error, exception, failure) and pushes a status update if issues are found.

File: ~/projects/vault-agent/monitoring/check_logs.sh

#!/bin/bash
# monitoring/check_logs.sh

BASE_DIR="/home/serveradmin/projects/vault-agent"
LOG_FILE="${BASE_DIR}/logs/log_monitor.log"
# Replace the URL below with your Uptime Kuma push URL for log monitoring
PUSH_URL="<YOUR-LOG-MONITOR-PUSH-URL>"  # e.g. https://uptime.internal-domain.com/api/push/AbCd1234

mkdir -p "$(dirname "$LOG_FILE")"
exec 1>> "$LOG_FILE" 2>&1

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"; }

check_logs() {
    local log_path="$1"
    local component="$2"
    local hours=${3:-1}
    if [[ ! -f "$log_path" ]]; then
        log "Log file not found: $log_path"
        curl -m 10 "${PUSH_URL}?status=down&msg=${component}+log+file+not+found" || true
        return 1
    fi
    local cutoff
    cutoff=$(date -d "$hours hour ago" '+%Y-%m-%d %H:%M:%S')
    local errors
    errors=$(grep -Ei 'error|exception|failure' "$log_path" | awk -v cutoff="$cutoff" '
    {
        # log() writes lines as "[YYYY-mm-dd HH:MM:SS] message"; grab the
        # timestamp by position so this runs under mawk as well as gawk
        # (the three-argument match() is a gawk-only extension).
        ts = substr($0, 2, 19)
        if (ts >= cutoff) { print $0 }
    }')
    if [[ -n "$errors" ]]; then
        local error_count
        error_count=$(echo "$errors" | wc -l)
        log "Found $error_count errors in $component logs"
        curl -m 10 "${PUSH_URL}?status=down&msg=${error_count}+errors+in+${component}+logs" || true
        return 1
    else
        log "No errors found in $component logs"
        curl -m 10 "${PUSH_URL}?status=up&msg=No+errors+in+${component}+logs" || true
        return 0
    fi
}

check_logs "${BASE_DIR}/logs/vault-agent.log" "Vault-Agent"
check_logs "${BASE_DIR}/logs/monitor.log" "Monitor"
check_logs "${BASE_DIR}/logs/combined_import.log" "Certificate-Import"
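
The timestamp filter is the easiest part of this script to get wrong, so it helps to exercise it against a synthetic log first. The sketch below runs a variant of the grep/awk pipeline that extracts the timestamp by position (log() writes it at the start of each line), which also works under mawk:

```shell
# Sample log: one stale error, one recent error, one clean line
cat > /tmp/sample.log <<'EOF'
[2020-01-01 00:00:00] ERROR stale failure
[2099-01-01 00:00:00] ERROR recent failure
[2099-01-01 00:00:01] INFO all good
EOF

cutoff=$(date -d "1 hour ago" '+%Y-%m-%d %H:%M:%S')
errors=$(grep -Ei 'error|exception|failure' /tmp/sample.log | awk -v cutoff="$cutoff" '
{
    ts = substr($0, 2, 19)   # "[YYYY-mm-dd HH:MM:SS] ..." -> timestamp
    if (ts >= cutoff) { print $0 }
}')
echo "$errors"
```

Only the second line should survive: the stale error predates the cutoff, and the INFO line never matches the grep.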

Uptime Kuma Configuration Instructions

In addition to these scripts, you need to configure Uptime Kuma to monitor your services and receive push notifications.

1. HTTP Monitors

Create HTTP monitors in Uptime Kuma to periodically check the health of your key services.

  • Apache NiFi Monitor:

    • Type: HTTP
    • Name: Apache NiFi
    • URL: https://nifi.internal-domain.com/nifi/#/logout-complete (Replace with your actual NiFi UI endpoint.)
    • Interval: 30 seconds
    • Timeout: 24 seconds (adjust as needed)
    • Accepted Status Codes: 200-299
    • Notifications: Configure your notification channels (e.g., email, Slack) as desired.
  • HashiCorp Vault Monitor:

    • Type: HTTP
    • Name: HashiCorp Vault
    • URL: https://vault.internal-domain.com:8200/v1/sys/health (Replace with your actual Vault health endpoint.)
    • Interval: 60 seconds
    • Timeout: 48 seconds (adjust as needed)
    • Accepted Status Codes: 200-299
    • Notifications: Set up notifications similarly.

2. Push Monitors

Push monitors in Uptime Kuma allow your custom scripts to push status updates.

  • Certificate Monitoring Push:

    • Type: Push
    • Name: HashiCorp Vault - NiFi Certificate Monitor
    • Interval: 43200 seconds (12 hours)
    • Push Token: Uptime Kuma will generate a push token for this monitor. Copy this token and update the <YOUR-CERT-MONITOR-PUSH-URL> placeholder in check_certificates.sh.
  • Service Monitoring Push:

    • Type: Push
    • Name: HashiCorp Vault - NiFi Service Monitor
    • Interval: 60 seconds (matching the every-minute cron job below)
    • Push Token: Copy the generated token and update <YOUR-SERVICE-MONITOR-PUSH-URL> in check_services.sh.
  • Log Monitoring Push:

    • Type: Push
    • Name: HashiCorp Vault - NiFi Log Monitor
    • Interval: 300 seconds
    • Push Token: Copy the generated token and update <YOUR-LOG-MONITOR-PUSH-URL> in check_logs.sh.

3. Additional Configuration

Below is an example JSON export (with obfuscated URLs and tokens) that shows how my monitors are configured. You don't need to import this directly—it simply illustrates the key fields:

{
  "monitor_37": {
    "type": "http",
    "id": 37,
    "name": "Apache NiFi",
    "interval": 30,
    "active": true,
    "maxretries": 0,
    "retryInterval": 30,
    "upsideDown": false,
    "parent": 31,
    "tags": [
      { "tag_id": 19, "name": "Applications", "color": "#059669", "value": "" },
      { "tag_id": 21, "name": "DataFlow", "color": "#047857", "value": "" }
    ],
    "notificationIDList": { "1": true },
    "accepted_statuscodes": [ "200-299" ],
    "url": "https://nifi.internal-domain.com/nifi/#/logout-complete",
    "timeout": 24,
    "resendInterval": 0,
    "expiryNotification": false,
    "ignoreTls": false,
    "maxredirects": 10,
    "method": "GET",
    "httpBodyEncoding": "json"
  },
  "monitor_36": {
    "type": "http",
    "id": 36,
    "name": "Hashicorp Vault",
    "interval": 60,
    "active": true,
    "maxretries": 0,
    "retryInterval": 60,
    "upsideDown": false,
    "parent": 29,
    "tags": [
      { "tag_id": 20, "name": "Docker", "color": "#10B981", "value": "" },
      { "tag_id": 42, "name": "Secrets", "color": "#B91C1C", "value": "" },
      { "tag_id": 27, "name": "Security", "color": "#DC2626", "value": "" }
    ],
    "notificationIDList": { "1": true },
    "accepted_statuscodes": [ "200-299" ],
    "url": "https://vault.internal-domain.com:8200/v1/sys/health",
    "timeout": 48,
    "resendInterval": 0,
    "expiryNotification": false,
    "ignoreTls": false,
    "maxredirects": 10,
    "method": "GET",
    "httpBodyEncoding": "json"
  },
  "monitor_89": {
    "type": "push",
    "id": 89,
    "name": "Hashicorp Vault - NiFi Certificate Monitor",
    "interval": 43200,
    "active": true,
    "maxretries": 0,
    "retryInterval": 43200,
    "upsideDown": false,
    "parent": 29,
    "tags": [
      { "tag_id": 42, "name": "Secrets", "color": "#B91C1C", "value": "" },
      { "tag_id": 27, "name": "Security", "color": "#DC2626", "value": "" }
    ],
    "notificationIDList": { "1": true },
    "accepted_statuscodes": [ "200-299" ],
    "pushToken": "8z0LXNks2s"
  },
  "monitor_91": {
    "type": "push",
    "id": 91,
    "name": "Hashicorp Vault - NiFi Log Monitor",
    "interval": 300,
    "active": true,
    "maxretries": 0,
    "retryInterval": 300,
    "upsideDown": false,
    "parent": 29,
    "tags": [
      { "tag_id": 42, "name": "Secrets", "color": "#B91C1C", "value": "" },
      { "tag_id": 27, "name": "Security", "color": "#DC2626", "value": "" }
    ],
    "notificationIDList": { "1": true },
    "accepted_statuscodes": [ "200-299" ],
    "pushToken": "ukfPEadUGW"
  }
}

Standard Steps to Configure Uptime Kuma

  1. Log in to your Uptime Kuma dashboard.
  2. For HTTP Monitors:
    • Click "Add New Monitor."
    • Select the "HTTP" type.
    • Fill in the required fields (Name, URL, Interval, Timeout, Accepted Status Codes, etc.).
    • Save and test the monitor.
  3. For Push Monitors:
    • Click "Add New Monitor."
    • Select the "Push" type.
    • Provide a name (e.g., "Hashicorp Vault - NiFi Certificate Monitor").
    • Uptime Kuma will generate a push token. Copy this token and update your corresponding monitoring script with it.
    • Save the monitor.
  4. Organize and Tag:
    • Use tags and notification settings to categorize monitors and set up alert routing as needed.
  5. Test:
    • Verify that your HTTP monitors respond correctly and that your push scripts trigger updates in Uptime Kuma as expected.

Remember to replace all obfuscated URLs, tokens, and other placeholder values with your own secure settings.

9.5 Setting File Permissions and Cron Jobs

Make the monitoring scripts executable:

chmod 700 ~/projects/vault-agent/monitoring/check_*.sh

Then edit your crontab (e.g., using crontab -e) to schedule them:

# Run certificate check every 12 hours
0 */12 * * * /home/serveradmin/projects/vault-agent/monitoring/check_certificates.sh >> /home/serveradmin/projects/vault-agent/logs/certificate_monitor.log 2>&1

# Run service check every minute
* * * * * /home/serveradmin/projects/vault-agent/monitoring/check_services.sh >> /home/serveradmin/projects/vault-agent/logs/service_monitor.log 2>&1

# Run log check every 5 minutes
*/5 * * * * /home/serveradmin/projects/vault-agent/monitoring/check_logs.sh >> /home/serveradmin/projects/vault-agent/logs/log_monitor.log 2>&1
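
Before handing the scripts to cron, a quick bash -n pass catches syntax damage from copy/paste. This assumes the directory layout from Section 9.1:

```shell
# Syntax-check each monitoring script without executing it
for f in "$HOME"/projects/vault-agent/monitoring/check_*.sh; do
    [ -e "$f" ] || continue        # glob matched nothing: skip quietly
    bash -n "$f" && echo "OK: $f"
done
```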

10. Summary and Next Steps

You have now set up an end-to-end system on a fresh Ubuntu 22.04 server that includes:

  • System Preparation: The operating system is updated and essential dependencies are installed.

  • SDKMAN & OpenJDK 21: Java is installed and configured via SDKMAN, ensuring that NiFi’s Java requirements are met.

  • Directory Structure: A standardized layout is created for both Apache NiFi and the Vault Agent, including dedicated directories for certificates, configurations, logs, systemd service files, and monitoring scripts.

  • Vault Server Prerequisites: The Vault server is configured with a PKI secrets engine and a dedicated AppRole (named internal-domain-com in this example) to securely issue TLS certificates.

  • Vault Agent Configuration: The Vault Agent is set up with a configuration file (vault-agent.hcl) and accompanying templates (cert.tpl, key.tpl, ca.tpl, ca_chain.tpl) to automatically fetch and manage TLS certificates for NiFi. Additionally, AppRole credentials are securely stored, and automation scripts (such as bootstrap.sh, combined-import.sh, wrapper-import.sh, and an optional monitor.sh) handle secret rotation, certificate generation, and keystore/truststore creation. Systemd services are defined to manage these processes continuously.

  • NiFi 2.0.0 Installation & Configuration: Apache NiFi is downloaded, unpacked, and configured to use the keystore and truststore generated by the Vault Agent. The nifi.properties file is updated to enable HTTPS, set up data repository directories (with options for a dedicated data mount), and configure OpenID Connect (OIDC) for authentication. In addition, a Traefik configuration snippet is provided to proxy HTTPS traffic securely to NiFi.

  • Optional Data Mount & SMB Share: For enhanced performance and easier backups, you have the option to mount a separate drive for NiFi repositories. Instructions are also provided for configuring and mounting an SMB share for centralized file storage.

  • Monitoring with Uptime Kuma: Custom monitoring scripts (check_certificates.sh, check_services.sh, and check_logs.sh) are provided to push status updates to Uptime Kuma via push monitors. Detailed instructions explain how to configure both HTTP and push monitors in Uptime Kuma, along with recommended cron job schedules to run these scripts periodically.

Next Steps

  1. Review and Adjust Placeholder Values:

    • Replace all example URLs (such as vault.internal-domain.com, nifi.internal-domain.com, and uptime.internal-domain.com) with your actual domain names.
    • Substitute placeholders like <YOUR_KEYSTORE_PASS>, <YOUR_TRUSTSTORE_PASS>, <YOUR_GOOGLE_CLIENT_ID>, <YOUR_GOOGLE_CLIENT_SECRET>, and <YOUR_SECRET_ID> with your secure, production-ready credentials.
  2. Test Each Component Individually:

    • Verify that Vault connectivity is working correctly and that the Vault Agent can generate and rotate secrets as expected.
    • Confirm that NiFi loads the proper keystore and truststore, and that HTTPS is functioning.
    • Run the monitoring scripts manually to ensure that they correctly push updates to Uptime Kuma.
    • Test the Traefik proxy configuration by accessing your NiFi UI through the reverse proxy.
  3. Document Environment-Specific Modifications:

    • If you choose to use a dedicated data mount or an SMB share, document how the disk was partitioned, formatted, and mounted.
    • Note any additional customizations or nonstandard configurations that are unique to your setup.

By following this guide, you now have a secure, automated, and fully monitored NiFi deployment with dynamic TLS certificate management using Vault. This comprehensive setup, adapted from my own homelab documentation, should provide you with a solid foundation for building complex data pipelines in a production-like environment.

Happy deploying!

— Henry Sowell