Upgrading an Old Unifi Controller Container
Introduction
My Unifi install randomly stopped letting me log in one day and started throwing the incredibly helpful message, "There was an error making that request. Please try again later."
I figured it was time to move it off the old linuxserver/unifi image and onto something modern.
The hard part turned out to be a missing line in my Caddy config. This is the process that worked for me. My old container was on 8.0.24; I had no problem restoring the backup to the new image on the same version, then upgrading straight to 10.1.85.
Export a Backup
First, export a backup from the old controller UI.
If you no longer have UI access, as in my case, you can usually grab one of the autobackups from the container's data directory under data/backup.
Once you have the backup file somewhere safe, stop the old container.
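If you're grabbing an autobackup by hand, something like this picks out the newest one. The host-side path is an assumption based on my volume mapping; adjust it to match yours.

```shell
#!/bin/sh
# Copy the newest autobackup out of the old container's data directory.
# BACKUP_DIR assumes the /config volume is mounted at ./unifi on the host;
# adjust to match your setup.
BACKUP_DIR=./unifi/data/backup/autobackup

# ls -t sorts newest-first, so head -n 1 gives the most recent backup.
latest=$(ls -t "$BACKUP_DIR"/*.unf 2>/dev/null | head -n 1)

if [ -n "$latest" ]; then
  cp "$latest" ./unifi-backup.unf
  echo "Saved $latest as ./unifi-backup.unf"
else
  echo "No .unf files found in $BACKUP_DIR" >&2
fi
```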
Update Your Compose File
I brought the controller back up on the same old 8.0.24 image first so I could restore the backup in a matching-ish environment before upgrading the application itself.
unifi:
  image: lscr.io/linuxserver/unifi-network-application:8.0.24
  container_name: unifi
  env_file: ./unifi.env
  volumes:
    - "./unifi:/config"
  ports:
    - 3478:3478/udp
    - 10001:10001/udp
    - 8080:8080
    - 8843:8843
    - 8880:8880
    - 6789:6789
    - 5514:5514
  depends_on:
    - unifi-db
  restart: unless-stopped

unifi-db:
  image: mongo:8.2.5
  container_name: unifi-db
  env_file: ./unifi.env
  volumes:
    - ./unifi-db:/data/db
    - ./unifi-db/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
  restart: unless-stopped
Note that I've pinned Mongo to a specific version, since from what I've read the Unifi application only supports particular MongoDB releases — you can't just run whatever they have tagged latest and expect it to work.
Create the Env File
I like to keep all the .env files separate for all the containers in my homelab. Here’s my unifi.env:
PUID=1000
PGID=1000
MONGO_USER=unifi
MONGO_PASS=REDACTED
MONGO_HOST=unifi-db
MONGO_PORT=27017
MONGO_DBNAME=unifi
MONGO_AUTHSOURCE=admin
MEM_LIMIT=1024
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=REDACTED
Add the Mongo Init Script
The LinuxServer folks have cooked up a simple init script that configures your Mongo instance with a DB and user for the Unifi Controller. The restore flow would not work for me until I gave the Mongo user access to the extra _restore database, a fix I found in a GitHub issue. I used this init-mongo.sh:
#!/bin/bash
if which mongosh > /dev/null 2>&1; then
  mongo_init_bin='mongosh'
else
  mongo_init_bin='mongo'
fi
"${mongo_init_bin}" <<EOF
use ${MONGO_AUTHSOURCE}
db.auth("${MONGO_INITDB_ROOT_USERNAME}", "${MONGO_INITDB_ROOT_PASSWORD}")
db.createUser({
  user: "${MONGO_USER}",
  pwd: "${MONGO_PASS}",
  roles: [
    { db: "${MONGO_DBNAME}", role: "dbOwner" },
    { db: "${MONGO_DBNAME}_stat", role: "dbOwner" },
    { db: "${MONGO_DBNAME}_audit", role: "dbOwner" },
    { db: "${MONGO_DBNAME}_restore", role: "dbOwner" }
  ]
})
EOF
This script is only run by the Mongo image on first initialization of the data directory, so if you've already got a Mongo container running you'll have to manually create the user and databases with the appropriate roles.
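For an already-initialized container, a sketch of the equivalent manual steps in a mongosh session looks like this — the user, database names, and credentials here are the values from my env file above, so substitute your own:

```javascript
// Open a shell inside the running db container first, e.g.:
//   docker exec -it unifi-db mongosh
// Authenticate as root, then create the unifi user with the same
// four dbOwner roles the init script would have granted.
use admin
db.auth("root", "REDACTED")
db.createUser({
  user: "unifi",
  pwd: "REDACTED",
  roles: [
    { db: "unifi", role: "dbOwner" },
    { db: "unifi_stat", role: "dbOwner" },
    { db: "unifi_audit", role: "dbOwner" },
    { db: "unifi_restore", role: "dbOwner" }
  ]
})
```

The databases themselves don't need to be created up front; Mongo creates them on first write once the user has the roles.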
Update Caddy
I also had to change my Caddy config. Without the header_up Host {host} line the web UI mostly worked, but backup import kept failing with strange 403 errors.
@unifi host unifi.nugent.zone
handle @unifi {
  reverse_proxy https://unifi:8443 {
    header_up Host {host}
    transport http {
      tls
      tls_insecure_skip_verify
    }
  }
}
As best I can tell, that header fix makes Caddy pass the original public hostname through to Unifi instead of the upstream container hostname. Without it, some requests appear to arrive as if they were for the internal backend name, which is apparently enough to upset part of the restore flow and produce 403 responses.
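A quick way to sanity-check the proxying from the host is to hit Caddy directly while supplying the public hostname yourself. Using localhost as the entry point is an assumption about where Caddy is listening; adjust to your setup.

```shell
# Request the controller UI through Caddy using the public hostname.
# -k skips certificate verification since we're connecting to localhost
# rather than the name on the certificate. A 200 or a redirect to the
# login page means the proxy path is working.
curl -sk -o /dev/null -w '%{http_code}\n' \
  -H 'Host: unifi.nugent.zone' https://localhost/
```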
Start Everything
Once the compose file, env file, and Mongo init script are in place:
docker compose up -d
Then reload Caddy so it picks up the config change. You can do what I always do and docker restart caddy, or use the more correct form:
docker exec caddy caddy reload --config /etc/caddy/Caddyfile
Restore the Backup
After that, open the Unifi UI and go through the restore-from-backup flow with the file you exported earlier.
At this stage I was still using the 8.0.24 image tag pinned above. That let me get the old backup imported cleanly before changing anything else. I waited until all my devices were showing telemetry in the UI before shutting down the container.
Upgrade to a Modern Version
Once the restore is complete and the controller is healthy again, update the image tag to something modern and bring the stack back up.
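The only change needed is the image tag on the unifi service; everything else in the compose file stays the same:

```yaml
unifi:
  image: lscr.io/linuxserver/unifi-network-application:10.1.85
```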
I had good luck going straight from 8.0.24 to 10.1.85, then running docker compose up -d again.
Hopefully this saves someone else from tearing their hair out.