I’ve been running a few services in Docker containers, with Caddy as a reverse proxy in front of them.
I’m a Docker n00b, but I’ve been reading up on it lately, and I’m trying to figure out a good way of updating these Docker-based services.
As far as I can tell, the usual/recommended way is to delete the container, pull the latest image, and recreate the container (obviously making sure settings/files/data aren’t lost, e.g. by keeping them on volumes).
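For reference, the update cycle I mean looks roughly like this (the container, image, and volume names here are just placeholders, and this assumes the data lives on a named volume):

```shell
# Hypothetical example: manually updating a container named "media-server"
# built from the image "example/media-server". The named volume "media-data"
# survives the container being removed.
docker pull example/media-server:latest   # fetch the newest image
docker stop media-server                  # stop the running container
docker rm media-server                    # remove it (the volume is kept)
docker run -d --name media-server \
  -v media-data:/data \
  example/media-server:latest             # recreate from the new image
```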
But since I’m using a reverse proxy to get HTTPS, it’s a bit tedious: I have to figure out the container’s newly allocated IP, change Caddy’s config, and restart it.
Is there an obvious approach I’m missing?
(Not so sure how I feel about Docker, but I wanted to give it a try before deciding …)
You can set up Caddy to point to your container’s name instead of its IP address (e.g. http://caddy). Docker runs an internal DNS server that resolves container names, as long as the containers are on the same user-defined network (note that name resolution doesn’t work on the default bridge network, so you’ll want to create a network and attach your containers to it).
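A minimal sketch of what that setup could look like (the network, container, and domain names here are made up, and the Caddyfile uses Caddy v2 syntax):

```shell
# Hypothetical example: put Caddy and the app on the same user-defined
# network so Docker's embedded DNS lets Caddy reach the app by name.
docker network create proxy-net

# Caddyfile proxying to the container name "myapp" on its internal port:
cat > Caddyfile <<'EOF'
example.com {
    reverse_proxy myapp:8080
}
EOF

docker run -d --name myapp --network proxy-net example/myapp
docker run -d --name caddy --network proxy-net \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  caddy:2
```

After recreating `myapp` (e.g. on update), the name still resolves, so the Caddy config doesn’t need to change.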
Also, you may want to look into docker-compose. I’m in a bit of a hurry as I write this, but you can create a YAML file that holds all your containers and their configuration. Updating is then a matter of pulling the new image and running docker-compose up -d <service_name>. Environment variables, volumes, and everything else are handled by docker-compose, so it’s a lot less effort to maintain.
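A sketch of what such a compose file might look like (the service names, image, and volume here are placeholders):

```
# docker-compose.yml — hypothetical example with a Caddy reverse proxy
# and one app service; both are on the compose project's default network,
# so Caddy can reach "myapp" by its service name.
version: "3"
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
  myapp:
    image: example/myapp:latest
    volumes:
      - app-data:/data
volumes:
  app-data:
```

Updating then becomes `docker-compose pull myapp && docker-compose up -d myapp`.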
… But I guess that would assume I’m running the Docker version of Caddy. (I’m running it natively, installed through apt I think, so it can’t resolve Docker’s internal DNS names.) I guess I should switch …
Wouldn’t Caddy be “something like nginx-proxy”?
At the time I set it up, the config syntax and getting a Let’s Encrypt certificate looked a bit quicker and easier …
nginx or HAProxy will be considered more closely if I move any of this into production. (I was testing media servers using Docker. I ended up running Plex without Docker.)