Hello selfhosters,
I have two IP routes on my selfhosted server:
- The first and default one routes through my ISP router.
- The second one is a Wireguard connection that is imported and managed via Network Manager with the below options so it does not interfere with the default route.
sudo nmcli con modify wg ipv4.never-default true
sudo nmcli con modify wg ipv6.never-default true
sudo nmcli con modify wg ipv6.routes '::/0'
sudo nmcli con modify wg ipv6.route-metric 1000
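(For context, the tunnel itself was originally imported into NetworkManager with something like the following; the config path here is hypothetical.)
sudo nmcli connection import type wireguard file /etc/wireguard/wg.conf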
I can test this setup with:
curl ifconfig.me // IP from ISP
curl --interface wg ifconfig.me // IP of the VPN
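You can also sanity-check the routing tables directly; a quick look (output will vary with your setup):
ip route show # the default route should still point at the ISP router
ip -6 route show # the ::/0 route via wg should appear with metric 1000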
Now I would like to tell Docker to create a bridge network whose outgoing traffic is routed through the second (VPN) route, but I am struggling to do it.
I’ve tried this:
docker network create vpn-net -o com.docker.network.host_ipv4=10.x.y.z // VPN inet obtained via ip addr show
but it does not work.
Do you have any suggestions? Thank you very much!
Edit to provide more context:
Currently, what I am doing is adding network_mode: service:gluetun to all the containers that need internet access: linkwarden, my arr stacks, qbit, an IRC client, etc. This works great, but it makes me paranoid because there is no isolation anymore, e.g. qbit could access (or at least ping) linkwarden’s database, since they are all in the same VPN network.
Therefore, I want more isolation between services: each service gets its own bridge network so that no other container can access the resources inside that network. I have thought about running a VPN for each service, but that sounds absurd, and there is also a limit of 5 devices, so it would be quite annoying.
That’s why I am asking whether there is any way to tell a Docker bridge network to use a specific host interface instead. The reason I don’t run a machine-wide VPN is that I prefer some services to have the highest network speed, without the VPN overhead (while something like qbit should have its own gluetun container with its own port forwarding).
For the same reason I don’t use a macvlan or ipvlan network: there is no isolation. Please correct me if I am wrong on this. Thank you :D
Yes, this is where docker’s limitations begin to show, and people begin looking at tools like Kubernetes, for things like advanced, granular control over the flow of network traffic.
Because such a thing is basically impossible in Docker, AFAIK. Responses like the ones you are getting tend to appear when the thing a user is attempting is anywhere from significantly non-trivial to basically impossible.
An easy way around this, if you still want to use Docker, is addressing the below bit, directly:
no isolation anymore, i.e qbit could access (or at least ping) to linkwarden’s database since they are all in the same VPN network.
As long as you have changed the default passwords for the databases and services, and keep the services up to date, it should not be a concern that the services have network-level access to each other: without the ability to authenticate to or exploit each other, there is nothing they can do.
If you insist on trying to get some level of network isolation between services while continuing to use Docker, your only real option is iptables* rules. This is where things get very painful, because iptables rules have no persistence by default and are kind of a mess to deal with. Also, Docker implements its own iptables setup instead of using the standard one, which results in weird behavior like Docker containers bypassing the firewall when they publish ports.
You will need a fairly good understanding of iptables to do this. I will also warn you in advance that you cannot rely on iptables rules based on IP addresses, as the IP addresses of Docker containers are ephemeral and change; you would have to create rules based on the hostnames of containers, which adds further complexity compared to just blocking by IP. EDIT: OR, you could give your containers static IP addresses.
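For the static-IP approach, a minimal sketch (all names, subnets, and addresses below are made up for illustration):
# two separate user-defined bridges, one per service
docker network create --subnet 172.30.0.0/24 linkwarden-net
docker network create --subnet 172.31.0.0/24 qbit-net
# pin the database to a fixed address so a rule can survive restarts
docker run -d --name linkwarden-db --network linkwarden-net --ip 172.30.0.10 postgres:16
# drop anything from qbit's subnet heading for the database
sudo iptables -I DOCKER-USER 1 -s 172.31.0.0/24 -d 172.30.0.10 -j DROP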
A good place to start is here. You will probably have to spend a lot of time learning all of the terminology and concepts listed here, and more. Perhaps you have better things to do with your time?
*Um, 🤓 ackshually it’s nftables, but the iptables-nft command offers a transparent compatibility layer enabling easier migrations from the older and no longer used iptables
EDIT: And of course nobody has done this before, and ChatGPT isn’t helpful. These are the kinds of problems where ChatGPT/LLMs fall apart and become completely unhelpful. It’s just “no, you’re wrong” over and over as you force your way through with actual expertise.
You can block traffic to a Docker container by its hostname using iptables, but there’s an important nuance: iptables works with IP addresses, not hostnames. So you’ll first need to resolve the container’s hostname to its IP address and then apply the rule.
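In shell terms, that resolve-then-block step might look like this (the container name is illustrative):
# look up the container's current IP, then block traffic to it
IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' qbittorrent)
sudo iptables -I DOCKER-USER 1 -d "$IP" -j DROP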
You’re right—container IPs change, so matching a single IP is brittle. Here are robust, hostname-friendly ways to block a container that keep working across restarts.
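One such approach is an ipset that you refresh whenever containers restart; a minimal sketch (the set name matches the rule quoted below, the address is a placeholder):
sudo ipset create blocked_containers hash:ip
sudo ipset add blocked_containers 172.30.0.10 # hypothetical container IP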
Exactly — good catch. The rule:
sudo iptables -I DOCKER-USER 1 -m set --match-set blocked_containers dst -j DROP
matches any packet whose destination is in that set, regardless of direction, so it also drops outgoing packets from containers to those IPs. You’re absolutely right on both points:
With network_mode: “container:XYZ”, there is no “between-containers” network hop. Both containers share the same network namespace (same interfaces, IPs, routing, conntrack, and iptables). There’s nothing to firewall “between” them at L3/L2—the kernel sees only one stack.
Alright, I will confess that I didn’t know this. This piece of info from ChatGPT changes what you want to do from “significantly non-trivial” to “basically impossible”. It means that such containers do not have separate IP addresses/networking for you to isolate from each other; they all share a single network namespace. You would have to isolate traffic based on other factors, like the process ID or user ID, which are not really inherently tied to the container.
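If you want to verify the shared-namespace behavior yourself, a quick check (container names here are hypothetical):
# the second container joins the first one's network namespace,
# so it prints the exact same interfaces and addresses
docker run -d --name netns-demo alpine sleep 1d
docker run --rm --network container:netns-demo alpine ip addr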
As a bonus:
Docker’s ICC setting historically controls inter-container comms on bridge networks (default bridge or a user-defined bridge with enable_icc=). It doesn’t universally control every mode, and it won’t help when two containers share a netns.
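For reference, toggling ICC off on a user-defined bridge looks roughly like this (the network name is made up, and as noted above the behavior of this option has historically varied):
docker network create -o com.docker.network.bridge.enable_icc=false isolated-net
# containers on this network should not be able to reach each other directly
docker run -d --name svc-a --network isolated-net alpine sleep 1d
docker run --rm --network isolated-net alpine ping -c1 -W1 svc-a # expected to time out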
Useful for understanding terminology, I guess, but there is a class of problems these tools really struggle to solve. I like to assign problems like this to people; they often reach for ChatGPT at first, then get frustrated and quickly realize ChatGPT is not a substitute for using your brain.
Thank you very much for your response!
First off, take this for what it is: advice from a guy who can follow instructions but does not understand all the inner workings of Docker.
I use Gluetun and have a set of apps that I run through it. They are all in the same compose file. Each of the ports is defined in the Gluetun section and not with the individual app. Then each app’s network_mode is set to service:gluetun
This routes all the traffic for those apps through the VPN while maintaining my regular network for everything else.
I second this. Gluetun makes it so easy; working with Docker’s internal networking is such a pain.
Yep, this is exactly what I do.
This was my thought as well. Anything that requires the VPN gets added to that stack, and if I can bind it to the “tun” device I do - but the container requires gluetun to be up.
So I got back to my server, and here’s what I do:
gluetun settings:
services:
  gluetun:
    *snip*
    ports:
      *snip*
      - 8090:8090 # port for qbittorrent
      *snip*
qbittorrent (in the same compose.yml):
  qbittorrent:
    image: linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      *snip*
      - WEBUI_PORT=8090
      *snip*
    network_mode: service:gluetun # run on the vpn network
    depends_on:
      gluetun:
        condition: service_healthy
    *snip*
Also, in qbittorrent settings you can bind it to a network device. In my case it’s “tun0.” This same thing can probably be done w/ a docker network in a gluetun container and separate containers that rely on that network being up, but I haven’t looked into it. Right now, I have 2 other services that require VPN, and I’m looking at possibly 1 or 2 more. That’s pretty manageable as a single stack, I think.
Thank you very much for your reply but this is not really what I need. Please see the edit for more context :D
I’ve never used NetworkManager on a server and don’t understand your routing configuration; I’m assuming you have wg0 configured with a default route (check with ip route list).
You should be able to connect a Docker network to the VPN by using a macvlan network instead of a bridge type and setting its parent interface to the wg0 interface.
docker network create -d macvlan \
  --subnet=<internal vpn network>/24 \
  --gateway=<gateway ip> \
  -o parent=wg0 vpn-net
(modified from the Docker documentation)
Probably also set an ip-range on the network to make the auto assigned ips not conflict with other wireguard nodes (see linked documentation).
Make sure the allowed ips in the wireguard configs are set correctly.
You can also do ipv6 like this, see the end of the linked documentation page.
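Putting those pieces together, a sketch with an ip-range and IPv6 enabled (all addresses here are placeholders, not values from your setup):
docker network create -d macvlan \
  --subnet=10.8.0.0/24 \
  --ip-range=10.8.0.128/28 \
  --gateway=10.8.0.1 \
  --ipv6 --subnet=fd00:8::/64 \
  -o parent=wg0 vpn-net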
Thank you very much for your reply but this is not really what I need. Please see the edit for more context :D