Currently, I have two VPN clients on most of my devices:
- One for connecting to a LAN
- One commercial VPN for privacy reasons
I usually stay connected to the commercial VPN on all my devices, unless I need to access something on that LAN.
This setup has a few drawbacks:
- Most commercial VPN providers have a limit on the number of simultaneously connected clients
- I can either obfuscate my IP or access resources on that LAN, including my Pi-hole for custom DNS-based blocking, but not both at once
One possible solution would be to route all internet traffic through a VPN client on the router in that LAN, while still keeping at least one port open for the VPN Docker container so the LAN stays reachable from outside. But then split tunneling around that would be pretty hard to achieve.
I want to be able to connect to a VPN host container on the LAN, which in turn routes all internet traffic through another VPN client container while allowing LAN traffic, but still be able to split tunnel specific applications on my Android/Linux/iOS devices.
Basically this:
```
+---------------------+    internet traffic    +--------------------+
|                     |   remote LAN traffic   |                    |
|       Client        |----------------------->| VPN Host Container |
| (Android/iOS/Linux) |                        | in remote LAN      |
|                     |                        |                    |
+---------------------+                        +--------------------+
           |                                     |               |
           | split tunneled traffic              |               | internet traffic
           |                  remote LAN traffic |               |
           v                                     v               v
+---------------------+                    +-----------+ +---------------------------+
| regular LAN or      |                    |remote LAN | |   VPN Client Container    |
| internet connection |                    +-----------+ | connects to commercial VPN|
+---------------------+                                  |                           |
                                                         +---------------------------+
```
Any recommendations on how to achieve this, especially considering client apps for Android and iOS with the ability to split tunnel per application?
Update:
Got it working by following this guide.
I ended up modifying the setup to have better control over potential IP leakage.
I use Tailscale to do this. I install the software on everything I can, but for resources on the LAN that don’t have Tailscale running I use its Subnet Router feature to masquerade the traffic and connect to those clients.
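The Subnet Router part boils down to something like this (a minimal sketch; 192.168.1.0/24 is just a stand-in for whatever your actual LAN subnet is):

```
# On a machine inside the LAN that runs Tailscale:
# enable forwarding so it can relay traffic to non-Tailscale devices
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

# advertise the LAN subnet to the tailnet
# (the route still has to be approved in the admin console afterwards)
tailscale up --advertise-routes=192.168.1.0/24
```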
As for the commercial VPN, it’s a bit more involved. I have a few Exit Nodes (VPS) that take incoming Tailscale traffic destined to the Internet and re-route it via the commercial VPN’s WireGuard network interface.
This was a huge challenge for me (lots of `iptables` and `ip6tables` rules), but I have it down to a reproducible script I can provide if you’d like an example. My next goal is to containerize the two VPS servers into one with Docker. One annoyance with Tailscale is that you can’t have multiple Nodes running on the same machine (hence my temporary two-VPS solution).
Note: capitalized terms are Tailscale feature names
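A rough sketch of what that containerized setup could look like (untested; the official tailscale/tailscale image is assumed, and the container names and auth keys are placeholders):

```
# Two independent Tailscale nodes on one host: each container gets its own
# state volume, so each one registers as a separate Node in the tailnet.
docker run -d --name ts-exit-a --hostname ts-exit-a \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e TS_USERSPACE=false \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -e TS_AUTHKEY=tskey-auth-XXXXXXXX \
  -e TS_EXTRA_ARGS="--advertise-exit-node" \
  -v ts-exit-a-state:/var/lib/tailscale \
  tailscale/tailscale

docker run -d --name ts-exit-b --hostname ts-exit-b \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -e TS_USERSPACE=false \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -e TS_AUTHKEY=tskey-auth-YYYYYYYY \
  -e TS_EXTRA_ARGS="--advertise-exit-node" \
  -v ts-exit-b-state:/var/lib/tailscale \
  tailscale/tailscale
```

The commercial VPN tunnel and its policy routing would still need to be wired up inside (or alongside) each container.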
I’ve been tempted by Tailscale a few times before, but I don’t want to depend on their proprietary clients and control server. The latter could be solved by self-hosting Headscale, but at this point I figure that a basic WireGuard setup is probably easier to maintain.
I’d like to have a look at your rules setup. I’m especially curious if/how you handled the case of the commercial VPN WireGuard tunnel(s) on your exit node(s) going down, which, depending on the setup, could send requests meant for the commercial VPN straight out through your VPS exit node instead.
Personally, I ended up with two WireGuard containers in the target LAN: a **wireguard-server** and a **wireguard-client** container.
They both share a Docker network with a specific subnet {DOCKER_SUBNET}, and wireguard-client has a static IP {WG_CLIENT_IP} in that subnet.
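For illustration, this is roughly what that looks like on the Docker side (a minimal sketch: the subnet and IP are made-up stand-ins for {DOCKER_SUBNET} and {WG_CLIENT_IP}, and the linuxserver/wireguard image is just one option):

```
# Shared Docker network with a fixed subnet (stand-in for {DOCKER_SUBNET})
docker network create --subnet=172.20.0.0/24 wg_net

# wireguard-client gets a static IP in that subnet (stand-in for {WG_CLIENT_IP})
docker run -d --name wireguard-client \
  --network wg_net --ip 172.20.0.50 \
  --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -v "$(pwd)/wireguard-client:/config" \
  lscr.io/linuxserver/wireguard

# wireguard-server exposes its listen port so clients can reach it from outside
docker run -d --name wireguard-server \
  --network wg_net \
  --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -v "$(pwd)/wireguard-server:/config" \
  -p 51820:51820/udp \
  lscr.io/linuxserver/wireguard
```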
The wireguard-client has a slightly altered standard config to establish a tunnel to an external endpoint, a commercial VPN in this case:
```
[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Address = XXXXXXXXXXXXXXXXXXX
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = XXXXXXXXXXXXXXXXXXXX
```
where

```
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
```
are responsible for properly routing traffic coming in from outside the container and
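```
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
```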
is your standard kill-switch meant to block traffic going out of any network interface except the tunnel interface in the event of the tunnel going down.
The wireguard-server container has these PostUps and PostDowns:
```
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

The default rules that come with the template; they allow packets to be routed through the server tunnel.
```
PostUp = wg set wg0 fwmark 51820
```

Traffic going out of the tunnel interface gets marked with 51820.
```
PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820
```

Adds a default route to routing table 51820 that sends all packets through the wireguard-client container.
```
PostUp = ip -4 rule add not fwmark 51820 table 51820
```

Packets without the mark should use routing table 51820.
```
PostUp = ip -4 rule add table main suppress_prefixlength 0
```

Respect routes manually added to the main routing table (only the default route there is suppressed).
```
PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
```

Routes packets with a destination in {LAN_SUBNET} to the actual {LAN_SUBNET} of the host.
```
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
```

Deletes those rules again when the tunnel goes down.
```
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
```

Basically the same kill-switch as in wireguard-client, but with the mark substituted manually, since the command it relied on didn’t work in my server container for some reason, and AFAIK the mark doesn’t change anyway.
Now do I actually need the kill-switch in wireguard-server? Is the kill-switch in wireguard-client sufficient? I’m not even sure anymore.
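In case anyone wants to verify a setup like this, the routing can be sanity-checked from inside the wireguard-server container with standard tools:

```
# The policy rules added by the PostUps (the "not fwmark ... lookup 51820"
# and suppress_prefixlength rules) should show up here
ip rule show

# Table 51820 should contain only the default route via the wireguard-client IP
ip route show table 51820

# The fwmark the kill-switch matches on (0xca6c is hex for 51820)
wg show wg0 fwmark

# Which path an unmarked packet to some internet address would take
ip route get 1.1.1.1
```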
Your setup looks more advanced than mine, and I’d really like to do something similar. I’m just going to copy/paste what I have with some addresses replaced by:
- `VPN_IPV4_CLIENT_ADDRESS`: The WireGuard IPv4 address of the VPN provider’s interface (e.g. 172.0.0.1)
- `VPN_IPV6_CLIENT_ADDRESS`: The WireGuard IPv6 address of the VPN provider’s interface
- `VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE`: The next IPv6 address that comes after `VPN_IPV6_CLIENT_ADDRESS`. I can’t remember the logic behind this, but I’d found an article online explaining it.
- `WG_INTERFACE`: The WireGuard network interface name (e.g. wg0) for the commercial VPN

I left `100.64.0.0/10` and `fd7a:115c:a1e0::/96` in my example because those are the networks Tailscale traffic will come from. I also left `tailscale0` because that is the typical interface. Obviously these can be changed to support any network.

I’m using Alpine Linux, so I don’t have the `PostUp`, `PostDown`, etc. in my WireGuard configuration. I’m not using `wg-quick` at all.

Before I hit paste, one thing I’ll say is that I haven’t addressed the “kill switch” yet. But so far (~4 months) when the VPN provider’s tunnel goes down, nothing leaks. 🤞
```
# Enable packet forwarding
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -p

# Bring up the commercial VPN's WireGuard interface
ip link add dev WG_INTERFACE type wireguard
ip addr add VPN_IPV4_CLIENT_ADDRESS/32 dev WG_INTERFACE
ip -6 addr add VPN_IPV6_CLIENT_ADDRESS/127 dev WG_INTERFACE
wg setconf WG_INTERFACE /etc/wireguard/WG_INTERFACE.conf
ip link set up dev WG_INTERFACE

# Masquerade traffic leaving via the VPN interface
iptables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
iptables -t nat -A POSTROUTING -o WG_INTERFACE -s 100.64.0.0/10 -j MASQUERADE
ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -s fd7a:115c:a1e0::/96 -j MASQUERADE

# Allow forwarding between the Tailscale interface and the VPN interface
iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
iptables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT
ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
ip6tables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Policy routing: traffic arriving from the Tailscale networks goes out via the VPN
mkdir -p /etc/iproute2
echo "70 wg" >> /etc/iproute2/rt_tables
echo "80 tailscale" >> /etc/iproute2/rt_tables
ip rule add from 100.64.0.0/10 table tailscale
ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table tailscale
ip -6 rule add from fd7a:115c:a1e0::/96 table tailscale
ip -6 route add default via VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE dev WG_INTERFACE table tailscale
ip rule add from VPN_IPV4_CLIENT_ADDRESS/32 table wg
ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table wg

# Start Tailscale and enable it at boot
service tailscale start
rc-update add tailscale default

# Accept DNS queries coming in over Tailscale
iptables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT
ip6tables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
ip6tables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT

# Local resolver
service unbound start
rc-update add unbound default

# Persist firewall rules
/sbin/iptables-save > /etc/iptables/rules-save
/sbin/ip6tables-save > /etc/ip6tables/rules-save

# Join the tailnet as an exit node
tailscale up --accept-dns=false --accept-routes --advertise-exit-node
```
Forgot to mention that I run a DNS server for blocking too. When using Tailscale I’ve found it’s important to use their resolver as the upstream, otherwise App Connectors won’t work (the VPN provider tunnel on each VPS routes to a different country, so DNS wasn’t in sync). This kind of sucks, but I’ve made do with it after a month or two of App Connectors being very iffy.
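For reference, pointing unbound at the Tailscale resolver (100.100.100.100, its MagicDNS address) is just a forward-zone. The include path below is an assumption; adjust it to however your unbound.conf is organized:

```
# Forward all queries from unbound to Tailscale's MagicDNS resolver.
# The config path is a guess; use whatever location your unbound.conf includes.
cat > /etc/unbound/unbound.conf.d/tailscale-upstream.conf <<'EOF'
forward-zone:
    name: "."
    forward-addr: 100.100.100.100
EOF

service unbound restart
```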