For a description of the setup I use to connect all my computers via tinc in the first place, see How to set up a small tinc VPN using IPv6 payloads over IPv4 and iproute2.
After I had my tinc network up and running, I wondered how I could tunnel my internet traffic from the peer nodes to the entry node through tinc to replace my old OpenVPN setup.
As the internet is more or less still based on IPv4 connectivity, the easiest way I found was to make tinc dual stack on the affected nodes and route the traffic outwards through a standard IPv4 NAT on the entry node.
Choosing an IPv4 Subnet
I’d recommend using a random IPv4 private subnet instead of 10.0.0.X or the like, see the Choosing an IPv6 Subnet section of my first tinc post for my reasons.
So let’s get started by rolling the dice for our IPv4 subnet. This time we only need 16 bits of random information instead of 40, or a 4-digit hexadecimal number instead of the 10-digit one we needed for IPv6. That shows quite drastically how short IPv4 addresses are.
And as one usually writes IPv4 addresses in decimal notation, you can enter this in your shell to generate your address range with Python:
python
import random
print("10.{}.{}.X".format(random.randint(0,255), random.randint(0,255)))
Rolling the dice by hand is a valid approach, too. ;) For me, this resulted in 10.88.24.X.
Configuration
Entry Node Configuration
I’ll list the whole, but modified, config files from my first tinc article here to put everything into context.
As a reminder, $STATIC_IP is the public static (presumably IPv4) IP of your entry node, and $INTERFACE will be replaced by tinc itself, so there’s no need to do that yourself. The remaining variables should be self-explanatory, but you can read about them in the first tinc article, too.
Let’s get started with /etc/tinc/$VPN_NAME/hosts/$ENTRY_NODE_NAME, where you have to add another Subnet line with the tinc-internal IPv4 address of the entry node:
Address = $STATIC_IP
Subnet = fde3:b35d:dadf::1/128
Subnet = 10.88.24.1/32
# make this node the default gateway for the network
Subnet = 0.0.0.0/0
Thanks to Bernhard for pointing out the missing Subnet = 0.0.0.0/0 part.
And then, assign the new address to the tinc interface brought up by /etc/tinc/$VPN_NAME/tinc-up:
#!/bin/sh
ip link set $INTERFACE up
ip addr add fde3:b35d:dadf::1/64 dev $INTERFACE
ip addr add 10.88.24.1/24 dev $INTERFACE
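tinc only runs tinc-up when the daemon (re)starts, so the entry node needs a restart for the new address to appear. A quick check could look like this - assuming a systemd-based distribution where the service is named tinc@$VPN_NAME and the interface carries the VPN’s name:
# restart tinc so it re-runs tinc-up with the new address (service name is an assumption)
sudo systemctl restart tinc@$VPN_NAME
# the interface should now list both fde3:b35d:dadf::1 and 10.88.24.1
ip addr show dev $VPN_NAME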
Peer Node Configuration
The same needs to be done to each peer node that needs to access the internet through tinc - with an individual IPv4 address per node, of course.
First, in /etc/tinc/$VPN_NAME/hosts/$NODE_NAME:
Subnet = fde3:b35d:dadf::X/128
Subnet = 10.88.24.X/32
And then, in /etc/tinc/$VPN_NAME/tinc-up:
#!/bin/sh
ip link set $INTERFACE up
ip addr add fde3:b35d:dadf::X/64 dev $INTERFACE
ip addr add 10.88.24.X/24 dev $INTERFACE
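Once tinc has been restarted on the peer as well, you can verify the IPv4 layer inside the VPN before touching any routes or NAT - a simple ping to the entry node’s tinc address is enough:
# run on the peer node: the tinc-internal IPv4 path to the entry node should work
ping -c 3 10.88.24.1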
NAT on the entry node
If you haven’t done it already, you need to enable IPv4 forwarding in /etc/sysctl.conf:
net.ipv4.ip_forward=1
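The setting in /etc/sysctl.conf only takes effect on the next boot; to apply it right away, you can reload the file:
# apply the forwarding setting without a reboot
sudo sysctl -p /etc/sysctl.conf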
I use shorewall on all my servers as a frontend to iptables. Everything lives in a few config files, and in contrast to iptables it’s actually nice and easy to use. Shorewall can be used as a simple one-interface firewall, but also supports more complex setups, like the one we need here.
In case you prefer to use raw iptables, I expect you to know what to do anyway. ;)
The situation for our tinc setup is exactly like the two-interfaces example that comes with shorewall; on Debian it’s located at /usr/share/doc/shorewall/examples/two-interfaces. The only thing you need to adjust is that the tinc network lives on the interface $VPN_NAME and not on eth1 - the change of the zone name from loc to tinc is purely cosmetic.
Change the example eth1 interface to your tinc interface in /etc/shorewall/interfaces:
...
# replaced
# loc eth1 tcpflags,nosmurfs,routefilter,logmartians
# with
tinc $VPN_NAME tcpflags,nosmurfs,routefilter,logmartians
And then change the name in /etc/shorewall/zones:
# replaced
# loc ipv4
# with
tinc ipv4
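The masquerading itself is also part of the two-interfaces example and needs the same kind of adjustment. As a sketch - assuming eth0 is the public interface of your entry node and an older shorewall release that still uses /etc/shorewall/masq (shorewall 5.x replaced it with /etc/shorewall/snat and a MASQUERADE action) - the entry could look like this:
#INTERFACE      SOURCE
# NAT everything coming from the tinc IPv4 subnet out of the public interface
eth0            10.88.24.0/24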
Then adjust the rest of shorewall to your needs. ;) The documentation is excellent in my opinion, so I won’t repeat it here.
Switching from ufw to shorewall was a great relief for me - even for simple one-interface firewalling, I’d recommend using shorewall.
Tunnel in Operation
Enabling the Tunnel
This is a bit cumbersome, I have to admit. One could automate this with a script, or maybe even a convenient netctl unit on Arch Linux, but as I don’t need the tunneling all too often, I didn’t take the time for it. In case you hack something together, I’d be happy to hear from you.
First, you need to find out the IP of your router and the name of the interface the router is on; let’s call them $ROUTER_IP and $ROUTER_INTERFACE. Both will be needed to disable the tunnel again, so keep a note!
ip route show to 0/0
# output: default via 192.168.178.1 dev enp0s25
# of the form: default via $ROUTER_IP dev $ROUTER_INTERFACE
Additionally, you’ll need your $STATIC_IP.
# Add a static route to the entry node via the router
# so that the entry node can still be reached when you change your default route
sudo ip route add $STATIC_IP via $ROUTER_IP dev $ROUTER_INTERFACE
# route all other internet traffic through the tinc interface,
# via the entry node's tinc IPv4 address
sudo ip route change default via 10.88.24.1 dev $VPN_NAME
If you resolve DNS against your local $ROUTER_IP, you probably want to change that for as long as the tunnel is up, to prevent your DNS queries from leaking. If you don’t, the router you are connected to (and the internet provider behind it) will see your DNS queries in plaintext.
You need an alternative DNS server, e.g. from the Open Root Server Network or the CCC and the like. You can also use 8.8.8.8 - in case you want to report every site you visit to Google instead of the other options.
In /etc/resolv.conf:
# Comment out this line:
# nameserver $ROUTER_IP
# add this line with the DNS server of your choice,
# here: dnscache.berlin.ccc.de
nameserver 213.73.91.35
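If you’d rather not repeat these steps by hand every time, here is a minimal, untested sketch of what an enable script could look like. The entry node’s tinc address 10.88.24.1 is taken from above; $STATIC_IP, the VPN name and the DNS server are placeholders you have to fill in, and the resolv.conf handling is deliberately naive (resolvconf or NetworkManager will happily overwrite it on many systems):
#!/bin/sh
# tunnel-up: sketch only, run as root and adjust the variables to your setup
set -e
STATIC_IP=203.0.113.1      # placeholder: public IP of the entry node
VPN_NAME=myvpn             # placeholder: name of the tinc network/interface
DNS_SERVER=213.73.91.35    # dnscache.berlin.ccc.de, as above
ENTRY_TINC_IP=10.88.24.1   # tinc-internal IPv4 address of the entry node
# discover the current default route (router IP and interface)
ROUTER_IP=$(ip route show to 0/0 | awk '{print $3; exit}')
ROUTER_INTERFACE=$(ip route show to 0/0 | awk '{print $5; exit}')
# keep the entry node reachable via the router once the default route changes
ip route add "$STATIC_IP" via "$ROUTER_IP" dev "$ROUTER_INTERFACE"
# route everything else through tinc
ip route change default via "$ENTRY_TINC_IP" dev "$VPN_NAME"
# switch to a non-local DNS server while the tunnel is up
cp /etc/resolv.conf /etc/resolv.conf.pre-tunnel
echo "nameserver $DNS_SERVER" > /etc/resolv.conf
# print what you need to undo this later
echo "tunnel up - revert with: ip route change default via $ROUTER_IP dev $ROUTER_INTERFACE"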
Disabling the Tunnel
To disable the tunnel, we need to reverse the steps from above:
# change the default route back to route traffic to the router
sudo ip route change default via $ROUTER_IP dev $ROUTER_INTERFACE
# delete the explicit route to your entry node's $STATIC_IP - this is optional
# as it doesn't do anything anymore, and will vanish on next reboot.
sudo ip route del $STATIC_IP
In /etc/resolv.conf:
# comment out this line again
# nameserver 213.73.91.35
# and remove the comment from your router's nameserver line:
nameserver $ROUTER_IP
Next Steps
This setup still lacks IPv6 connectivity, which would be nice especially when you’re behind an IPv4-only internet connection.
One could provide that with more shorewall rules and a public static IPv6 address for each tinc node, or with radvd handing out the /64 prefix you probably got with your server and the IPv6 Privacy Extensions enabled on the clients, which would be the IPv6 equivalent of the NAT in the sense that connections become indistinguishable.
But as most hosts I need to reach via IPv6 are my own, and therefore already inside my tinc network, I skipped this step for now.
Other Options
4in6
Another option is to use 4in6 tunnels, which can be created with ip -6 tunnel add mode ipip6 ... in tinc-up and tinc-down. This is functionally the same as making tinc dual stack, but much more hassle to configure, as you end up with a separate interface for each peer node.
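To illustrate the hassle: a single such tunnel on the entry node could look roughly like the following in tinc-up, with one block per peer and the mirror image on each peer - the name tun_peer1 and the ::2 / .2 addresses are made up for this sketch:
# hypothetical 4in6 tunnel to one peer, carried over the tinc IPv6 addresses
ip -6 tunnel add tun_peer1 mode ipip6 local fde3:b35d:dadf::1 remote fde3:b35d:dadf::2
ip link set tun_peer1 up
ip addr add 10.88.24.1 peer 10.88.24.2 dev tun_peer1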
NAT64
A third option would be NAT64 combined with DNS64, which translates all A records to synthesized AAAA records so that clients establish IPv6-only connections to the NAT, which in turn translates the requests transparently to IPv4.
This would allow all tinc nodes to be IPv6-only, at the cost of running a full DNS server, an advanced NAT setup and a severe interference with the clients’ traffic - but well, you need a NAT anyway.
I might try it someday following this guide, but if you don’t care whether your network is IPv6-only or not, I don’t think it’s worth the effort compared to the IPv4 setup above: How to setup an IPv6-only network with NAT64, DNS64 and Shorewall