Setting up a 6in4 with only a /64

I know one is not supposed to subnet a /64 and doing so will cause things like SLAAC to break.

Is there a feasible way to set up a 6in4 tunnel to my own virtual machine if the hosting provider only routes a /64 to my VM?

I'd definitely prefer to use my own VM rather than Hurricane Electric, as it's a 4-core Xeon on a blazingly fast link with sub-10ms latency between my router and the VM.

2 Likes

Options: NAT the IPv6 traffic, or use a relay so the LAN can share the /64.

1 Like

Thanks. The NAT option is a no-no for me, otherwise I may as well just keep using NAT ipv4. No real advantage to this option.

Not sure the relay option will work very well either - I'll need to look at it. I'd need to subnet the /64 since the two ends of the sit tunnel require IPs, so the prefix routed to the LAN would end up longer than /64. That will break SLAAC for sure and might break a number of other things.

1 Like

Note that in theory you don't need to use GUA for the tunnel itself.
If you control both server and client sides, you can build a tunnel using a ULA.
And if you have a separate GUA prefix, you can route it over the ULA inside the tunnel.


You can use these how-tos to set up a 6in4 tunnel on a Linux-based VM:

2 Likes

Cheers - I'll look into using a ULA. I do control both ends...that would probably work if I can get it to function properly

1 Like

That appears to work although it needs manual configuration using iproute2.

Everything on my LAN has SLAAC'ed out properly and can reach the public ipv6 internet.

Quite honestly, I should have thought of this since we routinely use private ipv4s for tunnel endpoints.

Thanks for the suggestion.

2 Likes

Just a quick bit of documentation for anyone landing on this thread via a search.

Problem: you have a VPS or a VM in some data centre and want to set up a 6in4 tunnel from your OpenWrt router via your VM so that you can access the public ipv6 internet. Your provider routes only a /64 to your VM, and per RFC 5375 you can't subnet it without breaking SLAAC and other things.

Solution: on your VM, set up a sit tunnel using ULA addresses.

For the purposes of this, we assume AAAA:BBBB:C:D::/64 is the prefix routed to your VM.

Now, go here and generate a ULA: https://simpledns.plus/private-ipv6

Let's assume that it gives us the following:

Prefix/L:	  fd
Global ID:	  4e9ef20e44
Subnet ID:	  2c71
Combined/CID:	  fd4e:9ef2:0e44:2c71::/64
IPv6 addresses:	  fd4e:9ef2:0e44:2c71:xxxx:xxxx:xxxx:xxxx

On your virtual machine, do

# create the 6in4 (sit) tunnel between the two public IPv4 endpoints
ip tunnel add 6in4_tun0 mode sit local <your VM's public ipv4> remote <your router's public ipv4> ttl 64 dev eth0
# number the tunnel out of the ULA: the VM end gets ::0, the router end will get ::1
ip addr add dev 6in4_tun0 fd4e:9ef2:0e44:2c71::0/127
ip link set 6in4_tun0 up
# route the provider's /64 towards the router over the tunnel
ip -6 route add AAAA:BBBB:C:D::/64 via fd4e:9ef2:0e44:2c71::1
# allow forwarding through the tunnel and enable IPv6 forwarding
ip6tables -I FORWARD -i 6in4_tun0 -j ACCEPT
ip6tables -I FORWARD -o 6in4_tun0 -j ACCEPT
sysctl -w net.ipv6.conf.all.forwarding=1

On your OpenWrt router, go to LuCI and add a new 6in4 interface.
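
Something along these lines should end up in /etc/config/network (a sketch - the interface name 'vm6in4' and the MTU are arbitrary choices of mine; the addresses follow the example above):

config interface 'vm6in4'
	option proto '6in4'
	option peeraddr '<public ipv4 of your VM>'
	option ip6addr 'fd4e:9ef2:0e44:2c71::1/127'
	option ip6prefix 'AAAA:BBBB:C:D::/64'
	option mtu '1480'

The ip6prefix option is what hands the /64 on to the LAN, so SLAAC keeps working.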

You'll need to add some firewall rules
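
If you're doing it from the command line rather than LuCI, covering the new interface with the wan zone is usually all that's needed - a sketch, assuming the default firewall config and the interface name 'vm6in4' from above:

# assumes @zone[1] is the default 'wan' zone
uci add_list firewall.@zone[1].network='vm6in4'
uci commit firewall
/etc/init.d/firewall restart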

You might also want to force your Windows hosts to prefer ipv4 over ipv6 (the default is the other way around), given that you're using a tunnelling mechanism. You can see here how to do it.
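
One common way (an example only - check the exact syntax for your Windows version) is to raise the precedence of the IPv4-mapped prefix above ::/0 in the prefix policy table, from an elevated command prompt:

netsh interface ipv6 set prefixpolicy ::ffff:0:0/96 46 4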

1 Like

It performs like a pig though. Here are two iperf3 tests to the VM, one over ipv4 and the other over ipv6. Am I missing something here? There should not be this much performance overhead.

root@localhost:/etc/bind# iperf3 -R -O 3 -c <remote ipv4>
Connecting to host <remote ipv4>, port 5201
Reverse mode, remote host <ipv4> is sending
[  5] local <local ipv4> port 34474 connected to <ipv4> port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  8.57 MBytes  71.9 Mbits/sec                  (omitted)
[  5]   1.00-2.00   sec  72.4 MBytes   607 Mbits/sec                  (omitted)
[  5]   2.00-3.00   sec   104 MBytes   876 Mbits/sec                  (omitted)
[  5]   0.00-1.00   sec   104 MBytes   873 Mbits/sec                  
[  5]   1.00-2.00   sec   101 MBytes   844 Mbits/sec                  
[  5]   2.00-3.00   sec   100 MBytes   842 Mbits/sec                  
[  5]   3.00-4.00   sec   100 MBytes   841 Mbits/sec                  
[  5]   4.00-5.00   sec   100 MBytes   842 Mbits/sec                  
[  5]   5.00-6.00   sec   100 MBytes   841 Mbits/sec                  
[  5]   6.00-7.00   sec   100 MBytes   841 Mbits/sec                  
[  5]   7.00-8.00   sec   100 MBytes   841 Mbits/sec                  
[  5]   8.00-9.00   sec   100 MBytes   841 Mbits/sec                  
[  5]   9.00-10.00  sec   100 MBytes   841 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec  1009 MBytes   845 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1007 MBytes   845 Mbits/sec                  receiver

iperf Done.
root@localhost:/etc/bind# iperf3 -R -O 3 -6 -c <remote ipv6>
Connecting to host <remote ipv6>, port 5201
Reverse mode, remote host <remote ipv6> is sending
[  5] local <local ipv6> port 60922 connected to <remote ipv6> port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  6.79 MBytes  56.9 Mbits/sec                  (omitted)
[  5]   1.00-2.00   sec  25.5 MBytes   214 Mbits/sec                  (omitted)
[  5]   2.00-3.00   sec  25.7 MBytes   216 Mbits/sec                  (omitted)
[  5]   0.00-1.00   sec  26.0 MBytes   218 Mbits/sec                  
[  5]   1.00-2.00   sec  25.4 MBytes   213 Mbits/sec                  
[  5]   2.00-3.00   sec  29.0 MBytes   243 Mbits/sec                  
[  5]   3.00-4.00   sec  29.9 MBytes   251 Mbits/sec                  
[  5]   4.00-5.00   sec  30.2 MBytes   254 Mbits/sec                  
[  5]   5.00-6.00   sec  30.8 MBytes   258 Mbits/sec                  
[  5]   6.00-7.00   sec  30.6 MBytes   256 Mbits/sec                  
[  5]   7.00-8.00   sec  30.1 MBytes   252 Mbits/sec                  
[  5]   8.00-9.00   sec  30.4 MBytes   255 Mbits/sec                  
[  5]   9.00-10.00  sec  30.0 MBytes   252 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec   294 MBytes   246 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   292 MBytes   245 Mbits/sec                  receiver

iperf Done.

Edit: both the router and the VM are barely breaking a sweat, so it's a very long way off being CPU bound. The router is an 8-core Intel C3758 with 16GB of RAM and the VM is a 4-core Xeon with 2GB of RAM.

2 Likes

Huh. Curious.

My ISP is doing something with protocol 41, it seems. I stumbled across this article:

So I routed my sit tunnel over my wireguard tunnel and then the iperf3 caps out at the wireguard tunnel's speed, quite a bit faster.

Not optimal, since my wireguard performance isn't even close to my line speed, but still a lot better....
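
Roughly what that amounted to on the VM side - a sketch, with 10.200.0.1 and 10.200.0.2 standing in for the two WireGuard tunnel addresses:

ip tunnel del 6in4_tun0
ip tunnel add 6in4_tun0 mode sit local 10.200.0.1 remote 10.200.0.2 ttl 64
ip addr add dev 6in4_tun0 fd4e:9ef2:0e44:2c71::0/127
ip link set 6in4_tun0 up
# the /64 route and ip6tables rules from the earlier post need re-adding afterwards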

1 Like

Yep, WG can replace 6in4, providing IPv4 and/or IPv6 connectivity over the tunnel.
So I hope you are not wrapping the 6in4 protocol inside a WG tunnel.
Actually, I have a similar use case with a WG server on a VPS as a dual-stack VPN.

In theory, there are multiple methods to build an IP tunnel, encrypted or not.
But some of them can be easier than others to provide IPv6 connectivity.
ISP traffic shaping is a pain in the backside, so encryption might be an acceptable solution.

I just did that as a quick and dirty test to check whether protocol 41 was being throttled - I'm in the process of tearing down the 6in4 and just routing the /64 prefix over the Wireguard tunnel after I've configured it for ipv6.
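
The gist of that change, as a sketch - it assumes wg-quick, an interface called wg0, and the made-up addresses from above: give the WireGuard interface an IPv6 address on the VM and add the /64 to the router's AllowedIPs.

# VM side, /etc/wireguard/wg0.conf (excerpt)
[Interface]
Address = 10.200.0.1/24, fd4e:9ef2:0e44:2c71::a/64

[Peer]
# the OpenWrt router
AllowedIPs = 10.200.0.2/32, AAAA:BBBB:C:D::/64

wg-quick installs routes for everything in AllowedIPs, so the /64 gets routed towards the router automatically; IPv6 forwarding on the VM still needs enabling as before.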

1 Like

So here's some more evidence of the aggressive traffic shaping by my ISP.

I set up an L2TP tunnel between my router and my virtual machine. L2TP has the feature that the encapsulation type can be either UDP or IP protocol 115, which makes for a nice test to examine possible shaping behaviour on the different protocols.
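
For anyone wanting to reproduce this, an unmanaged L2TPv3 tunnel can be created with iproute2 along these lines (the tunnel/session IDs and UDP ports are arbitrary examples; 10.50.0.1 is the VM end used in the tests below, and the router side needs the mirror-image commands):

# VM side; swap "encap udp ... udp_sport 5000 udp_dport 5000" for "encap ip" to use IP protocol 115
ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 encap udp local <VM public ipv4> remote <router public ipv4> udp_sport 5000 udp_dport 5000
ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
ip addr add 10.50.0.1/24 dev l2tpeth0
ip link set l2tpeth0 up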

Here is an iperf3 test over an l2tp tunnel using UDP as the encapsulation method

root@ubuntu:/etc# iperf3 -R -O 3 -c 10.50.0.1
Connecting to host 10.50.0.1, port 5201
Reverse mode, remote host 10.50.0.1 is sending
[  5] local 192.168.0.2 port 41048 connected to 10.50.0.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  20.7 MBytes   174 Mbits/sec                  (omitted)
[  5]   1.00-2.00   sec  47.1 MBytes   395 Mbits/sec                  (omitted)
[  5]   2.00-3.00   sec  42.6 MBytes   357 Mbits/sec                  (omitted)
[  5]   0.00-1.00   sec  47.1 MBytes   395 Mbits/sec                  
[  5]   1.00-2.00   sec  48.8 MBytes   410 Mbits/sec                  
[  5]   2.00-3.00   sec  48.6 MBytes   408 Mbits/sec                  
[  5]   3.00-4.00   sec  51.8 MBytes   435 Mbits/sec                  
[  5]   4.00-5.00   sec  51.6 MBytes   433 Mbits/sec                  
[  5]   5.00-6.00   sec  48.3 MBytes   405 Mbits/sec                  
[  5]   6.00-7.00   sec  47.0 MBytes   394 Mbits/sec                  
[  5]   7.00-8.00   sec  48.3 MBytes   405 Mbits/sec                  
[  5]   8.00-9.00   sec  49.6 MBytes   416 Mbits/sec                  
[  5]   9.00-10.00  sec  48.4 MBytes   406 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec   491 MBytes   411 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   490 MBytes   411 Mbits/sec                  receiver

Then, I simply changed encapsulation type from UDP to IP.

Here is an iperf3 test over the same l2tp tunnel using IP protocol 115 as the encapsulation method instead of UDP.

Fully ten times less bandwidth. I repeated this test multiple times.

root@ubuntu:/etc# iperf3 -R -O 3 -c 10.50.0.1
Connecting to host 10.50.0.1, port 5201
Reverse mode, remote host 10.50.0.1 is sending
[  5] local 192.168.0.2 port 41058 connected to 10.50.0.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  4.63 MBytes  38.8 Mbits/sec                  (omitted)
[  5]   1.00-2.00   sec  6.48 MBytes  54.4 Mbits/sec                  (omitted)
[  5]   1.00-1.00   sec  6.39 MBytes  26.8 Mbits/sec                  
[  5]   1.00-2.00   sec  7.90 MBytes  66.2 Mbits/sec                  
[  5]   2.00-3.00   sec  6.30 MBytes  52.9 Mbits/sec                  
[  5]   3.00-4.00   sec  7.80 MBytes  65.4 Mbits/sec                  
[  5]   4.00-5.00   sec  6.55 MBytes  54.9 Mbits/sec                  
[  5]   5.00-6.00   sec  8.12 MBytes  68.1 Mbits/sec                  
[  5]   6.00-7.00   sec  6.86 MBytes  57.5 Mbits/sec                  
[  5]   7.00-8.00   sec  7.49 MBytes  62.8 Mbits/sec                  
[  5]   8.00-9.00   sec  6.55 MBytes  55.0 Mbits/sec                  
[  5]   9.00-10.00  sec  7.89 MBytes  66.2 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.03  sec  72.0 MBytes  60.3 Mbits/sec    4             sender
[  5]   0.00-10.00  sec  71.8 MBytes  60.3 Mbits/sec                  receiver

And even UDP encapsulation appears to be shaped, although much less aggressively.

Here is the iperf3 test to the public IP of the same virtual machine for comparison.

root@ubuntu:/etc# iperf3 -R -O 3 -c <public ipv4 of virtual machine>
Connecting to host <public ipv4 of virtual machine>, port 5201
Reverse mode, remote host <public ipv4 of virtual machine> is sending
[  5] local 192.168.0.2 port 49852 connected to <public ipv4 of virtual machine> port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  76.9 MBytes   645 Mbits/sec                  (omitted)
[  5]   1.00-2.00   sec  99.3 MBytes   833 Mbits/sec                  (omitted)
[  5]   2.00-3.00   sec   100 MBytes   843 Mbits/sec                  (omitted)
[  5]   0.00-1.00   sec   101 MBytes   846 Mbits/sec                  
[  5]   1.00-2.00   sec   101 MBytes   847 Mbits/sec                  
[  5]   2.00-3.00   sec   102 MBytes   856 Mbits/sec                  
[  5]   3.00-4.00   sec  98.4 MBytes   826 Mbits/sec                  
[  5]   4.00-5.00   sec  98.6 MBytes   827 Mbits/sec                  
[  5]   5.00-6.00   sec  98.9 MBytes   830 Mbits/sec                  
[  5]   6.00-7.00   sec  98.1 MBytes   823 Mbits/sec                  
[  5]   7.00-8.00   sec  99.5 MBytes   835 Mbits/sec                  
[  5]   8.00-9.00   sec  99.4 MBytes   834 Mbits/sec                  
[  5]   9.00-10.00  sec   101 MBytes   851 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec   999 MBytes   836 Mbits/sec    3             sender
[  5]   0.00-10.00  sec   998 MBytes   837 Mbits/sec                  receiver

iperf Done.

I get similar results over a Wireguard tunnel, that is to say, 50% bandwidth reduction.

Any other IP protocol type gets much more aggressively shaped, up to 90%.

There is categorically not enough overhead in the encapsulation to cause this kind of bandwidth reduction, yet my ISP publicly insists that they don't do any form of shaping.

Well, methinks this test thoroughly debunks that claim.

1 Like

Looks like you've discovered the joys of 6in4 with Virgin Media!

The short version: Don't bother with 6in4 on Virgin Media UK.

Something on their network makes 6in4 perform horribly. It has been happening for years and there's never really been a "fix" or an accurate confirmation of the exact cause, only speculation and theories, some of which are disputed.

The fix is simply not to have a 6in4 tunnel terminating on a Virgin Media IP address; instead, make the endpoint something else which you have tested, e.g. Wireguard, another WAN connection, etc.

The actual fix is for Virgin Media to deploy IPv6 to its customers, removing the reason 6in4 is being used in the first place, but they aren't exactly in a rush to do that either!

2 Likes

Hah, I should have expected that you'd be a member of the openwrt forum too :smiley:

1 Like
