No, I’m not talking about a new cocktail; I’m talking about using Wireguard with Deutsche Bahn’s WiFi on ICE trains. I like to work remotely while traveling by train, and Wireguard seems like a sensible choice for this occasion. My remote Debian server has a static IP address with a DNS record mapped to it, and the Wireguard UDP port is exposed to the internet. This means my MacBook on the train should have no problem traversing any NAT or firewall and establishing a secret tunnel. However, a very strange problem occurred:

  1. I could create the tunnel without issues.
  2. Pings to the internal IP on the remote side were returned.
  3. HTTPS connections to internal services on the other side timed out (basic browser usage and kubectl).

What’s wrong? There is connectivity: pings go out and come back. The usual suspect would be the NAT and firewall on the train. However, the traffic inside the tunnel should be fairly opaque to the firewall (besides packet sizes and timing), so it should not be able to selectively block TLS inside the tunnel. How to debug this? The first go-to tool for network problems is ping. Well, maybe later, since pings already work.

The next go-to tools: Wireshark and tcpdump. I’ll spare you the nitty-gritty details. As it turns out, the first part of the TLS handshake is ACKnowledged by the remote side, but after that there are only TCP retransmissions. How can it be that only small packets pass through the tunnel, while larger ones are never acknowledged by the remote side?

Enter the maximum transmission unit or MTU.

The MTU determines the maximum size of a packet as it is transmitted. Packets cannot have arbitrary sizes due to hardware and software restrictions, as well as performance optimizations. The physical layer can usually accommodate 1500 bytes per Ethernet frame. However, UDP or TCP payloads need to be smaller to leave room for the IP and TCP/UDP headers. For traffic through the Wireguard tunnel, we additionally need to reserve 32 bytes for the Wireguard headers, on top of the outer IP and UDP headers. So the total number of bytes per packet that we can send through the tunnel is 1440 bytes when the outer packet travels over IPv4, or 1420 bytes over IPv6. By default, the MTU for Wireguard-managed devices is 1420 bytes. This is fine for most network scenarios. Could it be that the WifiOnICE network imposes a lower limit on the MTU? If so, how can we test it?
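The budget above can be checked with some quick shell arithmetic. The 1500-byte link MTU and the 32-byte Wireguard overhead (16-byte data-message header plus 16-byte authentication tag) are the figures from the text; the header sizes are the standard IPv4/IPv6/UDP values:

```shell
LINK_MTU=1500
WG_OVERHEAD=32   # 16-byte Wireguard data header + 16-byte Poly1305 tag
UDP_HEADER=8
IPV4_HEADER=20
IPV6_HEADER=40

# Largest inner packet when the outer packet travels over IPv4 vs. IPv6:
echo $((LINK_MTU - IPV4_HEADER - UDP_HEADER - WG_OVERHEAD))   # 1440
echo $((LINK_MTU - IPV6_HEADER - UDP_HEADER - WG_OVERHEAD))   # 1420
```

The IPv6 case explains the 1420-byte default: it is the largest inner MTU that is safe regardless of which IP version carries the outer UDP packets.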

Back to good old ping. The standard ping tool allows us to set a custom packet size and send more than the default 56 bytes of data. With trial and error, I could verify that the network only allowed ICMP payloads up to around 1443 bytes. This result has nothing to do with Wireguard and is a property of the network between my laptop and the remote machine.

# 1443 bytes are delivered (after quite some delay)
$ ping -c 1 -s 1443
PING ( 1443 data bytes
1451 bytes from icmp_seq=0 ttl=57 time=453.196 ms

--- ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 453.196/453.196/453.196/0.000 ms

# 1444 leads to no response
$ ping -c 1 -s 1444
PING ( 1444 data bytes

--- ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
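Instead of probing sizes by hand, the trial and error can be automated with a small bisection loop. This is a sketch: `probe` is a stand-in that pretends the path caps payloads at 1443 bytes; against a real host you would replace its body with something like `ping -c 1 -s "$1" <host> >/dev/null 2>&1`:

```shell
# Stand-in probe: succeeds iff the payload size would fit the path.
probe() { [ "$1" -le 1443 ]; }

# Invariant: probe(lo) succeeds, probe(hi) fails; bisect until adjacent.
lo=0; hi=1500
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "largest payload: $lo"   # with the stand-in probe: 1443
```

With a real ping-based probe, each step costs one round trip (plus a timeout for the failing sizes), so the answer arrives after about eleven probes instead of dozens.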

So, since the network only supports packets up to 1443 bytes, we need to set a smaller MTU for the Wireguard interface. Go to the interface configuration on both the client and the server side and set the MTU parameter to, let’s say, 1200 bytes. This leaves more than enough headroom.

[Interface]
MTU = 1200
# your other settings

After reconnecting, I was able to reach all internal services over TLS.
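As a final sanity check, the same ping trick works through the tunnel. With the tunnel MTU at 1200, an unfragmented ICMP echo inside the tunnel can carry at most 1200 minus the 20-byte IPv4 header minus the 8-byte ICMP header of payload (the peer address below is a placeholder):

```shell
# Largest ICMP payload that fits a 1200-byte tunnel MTU:
echo $((1200 - 20 - 8))          # 1172
# ping -c 1 -s 1172 10.0.0.1    # placeholder peer; should succeed through the tunnel
```

If that ping succeeds while `-s 1173` fails, the tunnel MTU is behaving exactly as configured.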