Reduce MTU on all batXX devices #80
Hmmm. OK, it looks like this is not a good idea after all, because many (clueless) ISPs around the world filter the ICMP "packet too big" messages. So the only way we can fix this is by clamping the MSS to a fixed value. The MTU still has to carry the IP and TCP headers, so the MSS is another 40 bytes (IPv4) or 60 bytes (IPv6) smaller than the MTU. IPv4 and IPv6 each need their own rule (see the sketch below).
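The concrete rules are not preserved in this thread; a minimal sketch using the iptables TCPMSS target, assuming the 1362-byte bottleneck MTU discussed below (so 1322 for IPv4 and 1302 for IPv6), could look like this:

```sh
# Clamp the MSS of TCP SYNs passing through the FORWARD chain to a fixed value.
# 1322 = 1362 - 40 (IPv4 + TCP header), 1302 = 1362 - 60 (IPv6 + TCP header).
iptables  -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1322
ip6tables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1302
```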
Unfortunately this only works in the FORWARD chain, so for example our internal iperf servers don't benefit from it.
Could one argue that, at least for IPv6, they are shooting themselves in the foot anyway if they do that?
Yeah, one could argue that. Then we would have to emulate something like an MTU in the firewall. Since that only helps for UDP, I'm not sure it is worthwhile. QUIC would probably benefit. Other than that we don't have (I think) that much UDP traffic.
For iperf the MSS can apparently be set explicitly via `-M MSS`. We need to check whether that works on the server side. We could possibly also build it into the ffh_speedtest.sh script. The option might also be completely broken: esnet/iperf#779
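For reference, a sketch of a client-side invocation (server name and MSS value are placeholders; whether the server side honours this at all is exactly the open question above):

```sh
# Ask iperf3 to cap the TCP MSS; 1322 assumes the 1362-byte path MTU (IPv4).
# <server> is a placeholder for the iperf endpoint.
iperf3 -c <server> -M 1322
```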
For documentation purposes, to test whether a given packet size causes errors:

```sh
for mtu in $(seq 1300 1500); do
    printf "%d: " "$mtu"
    ping6 -M do -s $((mtu-48)) -c 1 fdca:ffee:8::7001 2>&1 | grep loss
done
```
1300: 1 packets transmitted, 1 received, 0% packet loss, time 0ms
1301: 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
1302: 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
1303: 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
...
This commit creates a dummy interface with the "bottleneck" MTU along our VPN path (currently batadv - see issue #80). Furthermore, it creates an iptables DNAT rule which changes the destination IP address of incoming QUIC (UDP 443) packets that exceed the bottleneck MTU to a special IPv4 continuity address which is part of the subnet of the dummy interface. When an oversized QUIC packet arrives, it is thus routed to the dummy interface, which in turn generates an ICMP destination unreachable (fragmentation needed) packet because the packet does not fit the MTU of the dummy interface. The QUIC servers react to the ICMP packet by changing the PMTU of their UDP sockets according to the maximum MTU advertised in the ICMP message, which is the dummy interface's MTU.
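The commit itself is not quoted in this thread; a minimal sketch of the mechanism it describes, with an illustrative interface name, RFC 5737 documentation addresses, and the 1362-byte bottleneck MTU from this issue, might look like this:

```sh
# Dummy interface whose only job is to carry the bottleneck MTU (1362, see below).
ip link add quicmtu0 type dummy
ip link set quicmtu0 mtu 1362 up
ip addr add 192.0.2.1/30 dev quicmtu0   # illustrative "continuity" subnet

# Redirect incoming QUIC (UDP 443) packets that exceed the bottleneck MTU to an
# address inside the dummy subnet. Forwarding them towards the 1362-byte dummy
# interface makes the kernel emit ICMP "fragmentation needed" advertising 1362,
# which the QUIC server uses to lower the PMTU of its UDP socket.
iptables -t nat -A PREROUTING -p udp --dport 443 \
         -m length --length 1363:65535 \
         -j DNAT --to-destination 192.0.2.2
```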
Duplicate of #64
Even if we limit the MTU on the fastd interfaces, the TCP layer higher up does not inherently know about it, because batman makes all of this transparent again for the higher layers. Normally we use clamping to the path MTU there.
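For context, the usual path-MTU clamping rule looks like this (this is the standard iptables TCPMSS form, not necessarily our exact configuration):

```sh
# Clamp the MSS of forwarded TCP SYNs to the path MTU discovered for the route.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```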
Unfortunately, none of this works if the MTU on our batXX devices is too large:
If I now run the following on the supernode

```sh
ip link set mtu 1362 dev bat18
```

-> no more fragmentation!
In the speed test this gives me about +50% speed here right now; that may vary from connection to connection.
The MTU should then be calculated correctly as: 1362 = 1394 - 32 (see here, "usable client mtu": Link). We should therefore roll out MTU 1362 on all bat devices (see the sketch below).
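A minimal sketch of such a rollout on a single host, assuming the batman-adv interfaces follow the batXX naming used above (how this is actually deployed across the supernodes is not specified here):

```sh
# Set the reduced MTU on every batman-adv interface present on this machine.
for dev in /sys/class/net/bat*; do
    [ -e "$dev" ] || continue   # skip if no bat* interfaces exist
    ip link set mtu 1362 dev "$(basename "$dev")"
done
```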