IPv6 defragmenting fails when segments do not overlap #54577
Comments
@hakehuang Thanks for testing.
So the test results are from April 2022 - I don't think that accurately represents the current state of the networking stack, given that's almost a whole year ago. Can these tests be re-run? The IPv4 results also don't actually make sense, given that I added IPv4 fragmented packet support to Zephyr in November 2022, so it would be fully expected that IPv4 fragmented packet tests would fail in April 2022 because there was no support back then.
Comment from the IPv4 issue, which also applies to IPv6 - replace IPV4 with IPV6 in the Kconfig name:
Kconfig help text:
Yes, this was discussed during the net forum yesterday, and the IPv4 fragmentation test results are going to be updated.
According to the latest tests, those issues are still present; please check the weekly networking test report on the mailing list.
I've spent some time trying to trigger the IPv6 defragmentation error... and failed. Apart from basic tests (sending fragmented packets from a Linux host to a Zephyr device), I've also put together a test case where I could inject individual fragments into the stack in any order. No matter how I scrambled the fragments, the defragmentation implementation in Zephyr worked just fine. The only condition was that the fragment count could not exceed the configured maximum. Now, I've requested some more info in zephyrproject-rtos/test_results#1124, which seemed the most basic one (ideally a Wireshark pcap). Let's see if we can get more information out of it. Otherwise, I really don't see a point in keeping the issue open if there's no way to reproduce it.
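For reference, a rough sketch of what such fragment injection could look like (this is not the actual test case referenced above; the helper name is made up, and the frame bytes would come from a capture), using the standard `net_pkt`/`net_recv_data` RX entry points a driver would use:

```c
/* Hypothetical helper: feed one pre-built L2 frame (e.g. an IPv6 fragment
 * copied from a capture) into the stack, as if it came from the driver. */
#include <zephyr/kernel.h>
#include <zephyr/net/net_if.h>
#include <zephyr/net/net_pkt.h>
#include <zephyr/net/net_core.h>
#include <errno.h>

static int inject_frame(struct net_if *iface, const uint8_t *frame, size_t len)
{
	struct net_pkt *pkt;

	/* Allocate an RX packet with enough buffer for the whole frame. */
	pkt = net_pkt_rx_alloc_with_buffer(iface, len, AF_UNSPEC, 0, K_FOREVER);
	if (pkt == NULL) {
		return -ENOMEM;
	}

	net_pkt_write(pkt, frame, len);

	/* Hand the frame over to the stack, which performs defragmentation. */
	return net_recv_data(iface, pkt);
}
```

The test then calls such a helper for each pre-built fragment in a scrambled order (e.g. 2, 0, 1) and checks that the reassembled packet is still processed.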
Which platform are you using?
Thank you @hakehuang for the pcaps. Now the primary question from my side - I know that you're using the e1000 driver on qemu_x86.

It brought to my attention that the Frag_03-Frag_05 tests did not fail due to defragmentation errors, but rather because the TCP connection could not be established (see the retransmitted SYN,ACK packets).

I've then tried to reproduce the scenario from Frag_06, where I wrote a simple Linux app to send packets copied from Wireshark to the Zephyr sample over a raw Ethernet socket. For reference, the test case fails because there is no response to the fragmented TCP SYN packet.

Now I'm not sure whether the problems with this driver are due to the driver itself or some underlying qemu_x86 issues; I've already experienced difficulties in the past when running throughput tests (#23302 (comment)). If that is however the configuration you use for running Maxwell tests, we should most likely consider some alternative.

Now to summarize the pcaps:
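For completeness, the "simple Linux app" approach looked roughly like the sketch below (a minimal illustration rather than the exact program; the interface name and frame bytes are placeholders to be filled in from the Wireshark capture, and it needs CAP_NET_RAW/root to run):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>

/* Send one raw Ethernet frame (as exported from Wireshark) on the given interface. */
static int send_frame(int fd, int ifindex, const uint8_t *frame, size_t len)
{
	struct sockaddr_ll addr = {
		.sll_family = AF_PACKET,
		.sll_ifindex = ifindex,
		.sll_halen = ETH_ALEN,
	};

	/* Destination MAC is the first 6 bytes of the frame itself. */
	memcpy(addr.sll_addr, frame, ETH_ALEN);

	return sendto(fd, frame, len, 0, (struct sockaddr *)&addr, sizeof(addr));
}

int main(void)
{
	/* Placeholder: paste the fragment bytes copied from the capture here. */
	static const uint8_t frag1[] = { 0x00 /* ... */ };

	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	int ifindex = if_nametoindex("tap0"); /* placeholder interface name */

	send_frame(fd, ifindex, frag1, sizeof(frag1));
	/* ... send the remaining fragments, possibly out of order ... */

	close(fd);
	return 0;
}
```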
@rlubos Thanks a lot for the detailed analysis. I would propose that we settle on a benchmark platform for Zephyr stack testing. I know there are some issues in the qemu system, but other platforms would have issues of their own as well. Besides, for now, do you have a contact for the owner of the e1000 driver on qemu?
I'm not really sure who's responsible for the driver right now, it has no dedicated maintainer assigned. @jukkar @carlescufi Any ideas?
Perhaps we could try to switch to |
OK, let me try to use native_posix.
Yes, I'd give those two a try. It would be good to see, for instance, the results for IPv6 fragmentation only and compare them with the currently used platform. If that was the culprit, I think other areas could've been affected by the same problem as well.
Query: does native_posix do e.g. fragmentation through Zephyr or through the Linux host's networking stack (or both)?
When I was testing with native_posix, the Zephyr stack did the defragmentation.
Per my understanding, we need to set up TAP mode on the host for native_posix, so the host stack is not involved.
I changed to native_posix with overlay_max-stack.conf; Frag_03-05 still fail, and Frag_06 still fails.
Any chance for a pcap again? I'd like to compare with the previous results.
Just to update, there is one new error message from native_posix; it shows
Looks like |
Maybe this is the case, but let me try your suggestion first. BTW, what does the warning below mean?
@rlubos the pcap, FYI.
Maximum number of fragments exceeded - Zephyr can only store up to
I'm a bit confused right now; those are IPv4 reports (we have a separate issue to track IPv4, #54576). Are there no more IPv6 fragmentation failures?
I seemingly completely missed the |
Yep, I was just writing this when I saw your update. @nordicjm For IPv6, I've specifically tested out-of-order reception, and it worked just fine as long as I did not exceed the fragment limit or inject a fragment duplicate (which is considered an overlap by this implementation).
Unable to reproduce, so closing this until we have solid evidence that an issue exists.
Describe the bug
With the Maxwell Pro testing, some IPv6 fragmentation tests are failing. Most of the tests pass without problems, but the tests where fragments do not overlap seem to fail.
The tests where the fragments do overlap pass.
See:
https://github.com/zephyrproject-rtos/test_results/issues?q=is%3Aissue+is%3Aopen++Fragment+IPv6
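To illustrate what "non-overlapping" means here: per RFC 8200 (section 4.5), each IPv6 fragment carries a fragment extension header with an offset in 8-octet units and an M (more fragments) flag, and in the non-overlapping case the offsets are simply contiguous. A small sketch with arbitrarily chosen values (not taken from the Maxwell test suite):

```c
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

/* IPv6 fragment extension header (RFC 8200, section 4.5). */
struct ipv6_frag_hdr {
	uint8_t  next_header;   /* protocol of the fragmented payload, e.g. 6 = TCP */
	uint8_t  reserved;
	uint16_t offset_flags;  /* fragment offset (13 bits, 8-octet units) | M flag */
	uint32_t id;            /* same identification value in every fragment */
} __attribute__((packed));

static void fill_frag_hdr(struct ipv6_frag_hdr *h, uint8_t next_header,
			  uint16_t offset_bytes, int more_fragments, uint32_t id)
{
	h->next_header = next_header;
	h->reserved = 0;
	h->offset_flags = htons((uint16_t)(((offset_bytes / 8) << 3) |
					   (more_fragments ? 1 : 0)));
	h->id = htonl(id);
}

int main(void)
{
	/* A 1000-byte payload split into contiguous, non-overlapping pieces:
	 * bytes 0..487, 488..975 and 976..999 (all but the last fragment are
	 * 8-octet multiples), so the receiver reassembles without any overlap. */
	const uint16_t offsets[] = { 0, 488, 976 };
	const uint16_t lengths[] = { 488, 488, 24 };

	for (int i = 0; i < 3; i++) {
		struct ipv6_frag_hdr h;
		int more = (i < 2);

		fill_frag_hdr(&h, 6 /* TCP */, offsets[i], more, 0x12345678);
		printf("fragment %d: offset=%u len=%u M=%d\n",
		       i, offsets[i], lengths[i], more);
	}

	return 0;
}
```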
The test suite used the echo server sample. The bug is reproduced by feeding fragmented packets to the echo server built with the following config file:
https://github.com/hakehuang/zephyr/blob/tcp_ip_testing_maxwell/samples/net/sockets/echo_server/overlay-maxwell.conf
Expected behavior
IPv6 defragmentation should also work when fragments do not overlap.
Impact
At least some of the IPv6 fragmentation tests are failing.