
tests: varlink interfaces should probably be fuzzed #23785

Closed
evverx opened this issue Jun 20, 2022 · 30 comments · Fixed by #24271
Labels
- fuzzing (Implementation of fuzzers and fixes for stuff found through fuzzing)
- RFE 🎁 (Request for Enhancement, i.e. a feature request)
- tests

Comments

@evverx
Member

evverx commented Jun 20, 2022

By analogy with dfuzzer fuzzing the D-Bus interfaces, I think there should be a vfuzzer fuzzing the varlink stuff. It should help to catch issues like #22480 and somewhat cover PRs like #22845 (where malformed varlink messages kick out all listeners) and #23750 (which introduces a memory leak, at least):

[164210.524275] systemd-resolved[67]: =================================================================
[164210.525019] systemd-resolved[67]: ==67==ERROR: LeakSanitizer: detected memory leaks
[164210.525664] systemd-resolved[67]: Direct leak of 5 byte(s) in 5 object(s) allocated from:
[164210.526081] systemd-resolved[67]:     #0 0x7fcc079668f7 in strdup (/usr/lib64/libasan.so.6.0.0+0x598f7)
[164210.526817] systemd-resolved[67]:     #1 0x7fcc0686338a in free_and_strdup ../src/basic/string-util.c:940
[164210.527238] systemd-resolved[67]:     #2 0x7fcc065b80d0 in json_dispatch_string ../src/shared/json.c:4417
[164210.527725] systemd-resolved[67]:     #3 0x7fcc065b6a86 in json_dispatch ../src/shared/json.c:4260
[164210.528253] systemd-resolved[67]:     #4 0x5510ae in vl_method_resolve_service ../src/resolve/resolved-varlink.c:1021
[164210.528676] systemd-resolved[67]:     #5 0x7fcc066f8178 in varlink_dispatch_method ../src/shared/varlink.c:878
[164210.529082] systemd-resolved[67]:     #6 0x7fcc066f8b50 in varlink_process ../src/shared/varlink.c:953
[164210.529629] systemd-resolved[67]:     #7 0x7fcc067062dd in defer_callback ../src/shared/varlink.c:1889
[164210.530055] systemd-resolved[67]:     #8 0x7fcc06b809a0 in source_dispatch ../src/libsystemd/sd-event/sd-event.c:3623
[164210.530469] systemd-resolved[67]:     #9 0x7fcc06b888b3 in sd_event_dispatch ../src/libsystemd/sd-event/sd-event.c:4175
[164210.531023] systemd-resolved[67]:     #10 0x7fcc06b89a17 in sd_event_run ../src/libsystemd/sd-event/sd-event.c:4236
[164210.531626] systemd-resolved[67]:     #11 0x7fcc06b89d98 in sd_event_loop ../src/libsystemd/sd-event/sd-event.c:4257
[164210.532028] systemd-resolved[67]:     #12 0x55a448 in run ../src/resolve/resolved.c:92
[164210.532451] systemd-resolved[67]:     #13 0x55a6e2 in main ../src/resolve/resolved.c:99
[164210.532914] systemd-resolved[67]:     #14 0x7fcc048c643f in __libc_start_call_main (/lib64/libc.so.6+0x4043f)
[164210.533316] systemd-resolved[67]: SUMMARY: AddressSanitizer: 5 byte(s) leaked in 5 allocation(s).
@evverx evverx added the tests label Jun 20, 2022
@evverx
Member Author

evverx commented Jun 20, 2022

@mrc0mmand given that the varlink stuff can't be introspected, I have to admit I'm not sure how this can be implemented automatically. In theory it should be possible to autodiscover method signatures and endpoints by parsing the source code, but that seems fragile.

@mrc0mmand
Member

@mrc0mmand given that the varlink stuff can't be introspected, I have to admit I'm not sure how this can be implemented automatically. In theory it should be possible to autodiscover method signatures and endpoints by parsing the source code, but that seems fragile.

Having a "vfuzzer" along with dfuzzer is definitely on my TODO list. I'll need to read a bit more about varlink, though, as my knowledge of it is, so far, quite rudimentary (as was my D-Bus knowledge before dfuzzer :-)).

I'll tie up a couple of loose ends regarding dfuzzer and CentOS CI first, and then, hopefully, delve straight into varlink.

@mrc0mmand
Member

I guess the fundamental question is: which language do we want to use for the vfuzzer? Looking at https://github.com/varlink there are bindings for C, Python, Go, Rust and Java. I guess only C and Rust make sense in the end, as a fuzzer in Python might be a bit slow (and I'm not sure if we want to introduce the Java/Go stack to systemd). Using Rust would probably satisfy my inner desire to finally learn the language, but given that I barely know Rust basics, it might... take a while. So the only viable option is, I guess, C?

Any opinions?

@DaanDeMeyer
Contributor

There's also sd-varlink in systemd itself but that would mean vfuzzer would have to be part of the systemd repo since sd-varlink isn't public yet. Not sure if this is actually a good idea, but just throwing the option out there.

@evverx
Member Author

evverx commented Jun 22, 2022

varlink boils down to sending JSON messages to UNIX sockets, so I think it should be possible to take the part of dfuzzer that generates random stuff, turn it into JSON and send it all to a socket without any libraries. It can even be combined with dbus-fuzzer/dfuzzer#81.

I took a look at libvarlink and a few seconds later varlink/libvarlink#51 was opened. The Rust CLI keeps throwing unsymbolized backtraces at me, so I'm a bit wary of it as well.

There's also sd-varlink in systemd itself

As far as I know it's the only varlink library fuzzed on a regular basis (in fact, it was even mentioned in a paper related to fuzzing, which concluded that the fuzz target couldn't find a lot of issues and thus couldn't be used to benchmark various fuzzing engines), so personally I think it would be useful to be able to use it outside of systemd. Having said that, given that vfuzzer is supposed to generate semi-valid messages from time to time, it would probably be easier to do that without any libraries.
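The library-free approach described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of any existing vfuzzer: the method name mimics resolved's io.systemd.Resolve interface, and a local socket pair stands in for a real service socket.

```python
import json
import socket

def varlink_frame(method, parameters):
    # varlink messages are JSON objects terminated by a single NUL byte
    return json.dumps({"method": method, "parameters": parameters}).encode() + b"\0"

# a semi-valid payload a fuzzer might emit (garbage name, bogus address family)
payload = varlink_frame("io.systemd.Resolve.ResolveHostname",
                        {"name": "\u0001" * 16, "family": -1})

# demonstrate the wire format with a local socket pair; a real vfuzzer would
# connect() to e.g. /run/systemd/resolve/io.systemd.Resolve instead
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
a.sendall(payload)
received = b.recv(65536)
assert received.endswith(b"\0")
print(json.loads(received[:-1].decode())["method"])
a.close()
b.close()
```

A fuzzing loop would then mutate the method name, field names and field values before each send, and watch the service for crashes or sanitizer reports.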

@keszybz
Member

keszybz commented Jun 22, 2022

I'd like to export sd-varlink. We use it internally and I think it'd be nice to make it more widely available.

@evverx
Member Author

evverx commented Jun 23, 2022

Looking at #12230 (comment), I don't think it was meant to be exported: as far as I understand, the idea was to let it cover systemd use cases only (without upgrade, introspection and stuff like that) and let external libraries handle all that themselves if they want to interact with services providing varlink interfaces. I'm not sure if anything has changed since then, though, and technically sd-varlink isn't fully compatible with the varlink specification.

@evverx
Member Author

evverx commented Jun 23, 2022

There are also issues like #20330 where it isn't clear what should happen on restarts/reloads. I ran into a slightly different issue, though: since that socket is world-readable/writable, my scripts managed to subscribe to that interface when oomd wasn't looking, and it failed to start after that. Anyway, I think to implement vfuzzer it should be enough to use a JSON library and simply connect to the sockets. What I'm not sure about is how to figure out what the methods expect to receive; without introspection it's kind of hard.

@keszybz
Member

keszybz commented Jun 23, 2022

Looking at #12230 (comment) I don't think it was meant to be exported

While varlink may be relatively easy to reimplement, in C in particular it is not trivial: just generating JSON from some structures and the other way around is a lot of work, and so is the whole callback structure. So I think it'd be quite useful to export the library for programs in C, C++ and other low-level languages.

@DaanDeMeyer
Contributor

While varlink may be relatively easy to reimplement, in C in particular it is not trivial: just generating JSON from some structures and the other way around is a lot of work, and so is the whole callback structure. So I think it'd be quite useful to export the library for programs in C, C++ and other low-level languages.

This was discussed elsewhere, but it's going to be rather involved to make sd-varlink work for anything other than C due to sd-json using compound initializers (and macros) very heavily. Those aren't even supported in C++ and will cause compilation errors. @poettering preferred to only expose sd-varlink for usage in other C programs and have other programs use libvarlink, rather than decoupling sd-varlink from sd-json.

It should still be useful to expose it for other C stuff, though.

@yuwata yuwata added the RFE 🎁 and fuzzing labels Jun 27, 2022
@evverx
Member Author

evverx commented Jun 29, 2022

Here's another issue, found in #22532 with a half-baked vfuzzer:

Jun 29 11:14:59 C systemd-resolved[276]: =================================================================
Jun 29 11:14:59 C systemd-resolved[276]: ==276==ERROR: AddressSanitizer: global-buffer-overflow on address 0x0000006391c8 at pc 0x7fb39a3dfe29 bp 0x7ffdaa4b92e0 sp 0x7ffdaa4b92d8
Jun 29 11:14:59 C systemd-resolved[276]: READ of size 8 at 0x0000006391c8 thread T0
Jun 29 11:14:59 C systemd-resolved[276]:     #0 0x7fb39a3dfe28 in json_dispatch ../src/shared/json.c:4212
Jun 29 11:14:59 C systemd-resolved[276]:     #1 0x54ad18 in vl_method_start_browse ../src/resolve/resolved-varlink.c:561
Jun 29 11:14:59 C systemd-resolved[276]:     #2 0x7fb39a522222 in varlink_dispatch_method ../src/shared/varlink.c:878
Jun 29 11:14:59 C systemd-resolved[276]:     #3 0x7fb39a522bfa in varlink_process ../src/shared/varlink.c:953
Jun 29 11:14:59 C systemd-resolved[276]:     #4 0x7fb39a530387 in defer_callback ../src/shared/varlink.c:1889
Jun 29 11:14:59 C systemd-resolved[276]:     #5 0x7fb39a9a9c91 in source_dispatch ../src/libsystemd/sd-event/sd-event.c:3623
Jun 29 11:14:59 C systemd-resolved[276]:     #6 0x7fb39a9b1ba4 in sd_event_dispatch ../src/libsystemd/sd-event/sd-event.c:4175
Jun 29 11:14:59 C systemd-resolved[276]:     #7 0x7fb39a9b2d08 in sd_event_run ../src/libsystemd/sd-event/sd-event.c:4236
Jun 29 11:14:59 C systemd-resolved[276]:     #8 0x7fb39a9b3089 in sd_event_loop ../src/libsystemd/sd-event/sd-event.c:4257
Jun 29 11:14:59 C systemd-resolved[276]:     #9 0x553a12 in run ../src/resolve/resolved.c:92
Jun 29 11:14:59 C systemd-resolved[276]:     #10 0x553cac in main ../src/resolve/resolved.c:99
Jun 29 11:14:59 C systemd-resolved[276]:     #11 0x7fb39844043f in __libc_start_call_main (/lib64/libc.so.6+0x4043f)
Jun 29 11:14:59 C systemd-resolved[276]:     #12 0x7fb3984404ef in __libc_start_main@@GLIBC_2.34 (/lib64/libc.so.6+0x404ef)
Jun 29 11:14:59 C systemd-resolved[276]:     #13 0x40c104 in _start (/usr/lib/systemd/systemd-resolved+0x40c104)
Jun 29 11:14:59 C systemd-resolved[276]: 0x0000006391c8 is located 0 bytes to the right of global variable 'dispatch_table' defined in '../src/resolve/resolved-varlink.c:544:35' (0x639100) of size 200
Jun 29 11:14:59 C systemd-resolved[276]: 0x0000006391c8 is located 56 bytes to the left of global variable '__func__' defined in '../src/resolve/resolved-varlink.c:563:17' (0x639200) of size 23
Jun 29 11:14:59 C systemd-resolved[276]:   '__func__' is ascii string 'vl_method_start_browse'
Jun 29 11:14:59 C systemd-resolved[276]: SUMMARY: AddressSanitizer: global-buffer-overflow ../src/shared/json.c:4212 in json_dispatch
Jun 29 11:14:59 C systemd-resolved[276]: Shadow bytes around the buggy address:
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf1e0: 00 f9 f9 f9 f9 f9 f9 f9 07 f9 f9 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf1f0: 06 f9 f9 f9 f9 f9 f9 f9 00 00 00 00 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf200: 00 00 07 f9 f9 f9 f9 f9 00 04 f9 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf210: 05 f9 f9 f9 f9 f9 f9 f9 07 f9 f9 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf220: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Jun 29 11:14:59 C systemd-resolved[276]: =>0x0000800bf230: 00 00 00 00 00 00 00 00 00[f9]f9 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf240: 00 00 07 f9 f9 f9 f9 f9 00 00 06 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf250: 00 00 05 f9 f9 f9 f9 f9 00 00 05 f9 f9 f9 f9 f9
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf260: 00 00 05 f9 f9 f9 f9 f9 00 00 00 00 00 00 00 00
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf270: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Jun 29 11:14:59 C systemd-resolved[276]:   0x0000800bf280: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Jun 29 11:14:59 C systemd-resolved[276]: Shadow byte legend (one shadow byte represents 8 application bytes):
Jun 29 11:14:59 C systemd-resolved[276]:   Addressable:           00
Jun 29 11:14:59 C systemd-resolved[276]:   Partially addressable: 01 02 03 04 05 06 07
Jun 29 11:14:59 C systemd-resolved[276]:   Heap left redzone:       fa
Jun 29 11:14:59 C systemd-resolved[276]:   Freed heap region:       fd
Jun 29 11:14:59 C systemd-resolved[276]:   Stack left redzone:      f1
Jun 29 11:14:59 C systemd-resolved[276]:   Stack mid redzone:       f2
Jun 29 11:14:59 C systemd-resolved[276]:   Stack right redzone:     f3
Jun 29 11:14:59 C systemd-resolved[276]:   Stack after return:      f5
Jun 29 11:14:59 C systemd-resolved[276]:   Stack use after scope:   f8
Jun 29 11:14:59 C systemd-resolved[276]:   Global redzone:          f9
Jun 29 11:14:59 C systemd-resolved[276]:   Global init order:       f6
Jun 29 11:14:59 C systemd-resolved[276]:   Poisoned by user:        f7
Jun 29 11:14:59 C systemd-resolved[276]:   Container overflow:      fc
Jun 29 11:14:59 C systemd-resolved[276]:   Array cookie:            ac
Jun 29 11:14:59 C systemd-resolved[276]:   Intra object redzone:    bb
Jun 29 11:14:59 C systemd-resolved[276]:   ASan internal:           fe
Jun 29 11:14:59 C systemd-resolved[276]:   Left alloca redzone:     ca
Jun 29 11:14:59 C systemd-resolved[276]:   Right alloca redzone:    cb
Jun 29 11:14:59 C systemd-resolved[276]:   Shadow gap:              cc
Jun 29 11:14:59 C systemd-resolved[276]: ==276==ABORTING

I haven't figured out how to automate that stuff yet unfortunately.

@mrc0mmand
Member

I haven't figured out how to automate that stuff yet unfortunately.

After delving into the docs & stuff for a bit, it looks like it should be possible to "introspect" services to get a list of the interfaces they implement via a call to org.varlink.service.GetInfo:

# varlink info unix:/run/org.varlink.resolver
Vendor: Varlink
Product: Resolver
Version: 1
URL: https://github.com/varlink/org.varlink.resolver
Interfaces:
  com.redhat.resolver
  org.varlink.resolver
  org.varlink.service

However, in systemd we chose to not implement the GetInfo and GetInterface methods:

if (STR_IN_SET(method, "org.varlink.service.GetInfo", "org.varlink.service.GetInterface")) {
        /* For now, we don't implement a single of varlink's own methods */
        callback = NULL;
        error = VARLINK_ERROR_METHOD_NOT_IMPLEMENTED;

which complicates things a bit :-)

# varlink info unix:/run/systemd/resolve/io.systemd.Resolve
Call failed with error: org.varlink.service.MethodNotImplemented
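A vfuzzer could still probe for this at startup: send GetInfo and, if the MethodNotImplemented error comes back, fall back to a hard-coded method list. A minimal sketch of the message framing (the helper names are made up, and the reply bytes are hard-coded here instead of read from a socket):

```python
import json

def encode_call(method, parameters=None):
    # varlink frames each message as a JSON object followed by a NUL byte
    msg = {"method": method}
    if parameters is not None:
        msg["parameters"] = parameters
    return json.dumps(msg).encode() + b"\0"

def decode_reply(frame):
    # strip the trailing NUL and parse the JSON reply
    return json.loads(frame.rstrip(b"\0").decode())

probe = encode_call("org.varlink.service.GetInfo")

# systemd services currently answer the probe like this,
# so no interface list is available from them
reply = decode_reply(b'{"error":"org.varlink.service.MethodNotImplemented"}\0')
introspectable = reply.get("error") != "org.varlink.service.MethodNotImplemented"
print(introspectable)  # False for sd-varlink services, True for e.g. libvarlink ones
```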

@mrc0mmand
Member

Also, it looks like org.varlink.service.GetInterface was renamed to org.varlink.service.GetInterfaceDescription in 2017 [0]. Apart from that, the combination of GetInfo (to get a list of interfaces) and GetInterfaceDescription (to get a description of the interface, possibly with all implemented methods) might be of some use, if it were implemented in sd-varlink.

For example, in the com.varlink.resolver[1] service, the GetInterfaceDescription returns the whole interface configuration file:

# varlink info unix:///run/org.varlink.resolver
Vendor: Varlink
Product: Resolver
Version: 1
URL: https://github.com/varlink/org.varlink.resolver
Interfaces:
  com.redhat.resolver
  org.varlink.resolver
  org.varlink.service

# varlink call unix:///run/org.varlink.resolver/org.varlink.service.GetInterfaceDescription  '{ "interface": "org.varlink.resolver" }' | jq '.description' | sed 's/\\n/\n/g' 
"# Interface to resolve reverse-domain interface names to
# service adresses
interface org.varlink.resolver

# Get a list of all resolvable interfaces and information
# about the resolver's identity.
method GetInfo() -> (vendor: string, product: string, version: string, url: string, interfaces: []string)

# Resolve an interface name to a registered varlink service address
method Resolve(interface: string) -> (address: string)

error InterfaceNotFound (interface: string)
"

# cat ~fsumsal/repos/com.redhat.resolver/src/org.varlink.resolver.varlink
# Interface to resolve reverse-domain interface names to
# service adresses
interface org.varlink.resolver

# Get a list of all resolvable interfaces and information
# about the resolver's identity.
method GetInfo() -> (
  vendor: string,
  product: string,
  version: string,
  url: string,
  interfaces: []string
)

# Resolve an interface name to a registered varlink service address
method Resolve(interface: string) -> (address: string)

error InterfaceNotFound (interface: string)

Sadly, the format of the description is not specified anywhere, but I guess that's as close to introspection-like stuff as we can get for now.

[0] varlink/libvarlink@7f28847
[1] https://github.com/cherry-pick/com.redhat.resolver

@evverx
Member Author

evverx commented Aug 10, 2022

vfuzzer strikes again:

Aug 09 02:33:01 H systemd-oomd[231]: Assertion '!strstr("Failed to get cgroup %s owner uid: %m", "%m")' failed at src/oom/oomd-manager.c:86, function process_managed_oom_message(). Aborting.
Aug 09 02:33:51 H systemd[1]: systemd-oomd.service: Main process exited, code=dumped, status=6/ABRT
Aug 09 02:33:51 H systemd[1]: systemd-oomd.service: Failed with result 'core-dump'.
Aug 09 02:33:51 H systemd[1]: systemd-oomd.service: Scheduled restart job, restart counter is at 4.
Aug 09 02:33:51 H systemd[1]: Stopped systemd-oomd.service.

As far as I can tell it's just 63275a7 in action though.

yuwata added a commit to yuwata/systemd that referenced this issue Aug 10, 2022
@yuwata yuwata reopened this Aug 10, 2022
@yuwata yuwata reopened this Aug 11, 2022
@mrc0mmand mrc0mmand reopened this Oct 1, 2022
keszybz pushed a commit to systemd/systemd-stable that referenced this issue Nov 4, 2022
Fixes systemd/systemd#23785 (comment).

(cherry picked from commit b6f6df4)
(cherry picked from commit a3348ba)
@evverx
Member Author

evverx commented Dec 21, 2022

As far as I know it's the only varlink library fuzzed on a regular basis

I fuzz libvarlink now, and with a few bug fixes and varlink/libvarlink@b0a5530 merged, I think it should be ready to handle whatever can be thrown at it.

I'll go ahead and close this issue, because my understanding is that it's totally fine for systemd to crash/overflow when it receives data through world-writable sockets, so it probably doesn't make much sense to test it.

@evverx
Member Author

evverx commented May 5, 2024

@poettering just out of curiosity, looking at the recently announced bug bounty program, I wonder if it would be possible for STF to help with this? In terms of "Bug Resilience" it would make more sense to cover the varlink interfaces and prevent issues like #22480 in the first place instead.

Either way I'll go ahead and reopen this.

@evverx evverx reopened this May 5, 2024
@evverx
Member Author

evverx commented May 5, 2024

On a somewhat related note, the integration test improvements are going to break my downstream networkd/resolved fuzzers (as discussed in #32540 (comment)) and I'm unlikely to resurrect them when they break (or, more precisely, it's not a priority for me). So if STF could help with covering those parts too, it would be great, because the upstream fuzz targets are kind of sloppy :-)

@evverx
Member Author

evverx commented May 6, 2024

I took a closer look at https://www.sovereigntechfund.de/programs/bug-resilience and it appears STF can do that since they work with the OSTIF folks (who have the expertise in those things and can improve the fuzzing infrastructure and stuff like that). I'm not sure who STF talked to and why it was decided to roll out the bug bounty program instead of covering the basics. It's kind of weird.

@DaanDeMeyer
Contributor

@evverx Any chance you could make your downstream fuzzers public somewhere? I'm happy to take a look at upstreaming them into the mkosi stuff.

@evverx
Member Author

evverx commented May 6, 2024

@DaanDeMeyer unfortunately that stuff can't be open sourced.

@evverx
Member Author

evverx commented Jun 6, 2024

On the bright side https://nvd.nist.gov/vuln/detail/CVE-2024-5564 kind of came out of all this and was fixed because I switched the fuzzer to NetworkManager at some point. I'm repurposing the mkosi stuff to keep it going too. @DaanDeMeyer I have to say mkosi is pretty cool stuff. Backtraces are still messed up there but it's probably my bad.

I'm not planning to fuzz systemd-networkd though.

@DaanDeMeyer
Contributor

On the bright side https://nvd.nist.gov/vuln/detail/CVE-2024-5564 kind of came out of all this and was fixed because I switched the fuzzer to NetworkManager at some point. I'm repurposing the mkosi stuff to keep it going too. @DaanDeMeyer I have to say mkosi is pretty cool stuff. Backtraces are still messed up there but it's probably my bad.

I'm not planning to fuzz systemd-networkd though.

Thanks! Can you provide a bit more information on the backtrace issue? Is this just #33206 or are they messed up locally as well?

@evverx
Member Author

evverx commented Jun 6, 2024

I'm 99% sure that I fiddled with CFLAGS too much and forgot to pass something to a bunch of additional packages I install into images. Configs aren't in the right places either :-) Previously I assembled those images a bit differently.

Is this just #33206

I think #33206 would be nice, but the part extracting backtraces/coredumps and so on on my side kind of works once the containers are stopped, so it doesn't prevent me from doing whatever it is I'm doing.

@DaanDeMeyer
Contributor

At least for the systemd packages we build with mkosi the sources have to be mounted into the image to get the best possible backtraces (with RuntimeBuildSources=yes but that's enabled by default for mkosi). Or you can set -E WITH_DEBUG=1 and we'll build and install debuginfo + debugsource (where available) packages so that you get proper backtraces without having to mount in the sources.

@evverx
Member Author

evverx commented Jul 6, 2024

@DaanDeMeyer I wonder if there is a recommended way to bisect stuff using mkosi? The idea is to figure out when bugs are introduced/fixed by analogy with, say, OSS-Fuzz showing commit ranges. With the bash framework git bisect was run on the host with a bunch of wrappers and then the newly built binaries were installed to the images. The bash framework itself was "frozen" to prevent it from being changed in the process. It didn't always work automatically especially when bugs spanned quite a few releases but it worked in most cases. I'm not sure how to do it with mkosi.

@DaanDeMeyer
Contributor

@evverx I hope that within a year or so this setup will have sufficiently stabilized that you can just run something like git bisect run meson compile -C build mkosi && SYSTEMD_INTEGRATION_TESTS=1 meson test -C build --no-rebuild -v TEST-01-BASIC to bisect a failure. But at the moment this stuff is so new and changes so often that I don't think bisecting has a high chance of working. There's also the fact that since this was only introduced from v256 onwards, bisecting any bug that was introduced in earlier releases with mkosi is naturally impossible.

@evverx
Member Author

evverx commented Jul 6, 2024

@DaanDeMeyer got it. I don't run the integration tests though.

bisecting any bug that was introduced in earlier releases with mkosi is naturally impossible

I think I bisected one bug by rebuilding stuff inside containers that were up and running and reinstalling it over and over again there and rebooting it :-)

@DaanDeMeyer
Contributor

@evverx It doesn't have to be an integration test of course, you can run mkosi without meson to try to reproduce some issue. The approach for that is to write a systemd unit that reproduces the issue and put it inside the image via the mkosi.extra tree. Then, add systemd.unit=<reproduce-unit>.service to the kernel command line with the KernelCommandLineExtra= setting. This unit should have something like the following layout:

[Unit]
Wants=basic.target
After=basic.target
SuccessAction=exit
FailureAction=exit

[Service]
StandardOutput=journal+console

Of course you'll need to add whatever settings are required to reproduce the issue, and also include a script via mkosi.extra to use in ExecStart= for the unit.

You can then invoke git bisect run mkosi -f qemu since mkosi will exit with the exit status of the unit in the VM or container.

@evverx
Member Author

evverx commented Jul 6, 2024

@DaanDeMeyer got it. Thanks. My use case is a bit different in that I only need images with certain snapshots. Crashes, reproducers and things like that are handled outside.

@evverx
Member Author

evverx commented Jul 28, 2024

I'll go ahead and close this because I don't think the varlink interfaces are ever going to be fuzzed upstream. (The D-Bus interfaces are unlikely to be fuzzed better upstream either.)

@DaanDeMeyer I'm OK with dropping the bash framework because it should no longer affect my stuff. Not that it matters much though.

(It would be great if changes like that were communicated going forward. I think mkosi is cool stuff, the testing part got better in the end, and I started using it in places where it covers my use cases, but it all was totally unexpected to me.)

@evverx evverx closed this as not planned Jul 28, 2024