Merge master to feature/easier-pool-join #6236 (Merged)
BengangY merged 109 commits into xapi-project:feature/easier-pool-join from BengangY:private/bengangy/master-to-easy-pool-join on Jan 17, 2025
Conversation
Test that the event is correctly executed. Signed-off-by: Frediano Ziglio <[email protected]>
Signed-off-by: Frediano Ziglio <[email protected]>
Do not use ">" or other operators to compare Mtime.t: the value is intended to be unsigned and should be compared with Int64.unsigned_compare, as the Mtime functions do. Signed-off-by: Frediano Ziglio <[email protected]>
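A minimal sketch of why this matters, using only the stdlib (`Int64.unsigned_compare` is available from OCaml 4.08; `is_later` is a hypothetical helper, not the actual xapi code):

```ocaml
(* Monotonic timestamps are unsigned 64-bit counters, so the polymorphic
   (>) misorders them once the top bit is set; Int64.unsigned_compare
   treats them as unsigned and orders them correctly. *)
let is_later (a : int64) (b : int64) = Int64.unsigned_compare a b > 0

let () =
  (* a timestamp with the top bit set is "negative" as a signed int64 *)
  let big = 0x8000_0000_0000_0000L in
  let small = 1L in
  assert (compare big small < 0) ; (* signed compare gets it wrong *)
  assert (is_later big small)      (* unsigned compare gets it right *)
```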
Signed-off-by: Frediano Ziglio <[email protected]>
The parameter was false only to support an internal usage; external users should always alert the thread loop. Signed-off-by: Frediano Ziglio <[email protected]>
- Do not wait a huge amount of time if the queue is empty; use Delay.wait if possible. - Fix deletion of periodic events: if the event is being processed it is removed from the queue, and previously remove_from_queue was not able to mark such an event as removed. - Do not race between checking the first event and removing it: these two actions were done in two separate critical sections, now they are done in a single one. Signed-off-by: Frediano Ziglio <[email protected]>
A periodic event can potentially be cancelled while it is being processed. Simulate this behaviour by removing the event inside the handler. This was fixed by the previous commit. Signed-off-by: Frediano Ziglio <[email protected]>
Previously, if the queue was empty and the loop thread was active, the scheduler took quite some time to pick up a new event. Check that this is done in a timely fashion to avoid regressions. Signed-off-by: Frediano Ziglio <[email protected]>
Similar to test_empty test however the state of the loop should be different. Signed-off-by: Frediano Ziglio <[email protected]>
For the scan retry, previously we were comparing the entire vdi data structure from the database using the (<>) operator. This is a bit wasteful and not very stable. Instead, let us just compare the vdi refs, since the race here comes from `VDI.db_{introduce,forget}`, which only adds/removes vdis from the db but does not change their actual data structure. Also add a bit more logging when retrying, since this should not happen very often. Signed-off-by: Vincent Liu <[email protected]>
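The idea can be sketched as follows (the record type and field names are illustrative, not the actual xapi types): two scans are considered equal for retry purposes when they contain the same refs, even if other fields changed in between.

```ocaml
(* Compare scan results by vdi ref only: db_introduce/db_forget can only
   add or remove vdis, so ref equality is the stable signal for a race. *)
type vdi = { vdi_ref : string; name_label : string }

let same_vdis (a : vdi list) (b : vdi list) =
  List.map (fun v -> v.vdi_ref) a = List.map (fun v -> v.vdi_ref) b

let () =
  let scan1 = [ { vdi_ref = "OpaqueRef:1"; name_label = "old" } ] in
  let scan2 = [ { vdi_ref = "OpaqueRef:1"; name_label = "new" } ] in
  assert (scan1 <> scan2) ;      (* whole-record compare is unstable *)
  assert (same_vdis scan1 scan2) (* ref-only compare is stable *)
```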
When adding a leaked datapath: 1. add the leaked datapath to Sr.vdis; 2. write it to the db file; 3. enhance logging. If a storage exception is raised while destroying a datapath, the procedure fails and the state of the VDI becomes incorrect, which leads to various abnormal results in subsequent operations. To handle this, the leaked datapath is designed to re-destroy the datapath and refresh the state before the next storage operation, via the function remove_datapaths_andthen_nolock. But this mechanism doesn't take effect in the current code. This commit fixes the bug: the leaked datapath should be added to Sr.vdis so that it really takes effect, and written to the db file to avoid losing it if xapi restarts. Signed-off-by: Changlei Li <[email protected]>
There were some issues in the current code, from structure corruption to thread safety. Add some tests and fix the discovered issues. More details in the commit messages.
Signed-off-by: Konstantina Chremmou <[email protected]>
- Moved notes on the error structure to the wire protocol document.
- Moved the duplicate files basics.md and wire-protocol.md from ocaml/doc to doc/content/xen-api.
- Moved notes on VM boot parameters to doc/content/xen-api/topics/vm-lifecycle.md and removed ocaml/doc/vm-lifecycle.md.
Signed-off-by: Konstantina Chremmou <[email protected]>
This includes the current_domain_type field, which is important for live imports, including those during a cross-pool live migration. Signed-off-by: Rob Hoes <[email protected]>
Signed-off-by: Rob Hoes <[email protected]>
The target host of a live migration is, by definition, running the same or a newer version of the software than the source host. As CPU checks are often extended or even changed in software updates, it is best to perform such checks on the target host. However, currently it is the source host that does these checks. This patch moves the check to the target host. A cross-pool live migration always begins with a dry-run VM-metadata import on the target host, which is where checks for free memory and GPU capacity are already carried out. The metadata import handler is extended to accept a `check_cpu` query parameter to signal that a CPU check is needed. This is included in the import call done in `assert_can_migrate` on the source host, and the old CPU check there is dropped. Source hosts without this patch will still perform the CPU checks themselves, so we do not compromise safety. NOTE: This is a rebase of the initial work from 07a2a71 that had to be reverted, with Ming's suggestion to skip the CPUID check on 'unspecified' snapshots implemented. Signed-off-by: Rob Hoes <[email protected]>
CPU checks are needed only for running VMs that are being migrated, to check for compatibility with the remote-host's CPUs. NOTE: This is the rebase of the initial work from 3d039f3 that had to be reverted with the fix from df7cbfd incorporated. Signed-off-by: Rob Hoes <[email protected]>
QEMU orders devices by the time of plugging. Parallelizing them introduces randomness, which breaks the assumption that devices are ordered in a deterministic way. Serialize all PCI and VUSB plugs to restore behaviour. Signed-off-by: Pau Ruiz Safont <[email protected]>
This fix has been manually tested by @freddy77 against an affected VM relying on the ordering.
…CPU check to the target host (xapi-project#6175) Rebased the work from 2023 merged in xapi-project#5111 and xapi-project#5132, which caused issues and was partially fixed in xapi-project#5148, but was completely reverted in xapi-project#5147. I've integrated the fix from xapi-project#5148 and additionally the fix suggested by @minglumlu in CA-380715 that was not merged at the time due to time constraints. This series passed the tests that were originally failing: sxm-unres (Job ID 4177739), vGPUSXMM60CrossPool (4177750), and also passed the Ring3 BST+BVT (209341). I can run more migration tests if needed - I've heard @Vincent-lau has requested that these be separated into their own suite instead of being only in the Core and Distribution regression tests.
Regarding the docs changes, I moved the copies of `basics.md`, `wire-protocol.md`, and `vm-lifecycle.md` from `ocaml/doc` to `doc/content/xen-api`. The latter folder had these files already, so the resulting changes on them are the following:
- `basics.md` has formatting changes only.
- `vm-lifecycle.md` now has the additional section on VM boot parameters.
- `wire-protocol.md` has formatting changes, the sections for JSON-RPC, and a section on the error format which previously was in the API errors markdown (and was absent from the github.io docs, since we don't list the errors anywhere; we could add this list in a future PR).
It is expected that a root CA certificate is used to verify an appliance's server certificate for a xapi outgoing TLS connection. Prior to this change, the related stunnel configuration was "verifyPeer=yes" and "checkHost=<hostname>". The 'verifyPeer' option of stunnel doesn't treat the CA bundle as root CA certificates, and the 'checkHost' option only checks the host name against the one in the server certificate. In other words, the issue is that root-CA-based checking doesn't work for appliances. This change adds 'verifyChain' for the appliance to ensure that the outgoing TLS connection from xapi verifies the appliance's server certificate against a real root CA certificate. Signed-off-by: Ming Lu <[email protected]>
Precompute a table of object names for which events should be propagated. This avoids the list querying done every time the database queues an event. Signed-off-by: Colin James <[email protected]>
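The optimisation can be sketched like this (class names here are illustrative, not the actual precomputed set): build the set once, then answer membership queries on the hot path instead of scanning a list for every queued event.

```ocaml
(* Precompute the set of object classes whose events are propagated,
   so the per-event check is a set lookup rather than a list scan. *)
module StringSet = Set.Make (String)

let propagated_classes = StringSet.of_list [ "VM"; "host"; "SR"; "pool" ]

let should_propagate cls = StringSet.mem cls propagated_classes

let () =
  assert (should_propagate "VM") ;
  assert (not (should_propagate "secret"))
```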
Signed-off-by: Colin James <[email protected]>
Instead of logging errors like this and then immediately handling them:

```
[error|1141 |VM metadata import |import] Found no VDI with location = dedeeb44-62b3-460e-b55c-6de45ba10cc0: treating as fatal and abandoning import
[debug|1141 |VM metadata import |import] Ignoring missing disk Ref:4 - this will be mirrored during a real live migration.
```

Log once in the handler:

```
[ warn|VM metadata import |import] Ignoring missing disk Ref:16 - this will be mirrored during a real live migration. (Suppressed error: 'Found no VDI with location = c208b47c-cf87-495f-bd3c-a4bc8167ef83')
```

Signed-off-by: Andrii Sultanov <[email protected]>
Instead of raising an exception in case of an error, like get_by_uuid does, return None to be handled gracefully later. Do not expose it in the datamodel. This will later be used when an object is checked for existence before its creation (during migration, for example), where its absence is expected - no need to raise a backtrace and pollute the logs with errors. Signed-off-by: Andrii Sultanov <[email protected]>
…failure is expected

Migration logs are always full of exceptions that are expected and immediately handled:

```
[error|backtrace] SR.get_by_uuid D:8651cc0c9fb6 failed with exception Db_exn.Read_missing_uuid("SR", "", "a94bf103-0169-6d70-8874-334261f5098e")
[error|backtrace] Raised Db_exn.Read_missing_uuid("SR", "", "a94bf103-0169-6d70-8874-334261f5098e")
[error|backtrace] 1/9 xapi Raised at file ocaml/database/db_cache_impl.ml, line 237
[error|backtrace] 2/9 xapi Called from file ocaml/xapi/db_actions.ml, line 13309
[error|backtrace] 3/9 xapi Called from file ocaml/xapi/rbac.ml, line 188
[error|backtrace] 4/9 xapi Called from file ocaml/xapi/rbac.ml, line 197
[error|backtrace] 5/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 74
[error|backtrace] 6/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 96
[error|backtrace] 7/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
[error|backtrace] 8/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 39
[error|backtrace] 9/9 xapi Called from file ocaml/libs/log/debug.ml, line 250
[ warn|import] Failed to find SR with UUID: a94bf103-0169-6d70-8874-334261f5098e content-type: user - will still try to find individual VDIs
[....]
[debug|import] Importing 1 VM_guest_metrics(s)
[debug|import] Importing 1 VM_metrics(s)
[debug|import] Importing 1 VM(s)
[debug|import] Importing 1 network(s)
[debug|import] Importing 0 GPU_group(s)
[debug|import] Importing 1 VBD(s)
[error|backtrace] VBD.get_by_uuid D:3a12311e8be4 failed with exception Db_exn.Read_missing_uuid("VBD", "", "026d61e9-ed8a-fc72-7fd3-77422585baff")
[error|backtrace] Raised Db_exn.Read_missing_uuid("VBD", "", "026d61e9-ed8a-fc72-7fd3-77422585baff")
[error|backtrace] 1/9 xapi Raised at file ocaml/database/db_cache_impl.ml, line 237
[error|backtrace] 2/9 xapi Called from file ocaml/xapi/db_actions.ml, line 14485
[error|backtrace] 3/9 xapi Called from file ocaml/xapi/rbac.ml, line 188
[error|backtrace] 4/9 xapi Called from file ocaml/xapi/rbac.ml, line 197
[error|backtrace] 5/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 74
[error|backtrace] 6/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 96
[error|backtrace] 7/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
[error|backtrace] 8/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 39
[error|backtrace] 9/9 xapi Called from file ocaml/libs/log/debug.ml, line 250
[debug|import] Importing 1 VIF(s)
[error|backtrace] VIF.get_by_uuid D:2bc78449e0bc failed with exception Db_exn.Read_missing_uuid("VIF", "", "7d14aee4-47a4-e271-4f64-fe9f9ef6d50b")
[error|backtrace] Raised Db_exn.Read_missing_uuid("VIF", "", "7d14aee4-47a4-e271-4f64-fe9f9ef6d50b")
[error|backtrace] 1/9 xapi Raised at file ocaml/database/db_cache_impl.ml, line 237
[error|backtrace] 2/9 xapi Called from file ocaml/xapi/db_actions.ml, line 10813
[error|backtrace] 3/9 xapi Called from file ocaml/xapi/rbac.ml, line 188
[error|backtrace] 4/9 xapi Called from file ocaml/xapi/rbac.ml, line 197
[error|backtrace] 5/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 74
[error|backtrace] 6/9 xapi Called from file ocaml/xapi/server_helpers.ml, line 96
[error|backtrace] 7/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 24
[error|backtrace] 8/9 xapi Called from file ocaml/libs/xapi-stdext/lib/xapi-stdext-pervasives/pervasiveext.ml, line 39
[error|backtrace] 9/9 xapi Called from file ocaml/libs/log/debug.ml, line 250
```

Use an internal get_by_uuid_opt call and match on the Option instead, with the logs looking much clearer:

```
[debug|import] Importing 1 host(s)
[debug|import] Importing 2 SR(s)
[ warn|import] Failed to find SR with UUID: 8568e308-c61c-3b10-3953-45606cfecede content-type: - will still try to find individual VDIs
[ warn|import] Failed to find SR with UUID: 40e9e252-46ac-ed3d-7a4c-6db175212195 content-type: user - will still try to find individual VDIs
[...]
[debug|import] Importing 2 VM_guest_metrics(s)
[debug|import] Importing 2 VM(s)
[debug|import] Importing 1 network(s)
[debug|import] Importing 1 GPU_group(s)
[debug|import] Importing 4 VBD(s)
[ info|import] Did not find an already existing VBD with the same uuid=569d0e60-6a89-d1fa-2ed6-38b8eebe9065, try to create a new one
[ info|import] Did not find an already existing VBD with the same uuid=533306da-cff1-7ada-71f7-2c4de8a0065b, try to create a new one
[ info|import] Did not find an already existing VBD with the same uuid=f9dec620-0180-f67f-6711-7f9e5222a682, try to create a new one
[ info|import] Did not find an already existing VBD with the same uuid=05e55076-b559-9b49-c247-e7850984ddae, try to create a new one
[debug|import] Importing 2 VIF(s)
[ info|import] Did not find an already existing VIF with the same uuid=a5a731d5-622c-5ca5-5b2a-a0053a11ef07, try to create a new one
[ info|import] Did not find an already existing VIF with the same uuid=1738bf20-8d16-0d69-48cd-8f3d9e7ea791, try to create a new one
```

Signed-off-by: Andrii Sultanov <[email protected]>
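A minimal sketch of the option-returning lookup pattern (the table, names, and `get_by_uuid_opt` wrapper below are illustrative, not the actual xapi database API):

```ocaml
(* Matching on an option avoids raising, catching, and logging a
   backtrace for an absence that is expected during import. *)
let db : (string, string) Hashtbl.t = Hashtbl.create 16
let () = Hashtbl.replace db "569d0e60" "OpaqueRef:vbd1"

let get_by_uuid_opt uuid = Hashtbl.find_opt db uuid

let describe uuid =
  match get_by_uuid_opt uuid with
  | Some r -> Printf.sprintf "found %s" r
  | None ->
      (* expected during import: one info line, no backtrace *)
      Printf.sprintf "Did not find an already existing VBD with uuid=%s" uuid

let () =
  assert (describe "569d0e60" = "found OpaqueRef:vbd1") ;
  assert (get_by_uuid_opt "missing" = None)
```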
…#6215) The point of using try_lock is to avoid blocking while trying to hold the mutex. Releasing it and then blocking to lock it defeats the purpose of using try_lock. Reorganise the sequence to read and copy all the rrds first while the mutex is held, release it, and then archive the copies. This doesn't fix CA-404013 directly, so it can wait until the release is cut; I'm opening it now otherwise I'll forget to open it when things are unlocked. I've run the smoke and validation tests, as well as checking that the rrd files in the bugtools are well formed for hosts and vms (they are created using this function).
…ect#6197) Two changes in the PR: **Drop the usage of fuser in the stunnel client proxy.** The drawback of fuser is that it gets too many things involved; e.g. it has been observed to get stuck on the cifs kernel module. This change uses a cleaner way to remember the stunnel client proxy: even when xapi restarts unexpectedly, it can stop the remnant stunnel proxy and start a new one. **Make the stunnel proxy local port configurable.** Making it configurable avoids the situation where the port conflicts with others, e.g. an external program from users.
Commits best reviewed separately.
- CP-53003: Use JsonRpc v1.0 by default and switch to v2.0 once the API version is known.
- Removal of deprecated methods and unused internal methods.
- Refactoring to fix some inefficiencies when loading and saving certificates in PS.
The new clustering interface uses a constructor `Extended` address when a cluster host is trying to join a cluster. This causes problems during upgrades, as a newly upgraded host might send the new address format to old hosts, which do not understand this format, leaving the new hosts unable to join. The fix is to use a new cluster_address feature flag/pool restriction to control the use of the new clustering interface. This makes sure that the new interface is only used if all of the hosts understand it, i.e. have this feature enabled. The cluster_address feature is controlled by v6d and is pool-wide, therefore the new interface is only enabled if all v6ds are updated to the correct level, which also implies that the accompanying xapis are updated to the correct level. Signed-off-by: Vincent Liu <[email protected]>
…ject#6217) The new clustering interface uses a constructor `Extended` address when a cluster host is trying to join a cluster. This causes problems during upgrades, as a newly upgraded host might send the new address format to old hosts, which do not understand this format, leaving the new hosts unable to join. The fix is to use the cluster_health feature flag/pool restriction to gate the use of the new clustering interface. This makes sure that the new interface is only used if all of the hosts understand it, i.e. have this feature enabled. The cluster_health feature is controlled by v6d and is pool-wide, therefore the new interface is only enabled if all v6ds are updated to the correct level, which also implies that the accompanying xapis are updated to the correct level.
Be explicit about the file system of an update ISO. Remove dead code.
This has two advantages: 1. The values are always non-negative: they represent absolute differences in time. 2. It forces users to define the units of time, allowing the time to be read in minutes when appropriate. Passed the toolstack's smoke and validation tests: Job 209833.
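Both advantages can be sketched with a tiny span abstraction (illustrative only; xapi builds on Mtime.Span rather than this type):

```ocaml
(* A span is a non-negative duration; callers must pick a unit
   explicitly when reading it, so raw-float unit confusion is impossible. *)
module Span : sig
  type t
  val between : int64 -> int64 -> t (* absolute difference, in ns *)
  val to_minutes : t -> float
end = struct
  type t = int64 (* nanoseconds, always >= 0 *)
  let between a b = Int64.abs (Int64.sub a b)
  let to_minutes t = Int64.to_float t /. 60e9
end

let () =
  let s = Span.between 0L 120_000_000_000L (* 120 s, in ns *) in
  assert (Span.to_minutes s = 2.0) ;
  (* order of arguments does not matter: spans are absolute *)
  assert (Span.to_minutes (Span.between 120_000_000_000L 0L) = 2.0)
```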
Signed-off-by: Rob Hoes <[email protected]>
The package dune has been replaced with ocaml-dune Signed-off-by: Pau Ruiz Safont <[email protected]>
The package dune has been replaced with ocaml-dune:

> Package dune is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source. However the following packages replace it: ocaml-dune
Ubuntu's dune is way too old for the version generally used. On top of that the command doesn't fail when this happens, making the setup brittle. Instead write the version variable to config.mk and run make. Signed-off-by: Pau Ruiz Safont <[email protected]>
Ubuntu's dune is way too old for the version generally used. On top of that the command doesn't fail when this happens, making the setup brittle. Instead write the version variable to config.mk and run make. You can see this change running as expected in this run: https://github.com/xapi-project/xen-api/actions/runs/12792091034/job/35661725368
We have so far hard-coded the expectation that in a PEM file a private key is followed by a certificate, but this is actually not required by the PEM standard and led to a failure in XSI-1781. This simple fix first collects all keys and certificates while skipping over other content, and then uses the first key and certificate. Signed-off-by: Christian Lindig <[email protected]>
Add back a unit test. Signed-off-by: Christian Lindig <[email protected]>
The integer values that OCaml uses for signals should never be printed as they are: they can cause confusion because they don't match the C POSIX values. Change the unixext function that converts them to string to use pattern-matching instead of building a list and searching it. Also add some values that were introduced in OCaml 4.03, and return a more compact value for unknown signals, following the same format as Fmt.Dump.signal. Signed-off-by: Pau Ruiz Safont <[email protected]>
When signals are written to logs, the POSIX name should be used to minimize confusion. It makes sense for this function to live in the logging library instead of the unix one, as most users will already be using the logging library, but not all use the unix one. Moving it there also allows for more ergonomic usage with the logging functions. Signed-off-by: Pau Ruiz Safont <[email protected]>
…6227) We have so far hard-coded the expectation that in a PEM file a private key is followed by a certificate, but this is actually not required by the PEM standard and led to a failure in XSI-1781. This is a simple fix that permits both orders. However, it would still fail for a file with multiple certificates followed by a key. Another strategy could be to parse all blocks in a PEM file and pick out a key and certificate later, rather than expecting a particular order up front. Added new test data for unit tests; fail-01.pem now becomes a passing test case.
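The order-independent strategy mentioned above can be sketched like this (a toy scanner over BEGIN headers; a real parser would also decode the base64 payloads):

```ocaml
(* Classify every PEM block by its BEGIN header, then pick out keys and
   certificates afterwards, regardless of the order they appear in. *)
let classify_blocks (pem : string) =
  let lines = String.split_on_char '\n' pem in
  List.filter_map
    (fun line ->
      if line = "-----BEGIN PRIVATE KEY-----" then Some `Key
      else if line = "-----BEGIN CERTIFICATE-----" then Some `Cert
      else None)
    lines

let has_key_and_cert pem =
  let blocks = classify_blocks pem in
  List.exists (( = ) `Key) blocks && List.exists (( = ) `Cert) blocks

let () =
  (* certificate first, key last: the previously failing order *)
  let pem =
    "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
    ^ "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n"
  in
  assert (has_key_and_cert pem)
```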
…es correctly Other unit tests only verify the interoperability of the RRDs - dumping them to JSON/XML and reading them back in, verifying that the same data was decoded. We're now seeing a problem where Gauge data sources, which should be absolute values provided by the plugin, fluctuate wildly when processed by the RRD library. Ensure we have an easy way to test this for both Gauge and Absolute data sources - these values should be passed as-is by the RRD library, without any time-based transformations. This test currently fails and will pass with the fix commits. Signed-off-by: Andrii Sultanov <[email protected]>
The integer values that OCaml uses for signals should never be printed as integers: they can cause confusion because they don't match the C POSIX values. Change the unixext function that converts them to string to use pattern-matching instead of building a list and searching it. Also add some values that were introduced in OCaml 4.03, and return a more compact value for unknown signals, following the same format as Fmt.Dump.signal. Typically, engineers see signal -11 and assume it's SIGSEGV, when it's SIGTERM. Fixes xapi-project#6225
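A minimal sketch of the pattern-matching approach (only a subset of signals shown; the real function covers all of them):

```ocaml
(* OCaml's Sys signal numbers are negative runtime-internal values that
   don't match POSIX, so map them to names instead of printing the int. *)
let signal_name = function
  | s when s = Sys.sigsegv -> "SIGSEGV"
  | s when s = Sys.sigterm -> "SIGTERM"
  | s when s = Sys.sigkill -> "SIGKILL"
  | s -> Printf.sprintf "SIG(%d)" s (* compact form for unknown signals *)

let () =
  (* printing Sys.sigterm raw invites a SIGSEGV misread, since the raw
     value is -11 while POSIX signal 11 is SIGSEGV *)
  assert (signal_name Sys.sigterm = "SIGTERM") ;
  assert (signal_name Sys.sigterm <> string_of_int Sys.sigterm)
```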
Some recent changes related to RRDs likely exposed a long-standing latent issue where the RRD library would process the passed-in values for Gauge and Absolute data sources incorrectly, leading to constant values changing from update to update, for example:

```
$ rrd2csv memory_total_kib
timestamp, AVERAGE:host:8b533333-91e1-4698-bd17-95b9732ffbb6:memory_total_kib
2025-01-15T08:41:40Z, 33351000
2025-01-15T08:41:45Z, 33350000
2025-01-15T08:41:50Z, 33346000
2025-01-15T08:41:55Z, 33352000
```

Instead of treating Gauge and Absolute data sources as a variation on the rate-based Derive data source type, expecting time-based calculations to cancel each other out, do not undertake any calculations on non-rate data sources at all. This makes the unit test added in the previous commit pass. Signed-off-by: Andrii Sultanov <[email protected]>
…sources (xapi-project#6233) Some recent changes related to RRDs likely exposed a long-standing latent issue where the RRD library would process the passed-in values for Gauge and Absolute data sources incorrectly, leading to constant values changing from update to update, for example:

```
$ rrd2csv memory_total_kib
timestamp, AVERAGE:host:8b533333-91e1-4698-bd17-95b9732ffbb6:memory_total_kib
2025-01-15T08:41:40Z, 33351000
2025-01-15T08:41:45Z, 33350000
2025-01-15T08:41:50Z, 33346000
2025-01-15T08:41:55Z, 33352000
```

Instead of treating Gauge and Absolute data sources as a variation on the rate-based Derive data source type, expecting time-based calculations to cancel each other out, do not undertake any calculations on non-rate data sources at all. The first commit adds a failing unit test, the second makes it pass.

I've verified these changes through manual testing; they've also passed the testcases that discovered this issue: SNMP memory testcases (JobIDs 4197305, 4196759, 4196744) and the ShimMemory testcase (4197050). This branch also passed Ring3 BST+BVT (210577).
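The distinction can be sketched as follows (illustrative types, not the actual rrd library API): rate sources need a time-based calculation, value sources must be passed through untouched.

```ocaml
(* Derive sources are rates, so they need (now - last) / dt;
   Gauge and Absolute sources carry the value itself and must not be
   subjected to any time-based transformation. *)
type ds_type = Derive | Gauge | Absolute

let process ty ~last ~now ~dt =
  match ty with
  | Derive -> (now -. last) /. dt (* rate over the interval *)
  | Gauge | Absolute -> now       (* pass the value as-is *)

let () =
  (* a constant gauge (e.g. memory_total_kib) must stay constant *)
  assert (process Gauge ~last:33351000. ~now:33351000. ~dt:5. = 33351000.) ;
  assert (process Derive ~last:0. ~now:50. ~dt:5. = 10.)
```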
Solve conflict: Stunnel.with_client_proxy -> Stunnel.with_client_proxy_systemd_service
BengangY force-pushed the private/bengangy/master-to-easy-pool-join branch from bbffffe to 9a33546 on January 17, 2025
Vincent-lau approved these changes Jan 17, 2025
robhoes approved these changes Jan 17, 2025
gangj approved these changes Jan 17, 2025
BengangY merged commit 9fa5a04 into xapi-project:feature/easier-pool-join on Jan 17, 2025
15 checks passed