
RFC: Pact Plugins #492

Open
adamrodger opened this issue Feb 18, 2024 · 6 comments
Labels
feature request Indicates new feature requests

Comments

@adamrodger
Contributor

Is your feature request related to a problem? Please describe.
The Pact ecosystem now supports plugins, and PactNet should be part of this. See https://docs.pact.io/plugins/quick_start

Describe the solution you'd like
The implementation details are entirely TBD currently, but some thoughts:

  • It is not desirable to keep adding dependencies to PactNet and growing the library. Instead, the plugins should be separate projects, e.g. in a PactNet.Extensions.* namespace format.
  • Plugins rely on FFI calls, which means that they need to load the FFI library and it has to be the same instance and version that the rest of the process has loaded so that it writes to the same Pact file in the same way etc.
    • It was originally envisaged that plugins could add extension methods for the appropriate functionality (e.g. a WithProtobufContent to mirror the existing WithJsonContent calls), but the FFI aspect probably makes this pretty awkward (a rough sketch of this shape follows this list)
    • One solution may be to refactor the FFI interop parts out of PactNet and into a dedicated PactNet.Interop (or similar name) so that any plugin projects could depend on this and on PactNet.Abstractions to add functionality in a non-breaking way. This refactor would be a breaking change upfront though so this idea would need to be properly thought out.
  • Plugins are installed to the OS, which creates friction in PactNet itself because you would install e.g. PactNet.Extensions.Protobuf but that doesn't actually install the plugin, just the API support in PactNet. This can be confusing for users.
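To make that shape concrete, here is a purely hypothetical sketch. Nothing below exists in PactNet today; all of the names are placeholders, and it only illustrates the "separate extension package plus shared interop" idea from the bullets above.

// Purely hypothetical sketch: nothing here exists in PactNet today and all
// names are placeholders.
using System.Collections.Generic;

namespace PactNet.Extensions.Protobuf
{
    // Placeholder for whatever typed entry point PactNet (via a shared
    // PactNet.Interop / PactNet.Abstractions surface) would expose to plugins
    public interface IPluginContentBuilder
    {
        IPluginContentBuilder WithPluginContents(string contentType, IDictionary<string, object> contents);
    }

    public static class ProtobufContentExtensions
    {
        // Mirrors the shape of the existing WithJsonContent calls, but hands the
        // content definition off to the protobuf plugin via the shared FFI instance
        public static IPluginContentBuilder WithProtobufContent(this IPluginContentBuilder builder,
                                                                string protoFile,
                                                                string protoService,
                                                                IDictionary<string, object> message)
        {
            return builder.WithPluginContents("application/grpc", new Dictionary<string, object>
            {
                ["pact:proto"] = protoFile,
                ["pact:proto-service"] = protoService,
                ["request"] = message
            });
        }
    }
}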

Breaking Changes
Depending on the implementation details above (TBD at the time of writing) this may involve breaking changes.

Potential Downsides/Caveats

Describe alternatives you've considered

Additional context

@mefellows
Member

Thanks for raising this!

Some comments:

Instead the plugins should be separate projects, e.g. in a PactNet.Extensions.* namespace format.

Is the implication here that we need a namespace for each plugin (e.g. one for protobuf and another for CSV etc.)? Or are you referring to the "plugin" work being contained in a separate namespace?

For context on my line of questioning, at the moment, the languages that have implemented plugins have started with what you might call a "raw" interface:

e.g. https://github.com/pact-foundation/pact-plugins/blob/main/examples/gRPC/area_calculator/consumer-jvm/src/test/java/io/pact/example/grpc/consumer/PactConsumerTest.java

  @Pact(consumer = "grpc-consumer-jvm")
  V4Pact calculateRectangleArea(PactBuilder builder) {
    return builder
      // Tell Pact we need the Protobuf plugin
      .usingPlugin("protobuf")
      // We will use a V4 synchronous message interaction for the test
      .expectsToReceive("calculate rectangle area request", "core/interaction/synchronous-message")
      // We need to pass all the details for the interaction over to the plugin
      .with(Map.of(
        // Configure the proto file, the content type and the service we expect to invoke
        "pact:proto", filePath("../proto/area_calculator.proto"),
        "pact:content-type", "application/grpc",
        "pact:proto-service", "Calculator/calculateOne",

        // Details on the request message (ShapeMessage) we will send
        "request", Map.of(
          "rectangle", Map.of(
              "length", "matching(number, 3)",
              "width", "matching(number, 4)"
          )),

        // Details on the response message we expect to get back (AreaResponse)
        "response", List.of(
            Map.of(
              "value", "matching(number, 12)"
            )
          )
      ))
      .toPact();
  }

The thinking is that whatever the raw interface ends up being, it could be extended (outside of the library) with a thin type-safe wrapper, if necessary, that makes it more ergonomic for the given plugin.
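For illustration only, here is a rough transliteration of the JVM example into C# against a hypothetical "raw" entry point. UsingPlugin, ExpectsToReceive and WithPluginInteraction below are placeholders, not part of the current PactNet API.

// Hypothetical: pact, UsingPlugin, ExpectsToReceive and WithPluginInteraction are
// placeholders showing the raw, map-based shape, not part of the current PactNet API.
using System.Collections.Generic;

var interaction = new Dictionary<string, object>
{
    ["pact:proto"] = "../proto/area_calculator.proto",
    ["pact:content-type"] = "application/grpc",
    ["pact:proto-service"] = "Calculator/calculateOne",

    ["request"] = new Dictionary<string, object>
    {
        ["rectangle"] = new Dictionary<string, object>
        {
            ["length"] = "matching(number, 3)",
            ["width"] = "matching(number, 4)"
        }
    },

    ["response"] = new[]
    {
        new Dictionary<string, object> { ["value"] = "matching(number, 12)" }
    }
};

pact.UsingPlugin("protobuf")
    .ExpectsToReceive("calculate rectangle area request", "core/interaction/synchronous-message")
    .WithPluginInteraction(interaction);

A plugin-specific wrapper (e.g. a WithProtobufMessage extension method) could then hide that map behind typed parameters, outside of the core library.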

Plugins rely on FFI calls, which means that they need to load the FFI library and it has to be the same instance and version that the rest of the process has loaded so that it writes to the same Pact file in the same way etc.

Actually, the way the architecture works doesn't make this assumption. Plugins are small, independent gRPC servers, and the plugin driver (distributed in the FFI) communicates with them via gRPC. The FFI itself is still responsible for the lifecycle of a pact test and coordinates the plugins. The FFI will continue to write to the pact file.

i.e. the interface plugins are developed against is a protobuf definition.

It was originally envisaged that plugins could add extensions methods to add appropriate functionality (e.g. a WithProtobufContent to mirror the existing WithJsonContent calls) but the FFI aspect probably makes this pretty awkward

As above. This could still be done, but I think at best it would be an additional namespace or external to Pact .NET entirely (e.g. a small interface adapter library).

One solution may be to refactor the FFI interop parts out of PactNet and into a dedicated PactNet.Interop (or similar name) so that any plugin projects could depend on this and on PactNet.Abstractions to add functionality in a non-breaking way. This refactor would be a breaking change upfront though so this idea would need to be properly thought out.

I'm not sure I follow this entirely, but it might be a clash of mental models around how the FFI itself works.

Plugins are installed to the OS, which creates friction in PactNet itself because you would install e.g. PactNet.Extensions.Protobuf but that doesn't actually install the plugin, just the API support in PactNet. This can be confusing for users.

The plugin framework automatically installs plugins if they aren't already on the host, but they can also be installed/managed using the plugin CLI.

General note about plugins

The intention behind plugins is that each client language should be completely agnostic to any plugin capability. That is, once plugin support has been added to a language, anybody could create a plugin and leverage it in the client library without the library having to know of its existence or how to support it. The plugin could be created and distributed only internally to a company (e.g. due to a proprietary protocol) and still be usable with Pact.

Implementation notes (Pact JS / Go)

If it helps, I took inspiration from your type-state design in Pact .NET and adopted it in both Pact Go and JS. The type system for HTTP can be seen here
and messages here.

There was a surprisingly small amount of extra work to get plugins working; most of the work was supporting synchronous messages, which is a new interaction type introduced in V4. It obviously works with async messages also.

Testing

I've just noticed our compatibility suite doesn't have a feature for plugins. We can add that in so that you have a way to confirm compatibility / interop with the rest of the ecosystem.

@adamrodger
Contributor Author

My design notes above are specifically about how the FFI library is wrapped in .Net. Everything is at the internal accessibility level and is not accessible to any third-party assembly/project. That means any plugin project must call into the 'main' project, because it has to use the same FFI instance (e.g. for a consistent logging experience).

The problem is that the library is not structured that way. The idea of PactNet.Abstractions is that you could use an entirely parallel implementation from PactNet which only uses PactNet.Abstractions, but that's not the case for plugins because they'd have no way of calling in to FFI functions, and we've no desire to make those publicly accessible.

I really dislike the "raw" untyped approach. I've gone to great lengths to prevent untyped usage wherever possible in the 4.x rewrite, because PactNet 3.x and below really suffered from too much use of dynamic, object and string in the public interface, and this often made for extremely confusing and hard-to-identify bugs. The design goal is that plugins are no less ergonomic than any other type of interaction and that you fall into the "pit of success" rather than being left to fend for yourself (which will just result in endless "why doesn't this code work?" issues being raised here).

The example above effectively translates to a .Net interface of .WithPlugin(string name).WithInteraction(IDictionary<object, object> args), which is not a nice user experience and is also a difficult one to support from the library maintainer perspective.

The gRPC example is also useful in that it exposes another problem I'd not considered: the API of PactNet is effectively broken when writing consumer tests, because there needs to be an entirely separate stage where you can start whatever service you want to run your consumer tests against (in this case a gRPC transport). The PactNet API assumes this isn't necessary, because for sync/HTTP interactions it uses the mock server (an internal concern) and for async/messaging interactions it starts its own internal server to handle the calls from the Rust core, so the API looks like this:

// Arrange
this.pactBuilder
    .UponReceiving("A GET request to retrieve the something")
        .Given("There is a something with id 'tester'")
        .WithRequest(HttpMethod.Get, "/somethings/tester")
        .WithHeader("Accept", "application/json")
    .WillRespond()
        .WithStatus(HttpStatusCode.OK)
        .WithHeader("Content-Type", "application/json; charset=utf-8")
        .WithJsonBody(new
        {
            id = "tester",
            firstName = "Totally",
            lastName = "Awesome"
        });

await this.pactBuilder.VerifyAsync(async ctx =>
{
    // at this point the mock and messaging servers are guaranteed to have started
    // and each test is entirely independent because it gets its own mock server instance
    // running on a different port, and therefore you don't need to worry about things like
    // test lifetimes and parallelism concerns

    // Act
    var client = new SomethingApiClient(ctx.MockServerUri);
    var something = await client.GetSomething("tester");

    // Assert
    Assert.Equal("tester", something.Id);
});

Notice how there's nowhere there to start whatever transport you need, because all calls must happen within the scope of VerifyAsync where we're guaranteed the mock server and messaging server are both started. We'll have no way of guaranteeing that the gRPC server (for example) is started. The pact file is updated only after VerifyAsync is successfully executed (again, an internal concern which isn't available to the user).

The Rust library has a very different API, in which you are responsible for starting the mock server yourself, but that would be a very large departure from the current PactNet interaction model and API. The entire existence of the mock server has been abstracted away as much as possible to create a very ergonomic API which guarantees a number of invariants instead of moving that concern onto the user (the mock server will definitely be started before you try to make HTTP calls, the pact file will definitely be written once your interaction has been verified, and any failure to start the server or write the file will be reported consistently).

The plugin framework automatically installs plugins if they aren't already on the host

This is all well and good if the test runner has unfiltered internet access, but it's extremely common for that not to be the case in corporate settings. No corporate CI agent I've ever used has internet access anyway, and dependencies usually have to be pulled through some kind of centralised internal artifact store like JFrog Artifactory.

So that adds additional concerns to any plugin-based approach (a hypothetical sketch of one possible configuration surface follows this list):

  • we need to report when plugins aren't available, and whether or not they're being downloaded
  • we need to be able to configure where plugins are downloaded from, if they're downloaded automatically
  • we need to be able to disable automatic plugin downloading entirely (e.g. for air-gapped CI runners)
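To make those three concerns concrete, here is one possible configuration surface. This is entirely hypothetical: nothing like it exists in PactNet or the plugin driver today, and the names are placeholders.

using System;

// Entirely hypothetical options class illustrating the three concerns above;
// nothing like this exists in PactNet or the plugin driver API today.
public class PluginOptions
{
    // Disable automatic plugin downloads entirely (e.g. for air-gapped CI runners)
    public bool AllowAutomaticDownload { get; set; } = false;

    // Download plugins from an internal mirror (e.g. Artifactory) instead of GitHub releases
    public Uri DownloadBaseUri { get; set; }

    // Surface plugin resolution in test output so missing or downloaded plugins are obvious
    public bool LogPluginResolution { get; set; } = true;
}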

@mefellows
Member

The problem is that the library is not structured that way. The idea of PactNet.Abstractions is that you could use an entirely parallel implementation from PactNet which only uses PactNet.Abstractions, but that's not the case for plugins because they'd have no way of calling in to FFI functions, and we've no desire to make those publicly accessible.

That sounds like over-engineering or building for a problem that doesn't exist, if I'm honest. Is this something that's likely to happen?

I really dislike the "raw" untyped approach. I've gone to great lengths to prevent untyped usage wherever possible in the 4.x rewrite because PactNet 3.x and below really suffered from too much use of dynamic, object and string in the public interface and this often made for extremely confusing and hard to identify bugs. The design goal is that plugins are no less ergonomic than any other type of interaction and that you fall into the "pit of success" rather than be left to fend for yourself (which will just result in endless "why doesn't this code work?" type issues being raised here).

Me too; however, how else do you deal with types that you can't know about in advance in the Pact .NET DSL? It's basically implied by the term "plugin". Put another way, is there another design you have in mind that would allow would-be plugin authors to create and use plugins without requiring a pull request into Pact .NET?

This is all well and good if the test runner has unfiltered internet access, but it's extremely common for them not to have in corporate settings. No corporate CI agent I've ever used has internet access anyway, and dependencies usually have to be pulled through some kind of centralised internal artifact store like JFrog Artifactory.

So that adds additional concerns to any plugin based approach

we need to report when plugins aren't available, and whether or not they're being downloaded
we need to be able to configure where plugins are downloaded from, if they're downloaded automatically
we need to be able to disable automatic plugin downloading entirely (e.g. for air-gapped CI runners)

I believe those are catered for already by the core, but I might be wrong. The Plugin CLI can still be used to download the plugins onto the CI system (e.g. a docker image) so that they're available. In this case, the plugin driver has logic to determine if the installed plugin is acceptable or not.

I understand some of your other points (e.g. consistent logging) and the type-state patterns to avoid certain undesirable state conditions. Please understand that many of these things have been considered and that tradeoffs were discussed to get to the current implementation.

For clarity, I don't think these concerns mean we shouldn't introduce plugins to Pact .NET, although there is work to improve the DX that I'm sure we can do within the current framework.

We are also discussing what the "next" version of the plugin API might look like, after ~2 years of soak time in the community.

@YOU54F
Member

YOU54F commented Jul 29, 2024

I've added some basic examples of the currently known plugins in the Pact ecosystem, along with the native pact_ffi interop calls required for .NET:

https://github.com/YOU54F/pact-dotnet-ffi-plugins

Hopefully this will help with introducing the pact-plugin framework into pact-net. I thought it might be easier to create an example without necessarily making design choices about the pact-net DSL.
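For anyone following along, the interop surface involved is quite small. Below is a minimal P/Invoke sketch of the plugin-related pact_ffi functions; the handle layouts and string marshalling are simplified here, so verify them against the pact_ffi header for the FFI version you ship.

using System.Runtime.InteropServices;

// Minimal sketch of the plugin-related pact_ffi entry points. Handle layouts and
// string marshalling are simplified; check the generated pact_ffi header for the
// exact definitions in the version you use.
internal static class PluginInterop
{
    [StructLayout(LayoutKind.Sequential)]
    internal struct PactHandle
    {
        internal ushort PactRef;
    }

    [StructLayout(LayoutKind.Sequential)]
    internal struct InteractionHandle
    {
        internal uint InteractionRef;
    }

    internal enum InteractionPart
    {
        Request = 0,
        Response = 1
    }

    // Loads (and starts, if needed) the named plugin for the given pact; returns 0 on success
    [DllImport("pact_ffi", EntryPoint = "pactffi_using_plugin")]
    internal static extern uint UsingPlugin(PactHandle pact, string pluginName, string pluginVersion);

    // Hands the interaction contents (a JSON document of the key/value map shown earlier) to the plugin
    [DllImport("pact_ffi", EntryPoint = "pactffi_interaction_contents")]
    internal static extern uint InteractionContents(InteractionHandle interaction, InteractionPart part, string contentType, string contents);

    // Shuts down any plugins that were started for the given pact
    [DllImport("pact_ffi", EntryPoint = "pactffi_cleanup_plugins")]
    internal static extern void CleanupPlugins(PactHandle pact);
}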

@mefellows
Member

Thanks Yousaf, very helpful! How would you like to proceed with this, Adam? We get a lot of questions in the Slack community about when plugin support will be available, and we don't have an answer for them.

Perhaps you could help riff on an API/DSL that makes sense and works with the current plugin architecture and we could find a contributor willing to implement that interface?

I understand some of the concerns around the plugin system design and the tradeoffs that were considered - and we will hold a separate RFC for a "version 2" of that in due course - but the current plugin architecture is what it is, so we need to find a way to move forward.

@adamrodger
Contributor Author

Yeah I think a good next step is to start sketching out an API for the consumer and verifier sides.

Presumably these are only available on the V4 spec anyway so that you can have pacts with mixed interaction types? The current design should make it possible to only add the new stuff onto the V4 version of consumer tests, but dunno about the verifier side.
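As a starting point for that sketch, here's a straw-man of possible consumer- and verifier-side shapes. All names are placeholders and purely for discussion; none of these members exist in PactNet today.

using System.Collections.Generic;

// Straw-man only: none of these members exist in PactNet today and all names
// are placeholders for discussion of the eventual DSL.
namespace PactNet.Plugins
{
    // Consumer side: opt a V4 pact into a plugin and describe plugin-managed content
    public interface IPluginPactBuilderV4
    {
        IPluginPactBuilderV4 UsingPlugin(string name, string version = null);

        IPluginInteractionBuilder UponReceiving(string description, string interactionType);
    }

    public interface IPluginInteractionBuilder
    {
        // Typed entry point that separate extension packages could build upon
        IPluginInteractionBuilder WithPluginContents(string contentType, IDictionary<string, object> contents);
    }

    // Verifier side: declare the plugins/transports the provider test needs before verification
    public interface IPluginVerifierOptions
    {
        IPluginVerifierOptions WithPluginTransport(string plugin, int port);
    }
}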
