NRI network #57
Fixed forgotten test cases and NetworkDeleted functionality
Codecov Report

@@           Coverage Diff            @@
##             main      #57     +/-  ##
==========================================
- Coverage   64.61%   58.27%   -6.34%
==========================================
  Files           9        9
  Lines        1834     2030    +196
==========================================
- Hits         1185     1183      -2
- Misses        498      696    +198
  Partials      151      151
Plugin fixed to match expected functions
PreSetupNetwork messages are sent to plugins before CNI is invoked to set up pod networking. The plugins can return a set of additional CNI capabilities and arguments. Once CNI networking has completed, PostSetupNetwork messages containing an equivalent of the CNI Result structure are sent out to interested plugins, which can return the same or a modified version of the received CNI Result structure. Signed-off-by: Patrik Flykt <[email protected]>
NetworkConfigurationChanged messages are sent whenever a CNI network configuration file is updated on disk. One or more CNI configurations can be sent, depending on the implementation in the runtime. Only modifications of existing CNI configurations are supported. Signed-off-by: Patrik Flykt <[email protected]>
When the pod sandbox network is to be removed, NRI plugins are informed of the event in the form of a PreNetworkDeleted message and a PostNetworkDeleted event. The PreNetworkDeleted request is sent to the plugins while the network namespace is still available, and the runtime waits until all plugins have responded. Once all plugins have responded to the PreNetworkDeleted message, the namespace is removed and PostNetworkDeleted events are sent to plugins subscribed to the event. Signed-off-by: Patrik Flykt <[email protected]>
Signed-off-by: Patrik Flykt <[email protected]>
Log NRI network messages Signed-off-by: Patrik Flykt <[email protected]>
Add example plugin logging network-related messages. Signed-off-by: Patrik Flykt <[email protected]>
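For illustration, here is a minimal sketch of what such a network-logging plugin could look like, assuming the hook names described in the commit messages above (PreSetupNetwork, PostSetupNetwork, NetworkConfigurationChanged, PreNetworkDeleted, PostNetworkDeleted). The request/response types and the registration with the NRI stub are placeholders, not the actual API added by this PR:

```go
// Hypothetical sketch of a network-event logging plugin. The types below
// (NetworkConfig, CNIResult) are placeholders standing in for whatever this
// PR defines in the NRI API.
package main

import "log"

// NetworkConfig stands in for a CNI configuration as found on disk.
type NetworkConfig struct {
	Conflist string // JSON CNI configuration
}

// CNIResult stands in for the equivalent of the CNI Result structure.
type CNIResult struct {
	Raw []byte // JSON-encoded result
}

type plugin struct{}

// Called before CNI sets up pod networking.
func (p *plugin) PreSetupNetwork(pod string, configs []NetworkConfig) error {
	log.Printf("PreSetupNetwork: pod=%s, %d CNI configs", pod, len(configs))
	return nil
}

// Called after CNI networking has completed, with the CNI Result equivalent.
func (p *plugin) PostSetupNetwork(pod string, result *CNIResult) (*CNIResult, error) {
	log.Printf("PostSetupNetwork: pod=%s, result=%d bytes", pod, len(result.Raw))
	return result, nil // return the result unmodified
}

// Called when a CNI network configuration file changes on disk.
func (p *plugin) NetworkConfigurationChanged(configs []NetworkConfig) error {
	log.Printf("NetworkConfigurationChanged: %d configs", len(configs))
	return nil
}

// Called while the pod network namespace is still available.
func (p *plugin) PreNetworkDeleted(pod string) error {
	log.Printf("PreNetworkDeleted: pod=%s", pod)
	return nil
}

// Called after the network namespace has been removed.
func (p *plugin) PostNetworkDeleted(pod string) error {
	log.Printf("PostNetworkDeleted: pod=%s", pod)
	return nil
}

func main() {
	_ = &plugin{} // a real plugin would register this with the NRI stub
	log.Println("example network-logging plugin (sketch only)")
}
```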
Removed 'WIP' labels as this PR has progressed into something decently useful.
/cc
- Pod network post-creation after CNI plugins have been called
- Pod network removal

### Pod Network Configuration Change
Speaking as someone who has worked on developing and maintaining different CNI plugins: the word "CNI" has evolved from referring to the CNI spec to referring to the component that implements the spec (Calico, Cilium, ovn-kubernetes, kindnet, ...).
The signatures of the existing CNI calls in containerd are:
err = netPlugin.Status()
err = netPlugin.Remove(ctx, id, path, opts...)
result, err := netPlugin.Setup(ctx, id, path, opts...)
If those 3 methods are added as 3 hooks of the NRI plugin (so it can actually be replaced by the NRI plugin), an existing CNI implementation can move to the richer and more flexible model provided by NRI and move on from the legacy interface that has a lot of constraints (strong typing, shell, JSON, ...): https://github.com/containernetworking/cni/issues?q=is%3Aissue+is%3Aopen+grpc
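For reference, a rough Go sketch of how those three containerd calls might be surfaced as NRI hooks; the interface and type names here are hypothetical and not part of the NRI API:

```go
// Sketch of the suggestion above: expose the three containerd network calls
// (Status, Setup, Remove) as NRI hooks so an NRI plugin could stand in for
// the CNI invocation. All names here are hypothetical.
package network

import "context"

// Result is a placeholder for a CNI-Result-like structure carried over NRI.
type Result struct {
	Raw []byte // JSON-encoded CNI result
}

// NetworkHooks mirrors containerd's netPlugin.Status/Setup/Remove calls.
type NetworkHooks interface {
	// Status reports whether pod networking is ready.
	Status(ctx context.Context) error

	// Setup creates the pod network for sandbox `id`, whose network
	// namespace is at `nsPath`, returning a CNI-Result-like structure.
	Setup(ctx context.Context, id, nsPath string, opts map[string][]byte) (*Result, error)

	// Remove tears down the pod network for sandbox `id`.
	Remove(ctx context.Context, id, nsPath string, opts map[string][]byte) error
}
```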
These days the "CNI plugins" need to run connected to the apiserver to perform API operations; they can no longer run as standalone binaries. Cilium https://github.com/cilium/cilium/blob/d1128cf948979de7433385b2101a98404d99c64f/pkg/client/client.go#L32-L41
and OVN-Kubernetes https://github.com/ovn-org/ovn-kubernetes/blob/2916102d231ce9596e23c9880103225db2caa65e/go-controller/pkg/cni/cnishim.go#L37-L49 are already using a client/server model, with a binary that implements the CNI interface and uses a socket to connect to the "CNI plugin". The NRI plugin can make this a native integration.
+1 on the native integration.
And I also agree that a "modern" plugin requires knowledge of API server objects (to various extents).
Generalizing this integration would definitely be beneficial.
I'll take a look at those.
The signatures of the existing CNI calls in containerd are:
err = netPlugin.Status()
err = netPlugin.Remove(ctx, id, path, opts...)
result, err := netPlugin.Setup(ctx, id, path, opts...)
If those 3 methods are added as 3 hooks of the NRI plugin (so it can actually be replaced by the NRI plugin), an existing CNI implementation can move to the richer and more flexible model provided by NRI and move on from the legacy interface
If these three network methods were to be implemented and only these three new methods were used to set up a network, does that imply that the contents of /etc/cni/net.d/ are no longer needed or available to the runtime, as e.g. a netPlugin.GetConfig() equivalent would not exist? Similarly, cniNetConfSyncer also disappears, as the network config is now in the hands of an NRI plugin? Or does the runtime still keep track of files in /etc/cni/net.d/, and keep sending the opts arguments to Setup() and Remove(), as is the case now?
result, err := netPlugin.Setup(ctx, id, path, opts...)
Does the 'opts' argument here turn into a JSON-marshalled protobuf map<string, bytes>, unmarshalled at the other end into map[string]interface{} in Go, or did you have some other format in mind?
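For illustration, a minimal sketch of the encoding this question describes, assuming each opt value is JSON-marshalled into a map<string, bytes>-style field and decoded back into map[string]interface{} on the receiving side; function names and the example opt are purely illustrative:

```go
// Round-trip sketch: option values JSON-marshalled into a protobuf-style
// map of raw bytes on the runtime side and decoded back into
// map[string]interface{} in the plugin. Not the format defined by this PR.
package main

import (
	"encoding/json"
	"fmt"
)

// encodeOpts turns arbitrary option values into the wire form.
func encodeOpts(opts map[string]interface{}) (map[string][]byte, error) {
	wire := make(map[string][]byte, len(opts))
	for k, v := range opts {
		b, err := json.Marshal(v)
		if err != nil {
			return nil, err
		}
		wire[k] = b
	}
	return wire, nil
}

// decodeOpts reverses encodeOpts on the receiving side.
func decodeOpts(wire map[string][]byte) (map[string]interface{}, error) {
	opts := make(map[string]interface{}, len(wire))
	for k, b := range wire {
		var v interface{}
		if err := json.Unmarshal(b, &v); err != nil {
			return nil, err
		}
		opts[k] = v
	}
	return opts, nil
}

func main() {
	// Example: a CNI-style port mapping passed as an opt.
	in := map[string]interface{}{
		"portMappings": []map[string]interface{}{
			{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"},
		},
	}
	wire, _ := encodeOpts(in)
	out, _ := decodeOpts(wire)
	fmt.Println(out["portMappings"])
}
```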
as e.g. a netPlugin.GetConfig() equivalent would not exist? Similarly, cniNetConfSyncer also disappears, as the network config is now in the hands of an NRI plugin? Or does the runtime still keep track of files in /etc/cni/net.d/, and keep sending the opts arguments to Setup() and Remove(), as is the case now?
heh, that is the 1M dollar question
I think CNI solves a problem and works well solving that problem, but it is hard to extend. If we think about this as CNI 2.0, which is very similar per the issues linked (https://github.com/containernetworking/cni/issues?q=is%3Aissue+is%3Aopen+grpc), it would be logical to think about it as exclusive, so people should choose between v1 or v2 but never both at the same time. I do not know if that is even possible, though.
arguments to PreSetupNetwork includes an array of strings containing all JSON
formatted CNI configuration files associated with the pod.

The reply data consists of CNI capabilities and 'args' arguments. Any returned
"CNI capabilities" is not a great API, mainly because is a convention https://github.com/containernetworking/cni/blob/main/CONVENTIONS.md#current-conventions
Looking at the comments above, do we need plain 'opts' transported back and forth here?
Let's revisit this PR once KNI has taken shape. At that point we will learn whether any CNI-specific plugins will be needed, or whether plugins interested in network activity will listen to KNI instead.
@pfl want to reopen this?
Once KNI messaging gets specified, NRI plugins may perhaps hook into KNI directly. If needed, a non-KNI, NRI-based setup should be as close to a KNI one as possible in order to avoid excessive coding and added architecture. Right now I'm unfortunately (de)allocated to some DRA work until mid-Q3, hence the pause in this PR.
Implementation of NRI network messages. The messages added are:
- PreSetupNetwork and PostSetupNetwork
- NetworkConfigurationChanged
- PreNetworkDeleted and PostNetworkDeleted