ANI (Application Network Interface) standardizes multi-cloud networking and its core constructs, enabling seamless provisioning of connectivity and access control. Traditionally, NetOps and SecOps teams have managed vendor-specific networks; however, as cloud-native and microservices architectures evolve, NetOps, SecOps, and CloudOps must adopt a more automated, standardized approach to multi-cloud and WAN networking. ANI provides interfaces that allow these teams to configure network resources and enforce security controls without delving into the specifics of each cloud or WAN vendor.
AWI (Application WAN Interface), a subset of ANI, offers a programmable interface for SD-WAN and CloudWAN controllers. It enables teams to establish enterprise WAN connectivity with defined service levels and access policies, while supporting vendor plugins.
Together, ANI and AWI create an open, plugin-based ecosystem that simplifies multi-cloud networking, streamlines network management, and connects network domains for secure, efficient operations.
- [awi-grpc] This repository contains the interface definitions for 1) network domain connectivity and access control and 2) application access control. The interfaces are defined both in YAML and through protobuf. Connection-related YAML files can be found [here]. A hypothetical connection manifest is sketched after this list.
- [awi-infra-guard] This repository allows discovery of resources in a cloud provider (AWS/GCP/Azure) environment that can be used in the context of a connection. A resource could be a VPC, subnet, instance, Kubernetes service, namespace, etc. This is used within kube-awi. It can also run independently on macOS/Linux.
- [awi-catalyst-sdwan-operator] AWI Kubernetes operator for Catalyst SDWAN. With the AWI operator installed, users can connect VPCs and VRFs in a multi-cloud environment using kubectl (see the sketch after this list).
- [awi-install] Helm chart(s) to install the Kubernetes operator. Also includes a script for a full-stack installation.
- [awi-cli] The AWI CLI allows users to leverage the AWI ecosystem from a non-Kubernetes environment.
- [awi-grpc-catalyst-sdwan] AWI gRPC plugin for the Cisco Catalyst SDWAN controller. This plugin is used within kube-awi. It can also run independently on macOS/Linux.
- [catalyst-sdwan-app-client] Catalyst SDWAN controller application client. This is not officially supported by the Cisco Catalyst SDWAN team. It is used as a package within the Catalyst SDWAN controller plugin.
- [kubernetes-discovery] Allows discovery of Kubernetes clusters, pods, and services. It also watches resources for changes and provides notifiers. This is used within AWI Infra Guard as a library to discover Kubernetes resources, but can be used independently as well.
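To make the connection interface concrete, the minimal sketch below shows what a kubectl-applied connection manifest might look like. It is an illustration only: the API group, kind, and field names here are assumptions, not the published awi-grpc or kube-awi schema; refer to the YAML definitions in awi-grpc for the authoritative format.

```yaml
# Hypothetical sketch only. The API group, kind, and field names below are
# illustrative assumptions, not the actual awi-grpc/kube-awi schema.
apiVersion: awi.example.com/v1alpha1        # placeholder API group/version
kind: InterNetworkDomainConnection          # illustrative kind name
metadata:
  name: dev-vpc-to-prod-segment
spec:
  source:
    networkDomain:
      selector:
        matchName: dev-vpc                  # e.g. an AWS VPC discovered by awi-infra-guard
  destination:
    networkDomain:
      selector:
        matchName: prod-segment             # e.g. a VRF/segment managed by the SDWAN controller
```

With the Catalyst SDWAN operator installed, a DevOps user would apply such a manifest with kubectl, and the operator would translate the intent into controller-specific provisioning through the gRPC plugin.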
Thank you for your interest in contributing to awi-infra-guard! Please make sure you read the full code of conduct before making any contribution.
Before contributing to this repository, please first create an issue discussing the change you wish to make, or discuss it via email with one of the owners of the project.
We kindly ask you to follow this code of conduct in all your interactions with the project.
Before doing any work, please make sure you are working on a local fork of the project. For more information and instructions on how to do so, please refer to GitHub's contributing guide.
Before reporting a new issue, please ensure that the issue was not already reported or fixed by searching through the repository's issues list.
When creating a new issue, please be sure to include a title and clear description, as much relevant information as possible, and, if possible, a test case.
If you discover a security bug, please do not report it through GitHub. Instead, please see security procedures in SECURITY.md.
- Ensure any install or build dependencies are removed before asking to merge your code.
- Update the README.md with details of changes to the interface, including new environment variables, exposed ports, file locations, and container parameters.
- Increase the version numbers in any example files and the README.md to the new version that this Pull Request would represent. The versioning scheme we use is SemVer.
- Alternatively, you may ask one (or more) of the code owners to review and merge your Pull Request.
We welcome anyone who wants to contribute to awi-infra-guard by triaging and replying to open issues to help troubleshoot and fix existing bugs. Here is what you can do:
- Help ensure that existing issues follow the recommendations from the Reporting Issues section, providing feedback to the issue's author on what might be missing.
- Review and update the existing content of our documentation with up-to-date instructions and code samples.
- Review existing pull requests and test patches against real applications that use awi-infra-guard.
- Write a test, or add a missing test case to an existing test.
Below are the problems we are trying to address:
Distributed applications have connectivity needs that span multiple networking domains such as datacenters, VPCs (public cloud), campuses, and co-location facilities. Line of Business (LOB) product teams and IT business application teams need to talk to the NetOps team to provision connectivity across these sites for their distributed application deployments. These teams express their connectivity requirements to the NetOps team via email, Slack messages, service tickets, or shared documents, adding considerable toil and making the overall process tedious. Most DevOps teams have adopted Agile development processes and use CI/CD pipelines to deploy product artifacts. Today, connectivity provisioning is an impediment to their productivity because it slows the deployment process. Sometimes they prefer taking shortcuts (e.g., hosting many different LOB apps in a single VPC) to avoid dynamic connectivity provisioning, thereby compromising security and operational efficiency. Please see the customer conversation section for details.
AWI solves this problem by providing an open, standard connectivity interface that lets DevOps provision connectivity across networking domains from within compute infrastructure such as Kubernetes, using tools like kubectl that DevOps teams are already familiar with.
Once connectivity is provisioned, AWI data models allow Dev(Sec)Ops teams to perform workload segmentation across networking domains. Only ABAC (Attribute-Based Access Control) and segmentation-based security are supported at the moment.
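As an illustration of ABAC-style segmentation, the hypothetical manifest below permits only labeled traffic between two workloads in different network domains. The kind, API group, and field names are placeholders chosen for the example, not the published AWI data model.

```yaml
# Hypothetical sketch: ABAC-style workload segmentation intent.
# Kind, API group, and field names are placeholders, not the actual AWI schema.
apiVersion: awi.example.com/v1alpha1        # placeholder API group/version
kind: AccessControlPolicy                   # illustrative kind name
metadata:
  name: allow-frontend-to-payments
spec:
  action: allow                             # permit only traffic matching the attributes below
  source:
    matchLabels:
      app: frontend                         # e.g. workloads labeled in one cluster or VPC
  destination:
    matchLabels:
      app: payments                         # e.g. workloads in another network domain
    ports:
      - 443
```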
AWI allows development teams to provision connectivity for a specific workload. In the backend networking infrastructure, traffic for a specific workload can be routed through an underlay provider network (such as Megaport or Equinix) and an end-to-end network SLA can be maintained.
The cloud-native ecosystem is fragmented, with proprietary networking domains, compute clusters, Container Network Interfaces (CNIs), and service meshes. There is no standard way of provisioning connectivity across heterogeneous compute clusters. We believe an SDWAN or cloud networking controller is the glue that can help enterprises connect cloud-native compute clusters.
A standards-based connectivity interface exposed through the Cloud Networking / SDWAN connectivity domain would help enterprises adopt the controller software as the connectivity provider across their distributed compute clusters and workloads.
Is AWI only for DevOps and the application domain? What about the DevOps access and authorization process?
AWI is being designed for DevOps consumption, so that inter-cluster workload connections can be provisioned from within the application domain.
SDWAN/Cloud WAN controller vendor implementations would need to provide mechanisms for a DevOps authorization flow, so that NetOps teams remain in complete control of the SDWAN functions and services. This ensures that DevOps automation happens in the context of the networking and security policy set up by the NetOps team. This authorization process is outside the scope of AWI. We have created an authorization mechanism for Cisco SDWAN.
NetOps admins can also use the intent-based interface to provision connectivity should they choose to. Controller or API access and authorization is based on the user credentials that are provisioned within the AWI operator or CLI. AWI inherently does not specify who can or cannot use the API (Application Programming Interface).
An open ecosystem and standardization would accelerate adoption across the industry, and adoption by networking vendors would put hybrid/multi-cloud network controllers front and center as the default multi-network-domain connectivity infrastructure provider.
Today's SDWAN/Cloud Network vendor controllers are proprietary and expose proprietary interfaces. Compute infrastructure automation systems like Kubernetes have no integration with vendor controllers for external connectivity because of the need to deal with these differing proprietary interfaces. AWI would provide a vendor-agnostic interface that can be used from within Kubernetes, so that connectivity can be provisioned using kubectl. This would remove the need for Kubernetes maintainers to integrate with each vendor controller.