diff --git a/config.yaml b/config.yaml
index 211e4051..2041a5c2 100644
--- a/config.yaml
+++ b/config.yaml
@@ -36,9 +36,10 @@ params:
docs_search_index_name: index_name
docs_search_api_key: api_key
docs_versioning: true
- docs_latest: v1.14.2
+ docs_latest: v1.15.0
docs_versions:
- main
+ - v1.15.0
- v1.14.2
- v1.14.1
- v1.14.0
diff --git a/content/docs/v1.15.0/ADOPTERS.md b/content/docs/v1.15.0/ADOPTERS.md
new file mode 100644
index 00000000..5b692f64
--- /dev/null
+++ b/content/docs/v1.15.0/ADOPTERS.md
@@ -0,0 +1,105 @@
+# Antrea Adopters
+
+
+{{< img alt="glasnostic.com" src="docs/assets/adopters/glasnostic-logo.png"height="50" >}}
+
+
+{{< img alt="https://www.transwarp.io" src="docs/assets/adopters/transwarp-logo.png"height="50" >}}
+
+
+{{< img alt="https://www.terasky.com" src="docs/assets/adopters/terasky-logo.png"height="50" >}}
+
+## Success Stories
+
+Below is a list of adopters of Antrea that have publicly shared the details
+of how they use it.
+
+**[Glasnostic](https://glasnostic.com)**
+
+Glasnostic makes modern cloud operations resilient. It does this by shaping how
+systems interact, automatically and in real time. As a result, DevOps and SRE
+teams can deploy reliably, prevent failure and assure the customer experience.
+We use Antrea's Open vSwitch support to tune how services interact in Kubernetes
+clusters. We are @glasnostic on Twitter.
+
+**[Transwarp](https://www.transwarp.io)**
+
+Transwarp is committed to building enterprise-level big data infrastructure
+software, providing enterprises with infrastructure software and support
+across the whole data lifecycle to build the data world of the future.
+
+1. We use Antrea's AntreaClusterNetworkPolicy and AntreaNetworkPolicy to protect
+big data software for every tenant of our Kubernetes platform.
+2. We use Antrea's Open vSwitch to support Pod-to-Pod networking between Flannel
+and Antrea clusters, as well as between Antrea clusters.
+3. We use Antrea's Open vSwitch to support Pod-to-Pod networking between Flannel
+and Antrea Nodes within a single cluster during upgrades.
+4. We use Antrea's Egress feature to preserve the original source IP, so that
+internal Pods can see the real source IP of each request.
+
+You can contact us with
+
+**[TeraSky](https://terasky.com)**
+
+TeraSky is a Global Advanced Technology Solutions Provider.
+Antrea is used in our internal Kubernetes clusters as well as by many of our customers.
+Antrea helps us apply a very strong and flexible security model in Kubernetes.
+We make heavy use of Antrea Cluster Network Policies, Antrea Network Policies,
+and the Egress functionality.
+
+We are @TeraSkycom1 on Twitter.
+
+## Adding yourself as an Adopter
+
+It would be great to have your success story and logo on our list of
+Antrea adopters!
+
+To add yourself, you can follow the steps outlined below. Alternatively,
+feel free to reach out via Slack or on GitHub to have our team
+add your success story and logo.
+
+1. Prepare your addition and PR as described in the Antrea
+[Contributor Guide](CONTRIBUTING.md).
+
+2. Add your name to the success stories, using **bold** format with a link to
+your web site like this: `**[Example](https://example.com)**`
+
+3. Below your name, describe your organization or yourself and how you make
+use of Antrea. Optionally, list the features of Antrea you are using. Please
+keep the line width at 80 characters maximum, and avoid trailing spaces.
+
+4. If you are willing to share contact details (e.g. your Twitter handle),
+add a line where people can find you.
+
+ Example:
+
+ ```markdown
+ **[Example](https://example.com)**
+ Example.com is a company operating internationally, focusing on creating
+ documentation examples. We are using Antrea in our K8s clusters deployed
+ using Kubeadm. We are making use of Antrea's Network Policy capabilities.
+ You can reach us on Twitter @vmwopensource.
+ ```
+
+5. (Optional) To add your logo, simply drop your logo in PNG or SVG format with
+a maximum size of 50KB into the [adopters](docs/assets/adopters) directory.
+Name the image file something that reflects your company (e.g., if your company
+is called Acme, name the image acme-logo.png). Then add an inline HTML link
+directly below the [Antrea Adopters section](#antrea-adopters). Use the
+following format:
+
+ ```html
+
+ {{< img alt="example.com" src="docs/assets/adopters/example-logo.png" height="50" >}}
+ ```
+
+6. Send a PR with your addition as described in the Antrea
+[Contributor Guide](CONTRIBUTING.md).
+
+## Adding a logo to Antrea.io
+
+We are working on adding an *Adopters* section on [antrea.io][1].
+Follow the steps above to add your organization to the list of Antrea Adopters.
+We will follow up and publish it to the [antrea.io][1] website.
+
+[1]: https://antrea.io
diff --git a/content/docs/v1.15.0/CHANGELOG.md b/content/docs/v1.15.0/CHANGELOG.md
new file mode 100644
index 00000000..1a4acaea
--- /dev/null
+++ b/content/docs/v1.15.0/CHANGELOG.md
@@ -0,0 +1 @@
+Changelogs have been moved to the [CHANGELOG](https://github.com/antrea-io/antrea/blob/v1.15.0/CHANGELOG) directory.
diff --git a/content/docs/v1.15.0/CODE_OF_CONDUCT.md b/content/docs/v1.15.0/CODE_OF_CONDUCT.md
new file mode 100644
index 00000000..94d03ef9
--- /dev/null
+++ b/content/docs/v1.15.0/CODE_OF_CONDUCT.md
@@ -0,0 +1,3 @@
+# Community Code of Conduct
+
+Project Antrea follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
diff --git a/content/docs/v1.15.0/CONTRIBUTING.md b/content/docs/v1.15.0/CONTRIBUTING.md
new file mode 100644
index 00000000..57f9eed8
--- /dev/null
+++ b/content/docs/v1.15.0/CONTRIBUTING.md
@@ -0,0 +1,410 @@
+# Developer Guide
+
+Thank you for taking the time to contribute to project Antrea!
+
+This guide will walk you through the process of making your first commit and how
+to effectively get it merged upstream.
+
+
+- [Getting Started](#getting-started)
+ - [Accounts Setup](#accounts-setup)
+- [Contribute](#contribute)
+ - [Git Client Hooks](#git-client-hooks)
+ - [GitHub Workflow](#github-workflow)
+ - [Getting reviewers](#getting-reviewers)
+ - [Getting your PR verified by CI](#getting-your-pr-verified-by-ci)
+ - [Cherry-picks to release branches](#cherry-picks-to-release-branches)
+ - [Conventions for Writing Documentation](#conventions-for-writing-documentation)
+ - [Inclusive Naming](#inclusive-naming)
+ - [Building and testing your change](#building-and-testing-your-change)
+ - [Reverting a commit](#reverting-a-commit)
+ - [Sign-off Your Work](#sign-off-your-work)
+- [Issue and PR Management](#issue-and-pr-management)
+ - [Filing An Issue](#filing-an-issue)
+ - [Issue Triage](#issue-triage)
+ - [Issue and PR Kinds](#issue-and-pr-kinds)
+
+
+## Getting Started
+
+To get started, let's ensure you have completed the following prerequisites for
+contributing to project Antrea:
+
+1. Read and observe the [code of conduct](CODE_OF_CONDUCT.md).
+2. Check out the [Architecture document](docs/design/architecture.md) for the Antrea
+ architecture and design.
+3. Set up necessary [accounts](#accounts-setup).
+
+Now that you're set up, skip ahead to learn how to [contribute](#contribute).
+
+### Accounts Setup
+
+At a minimum, you need the following accounts for effective participation:
+
+1. **GitHub**: Committing any change requires you to have a [GitHub
+   account](https://github.com/join).
+2. **Slack**: Join the [Kubernetes Slack](http://slack.k8s.io/) and look for our
+   [#antrea](https://kubernetes.slack.com/messages/CR2J23M0X) channel.
+3. **Google Group**: Join our [mailing list](https://groups.google.com/forum/#!forum/projectantrea-dev).
+
+## Contribute
+
+There are multiple ways in which you can contribute: code contributions in the
+form of new features or bug fixes, or non-code contributions like helping with
+code reviews, triaging bugs, updating documentation, filing
+[new issues](#filing-an-issue), or writing blogs/manuals.
+
+To help you get your hands "dirty", there is a list of
+[starter](https://github.com/antrea-io/antrea/labels/Good%20first%20issue)
+issues from which you can choose.
+
+### Git Client Hooks
+
+There are a few recommended git client hooks which we advise you to use. You can
+find them here:
+[hack/git_client_side_hooks](hack/git_client_side_hooks).
+You can run `make install-hooks` to copy them to your local `.git/hooks/`
+folder, and remove them via `make uninstall-hooks`, as shown below.
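+
+For example (a minimal sketch, run from the root of your local Antrea clone):
+
+```bash
+# copy the recommended client-side hooks to .git/hooks/
+make install-hooks
+# remove them later if needed
+make uninstall-hooks
+```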
+
+### GitHub Workflow
+
+Developers work in their own forked copy of the repository and when ready,
+submit pull requests to have their changes considered and merged into the
+project's repository.
+
+1. Fork your own copy of the repository to your GitHub account by clicking on
+ the `Fork` button on [Antrea's GitHub repository](https://github.com/antrea-io/antrea).
+2. Clone the forked repository on your local setup.
+
+ ```bash
+ git clone https://github.com/$user/antrea
+ ```
+
+ Add a remote upstream to track the upstream Antrea repository.
+
+ ```bash
+ git remote add upstream https://github.com/antrea-io/antrea
+ ```
+
+ Never push to the upstream remote:
+
+ ```bash
+ git remote set-url --push upstream no_push
+ ```
+
+3. Create a topic branch.
+
+ ```bash
+ git checkout -b branchName
+ ```
+
+4. Make your changes and commit them locally. Make sure that your commit is
+ [signed](#sign-off-your-work).
+
+ ```bash
+ git add <changed files>
+ git commit -s
+ ```
+
+5. Keep your branch in sync with upstream.
+
+ ```bash
+ git checkout branchName
+ git fetch upstream
+ git rebase upstream/main
+ ```
+
+6. Push local branch to your forked repository.
+
+ ```bash
+ git push -f $remoteBranchName branchName
+ ```
+
+7. Create a Pull request on GitHub.
+ Visit your fork at `https://github.com/$user/antrea` and click the
+ `Compare & Pull Request` button next to your `remoteBranchName` branch.
+
+### Getting reviewers
+
+Once you have opened a Pull Request (PR), reviewers will be assigned to your
+PR and they may provide review comments which you need to address.
+Commit changes made in response to review comments to the same branch on your
+fork. Once a PR is ready to merge, squash any *fix review feedback*, *typo*,
+and *merged* sorts of commits.
+
+To make it easier for reviewers to review your PR, consider the following:
+
+1. Follow the golang [coding conventions](https://github.com/golang/go/wiki/CodeReviewComments)
+ and check out this [document](https://github.com/tnqn/code-review-comments#code-review-comments)
+ for common comments we made during reviews and suggestions for fixing them.
+2. Format your code with `make golangci-fix`; if the [linters](https://github.com/antrea-io/antrea/blob/v1.15.0/ci/README.md) flag an issue that
+ cannot be fixed automatically, an error message will be displayed so you can address the issue.
+3. Follow [git commit](https://chris.beams.io/posts/git-commit/) guidelines.
+4. Follow [logging](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md) guidelines.
+5. Please refer to [Conventions for Writing Documentation](#conventions-for-writing-documentation) for
+spelling conventions when writing documentation or commenting code.
+
+If your PR fixes a bug or implements a new feature, add the appropriate test
+cases to our [automated test suite](https://github.com/antrea-io/antrea/blob/v1.15.0/ci/README.md) to guarantee enough
+coverage. A PR that makes significant code changes without contributing new test
+cases will be flagged by reviewers and will not be accepted.
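+
+For example, a quick local pass before pushing updates to your PR might look
+like this (a sketch using the Makefile targets mentioned in this guide):
+
+```bash
+# format / lint the code and apply automatic fixes where possible
+make golangci-fix
+# run all Go unit tests
+make test-unit
+```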
+
+### Getting your PR verified by CI
+
+Your PR must pass the CI checks before it can be merged. CI also helps surface
+possible bugs before the review work starts. Once you create a PR, or push new
+commits to it, the CI checks at the bottom of the PR page will be refreshed.
+Checks include GitHub Actions checks and Jenkins checks. GitHub Actions checks
+are triggered automatically when you push to the head branch of the PR, but
+Jenkins checks need to be triggered manually with comments. Please note that if
+you are a first-time contributor, the GitHub workflows need approval from
+someone with write access to the repo; this is a GitHub security mechanism.
+
+Here are the trigger phrases for individual checks:
+
+* `/test-e2e`: Linux IPv4 e2e tests
+* `/test-conformance`: Linux IPv4 conformance tests
+* `/test-networkpolicy`: Linux IPv4 networkpolicy tests
+* `/test-all-features-conformance`: Linux IPv4 conformance tests with all features enabled
+* `/test-windows-e2e`: Windows IPv4 e2e tests
+* `/test-windows-conformance`: Windows IPv4 conformance tests
+* `/test-windows-networkpolicy`: Windows IPv4 networkpolicy tests
+* `/test-ipv6-e2e`: Linux dual stack e2e tests
+* `/test-ipv6-conformance`: Linux dual stack conformance tests
+* `/test-ipv6-networkpolicy`: Linux dual stack networkpolicy tests
+* `/test-ipv6-only-e2e`: Linux IPv6 only e2e tests
+* `/test-ipv6-only-conformance`: Linux IPv6 only conformance tests
+* `/test-ipv6-only-networkpolicy`: Linux IPv6 only networkpolicy tests
+* `/test-flexible-ipam-e2e`: Flexible IPAM e2e tests
+* `/test-multicast-e2e`: Multicast e2e tests
+* `/test-multicluster-e2e`: Multicluster e2e tests
+* `/test-vm-e2e`: ExternalNode e2e tests
+* `/test-whole-conformance`: All conformance tests on Linux
+* `/test-hw-offload`: Hardware offloading e2e tests
+* `/test-rancher-e2e`: Linux IPv4 e2e tests on Rancher clusters
+* `/test-rancher-conformance`: Linux IPv4 conformance tests on Rancher clusters
+* `/test-rancher-networkpolicy`: Linux IPv4 networkpolicy tests on Rancher clusters
+
+Here are the trigger phrases for groups of checks:
+
+* `/test-all`: Linux IPv4 tests
+* `/test-windows-all`: Windows IPv4 tests, including e2e tests with proxyAll enabled. It also includes all containerd runtime based Windows tests since 1.10.0.
+* `/test-ipv6-all`: Linux dual stack tests
+* `/test-ipv6-only-all`: Linux IPv6 only tests
+
+In addition, you can skip a check with `/skip-*`, e.g. `/skip-e2e` to skip the
+Linux IPv4 e2e tests.
+
+Skipping a check should be used only when the change doesn't influence the
+specific function. For example:
+
+* doc change: skip all checks
+* comment change: skip all checks
+* test/e2e/* change: skip conformance and networkpolicy checks
+* *_windows.go change: skip Linux checks
+
+Besides skipping specific checks, you can also cancel all stale running or
+waiting CAPV Jenkins jobs related to your PR with `/stop-all-jobs`.
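+
+For example, to manually trigger the Jenkins Windows checks on a PR and skip
+the Linux IPv4 e2e tests, you would leave comments like these:
+
+```text
+/test-windows-all
+/skip-e2e
+```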
+
+For more information about the tests we run as part of CI, please refer to
+[ci/README.md](https://github.com/antrea-io/antrea/blob/v1.15.0/ci/README.md).
+
+### Cherry-picks to release branches
+
+If your PR fixes a critical bug, it may need to be backported to older release
+branches which are still maintained. If this is the case, one of the Antrea
+maintainers will let you know once your PR is approved. Please refer to the
+documentation on [cherry-picks](docs/contributors/cherry-picks.md) for more
+information.
+
+### Conventions for Writing Documentation
+
+* The short name of `IP Security` should be `IPsec`, as per [RFC 6071](https://datatracker.ietf.org/doc/html/rfc6071).
+* Any Kubernetes object in logs/comments should start with upper case, e.g. Namespace, Pod, Service.
+
+### Inclusive Naming
+
+For symbol names and documentation, do not introduce new usage of harmful
+language such as 'master / slave' (or 'slave' independent of 'master') and
+'blacklist / whitelist'. For more information about what constitutes harmful
+language and for a reference word replacement list, please refer to the
+[Inclusive Naming Initiative](https://inclusivenaming.org/).
+
+We are committed to removing all harmful language from the project. If you
+detect existing usage of harmful language in code or documentation, please
+report the issue to us or open a Pull Request to address it directly. Thanks!
+
+### Building and testing your change
+
+To build the Antrea Docker image together with all Antrea bits, you can simply
+do:
+
+1. Check out your feature branch and `cd` into it.
+2. Run `make`.
+
+The second step will compile the Antrea code in a `golang` container, and build
+an Ubuntu-based Docker image that includes all the generated binaries. [`Docker`](https://docs.docker.com/install)
+must be installed on your local machine in advance. If you are a macOS user and
+cannot use [Docker Desktop](https://www.docker.com/products/docker-desktop) to
+contribute to Antrea for licensing reasons, check out this
+[document](docs/contributors/docker-desktop-alternatives.md) for possible
+alternatives.
+
+Alternatively, you can build the Antrea code in your local Go environment. The
+Antrea project uses the [Go modules support](https://github.com/golang/go/wiki/Modules) which was introduced in Go 1.11. It
+facilitates dependency tracking and no longer requires projects to live inside
+the `$GOPATH`.
+
+To develop locally, you can follow these steps:
+
+ 1. [Install Go 1.21](https://golang.org/doc/install)
+ 2. Check out your feature branch and `cd` into it.
+ 3. To build all Go files and install them under `bin`, run `make bin`
+ 4. To run all Go unit tests, run `make test-unit`
+ 5. To build the Antrea Ubuntu Docker image separately with the binaries generated in step 3, run `make ubuntu`
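+
+Putting these together, a typical local development loop might look like this
+(a sketch; `my-feature-branch` is a placeholder):
+
+```bash
+git checkout my-feature-branch
+make bin        # build all Go binaries under bin
+make test-unit  # run all Go unit tests
+make ubuntu     # build the Antrea Ubuntu Docker image
+```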
+
+### Reverting a commit
+
+1. Create a branch in your forked repo
+
+ ```bash
+ git checkout -b revertName
+ ```
+
+2. Sync the branch with upstream
+
+ ```bash
+ git fetch upstream
+ git rebase upstream/main
+ ```
+
+3. Create a revert based on the SHA of the commit. The commit needs to be
+ [signed](#sign-off-your-work).
+
+ ```bash
+ git revert -s SHA
+ ```
+
+4. Push this new commit.
+
+ ```bash
+ git push $remoteRevertName revertName
+ ```
+
+5. Create a Pull Request on GitHub.
+ Visit your fork at `https://github.com/$user/antrea` and click the
+ `Compare & Pull Request` button next to your `remoteRevertName` branch.
+
+### Sign-off Your Work
+
+As a CNCF project, Antrea must enforce the [Developer Certificate of
+Origin](https://developercertificate.org/) (DCO) on all Pull Requests. We
+require that for all commits constituting the Pull Request, the commit message
+contains the `Signed-off-by` line with an email address that matches the commit
+author. By adding this line to their commit messages, contributors *sign-off*
+that they adhere to the requirements of the DCO.
+
+Git provides the `-s` command-line option to append the required line
+automatically to the commit message:
+
+```bash
+git commit -s -m 'This is my commit message'
+```
+
+For an existing commit, you can also use this option with `--amend`:
+
+```bash
+git commit -s --amend
+```
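+
+If your branch already contains multiple commits that are missing the
+`Signed-off-by` line, one way to fix them all at once (an assumption, not a
+workflow prescribed by this guide) is to rebase with the `--signoff` option:
+
+```bash
+# re-apply all commits on top of upstream/main, adding a Signed-off-by line
+git rebase --signoff upstream/main
+```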
+
+If more than one person works on something, it's possible for more than one
+person to sign-off on it. For example:
+
+```text
+Signed-off-by: Some Developer <somedev@example.com>
+Signed-off-by: Another Developer <anotherdev@example.com>
+```
+
+We use the [DCO Github App](https://github.com/apps/dco) to enforce that all
+commits in a Pull Request include the required `Signed-off-by` line. If this is
+not the case, the app will report a failed status for the Pull Request and it
+will be blocked from being merged.
+
+Compared to our earlier CLA, DCO tends to make the experience simpler for new
+contributors. If you are contributing as an employee, there is no need for your
+employer to sign anything; the DCO assumes you are authorized to submit
+contributions (it's your responsibility to check with your employer).
+
+## Issue and PR Management
+
+We use labels and workflows (some manual, some automated with GitHub Actions) to
+help us manage triage, prioritize, and track issue progress. For a detailed
+discussion, see [docs/issue-management.md](docs/contributors/issue-management.md).
+
+### Filing An Issue
+
+Help is always appreciated. If you find something that needs fixing, please file
+an issue [here](https://github.com/antrea-io/antrea/issues). Please ensure
+that the issue is self-explanatory and has enough information for an assignee to
+get started.
+
+Before picking up a task, go through the existing
+[issues](https://github.com/antrea-io/antrea/issues) and make sure that your
+change is not already being worked on. If no existing issue covers it, please
+create a new issue and discuss it with other members.
+
+For simple contributions to Antrea, please ensure that this minimum set of
+labels is included on your issue:
+
+* **kind** -- common ones are `kind/feature`, `kind/support`, `kind/bug`,
+ `kind/documentation`, or `kind/design`. For an overview of the different types
+ of issues that can be submitted, see [Issue and PR
+ Kinds](#issue-and-pr-kinds).
+ The kind of issue will determine the issue workflow.
+* **area** (optional) -- if you know the area the issue belongs in, you can assign it.
+ Otherwise, another community member will label the issue during triage. The
+ area label will identify the area of interest an issue or PR belongs in and
+ will ensure the appropriate reviewers shepherd the issue or PR through to its
+ closure. For an overview of areas, see the
+ [`docs/github-labels.md`](docs/contributors/github-labels.md).
+* **size** (optional) -- if you have an idea of the size (lines of code,
+ complexity, effort) of the issue, you can label it using a size label. The
+ size can be updated during backlog grooming by contributors. This estimate is
+ used to guide the number of features selected for a milestone.
+
+All other labels will be assigned during issue triage.
+
+### Issue Triage
+
+Once an issue has been submitted, CI (GitHub Actions) or a human will
+review the submitted issue or PR to ensure that it has all relevant
+information. If information is lacking or there is another problem with the
+submitted issue, an appropriate `triage/*` label will be applied.
+
+After an issue has been triaged, the maintainers can prioritize the issue with
+an appropriate `priority/*` label.
+
+Once an issue has been submitted, categorized, triaged, and prioritized, it
+is marked as `ready-to-work`. A ready-to-work issue should have labels
+indicating assigned areas, prioritization, and should not have any remaining
+triage labels.
+
+### Issue and PR Kinds
+
+Use a `kind` label to describe the kind of issue or PR you are submitting. Valid
+kinds include:
+
+* [`kind/api-change`](docs/contributors/issue-management.md#api-change) -- for API changes
+* [`kind/bug`](docs/contributors/issue-management.md#bug) -- for filing a bug
+* [`kind/cleanup`](docs/contributors/issue-management.md#cleanup) -- for code cleanup and organization
+* [`kind/deprecation`](docs/contributors/issue-management.md#deprecation) -- for deprecating a feature
+* [`kind/design`](docs/contributors/issue-management.md#design) -- for proposing a design or architectural change
+* [`kind/documentation`](docs/contributors/issue-management.md#documentation) -- for updating documentation
+* [`kind/failing-test`](docs/contributors/issue-management.md#failing-test) -- for reporting a failed test (may be
+ created with automation in the future)
+* [`kind/feature`](docs/contributors/issue-management.md#feature) -- for proposing a feature
+* [`kind/support`](docs/contributors/issue-management.md#support) -- to request support. You may also get support by
+ using our [Slack](https://kubernetes.slack.com/archives/CR2J23M0X) channel for
+ interactive help. If you have not set up the appropriate accounts, please
+ follow the instructions in [accounts setup](#accounts-setup).
+
+For more details on how we manage issues, please read our [Issue Management doc](docs/contributors/issue-management.md).
diff --git a/content/docs/v1.15.0/GOVERNANCE.md b/content/docs/v1.15.0/GOVERNANCE.md
new file mode 100644
index 00000000..a58ca425
--- /dev/null
+++ b/content/docs/v1.15.0/GOVERNANCE.md
@@ -0,0 +1,85 @@
+# Antrea Governance
+
+This document defines the project governance for Antrea.
+
+## Overview
+
+**Antrea** is committed to building an open, inclusive, productive and
+self-governing open source community focused on building a high-quality
+[Kubernetes Network
+Plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). The
+community is governed by this document which defines how all members should work
+together to achieve this goal.
+
+## Code of Conduct
+
+The Antrea community abides by this [code of conduct](CODE_OF_CONDUCT.md).
+
+## Community Roles
+
+* **Users:** Members that engage with the Antrea community via any medium
+ (Slack, GitHub, mailing lists, etc.).
+* **Contributors:** Members who contribute regularly to the Antrea project
+  (documentation, code reviews, responding to issues, participating in proposal
+  discussions, contributing code, etc.).
+* **Maintainers**: Responsible for the overall health and direction of the
+ project. They are the final reviewers of PRs and responsible for Antrea
+ releases.
+
+### Contributors
+
+Anyone can contribute to the project (e.g. open a PR) as long as they follow the
+guidelines in [CONTRIBUTING.md](CONTRIBUTING.md).
+
+Frequent contributors to the project can become members of the antrea-io Github
+organization and receive write access to the repository. Write access is
+required to trigger re-runs of workflows in [Github
+Actions](https://docs.github.com/en/actions/managing-workflow-runs/re-running-a-workflow). Becoming
+a member of the antrea-io Github organization does not come with additional
+responsibilities for the contributor, but simplifies the contributing
+process. To become a member, you may [open an
+issue](https://github.com/antrea-io/antrea/issues/new?template=membership.md&title=REQUEST%3A%20New%20membership%20for%20%3Cyour-GH-handle%3E)
+and your membership needs to be approved by two maintainers: approval is
+indicated by leaving a `+1` comment. If a contributor is not active for a
+duration of 12 months (no contribution of any kind), they may be removed from
+the antrea-io Github organization. In case of privilege abuse (members receive
+write access to the organization), any maintainer can decide to disable write
+access temporarily for the member. Within the next 2 weeks, the maintainer must
+either restore the member's privileges, or remove the member from the
+organization. The latter requires approval from at least one other maintainer,
+which must be obtained publicly either on Github or Slack.
+
+### Maintainers
+
+The list of current maintainers can be found in
+[MAINTAINERS.md](MAINTAINERS.md).
+
+While anyone can review a PR and is encouraged to do so, only maintainers are
+allowed to merge the PR. To maintain velocity, only one maintainer's approval is
+required to merge a given PR. In case of a disagreement between maintainers, a
+vote should be called (on Github or Slack) and a simple majority is required in
+order for the PR to be merged.
+
+New maintainers must be nominated from contributors by an existing maintainer
+and must be elected by a [supermajority](#supermajority) of the current
+maintainers. Likewise, maintainers can be removed by a supermajority of the
+maintainers or can resign by notifying the maintainers.
+
+### Supermajority
+
+A supermajority is defined as two-thirds of members in the group.
+
+## Code of Conduct
+
+The code of conduct is overseen by the Antrea project maintainers. Possible code
+of conduct violations should be emailed to the project maintainers at
+.
+
+If the possible violation is against one of the project maintainers that member
+will be recused from voting on the issue. Such issues must be escalated to the
+appropriate CNCF contact, and CNCF may choose to intervene.
+
+## Updating Governance
+
+All substantive changes in Governance require a supermajority vote of the
+maintainers.
diff --git a/content/docs/v1.15.0/MAINTAINERS.md b/content/docs/v1.15.0/MAINTAINERS.md
new file mode 100644
index 00000000..d3ab761f
--- /dev/null
+++ b/content/docs/v1.15.0/MAINTAINERS.md
@@ -0,0 +1,11 @@
+# Antrea Maintainers
+
+This is the current list of maintainers for the Antrea project. The maintainer
+role is described in [GOVERNANCE.md](GOVERNANCE.md).
+
+| Maintainer | GitHub ID | Affiliation |
+| ---------- | --------- | ----------- |
+| Antonin Bas | antoninbas | VMware |
+| Jianjun Shen | jianjuns | VMware |
+| Quan Tian | tnqn | VMware |
+| Salvatore Orlando | salv-orlando | VMware |
diff --git a/content/docs/v1.15.0/README.md b/content/docs/v1.15.0/README.md
new file mode 100644
index 00000000..1c466508
--- /dev/null
+++ b/content/docs/v1.15.0/README.md
@@ -0,0 +1,137 @@
+# Antrea
+
+![Antrea Logo](docs/assets/logo/antrea_logo.svg)
+
+![Build Status](https://github.com/antrea-io/antrea/workflows/Go/badge.svg?branch=main)
+[![Go Report Card](https://goreportcard.com/badge/antrea.io/antrea)](https://goreportcard.com/report/antrea.io/antrea)
+[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4173/badge)](https://bestpractices.coreinfrastructure.org/projects/4173)
+[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+![GitHub release](https://img.shields.io/github/v/release/antrea-io/antrea?display_name=tag&sort=semver)
+[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea.svg?type=shield)](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_shield)
+
+## Overview
+
+Antrea is a [Kubernetes](https://kubernetes.io) networking solution intended
+to be Kubernetes native. It operates at Layer 3/4 to provide networking and
+security services for a Kubernetes cluster, leveraging
+[Open vSwitch](https://www.openvswitch.org/) as the networking data plane.
+
+
+
+Open vSwitch is a widely adopted high-performance programmable virtual
+switch; Antrea leverages it to implement Pod networking and security features.
+For instance, Open vSwitch enables Antrea to implement Kubernetes
+Network Policies in a very efficient manner.
+
+## Prerequisites
+
+Antrea has been tested with Kubernetes clusters running version 1.16 or later.
+
+* `NodeIPAMController` must be enabled in the Kubernetes cluster.\
+  When deploying a cluster with kubeadm, the `--pod-network-cidr <cidr>`
+  option must be specified (see the example below).
+  Alternatively, the NodeIPAM feature of the Antrea Controller can be enabled
+  and configured.
+* The Open vSwitch kernel module must be present on every Kubernetes node.
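+
+For example, with kubeadm (a sketch; the Pod CIDR below is illustrative):
+
+```bash
+# verify the Open vSwitch kernel module is available on the Node
+modprobe openvswitch
+# initialize the cluster with a Pod network CIDR, which enables NodeIPAMController
+kubeadm init --pod-network-cidr=10.244.0.0/16
+```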
+
+## Getting Started
+
+Getting started with Antrea is very simple, and takes only a few minutes.
+See how it's done in the [Getting started](docs/getting-started.md) document.
+
+## Contributing
+
+The Antrea community welcomes new contributors. We are waiting for your PRs!
+
+* Before contributing, please get familiar with our
+[Code of Conduct](CODE_OF_CONDUCT.md).
+* Check out the Antrea [Contributor Guide](CONTRIBUTING.md) for information
+about setting up your development environment and our contribution workflow.
+* Learn about Antrea's [Architecture and Design](docs/design/architecture.md).
+Your feedback is more than welcome!
+* Check out [Open Issues](https://github.com/antrea-io/antrea/issues).
+* Join the Antrea [community](#community) and ask us any question you may have.
+
+### Community
+
+* Join the [Kubernetes Slack](http://slack.k8s.io/) and look for our
+[#antrea](https://kubernetes.slack.com/messages/CR2J23M0X) channel.
+* Check the [Antrea Team Calendar](https://calendar.google.com/calendar/embed?src=uuillgmcb1cu3rmv7r7jrhcrco%40group.calendar.google.com)
+ and join the developer and user communities!
+ + The [Antrea community meeting](https://broadcom.zoom.us/j/91668049513?pwd=WHpaYTE2eWhja0xUN21MRU1BWllYdz09),
+every two weeks on Tuesday at 5AM GMT+1 (United Kingdom time). See Antrea team calendar for localized times.
+ - [Meeting minutes](https://github.com/antrea-io/antrea/wiki/Community-Meetings)
+ - [Meeting recordings](https://www.youtube.com/playlist?list=PLuzde2hYeDBdw0BuQCYbYqxzoJYY1hfwv)
+ + [Antrea live office hours](https://antrea.io/live) archives.
+* Join our mailing lists to always stay up-to-date with Antrea development:
+ + [projectantrea-announce](https://groups.google.com/forum/#!forum/projectantrea-announce)
+for important project announcements.
+ + [projectantrea](https://groups.google.com/forum/#!forum/projectantrea)
+for updates about Antrea or provide feedback.
+ + [projectantrea-dev](https://groups.google.com/forum/#!forum/projectantrea-dev)
+to participate in discussions on Antrea development.
+
+Also check out [@ProjectAntrea](https://twitter.com/ProjectAntrea) on Twitter!
+
+## Features
+
+* **Kubernetes-native**: Antrea follows best practices to extend the Kubernetes
+ APIs and provide familiar abstractions to users, while also leveraging
+ Kubernetes libraries in its own implementation.
+* **Powered by Open vSwitch**: Antrea relies on Open vSwitch to implement all
+ networking functions, including Kubernetes Service load-balancing, and to
+ enable hardware offloading in order to support the most demanding workloads.
+* **Run everywhere**: Run Antrea in private clouds, public clouds and on bare
+ metal, and select the appropriate traffic mode (with or without overlay) based
+ on your infrastructure and use case.
+* **Comprehensive policy model**: Antrea provides a comprehensive network policy
+ model, which builds upon Kubernetes Network Policies with new features such as
+ policy tiering, rule priorities and cluster-level policies. Refer to the
+ [Antrea Network Policy documentation](docs/antrea-network-policy.md) for a
+ full list of features.
+* **Windows Node support**: Thanks to the portability of Open vSwitch, Antrea
+ can use the same data plane implementation on both Linux and Windows
+ Kubernetes Nodes.
+* **Multi-cluster networking**: Federate multiple Kubernetes clusters and
+ benefit from a unified data plane (including multi-cluster Services) and a
+ unified security posture. Refer to the [Antrea Multi-cluster documentation](docs/multicluster/user-guide.md)
+ to get started.
+* **Troubleshooting and monitoring tools**: Antrea comes with CLI and UI tools
+ which provide visibility and diagnostics capabilities (packet tracing, policy
+ analysis, flow inspection). It exposes Prometheus metrics and supports
+ exporting network flow information to collectors and analyzers.
+* **Network observability and analytics**: Antrea + [Theia](https://github.com/antrea-io/theia)
+ enable fine-grained visibility into the communication among Kubernetes
+ workloads. Theia provides visualization for Antrea network flows in Grafana
+ dashboards, and recommends Network Policies to secure the workloads.
+* **Network Policies for virtual machines**: Antrea native policies can be
+ enforced on non-Kubernetes Nodes including VMs and bare-metal servers. Project
+ [Nephe](https://github.com/antrea-io/nephe) implements security policies for
+ VMs across clouds, leveraging Antrea native policies.
+* **Encryption**: Encryption of inter-Node Pod traffic with IPsec or WireGuard
+ tunnels.
+* **Easy deployment**: Antrea is deployed by applying a single YAML manifest
+ file.
+
+To explore more Antrea features and their usage, check the [Getting started](docs/getting-started.md#features)
+document and user guides in the [Antrea documentation folder](docs/). Refer to
+the [Changelogs](https://github.com/antrea-io/antrea/blob/v1.15.0/CHANGELOG/README.md) for a detailed list of features
+introduced for each version release.
+
+## Adopters
+
+For a list of Antrea Adopters, please refer to [ADOPTERS.md](ADOPTERS.md).
+
+## Roadmap
+
+We are adding features very quickly to Antrea. Check out the list of features we
+are considering on our [Roadmap](ROADMAP.md) page. Feel free to throw your ideas
+in!
+
+## License
+
+Antrea is licensed under the [Apache License, version 2.0](https://github.com/antrea-io/antrea/blob/v1.15.0/LICENSE)
+
+[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2Fantrea-io%2Fantrea?ref=badge_large)
diff --git a/content/docs/v1.15.0/ROADMAP.md b/content/docs/v1.15.0/ROADMAP.md
new file mode 100644
index 00000000..e831599c
--- /dev/null
+++ b/content/docs/v1.15.0/ROADMAP.md
@@ -0,0 +1,125 @@
+# Antrea Roadmap
+
+This document lists the new features being considered for the future. The
+intention is for Antrea contributors and users to know what features could come
+in the near future, and to share feedback and ideas. Priorities for the project
+may change over time and so this roadmap is likely to evolve. The fact that a
+feature is not listed now does not mean it will not be considered for Antrea.
+We definitely welcome suggestions and ideas from everyone about the roadmap and
+Antrea features. Reach us through Issues, Slack and / or Google Group!
+
+## Roadmap Items
+
+### Antrea v2
+
+Antrea [version 2](https://github.com/antrea-io/antrea/issues/4832) is coming in
+2024. We are graduating some popular features to Beta or GA, deprecating some
+legacy APIs, dropping support for old K8s versions (< 1.19) to improve support
+for newer ones, and more! This is a big milestone for the project, stay tuned!
+
+### K8s Node security
+
+So far Antrea has focused on K8s Pod networking and security, but we would like
+to extend Antrea-native NetworkPolicies to cover protection of K8s Nodes
+too. There is ongoing work for this, so expect this feature very soon!
+
+### Quality of life improvements for installation and upgrade
+
+We have a few things planned to improve basic usability:
+
+* provide separate container images for the Agent and Controller: this will
+ reduce image size and speed up deployment of new Antrea versions.
+* support for installation and upgrade using the antctl CLI: this will provide
+ an alternative installation method and antctl will ensure that Antrea
+ components are upgraded in the right order to minimize workload disruption.
+* CLI tools to facilitate migration from another CNI: we will take care of
+ provisioning the correct network resources for your existing workloads.
+
+### Core networking features
+
+We are currently working on supporting VLAN tagging for Egress traffic. In the
+long term, we plan to add BGP support to the Antrea Agent, as it is a much
+requested feature.
+
+### Windows support improvements
+
+Antrea [supports Windows K8s Nodes](docs/windows.md). However, a few features,
+including Egress, NodePortLocal, and IPsec encryption, are not supported on
+Windows yet. We will continue to add more features for Windows (starting with
+Egress) and aim for feature parity with Linux. We encourage users to reach out if they
+would like us to prioritize a specific feature. While the installation procedure
+has improved significantly since we first added Windows support, we plan to keep
+on streamlining the procedure (more automation) and on improving the user
+documentation.
+
+### More robust FQDN support in Antrea NetworkPolicy
+
+Antrea provides a comprehensive network policy model, which builds upon K8s
+Network Policies and provides many additional capabilities. One of them is the
+ability to define policy rules using domain names (FQDNs). We think there is
+some room to improve user experience with this feature, and we are working on
+making it more stable.
+
+### Implementation of new upstream NetworkPolicy APIs
+
+[SIG Network](https://github.com/kubernetes/community/tree/master/sig-network)
+is working on [new standard APIs](https://network-policy-api.sigs.k8s.io/) to
+extend the base K8s NetworkPolicy resource. We are closely monitoring the
+upstream work and implementing these APIs as their development matures.
+
+### Better network troubleshooting with packet capture
+
+Antrea comes with many tools for network diagnostics and observability. You may
+already be familiar with Traceflow, which lets you trace a single packet through
+the Antrea network. We plan on also providing users with the ability to capture
+live traffic and export it in PCAP format. Think tcpdump, but for K8s and
+through a dedicated Antrea API!
+
+### Multi-network support for Pods
+
+We recently added the SecondaryNetwork feature, which supports provisioning
+additional networks for Pods, using the same constructs made popular by
+[Multus](https://github.com/k8snetworkplumbingwg/multus-cni). However, at the
+moment, options for network "types" are limited. We plan on supporting new use
+cases (e.g., secondary network overlays, network acceleration with DPDK), as
+well as on improving user experience for this feature (with some useful
+documentation).
+
+### L7 security policy
+
+Support for L7 NetworkPolicies was added in version 1.10, providing the ability
+to select traffic based on the application-layer context. However, the feature
+currently only supports HTTP and TLS traffic, and we plan to extend support to
+other protocols, such as DNS.
+
+### Multi-cluster networking
+
+Antrea can federate multiple K8s clusters, but this feature (introduced in
+version 1.7) is still considered Alpha today. Most of the functionality is
+already there (multi-cluster Services, cross-cluster connectivity,
+and multi-cluster NetworkPolicies), but we think there is some room for
+improvement when it comes to stability and usability.
+
+### NetworkPolicy scale and performance tests
+
+We are working on a framework to empower contributors and users to benchmark the
+performance of Antrea at scale.
+
+### Investigate better integration with service meshes
+
+As service meshes start introducing alternatives to the sidecar approach,
+we believe there is an opportunity to improve the synergy between the K8s
+network plugin and the service mesh provider. In particular, we are looking at
+how Antrea can integrate with the new Istio ambient data plane mode. Take a look
+at [#5682](https://github.com/antrea-io/antrea/issues/5682) for more
+information.
+
+### Investigate multiple replicas for the Controller
+
+While today the Antrea Controller can scale to 1000s of K8s Nodes and 100,000
+Pods, and failover to a new replica in case of failure can happen in under a
+minute, we believe we should still investigate the possibility of deploying
+multiple replicas for the Controller (Active-Active or Active-Standby), to
+enable horizontal scaling and achieve high-availability with very quick
+failover. Horizontal scaling could help reduce the memory footprint of each
+Controller instance for very large K8s clusters.
diff --git a/content/docs/v1.15.0/SECURITY.md b/content/docs/v1.15.0/SECURITY.md
new file mode 100644
index 00000000..6456edbd
--- /dev/null
+++ b/content/docs/v1.15.0/SECURITY.md
@@ -0,0 +1,81 @@
+# Security Procedures
+
+The Antrea community holds security in the highest regard.
+The community adopted this security disclosure policy to ensure vulnerabilities are responsibly handled.
+
+## Reporting a Vulnerability
+
+If you believe you have identified a vulnerability, please work with the Antrea maintainers to fix it and disclose the issue responsibly.
+All security issues, confirmed or suspected, should be reported privately.
+Please avoid using GitHub issues, and instead report the vulnerability to .
+
+A vulnerability report should be filed if any of the following applies:
+
+* You have discovered and confirmed a vulnerability in Antrea.
+* You believe Antrea might be vulnerable to some published [CVE](https://cve.mitre.org/cve/).
+* You have found a potential security flaw in Antrea but you're not yet sure whether there's a viable attack vector.
+* You have confirmed or suspect that one of Antrea's dependencies has a vulnerability.
+
+### Vulnerability report template
+
+Provide a descriptive subject and include the following information in the body:
+
+* Detailed steps to reproduce the vulnerability (scripts, screenshots, packet captures, manual procedures, etc.).
+* Describe the effects of the vulnerability on the Kubernetes cluster, on the applications running on it, and on the underlying infrastructure, if applicable.
+* How the vulnerability affects Antrea workflows.
+* Potential attack vectors and an estimation of the attack surface, if applicable.
+* Other software that was used to expose the vulnerability.
+
+## Responding to a vulnerability
+
+A coordinator is assigned to each reported security issue. The coordinator is a member of the Antrea maintainers team, and will drive the fix and disclosure process.
+At the moment reports are received via email at .
+The first steps performed by the coordinator are to confirm the validity of the report and send an embargo reminder to all parties involved.
+Antrea maintainers and issue reporters will review the issue for confirmation of impact and determination of affected components.
+
+With reference to the scale reported below, reported vulnerabilities will be disclosed and treated as regular issues if their issue risk is low (level 4 or higher on the scale).
+For these lower-risk issues the fix process will proceed with the usual GitHub workflow.
+
+### Reference taxonomy for issue risk
+
+1. Vulnerability must be fixed in main and any other supported branch.
+2. Vulnerability must be fixed in main only for next release.
+3. Vulnerability in experimental features or troubleshooting code.
+4. Vulnerability without a practical attack vector (e.g. needs GUID guessing).
+5. Not a vulnerability per se, but an opportunity to strengthen security (in code, architecture, protocols, and/or processes).
+6. Not a vulnerability or a strengthening opportunity.
+7. Vulnerability only exists in some PR or non-release branch.
+
+## Developing a patch for a vulnerability
+
+This part of the process applies only to confirmed vulnerabilities.
+The reporter and Antrea maintainers, plus anyone they deem necessary to develop and validate a fix, will be included in the discussion.
+
+**Please refrain from creating a PR for the fix!**
+
+A fix is proposed as a patch to the current main branch, formatted with:
+
+```bash
+git format-patch --stdout HEAD~1 > path/to/local/file.patch
+```
+
+and then sent to .
+
+**Please don't push the patch to the Antrea fork on your github account!**
+
+Patch review will be performed via email. Reviewers will suggest modifications and/or improvements, and then pre-approve it for merging.
+Pre-approval will ensure patches can be fast-tracked through public code review later at disclosure time.
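+
+For reference, a reviewer can apply an emailed patch locally with `git am` (a
+sketch; the path is a placeholder):
+
+```bash
+# apply the patch produced by git format-patch on top of the local main branch
+git am path/to/local/file.patch
+```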
+
+## Disclosing the vulnerability
+
+In preparation for this, at least one maintainer must be available to help push the fix at disclosure time.
+
+At disclosure time, one of the maintainers (or the reporter) will open an issue on GitHub and create a PR with the patch for the main branch and any other applicable branch.
+Available maintainers will fast-track approvals and merge the patch.
+
+Regardless of the owner of the issue and the corresponding PR, the original reporter and the submitter of the fix will be properly credited.
+As for the git history, the commit message and author of the pre-approved patch will be preserved in the final patch submitted into the Antrea repository.
+
+### Notes
+
+At the moment the Antrea project does not have a process to assign a CVE to a confirmed vulnerability.
diff --git a/content/docs/v1.15.0/_index.md b/content/docs/v1.15.0/_index.md
new file mode 100644
index 00000000..5b7b4cd0
--- /dev/null
+++ b/content/docs/v1.15.0/_index.md
@@ -0,0 +1,7 @@
+---
+cascade:
+ layout: docs
+ version: v1.15.0
+---
+
+{{% include-md "README.md" %}}
diff --git a/content/docs/v1.15.0/docs/admin-network-policy.md b/content/docs/v1.15.0/docs/admin-network-policy.md
new file mode 100644
index 00000000..4fac588f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/admin-network-policy.md
@@ -0,0 +1,119 @@
+# AdminNetworkPolicy API Support in Antrea
+
+## Table of Contents
+
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [Sample specs for AdminNetworkPolicy and BaselineAdminNetworkPolicy](#sample-specs-for-adminnetworkpolicy-and-baselineadminnetworkpolicy)
+ - [Relationship with Antrea-native Policies](#relationship-with-antrea-native-policies)
+
+
+## Introduction
+
+Kubernetes provides the NetworkPolicy API as a simple way for developers to control traffic flows of their applications.
+While NetworkPolicy is embraced throughout the community, it was designed for developers instead of cluster admins.
+Therefore, traits such as the lack of explicit deny rules make securing workloads at the cluster level difficult.
+The Network Policy API working group (a subproject of Kubernetes SIG-Network) has therefore introduced the
+[AdminNetworkPolicy APIs](https://network-policy-api.sigs.k8s.io/api-overview/), which aim to address cluster admin
+policy use cases.
+
+Starting with v1.13, Antrea supports the `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` API types, except for
+advanced Namespace selection mechanisms (namely `sameLabels` and `notSameLabels` rules) which are still in the
+experimental phase and not required as part of conformance.
+
+## Prerequisites
+
+AdminNetworkPolicy was introduced in v1.13 as an alpha feature and is disabled by default. A feature gate,
+`AdminNetworkPolicy`, must be enabled in antrea-controller.conf in the `antrea-config` ConfigMap when Antrea is deployed:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ AdminNetworkPolicy: true
+```
+
+Note that the `AdminNetworkPolicy` feature also requires the `AntreaPolicy` featureGate to be set to true, which is
+enabled by default since Antrea v1.0.
+
+In addition, the AdminNetworkPolicy CRD types need to be installed in the K8s cluster.
+Refer to [this document](https://network-policy-api.sigs.k8s.io/getting-started/) for more information.
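+
+As a sketch (the manifest URL and version below are assumptions; refer to the
+linked getting-started document for the authoritative instructions), the CRDs
+can typically be installed with a single manifest:
+
+```bash
+# install the AdminNetworkPolicy and BaselineAdminNetworkPolicy CRDs
+kubectl apply -f https://github.com/kubernetes-sigs/network-policy-api/releases/download/v0.1.1/install.yaml
+```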
+
+## Usage
+
+### Sample specs for AdminNetworkPolicy and BaselineAdminNetworkPolicy
+
+Please refer to the [examples page](https://network-policy-api.sigs.k8s.io/reference/examples/) of the network-policy-api
+repo, which contains several user stories for the AdminNetworkPolicy APIs, as well as sample specs for each user
+story. Shown below are sample specs of `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` for demonstration purposes:
+
+```yaml
+apiVersion: policy.networking.k8s.io/v1alpha1
+kind: AdminNetworkPolicy
+metadata:
+ name: cluster-wide-deny-example
+spec:
+ priority: 10
+ subject:
+ namespaces:
+ matchLabels:
+ kubernetes.io/metadata.name: sensitive-ns
+ ingress:
+ - action: Deny
+ from:
+ - namespaces:
+ namespaceSelector: {}
+ name: select-all-deny-all
+```
+
+```yaml
+apiVersion: policy.networking.k8s.io/v1alpha1
+kind: BaselineAdminNetworkPolicy
+metadata:
+ name: default
+spec:
+ subject:
+ namespaces: {}
+ ingress:
+ - action: Deny # zero-trust cluster default security posture
+ from:
+ - namespaces:
+ namespaceSelector: {}
+```
+
+Note that for a single cluster, the `BaselineAdminNetworkPolicy` resource is supported as a singleton with the name
+`default`.
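+
+After applying the specs above, you can check the created resources with
+kubectl (assuming the CRD plural names from the network-policy-api project):
+
+```bash
+kubectl get adminnetworkpolicies.policy.networking.k8s.io
+kubectl get baselineadminnetworkpolicies.policy.networking.k8s.io
+```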
+
+### Relationship with Antrea-native Policies
+
+AdminNetworkPolicy API objects and Antrea-native policies can co-exist with each other in the same cluster.
+
+AdminNetworkPolicy and BaselineAdminNetworkPolicy API types provide K8s upstream supported, cluster admin facing
+guardrails that are portable and CNI-agnostic. AntreaClusterNetworkPolicy and AntreaNetworkPolicy on the other hand,
+are designed for similar use cases but provide a richer feature set, including FQDN policies, nodeSelectors and L7 rules.
+See the [Antrea-native policy doc](antrea-network-policy.md) and [L7 policy doc](antrea-l7-network-policy.md) for details.
+
+Both the AdminNetworkPolicy objects and Antrea-native policy objects use a `priority` field to determine their precedence
+relative to other policy objects. The following diagram describes the relative precedence between the AdminNetworkPolicy
+API types and Antrea-native policy types:
+
+```text
+Antrea-native Policies (tier != baseline) >
+AdminNetworkPolicies >
+K8s NetworkPolicies >
+Antrea-native Policies (tier == baseline) >
+BaselineAdminNetworkPolicy
+```
+
+In other words, any Antrea-native policies that are not created in the `baseline` tier will have higher precedence over,
+and thus evaluated before, all AdminNetworkPolicies at any `priority`. Effectively, the AdminNetworkPolicy objects are
+associated with a tier priority lower than Antrea-native policies, but higher than K8s NetworkPolicies. Similarly,
+baseline-tier Antrea-native policies will have a higher precedence over the BaselineAdminNetworkPolicy object.
+For more information on policy and rule precedence, refer to [this section](antrea-network-policy.md#notes-and-constraints).
diff --git a/content/docs/v1.15.0/docs/aks-installation.md b/content/docs/v1.15.0/docs/aks-installation.md
new file mode 100644
index 00000000..7cded58a
--- /dev/null
+++ b/content/docs/v1.15.0/docs/aks-installation.md
@@ -0,0 +1,283 @@
+# Deploying Antrea on AKS and AKS Engine
+
+This document describes steps to deploy Antrea to an AKS cluster or an AKS
+Engine cluster.
+
+## Deploy Antrea to an AKS cluster
+
+Antrea can be deployed to an AKS cluster either in `networkPolicyOnly` mode or
+in `encap` mode.
+
+In `networkPolicyOnly` mode, Antrea enforces NetworkPolicies and implements
+other services for the AKS cluster, while the Azure CNI takes care of Pod IPAM
+and traffic routing across Nodes. For more information about `networkPolicyOnly`
+mode, refer to [this design document](design/policy-only.md).
+
+In `encap` mode, Antrea is in charge of Pod IPAM and of all the networking
+functions on the Nodes. Using `encap` mode provides access to additional Antrea
+features, such as Multicast, as inter-Node Pod traffic is encapsulated, and is
+not handled directly by the Azure Virtual Network. Note that the [caveats](eks-installation.md#deploying-antrea-in-encap-mode)
+which apply when deploying Antrea in `encap` mode on EKS do *not* apply for AKS.
+
+We recommend `encap` mode, as it will give you access to the most Antrea
+features.
+
+### AKS Prerequisites
+
+Install the Azure CLI. Refer to the [Azure CLI installation guide](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest).
+
+We recommend using the latest version available (use at least version 2.39.0).
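+
+You can check your installed version with:
+
+```bash
+az --version
+```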
+
+### Deploying Antrea in `networkPolicyOnly` mode
+
+#### Creating the cluster
+
+You can use any method to create an AKS cluster. The example given here uses the Azure CLI.
+
+1. Create an AKS cluster
+
+ ```bash
+ export RESOURCE_GROUP_NAME=aks-antrea-cluster
+ export CLUSTER_NAME=aks-antrea-cluster
+ export LOCATION=westus
+
+ az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 2 \
+ --network-plugin azure
+ ```
+
+ **Note:** Do not specify the `--network-policy` option.
+
+2. Get AKS cluster credentials
+
+ ```bash
+ az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME
+ ```
+
+3. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-84330359-vmss000000 Ready agent 6m21s v1.16.10
+ aks-nodepool1-84330359-vmss000001 Ready agent 6m25s v1.16.10
+ ```
+
+#### Deploying Antrea
+
+1. Prepare the cluster Nodes
+
+ Deploy the `antrea-node-init` DaemonSet to enable `azure cni` to operate in transparent mode.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-aks-node-init.yml
+ ```
+
+2. Deploy Antrea
+
+ To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that AKS support was added in release 0.9.0, which means you cannot
+pick a release older than 0.9.0. For any given release `<TAG>` (e.g. `v0.9.0`),
+you can deploy Antrea as follows:
+
+ ```bash
+ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-aks.yml
+ ```
+
+ To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-aks.yml):
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-aks.yml
+ ```
+
+ The command will deploy a single replica of the Antrea Controller to the AKS
+cluster and deploy the Antrea Agent to every Node. After a successful deployment
+you should be able to see these Pods running in your cluster:
+
+ ```bash
+ $ kubectl get pods --namespace kube-system -l app=antrea
+ NAME READY STATUS RESTARTS AGE
+ antrea-agent-bpj72 2/2 Running 0 40s
+ antrea-agent-j2sjz 2/2 Running 0 40s
+ antrea-controller-6f7468cbff-5sk4t 1/1 Running 0 43s
+ antrea-node-init-6twqg 1/1 Running 0 2m
+ antrea-node-init-mqsqr 1/1 Running 0 2m
+ ```
+
+3. Restart remaining Pods
+
+    Once Antrea is up and running, restart all Pods in all Namespaces (kube-system, etc.) so they can be managed by Antrea.
+
+    ```bash
+    kubectl delete pods -n kube-system $(kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{ print $1 }')
+ pod "coredns-544d979687-96xm9" deleted
+ pod "coredns-544d979687-p7dfb" deleted
+ pod "coredns-autoscaler-78959b4578-849k8" deleted
+ pod "dashboard-metrics-scraper-5f44bbb8b5-5qkkx" deleted
+ pod "kube-proxy-6qxdw" deleted
+ pod "kube-proxy-h6d89" deleted
+ pod "kubernetes-dashboard-785654f667-7twsm" deleted
+ pod "metrics-server-85c57978c6-pwzcx" deleted
+ pod "tunnelfront-649ff5fb55-5lxg7" deleted
+ ```
+
+### Deploying Antrea in `encap` mode
+
+AKS now officially supports [Bring your own Container Network Interface (BYOCNI)](https://learn.microsoft.com/en-us/azure/aks/use-byo-cni).
+Thanks to this, you can deploy Antrea on AKS in `encap` mode, and you will not
+lose access to any functionality. Check the AKS BYOCNI documentation for
+prerequisites, in particular for AKS version requirements.
+
+#### Creating the cluster
+
+You can use any method to create an AKS cluster. The example given here uses the Azure Cloud CLI.
+
+1. Create an AKS cluster
+
+ ```bash
+ export RESOURCE_GROUP_NAME=aks-antrea-cluster
+ export CLUSTER_NAME=aks-antrea-cluster
+ export LOCATION=westus
+
+ az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
+ az aks create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $CLUSTER_NAME \
+ --node-count 2 \
+ --network-plugin none
+ ```
+
+ Notice `--network-plugin none`, which tells AKS not to install any CNI plugin.
+
+2. Get AKS cluster credentials
+
+ ```bash
+ az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME
+ ```
+
+3. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-40948307-vmss000000 NotReady agent 18m v1.27.7
+ aks-nodepool1-40948307-vmss000001 NotReady agent 17m v1.27.7
+ ```
+
+   The Nodes are expected to report a `NotReady` status, since no CNI plugin is
+   installed yet.
+
+#### Deploying Antrea
+
+You can install Antrea using Helm (or any other supported installation
+method). Just make sure that you configure Antrea NodeIPAM:
+
+```bash
+# you may not need this:
+helm repo add antrea https://charts.antrea.io
+helm repo update
+
+cat <<EOF >> values-aks.yml
+nodeIPAM:
+ enable: true
+ clusterCIDRs: ["10.10.0.0/16"]
+EOF
+
+helm install -n kube-system -f values-aks.yml antrea antrea/antrea
+```
+
+For more information about how to configure Antrea Node IPAM, please refer to
+[Antrea Node IPAM guide](antrea-ipam.md#running-nodeipam-within-antrea-controller).
+
+After a while, make sure that all your Nodes report a `Ready` Status and that
+all your Pods are running correctly. Some Pods, and in particular the
+`metrics-server` Pods, may restart once after installing Antrea; this is not an
+issue.
+
+After a successful installation, Pods should look like this:
+
+```bash
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system antrea-agent-bpskv 2/2 Running 0 7m34s
+kube-system antrea-agent-pfqrn 2/2 Running 0 7m34s
+kube-system antrea-controller-555b8c799d-wk8zz 1/1 Running 0 7m34s
+kube-system cloud-node-manager-2nszz 1/1 Running 0 31m
+kube-system cloud-node-manager-wj68q 1/1 Running 0 31m
+kube-system coredns-789789675-2nwd7 1/1 Running 0 6m48s
+kube-system coredns-789789675-lbkfn 1/1 Running 0 31m
+kube-system coredns-autoscaler-649b947bbd-j5wqc 1/1 Running 0 31m
+kube-system csi-azuredisk-node-4bnnl 3/3 Running 0 31m
+kube-system csi-azuredisk-node-52nwd 3/3 Running 0 31m
+kube-system csi-azurefile-node-2h66l 3/3 Running 0 31m
+kube-system csi-azurefile-node-dhrf2 3/3 Running 0 31m
+kube-system konnectivity-agent-5fc7989878-6nhwl 1/1 Running 0 31m
+kube-system konnectivity-agent-5fc7989878-t2n6h 1/1 Running 0 30m
+kube-system kube-proxy-96c9p 1/1 Running 0 31m
+kube-system kube-proxy-x8g8s 1/1 Running 0 31m
+kube-system metrics-server-5955767688-2hjvn 2/2 Running 0 3m45s
+kube-system metrics-server-5955767688-vmcq7 2/2 Running 0 3m45s
+```
+
+## Deploy Antrea to an AKS Engine cluster
+
+Antrea is an integrated CNI of AKS Engine, and can be installed in
+`networkPolicyOnly` mode or `encap` mode to an AKS Engine cluster as part of the
+AKS Engine cluster deployment. To learn basics of AKS Engine cluster deployment,
+please refer to [AKS Engine Quickstart Guide](https://github.com/Azure/aks-engine/blob/master/docs/tutorials/quickstart.md).
+
+### Deploying Antrea in `networkPolicyOnly` mode
+
+To configure Antrea to enforce NetworkPolicies for the AKS Engine cluster,
+`"networkPolicy": "antrea"` needs to be set in `kubernetesConfig` of the AKS
+Engine cluster definition (Azure CNI will be used as the `networkPlugin`):
+
+```json
+ "apiVersion": "vlabs",
+ "properties": {
+ "orchestratorProfile": {
+ "kubernetesConfig": {
+ "networkPolicy": "antrea"
+ }
+ }
+ }
+```
+
+You can use the deployment template
+[`examples/networkpolicy/kubernetes-antrea.json`](https://github.com/Azure/aks-engine/blob/master/examples/networkpolicy/kubernetes-antrea.json)
+to deploy an AKS Engine cluster with Antrea in `networkPolicyOnly` mode:
+
+```bash
+$ aks-engine deploy --dns-prefix <dns-prefix> \
+    --resource-group <resource-group> \
+ --location westus2 \
+ --api-model examples/networkpolicy/kubernetes-antrea.json \
+ --auto-suffix
+```
+
+### Deploying Antrea in `encap` mode
+
+To deploy Antrea in `encap` mode for an AKS Engine cluster, both
+`"networkPlugin": "antrea"` and `"networkPolicy": "antrea"` need to be set in
+`kubernetesConfig` of the AKS Engine cluster definition:
+
+```json
+ "apiVersion": "vlabs",
+ "properties": {
+ "orchestratorProfile": {
+ "kubernetesConfig": {
+ "networkPlugin": "antrea",
+ "networkPolicy": "antrea"
+ }
+ }
+ }
+```
+
+You can add `"networkPlugin": "antrea"` to the deployment template
+[`examples/networkpolicy/kubernetes-antrea.json`](https://github.com/Azure/aks-engine/blob/master/examples/networkpolicy/kubernetes-antrea.json),
+and use the template to deploy an AKS Engine cluster with Antrea in `encap`
+mode.
diff --git a/content/docs/v1.15.0/docs/antctl.md b/content/docs/v1.15.0/docs/antctl.md
new file mode 100644
index 00000000..96976844
--- /dev/null
+++ b/content/docs/v1.15.0/docs/antctl.md
@@ -0,0 +1,758 @@
+# Antctl
+
+antctl is the command-line tool for Antrea. At the moment, antctl supports
+running in three different modes:
+
+* "controller mode": when run out-of-cluster or from within the Antrea
+ Controller Pod, antctl can connect to the Antrea Controller and query
+ information from it (e.g. the set of computed NetworkPolicies).
+* "agent mode": when run from within an Antrea Agent Pod, antctl can connect to
+ the Antrea Agent and query information local to that Agent (e.g. the set of
+ computed NetworkPolicies received by that Agent from the Antrea Controller, as
+ opposed to the entire set of computed policies).
+* "flowaggregator mode": when run from within a Flow Aggregator Pod, antctl can
+ connect to the Flow Aggregator and query information from it (e.g. flow records
+ related statistics).
+
+## Table of Contents
+
+
+- [Installation](#installation)
+- [Usage](#usage)
+ - [Showing or changing log verbosity level](#showing-or-changing-log-verbosity-level)
+ - [Showing feature gates status](#showing-feature-gates-status)
+ - [Collecting support information](#collecting-support-information)
+ - [controllerinfo and agentinfo commands](#controllerinfo-and-agentinfo-commands)
+ - [NetworkPolicy commands](#networkpolicy-commands)
+ - [Mapping endpoints to NetworkPolicies](#mapping-endpoints-to-networkpolicies)
+ - [Dumping Pod network interface information](#dumping-pod-network-interface-information)
+ - [Dumping OVS flows](#dumping-ovs-flows)
+ - [OVS packet tracing](#ovs-packet-tracing)
+ - [Traceflow](#traceflow)
+ - [Antctl Proxy](#antctl-proxy)
+ - [Flow Aggregator commands](#flow-aggregator-commands)
+ - [Dumping flow records](#dumping-flow-records)
+ - [Record metrics](#record-metrics)
+ - [Multi-cluster commands](#multi-cluster-commands)
+ - [Multicast commands](#multicast-commands)
+ - [Showing memberlist state](#showing-memberlist-state)
+ - [Upgrade existing objects of CRDs](#upgrade-existing-objects-of-crds)
+
+
+## Installation
+
+The antctl binary is included in the Antrea Docker image
+(`antrea/antrea-ubuntu`) which means that there is no need to install anything
+to connect to the Antrea Agent. Simply exec into the antrea-agent container for
+the appropriate antrea-agent Pod and run `antctl`:
+
+```bash
+kubectl exec -it ANTREA-AGENT_POD_NAME -n kube-system -c antrea-agent -- bash
+> antctl help
+```
+
+Starting with Antrea release v0.5.0, we publish the antctl binaries for
+different OS / CPU Architecture combinations. Head to the [releases
+page](https://github.com/antrea-io/antrea/releases) and download the
+appropriate one for your machine. For example:
+
+On Mac & Linux:
+
+```bash
+curl -Lo ./antctl "https://github.com/antrea-io/antrea/releases/download/<TAG>/antctl-$(uname)-x86_64"
+chmod +x ./antctl
+mv ./antctl /some-dir-in-your-PATH/antctl
+antctl version
+```
+
+For Linux, we also publish binaries for Arm-based systems.
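+
+For example, on an Arm64 Linux machine, the download could look like the
+following (a sketch only: the asset name is assumed to follow the same naming
+scheme as above, so double-check the exact name on the releases page):
+
+```bash
+curl -Lo ./antctl "https://github.com/antrea-io/antrea/releases/download/<TAG>/antctl-Linux-arm64"
+chmod +x ./antctl
+```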
+
+On Windows, using PowerShell:
+
+```powershell
+Invoke-WebRequest -Uri https://github.com/antrea-io/antrea/releases/download/<TAG>/antctl-windows-x86_64.exe -Outfile antctl.exe
+Move-Item .\antctl.exe c:\some-dir-in-your-PATH\antctl.exe
+antctl version
+```
+
+## Usage
+
+To see the list of available commands and options, run `antctl help`. The list
+will be different based on whether you are connecting to the Antrea Controller
+or Agent.
+
+When running out-of-cluster ("controller mode" only), antctl will look for your
+kubeconfig file at `$HOME/.kube/config` by default. You can select a different
+one by setting the `KUBECONFIG` environment variable or with `--kubeconfig`
+(the latter taking precedence over the former).
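+
+For example (the kubeconfig path below is a placeholder):
+
+```bash
+# One-off invocation with an explicit kubeconfig (the flag takes precedence)
+antctl get controllerinfo --kubeconfig /path/to/admin.conf
+
+# Or set the kubeconfig for the whole shell session
+export KUBECONFIG=/path/to/admin.conf
+antctl get controllerinfo
+```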
+
+The following sub-sections introduce a few commands which are useful for
+troubleshooting the Antrea system.
+
+### Showing or changing log verbosity level
+
+Starting from version 0.10.0, Antrea supports showing or changing the log
+verbosity level of Antrea Controller or Antrea Agent using the `antctl log-level`
+command. Starting from version 1.5, Antrea supports showing or changing the
+log verbosity level of the Flow Aggregator using the `antctl log-level` command.
+The command can only run locally inside the `antrea-controller`, `antrea-agent`
+or `flow-aggregator` container.
+
+The following command prints the current log verbosity level:
+
+```bash
+antctl log-level
+```
+
+This command updates the log verbosity level (the `LEVEL` argument must be an
+integer):
+
+```bash
+antctl log-level LEVEL
+```
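+
+For example, assuming you are already inside the relevant container, you could
+raise the verbosity for debugging and later restore the default level of 0:
+
+```bash
+antctl log-level 4  # more verbose logging
+antctl log-level 0  # back to the default
+```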
+
+### Showing feature gates status
+
+The feature gates of Antrea Controller and Agent can be shown using the `antctl get featuregates` command.
+The command can run locally inside the `antrea-controller` or `antrea-agent`
+container, or out-of-cluster. When run out-of-cluster or inside the Controller
+Pod, it will print the feature gates of both the Controller and the Agent.
+
+The following command prints the current feature gates:
+
+```bash
+antctl get featuregates
+```
+
+### Collecting support information
+
+Starting with version 0.7.0, Antrea supports the `antctl supportbundle` command,
+which can collect information from the cluster, the Antrea Controller and all
+Antrea agents. This information is useful when trying to troubleshoot issues in
+Kubernetes clusters using Antrea. In particular, when running the command
+out-of-cluster, all the information can be collected under one single directory,
+which you can upload and share when reporting issues on Github. Simply run the
+command as follows:
+
+```bash
+antctl supportbundle [-d TARGET_DIR]
+```
+
+If you do not provide a directory, antctl will create one in the current
+working directory, using the current timestamp as a suffix. The command also
+provides additional flags to filter the results: run `antctl supportbundle
+--help` for the full list.
+
+The collected support bundle will include the following (more information may be
+included over time):
+
+* cluster information: description of the different K8s resources in the cluster
+ (Nodes, Deployments, etc.).
+* Antrea Controller information: all the available logs (contents will vary
+ based on the verbosity selected when running the controller) and state stored
+ at the controller (e.g. computed NetworkPolicy objects).
+* Antrea Agent information: all the available logs from the agent and the OVS
+ daemons, network configuration of the Node (e.g. routes, iptables rules, OVS
+ flows) and state stored at the agent (e.g. computed NetworkPolicy objects
+ received from the controller).
+
+**Be aware that the generated support bundle includes a lot of information,
+ including logs, so please review the contents of the directory before sharing
+ it on Github and ensure that you do not share anything sensitive.**
+
+The `antctl supportbundle` command can also be run inside a Controller or Agent
+Pod, in which case only local information will be collected.
+
+Since v1.10.0, Antrea also supports collecting information by applying a
+`SupportBundleCollection` CRD; you can refer to the [support bundle guide](./support-bundle-guide.md)
+for more information.
+
+### controllerinfo and agentinfo commands
+
+`antctl` controller command `get controllerinfo` (or `get ci`) and agent command
+`get agentinfo` (or `get ai`) print the runtime information of
+`antrea-controller` and `antrea-agent` respectively.
+
+```bash
+antctl get controllerinfo
+antctl get agentinfo
+```
+
+### NetworkPolicy commands
+
+Both Antrea Controller and Agent support querying the NetworkPolicy objects in the Antrea
+control plane API. The source of a control plane NetworkPolicy is the original policy resource
+(K8s NetworkPolicy, Antrea-native Policy or AdminNetworkPolicy) from which the control plane
+NetworkPolicy was derived.
+
+- `antctl` `get networkpolicy` (or `get netpol`) command can print all
+NetworkPolicies, a specified NetworkPolicy, or NetworkPolicies in a specified
+Namespace.
+- `get appliedtogroup` (or `get atg`) command can print all NetworkPolicy
+AppliedToGroups (AppliedToGroup includes the Pods to which a NetworkPolicy is
+applied), or a specified AppliedToGroup.
+- `get addressgroup` (or `get ag`) command can print all NetworkPolicy
+AddressGroups (AddressGroup defines source or destination addresses of
+NetworkPolicy rules), or a specified AddressGroup.
+
+Using the `json` or `yaml` antctl output format prints more information about
+NetworkPolicy, AppliedToGroup, and AddressGroup than the default `table` output
+format does. The `NAME` of a control plane NetworkPolicy is the UID of its
+source NetworkPolicy.
+
+```bash
+antctl get networkpolicy [NAME] [-n NAMESPACE] [-o yaml]
+antctl get appliedtogroup [NAME] [-o yaml]
+antctl get addressgroup [NAME] [-o yaml]
+```
+
+NetworkPolicy, AppliedToGroup, and AddressGroup also support the `sort-by=''`
+option, which can be used to sort these resources by a particular field. Any
+valid JSON path can be passed as the flag value. If no value is passed, a
+default field is used to sort results. For NetworkPolicy, the default field is
+the name of the source NetworkPolicy. For AppliedToGroup and AddressGroup, the
+default field is the object name (which is a generated UUID).
+
+```bash
+antctl get networkpolicy --sort-by='.sourceRef.name'
+antctl get appliedtogroup --sort-by='.metadata.name'
+antctl get addressgroup --sort-by='.metadata.name'
+```
+
+NetworkPolicy also supports `sort-by=effectivePriority` option, which can be used to
+view the effective order in which the NetworkPolicies are evaluated. Antrea-native
+NetworkPolicy ordering is documented [here](
+antrea-network-policy.md#antrea-native-policy-ordering-based-on-priorities).
+
+```bash
+antctl get networkpolicy --sort-by=effectivePriority
+```
+
+Antrea Agent supports some extra `antctl` commands.
+
+* Printing NetworkPolicies applied to a specific local Pod.
+
+ ```bash
+ antctl get networkpolicy -p POD -n NAMESPACE
+ ```
+
+* Printing NetworkPolicies with a specific source NetworkPolicy type.
+
+ ```bash
+ antctl get networkpolicy -T (K8sNP|ACNP|ANNP|ANP)
+ ```
+
+* Printing NetworkPolicies with a specific source NetworkPolicy name.
+
+ ```bash
+ antctl get networkpolicy -S SOURCE_NAME [-n NAMESPACE]
+ ```
+
+#### Mapping endpoints to NetworkPolicies
+
+`antctl` supports mapping a specific Pod to the NetworkPolicies which "select"
+this Pod, either because they apply to the Pod directly or because one of their
+policy rules selects the Pod.
+
+```bash
+antctl query endpoint -p POD [-n NAMESPACE]
+```
+
+If no Namespace is provided with `-n`, the command will default to the "default"
+Namespace.
+
+This command only works in "controller mode" and **as of now it can only be run
+from inside the Antrea Controller Pod, and not from out-of-cluster**.
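+
+A minimal sketch of running it from within the Controller Pod, assuming the
+default `kube-system` installation (the Pod name `web-client` is just an
+example):
+
+```bash
+kubectl exec -it -n kube-system deploy/antrea-controller -c antrea-controller -- \
+  antctl query endpoint -p web-client -n default
+```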
+
+### Dumping Pod network interface information
+
+`antctl` agent command `get podinterface` (or `get pi`) can dump network
+interface information of all local Pods, or a specified local Pod, or local Pods
+in the specified Namespace, or local Pods matching the specified Pod name.
+
+```bash
+antctl get podinterface [NAME] [-n NAMESPACE]
+```
+
+### Dumping OVS flows
+
+Starting from version 0.6.0, Antrea Agent supports dumping Antrea OVS flows.
+The `antctl` `get ovsflows` (or `get of`) command can dump all OVS flows, flows
+added for a specified Pod, flows added for Service load-balancing of a
+specified Service, flows added to realize a specified NetworkPolicy, flows in
+specified OVS flow tables, or all OVS groups or specified OVS groups.
+
+```bash
+antctl get ovsflows
+antctl get ovsflows -p POD -n NAMESPACE
+antctl get ovsflows -S SERVICE -n NAMESPACE
+antctl get ovsflows -N NETWORKPOLICY -n NAMESPACE
+antctl get ovsflows -T TABLE_A,TABLE_B
+antctl get ovsflows -T TABLE_A,TABLE_B_NUM
+antctl get ovsflows -G all
+antctl get ovsflows -G GROUP_ID1,GROUP_ID2
+```
+
+OVS flow tables can be specified using table names, or the table numbers.
+`antctl get ovsflow --help` lists all Antrea flow tables. For more information
+about Antrea OVS pipeline and flows, please refer to the [OVS pipeline doc](design/ovs-pipeline.md).
+
+Example outputs of dumping Pod and NetworkPolicy OVS flows:
+
+```bash
+# Dump OVS flows of Pod "coredns-6955765f44-zcbwj"
+$ antctl get of -p coredns-6955765f44-zcbwj -n kube-system
+FLOW
+table=classification, n_packets=513122, n_bytes=42615080, priority=190,in_port="coredns--d0c58e" actions=set_field:0x2/0xffff->reg0,resubmit(,10)
+table=10, n_packets=513122, n_bytes=42615080, priority=200,ip,in_port="coredns--d0c58e",dl_src=52:bd:c6:e0:eb:c1,nw_src=172.100.1.7 actions=resubmit(,30)
+table=10, n_packets=0, n_bytes=0, priority=200,arp,in_port="coredns--d0c58e",arp_spa=172.100.1.7,arp_sha=52:bd:c6:e0:eb:c1 actions=resubmit(,20)
+table=80, n_packets=556468, n_bytes=166477824, priority=200,dl_dst=52:bd:c6:e0:eb:c1 actions=load:0x5->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,resubmit(,90)
+table=70, n_packets=0, n_bytes=0, priority=200,ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7 actions=set_field:62:39:b4:e8:05:76->eth_src,set_field:52:bd:c6:e0:eb:c1->eth_dst,dec_ttl,resubmit(,80)
+
+# Get NetworkPolicies applied to Pod "coredns-6955765f44-zcbwj"
+$ antctl get netpol -p coredns-6955765f44-zcbwj -n kube-system
+NAMESPACE NAME APPLIED-TO RULES
+kube-system kube-dns 160ea6d7-0234-5d1d-8ea0-b703d0aa3b46 1
+
+# Dump OVS flows of NetworkPolicy "kube-dns"
+$ antctl get of -N kube-dns -n kube-system
+FLOW
+table=90, n_packets=0, n_bytes=0, priority=190,conj_id=1,ip actions=resubmit(,105)
+table=90, n_packets=0, n_bytes=0, priority=200,ip actions=conjunction(1,1/3)
+table=90, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=conjunction(2,2/3),conjunction(1,2/3)
+table=90, n_packets=0, n_bytes=0, priority=200,udp,tp_dst=53 actions=conjunction(1,3/3)
+table=90, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=53 actions=conjunction(1,3/3)
+table=90, n_packets=0, n_bytes=0, priority=200,tcp,tp_dst=9153 actions=conjunction(1,3/3)
+table=100, n_packets=0, n_bytes=0, priority=200,ip,reg1=0x5 actions=drop
+```
+
+### OVS packet tracing
+
+Starting from version 0.7.0, Antrea Agent supports tracing the OVS flows that a
+specified packet traverses, leveraging the [OVS packet tracing tool](https://docs.openvswitch.org/en/latest/topics/tracing/).
+
+`antctl trace-packet` command starts a packet tracing operation.
+`antctl help trace-packet` shows the usage of the command. This section lists a
+few trace-packet command examples.
+
+```bash
+# Trace an IP packet between two Pods
+antctl trace-packet -S ns1/pod1 -D ns2/pod2
+# Trace a Service request from a local Pod
+antctl trace-packet -S ns1/pod1 -D ns2/svc2 -f "tcp,tcp_dst=80"
+# Trace the Service reply packet (assuming "ns2/pod2" is the Service backend Pod)
+antctl trace-packet -D ns1/pod1 -S ns2/pod2 -f "tcp,tcp_src=80"
+# Trace an IP packet from a Pod to gateway port
+antctl trace-packet -S ns1/pod1 -D antrea-gw0
+# Trace a UDP packet from a Pod to an IP address
+antctl trace-packet -S ns1/pod1 -D 10.1.2.3 -f udp,udp_dst=1234
+# Trace a UDP packet from an IP address to a Pod
+antctl trace-packet -D ns1/pod1 -S 10.1.2.3 -f udp,udp_src=1234
+# Trace an ARP request from a local Pod
+antctl trace-packet -p ns1/pod1 -f arp,arp_spa=10.1.2.3,arp_sha=00:11:22:33:44:55,arp_tpa=10.1.2.1,dl_dst=ff:ff:ff:ff:ff:ff
+```
+
+Example outputs of tracing a UDP (DNS request) packet from a remote Pod to a
+local (coredns) Pod:
+
+```bash
+$ antctl trace-packet -S default/web-client -D kube-system/coredns-6955765f44-zcbwj -f udp,udp_dst=53
+result: |
+ Flow: udp,in_port=1,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ 0. in_port=1, priority 200, cookie 0x5e000000000000
+ load:0->NXM_NX_REG0[0..15]
+ resubmit(,30)
+ 30. ip, priority 200, cookie 0x5e000000000000
+ ct(table=31,zone=65520)
+ drop
+ -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 31.
+ -> Sets the packet to an untracked state, and clears all the conntrack fields.
+
+ Final flow: unchanged
+ Megaflow: recirc_id=0,eth,udp,in_port=1,nw_frag=no,tp_src=0x0/0xfc00
+ Datapath actions: ct(zone=65520),recirc(0x53)
+
+ ===============================================================================
+ recirc(0x53) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
+ ===============================================================================
+
+ Flow: recirc_id=0x53,ct_state=new|trk,ct_zone=65520,eth,udp,in_port=1,vlan_tci=0x0000,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=64,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ thaw
+ Resuming from table 31
+ 31. priority 0, cookie 0x5e000000000000
+ resubmit(,40)
+ 40. priority 0, cookie 0x5e000000000000
+ resubmit(,50)
+ 50. priority 0, cookie 0x5e000000000000
+ resubmit(,60)
+ 60. priority 0, cookie 0x5e000000000000
+ resubmit(,70)
+ 70. ip,dl_dst=aa:bb:cc:dd:ee:ff,nw_dst=172.100.1.7, priority 200, cookie 0x5e030000000000
+ set_field:62:39:b4:e8:05:76->eth_src
+ set_field:52:bd:c6:e0:eb:c1->eth_dst
+ dec_ttl
+ resubmit(,80)
+ 80. dl_dst=52:bd:c6:e0:eb:c1, priority 200, cookie 0x5e030000000000
+ set_field:0x5->reg1
+ set_field:0x10000/0x10000->reg0
+ resubmit(,90)
+ 90. conj_id=2,ip, priority 190, cookie 0x5e050000000000
+ resubmit(,105)
+ 105. ct_state=+new+trk,ip, priority 190, cookie 0x5e000000000000
+ ct(commit,table=110,zone=65520)
+ drop
+ -> A clone of the packet is forked to recirculate. The forked pipeline will be resumed at table 110.
+ -> Sets the packet to an untracked state, and clears all the conntrack fields.
+
+ Final flow: recirc_id=0x53,eth,udp,reg0=0x10000,reg1=0x5,in_port=1,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53
+ Megaflow: recirc_id=0x53,ct_state=+new-est-inv+trk,ct_mark=0,eth,udp,in_port=1,dl_src=aa:bb:cc:dd:ee:ff,dl_dst=aa:bb:cc:dd:ee:ff,nw_src=192.0.0.0/2,nw_dst=172.100.1.7,nw_ttl=64,nw_frag=no,tp_dst=53
+ Datapath actions: set(eth(src=62:39:b4:e8:05:76,dst=52:bd:c6:e0:eb:c1)),set(ipv4(ttl=63)),ct(commit,zone=65520),recirc(0x54)
+
+ ===============================================================================
+ recirc(0x54) - resume conntrack with default ct_state=trk|new (use --ct-next to customize)
+ ===============================================================================
+
+ Flow: recirc_id=0x54,ct_state=new|trk,ct_zone=65520,eth,udp,reg0=0x10000,reg1=0x5,in_port=1,vlan_tci=0x0000,dl_src=62:39:b4:e8:05:76,dl_dst=52:bd:c6:e0:eb:c1,nw_src=172.100.2.11,nw_dst=172.100.1.7,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=0,tp_dst=53
+
+ bridge("br-int")
+ ----------------
+ thaw
+ Resuming from table 110
+ 110. ip,reg0=0x10000/0x10000, priority 200, cookie 0x5e000000000000
+ output:NXM_NX_REG1[]
+ -> output port is 5
+
+ Final flow: unchanged
+ Megaflow: recirc_id=0x54,eth,ip,in_port=1,nw_frag=no
+ Datapath actions: 3
+```
+
+### Traceflow
+
+`antctl traceflow` (or `antctl tf`) command is used to start a Traceflow and
+retrieve its result. After the result is collected, the Traceflow will be
+deleted. Users can also create a Traceflow with `kubectl`, but `antctl traceflow`
+offers a simpler way. For more information about Traceflow, refer to the
+[Traceflow guide](traceflow-guide.md).
+
+To start a regular Traceflow, both `--source` (or `-S`) and `--destination` (or
+`-D`) arguments must be specified, and the source must be a Pod. For example:
+
+```bash
+$ antctl tf -S busybox0 -D busybox1
+name: busybox0-to-busybox1-fpllngzi
+phase: Succeeded
+source: default/busybox0
+destination: default/busybox1
+results:
+- node: antrea-linux-testbed7-1
+ timestamp: 1596435607
+ observations:
+ - component: SpoofGuard
+ action: Forwarded
+ - component: Forwarding
+ componentInfo: Output
+ action: Delivered
+```
+
+To start a live-traffic Traceflow, add the `--live-traffic` (or `-L`) flag. Add
+the `--dropped-only` flag to indicate that only packets dropped by a
+NetworkPolicy should be captured in the live-traffic Traceflow. A live-traffic
+Traceflow requires only one of the `--source` and `--destination` arguments to
+be specified, and at least one of them must be a Pod.
+
+The `--flow` (or `-f`) argument can be used to specify the Traceflow packet
+headers with the [ovs-ofctl](http://www.openvswitch.org//support/dist-docs/ovs-ofctl.8.txt)
+flow syntax. The supported flow fields include: IP family (`ipv6` to indicate an
+IPv6 packet), IP protocol (`icmp`, `icmpv6`, `tcp`, `udp`), source and
+destination ports (`tcp_src`, `tcp_dst`, `udp_src`, `udp_dst`), and TCP flags
+(`tcp_flags`).
+
+By default, the command will wait for the Traceflow to succeed, fail, or time
+out. The default timeout is 10 seconds, but it can be changed with the
+`--timeout` (or `-t`) argument. Add the `--no-wait` flag to start a Traceflow
+without waiting for its results. In this case, the command will not delete the
+Traceflow resource. The `traceflow` command supports `yaml` and `json` output.
+
+More examples of `antctl traceflow`:
+
+```bash
+# Start a Traceflow from pod1 to pod2, both Pods are in Namespace default
+$ antctl traceflow -S pod1 -D pod2
+# Start a Traceflow from pod1 in Namespace ns1 to a destination IP
+$ antctl traceflow -S ns1/pod1 -D 123.123.123.123
+# Start a Traceflow from pod1 to Service svc1 in Namespace ns1
+$ antctl traceflow -S pod1 -D ns1/svc1 -f tcp,tcp_dst=80
+# Start a Traceflow from pod1 to pod2, with a UDP packet to destination port 1234
+$ antctl traceflow -S pod1 -D pod2 -f udp,udp_dst=1234
+# Start a Traceflow for live TCP traffic from pod1 to svc1, with 1 minute timeout
+$ antctl traceflow -S pod1 -D svc1 -f tcp --live-traffic -t 1m
+# Start a Traceflow to capture the first dropped TCP packet to pod1 on port 80, within 10 minutes
+$ antctl traceflow -D pod1 -f tcp,tcp_dst=80 --live-traffic --dropped-only -t 10m
+```
+
+### Antctl Proxy
+
+antctl can run as a reverse proxy for the Antrea API (Controller or arbitrary
+Agent). Usage is very similar to `kubectl proxy` and the implementation is
+essentially the same.
+
+To run a reverse proxy for the Antrea Controller API, use:
+
+```bash
+antctl proxy --controller
+```
+
+To run a reverse proxy for the Antrea Agent API for the antrea-agent Pod running
+on Node <TARGET_NODE>, use:
+
+```bash
+antctl proxy --agent-node <TARGET_NODE>
+```
+
+You can then access the API at `127.0.0.1:8001`. To implement this
+functionality, antctl retrieves the Node IP address and API server port for the
+Antrea Controller or for the specified Agent from the K8s API, and it proxies
+all the requests received on `127.0.0.1:8001` directly to that IP / port. One
+thing to keep in mind is that the TLS connection between the proxy and the
+Antrea Agent or Controller will not be secure (no certificate verification), and
+the proxy should be used for debugging only.
+
+To see the full list of supported options, run `antctl proxy --help`.
+
+This feature is useful if one wants to use the Go
+[pprof](https://golang.org/pkg/net/http/pprof/) tool to collect runtime
+profiling data about the Antrea components. Please refer to this
+[document](troubleshooting.md#profiling-antrea-components) for more information.
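+
+As a sketch, assuming profiling is enabled as described in that document, you
+could collect a heap profile of the Antrea Controller through the proxy:
+
+```bash
+# Start the proxy in the background, then point pprof at it
+antctl proxy --controller &
+go tool pprof http://127.0.0.1:8001/debug/pprof/heap
+```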
+
+### Flow Aggregator commands
+
+antctl supports dumping the flow records handled by the Flow Aggregator, and
+printing metrics about flow record processing. These commands are only available
+when you exec into the Flow Aggregator Pod.
+
+#### Dumping flow records
+
+antctl supports dumping flow records stored in the Flow Aggregator. The
+`antctl get flowrecords` command can dump all matching flow records. It supports
+the 5-tuple flow key or a subset of the 5-tuple as a filter. A 5-tuple flow key
+contains Source IP, Destination IP, Source Port, Destination Port and Transport
+Protocol. If the filter is empty, all flow records will be dumped.
+
+The command provides a compact display of the flow records in the default table
+output format, which contains the flow key, source Pod name, destination Pod
+name, source Pod Namespace, destination Pod Namespace and destination Service
+name for each flow record. Using the `json` or `yaml` antctl output format will
+output the flow record information in a structured format and include more
+information about each flow record. `antctl get flowrecords --help` shows the
+usage of the command. This section lists a few flow record dumping command
+examples.
+
+```bash
+# Get the list of all flow records
+antctl get flowrecords
+# Get the list of flow records with a complete filter and output in json format
+antctl get flowrecords --srcip 10.0.0.1 --dstip 10.0.0.2 --proto 6 --srcport 1234 --dstport 5678 -o json
+# Get the list of flow records with a partial filter, e.g. source address and source port
+antctl get flowrecords --srcip 10.0.0.1 --srcport 1234
+```
+
+Example outputs of dumping flow records:
+
+```bash
+$ antctl get flowrecords --srcip 10.10.1.4 --dstip 10.10.0.2
+SRC_IP DST_IP SPORT DPORT PROTO SRC_POD DST_POD SRC_NS DST_NS SERVICE
+10.10.1.4 10.10.0.2 38581 53 17 flow-aggregator-67dc8ddfc8-zx8sg coredns-78fcd69978-7vc6k flow-aggregator kube-system kube-system/kube-dns:dns
+10.10.1.4 10.10.0.2 56505 53 17 flow-aggregator-67dc8ddfc8-zx8sg coredns-78fcd69978-7vc6k flow-aggregator kube-system kube-system/kube-dns:dns
+
+$ antctl get flowrecords --srcip 10.10.0.1 --srcport 50497 -o json
+[
+ {
+ "destinationClusterIPv4": "0.0.0.0",
+ "destinationIPv4Address": "10.10.1.2",
+ "destinationNodeName": "k8s-node-worker-1",
+ "destinationPodName": "coredns-78fcd69978-x2twv",
+ "destinationPodNamespace": "kube-system",
+ "destinationServicePort": 0,
+ "destinationServicePortName": "",
+ "destinationTransportPort": 53,
+ "egressNetworkPolicyName": "",
+ "egressNetworkPolicyNamespace": "",
+ "egressNetworkPolicyRuleAction": 0,
+ "egressNetworkPolicyRuleName": "",
+ "egressNetworkPolicyType": 0,
+ "flowEndReason": 3,
+ "flowEndSeconds": 1635546893,
+ "flowStartSeconds": 1635546867,
+ "flowType": 2,
+ "ingressNetworkPolicyName": "",
+ "ingressNetworkPolicyNamespace": "",
+ "ingressNetworkPolicyRuleAction": 0,
+ "ingressNetworkPolicyRuleName": "",
+ "ingressNetworkPolicyType": 0,
+ "octetDeltaCount": 99,
+ "octetDeltaCountFromDestinationNode": 99,
+ "octetDeltaCountFromSourceNode": 0,
+ "octetTotalCount": 99,
+ "octetTotalCountFromDestinationNode": 99,
+ "octetTotalCountFromSourceNode": 0,
+ "packetDeltaCount": 1,
+ "packetDeltaCountFromDestinationNode": 1,
+ "packetDeltaCountFromSourceNode": 0,
+ "packetTotalCount": 1,
+ "packetTotalCountFromDestinationNode": 1,
+ "packetTotalCountFromSourceNode": 0,
+ "protocolIdentifier": 17,
+ "reverseOctetDeltaCount": 192,
+ "reverseOctetDeltaCountFromDestinationNode": 192,
+ "reverseOctetDeltaCountFromSourceNode": 0,
+ "reverseOctetTotalCount": 192,
+ "reverseOctetTotalCountFromDestinationNode": 192,
+ "reverseOctetTotalCountFromSourceNode": 0,
+ "reversePacketDeltaCount": 1,
+ "reversePacketDeltaCountFromDestinationNode": 1,
+ "reversePacketDeltaCountFromSourceNode": 0,
+ "reversePacketTotalCount": 1,
+ "reversePacketTotalCountFromDestinationNode": 1,
+ "reversePacketTotalCountFromSourceNode": 0,
+ "sourceIPv4Address": "10.10.0.1",
+ "sourceNodeName": "",
+ "sourcePodName": "",
+ "sourcePodNamespace": "",
+ "sourceTransportPort": 50497,
+ "tcpState": ""
+ }
+]
+```
+
+#### Record metrics
+
+Flow Aggregator supports printing record metrics. The `antctl get recordmetrics`
+command can print all metrics related to the Flow Aggregator. The metrics include
+the following:
+
+* number of records received by the collector process in the Flow Aggregator
+* number of records exported by the Flow Aggregator
+* number of active flows that are being tracked
+* number of exporters connected to the Flow Aggregator
+
+Example outputs of record metrics:
+
+```bash
+RECORDS-EXPORTED RECORDS-RECEIVED FLOWS EXPORTERS-CONNECTED
+46 118 7 2
+```
+
+### Multi-cluster commands
+
+For information about Antrea Multi-cluster commands, please refer to the
+[antctl Multi-cluster commands](./multicluster/antctl.md).
+
+### Multicast commands
+
+The `antctl get podmulticaststats [POD_NAME] [-n NAMESPACE]` command prints inbound
+and outbound multicast statistics for each Pod. Note that IGMP packets are not counted.
+
+Example output of podmulticaststats:
+
+```bash
+$ antctl get podmulticaststats
+
+NAMESPACE NAME INBOUND OUTBOUND
+testmulticast-vw7gx5b9 test3-receiver-2 30 0
+testmulticast-vw7gx5b9 test3-sender-1 0 10
+```
+
+### Showing memberlist state
+
+`antctl` agent command `get memberlist` (or `get ml`) prints the state of the
+memberlist cluster of the Antrea Agent.
+
+```bash
+$ antctl get memberlist
+
+NODE IP STATUS
+worker1 172.18.0.4 Alive
+worker2 172.18.0.3 Alive
+worker3 172.18.0.2 Dead
+```
+
+### Upgrade existing objects of CRDs
+
+antctl supports upgrading existing objects of Antrea CRDs to the storage version.
+The related sub-commands should be run out-of-cluster. Please ensure that the
+kubeconfig file used by antctl has the necessary permissions. The required permissions
+are listed in the following sample ClusterRole.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: antctl
+rules:
+ - apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions
+ verbs:
+ - get
+ - list
+ - apiGroups:
+ - apiextensions.k8s.io
+ resources:
+ - customresourcedefinitions/status
+ verbs:
+ - update
+ - apiGroups:
+ - crd.antrea.io
+ resources:
+ - "*"
+ verbs:
+ - get
+ - list
+ - update
+```
+
+This command performs a dry-run to upgrade all existing objects of Antrea CRDs to
+the storage version:
+
+```bash
+antctl upgrade api-storage --dry-run
+```
+
+This command upgrades all existing objects of Antrea CRDs to the storage version:
+
+```bash
+antctl upgrade api-storage
+```
+
+This command upgrades existing AntreaAgentInfo objects to the storage version:
+
+```bash
+antctl upgrade api-storage --crds=antreaagentinfos.crd.antrea.io
+```
+
+This command upgrades existing Egress and Group objects to the storage version:
+
+```bash
+antctl upgrade api-storage --crds=egresses.crd.antrea.io,groups.crd.antrea.io
+```
+
+If you encounter any errors related to permissions while running the commands, double-check
+the permissions of the kubeconfig used by antctl. Ensure that the ClusterRole has the
+required permissions. The following sample errors are caused by insufficient permissions:
+
+```bash
+Error: failed to get CRD list: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "user" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
+
+Error: externalippools.crd.antrea.io is forbidden: User "user" cannot list resource "externalippools" in API group "crd.antrea.io" at the cluster scope
+
+Error: error upgrading object prod-external-ip-pool of CRD "externalippools.crd.antrea.io": externalippools.crd.antrea.io "prod-external-ip-pool" is forbidden: User "user" cannot update resource "externalippools" in API group "crd.antrea.io" at the cluster scope
+
+Error: error updating CRD "externalippools.crd.antrea.io" status.storedVersion: customresourcedefinitions.apiextensions.k8s.io "externalippools.crd.antrea.io" is forbidden: User "user" cannot update resource "customresourcedefinitions/status" in API group "apiextensions.k8s.io" at the cluster scope
+```
diff --git a/content/docs/v1.15.0/docs/antrea-agent-simulator.md b/content/docs/v1.15.0/docs/antrea-agent-simulator.md
new file mode 100644
index 00000000..d2d265ea
--- /dev/null
+++ b/content/docs/v1.15.0/docs/antrea-agent-simulator.md
@@ -0,0 +1,53 @@
+# Run Antrea agent simulator
+
+This document describes how to run the Antrea agent simulator. The simulator is
+useful for Antrea scalability testing, without having to create a very large
+cluster.
+
+## Build the images
+
+```bash
+make build-scale-simulator
+```
+
+## Create the yaml file
+
+This demo uses one simulator. The following command will create a YAML file at
+`build/yamls/antrea-scale.yml`:
+
+```bash
+make manifest-scale
+```
+
+The above YAML will create one simulated Node/Pod. To change the number of
+instances, you can modify `spec.replicas` of the StatefulSet
+`antrea-agent-simulator` in the YAML, or scale it via
+`kubectl scale statefulset/antrea-agent-simulator -n kube-system --replicas=<COUNT>`
+after deploying it.
+
+## Taint the simulator node
+
+To prevent Pods from being scheduled on the simulated Node(s), you can use the
+following taint.
+
+```bash
+kubectl taint -l 'antrea/instance=simulator' node mocknode=true:NoExecute
+```
+
+## Create secret for kubemark
+
+```bash
+kubectl create secret generic kubeconfig --type=Opaque --namespace=kube-system --from-file=admin.conf=<path-to-kubeconfig-file>
+```
+
+## Apply the yaml file
+
+```bash
+kubectl apply -f build/yamls/antrea-scale.yml
+```
+
+Check the simulated Node:
+
+```bash
+kubectl get nodes -l 'antrea/instance=simulator'
+```
diff --git a/content/docs/v1.15.0/docs/antrea-ipam.md b/content/docs/v1.15.0/docs/antrea-ipam.md
new file mode 100644
index 00000000..7bb6cbac
--- /dev/null
+++ b/content/docs/v1.15.0/docs/antrea-ipam.md
@@ -0,0 +1,482 @@
+# Antrea IPAM Capabilities
+
+
+* [Antrea IPAM Capabilities](#antrea-ipam-capabilities)
+ * [Running NodeIPAM within Antrea Controller](#running-nodeipam-within-antrea-controller)
+ * [Configuration](#configuration)
+ * [Antrea Flexible IPAM](#antrea-flexible-ipam)
+ * [Usage](#usage)
+ * [Enable AntreaIPAM feature gate and bridging mode](#enable-antreaipam-feature-gate-and-bridging-mode)
+ * [Create IPPool CR](#create-ippool-cr)
+ * [IPPool Annotations on Namespace](#ippool-annotations-on-namespace)
+ * [IPPool Annotations on Pod (available since Antrea 1.5)](#ippool-annotations-on-pod-available-since-antrea-15)
+ * [Persistent IP for StatefulSet Pod (available since Antrea 1.5)](#persistent-ip-for-statefulset-pod-available-since-antrea-15)
+ * [Data path behaviors](#data-path-behaviors)
+ * [Requirements for this Feature](#requirements-for-this-feature)
+ * [Flexible IPAM design](#flexible-ipam-design)
+ * [On IPPool CR create/update event](#on-ippool-cr-createupdate-event)
+ * [On StatefulSet create event](#on-statefulset-create-event)
+ * [On StatefulSet delete event](#on-statefulset-delete-event)
+ * [On Pod create](#on-pod-create)
+ * [On Pod delete](#on-pod-delete)
+ * [IPAM for Secondary Network](#ipam-for-secondary-network)
+ * [Prerequisites](#prerequisites)
+ * [CNI IPAM configuration](#cni-ipam-configuration)
+ * [Configuration with `NetworkAttachmentDefinition` CRD](#configuration-with-networkattachmentdefinition-crd)
+ * [`IPPool` CRD](#ippool-crd)
+ * [Secondary Network creation with Multus](#secondary-network-creation-with-multus)
+
+
+## Running NodeIPAM within Antrea Controller
+
+NodeIPAM is a Kubernetes component which manages the allocation of a per-Node
+IP address pool when each Node initializes.
+
+In single-stack deployments, NodeIPAM allocates a single IPv4 or IPv6 CIDR per
+Node, while in dual-stack deployments, NodeIPAM allocates two CIDRs per Node:
+one for each IP family.
+
+NodeIPAM is configured with a CIDR for each family, which it slices into smaller
+per-Node CIDRs. For example, slicing `172.100.0.0/16` with the default IPv4 Node
+mask size of 24 yields per-Node CIDRs `172.100.0.0/24`, `172.100.1.0/24`, and so
+on. When a Node is initialized, these CIDRs are set in the `podCIDRs` attribute
+of the Node spec.
+
+Antrea NodeIPAM controller can be executed in scenarios where the
+NodeIPAMController is disabled in kube-controller-manager.
+
+Note that running Antrea NodeIPAM while the NodeIPAMController also runs within
+kube-controller-manager would cause conflicts and result in unstable behavior.
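+
+For example, on kubeadm-based clusters, one way to check whether
+kube-controller-manager is already running NodeIPAM is to look for the
+`--allocate-node-cidrs=true` flag in its arguments (a sketch; the Pod label may
+differ in your cluster):
+
+```bash
+kubectl get pod -n kube-system -l component=kube-controller-manager \
+  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep allocate-node-cidrs
+```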
+
+### Configuration
+
+Antrea Controller NodeIPAM configuration items are grouped under the `nodeIPAM`
+dictionary key.
+
+The `nodeIPAM` dictionary contains the following items:
+
+- `enableNodeIPAM`: Enable the integrated NodeIPAM controller within the Antrea
+controller. Default is false.
+
+- `clusterCIDRs`: CIDR ranges for Pods in cluster. String array containing single
+CIDR range, or multiple ranges. The CIDRs could be either IPv4 or IPv6. At most
+one CIDR may be specified for each IP family. Example values:
+`[172.100.0.0/16]`, `[172.100.0.0/20, fd00:172:100::/60]`.
+
+- `serviceCIDR`: CIDR range for IPv4 Services in cluster. It is not necessary to
+specify it when there is no overlap with clusterCIDRs.
+
+- `serviceCIDRv6`: CIDR range for IPv6 Services in cluster. It is not necessary to
+ specify it when there is no overlap with clusterCIDRs.
+
+- `nodeCIDRMaskSizeIPv4`: Mask size for IPv4 Node CIDR in IPv4 or dual-stack
+cluster. Valid range is 16 to 30. Default is 24.
+
+- `nodeCIDRMaskSizeIPv6`: Mask size for IPv6 Node CIDR in IPv6 or dual-stack
+cluster. Valid range is 64 to 126. Default is 64.
+
+Below is a sample of needed changes in the Antrea deployment YAML:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ nodeIPAM:
+ enableNodeIPAM: true
+ clusterCIDRs: [172.100.0.0/16]
+```
+
+When running Antrea NodeIPAM in a particular version or scenario, you may need to
+be aware of the following:
+
+* Prior to v1.12, the `NodeIPAM` feature gate must also be enabled for
+  `antrea-controller`.
+* Prior to v1.13, running Antrea NodeIPAM without kube-proxy is not supported.
+  Starting with v1.13, the `kubeAPIServerOverride` option in the `antrea-controller`
+  configuration must be set to the address of the Kubernetes apiserver when
+  kube-proxy is not deployed, as in the sketch below.
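+
+A minimal sketch of that configuration, extending the sample above (the
+apiserver address is a placeholder):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: antrea-config
+  namespace: kube-system
+data:
+  antrea-controller.conf: |
+    # Address of the Kubernetes apiserver; required when kube-proxy is not deployed
+    kubeAPIServerOverride: "https://10.0.0.10:6443"
+    nodeIPAM:
+      enableNodeIPAM: true
+      clusterCIDRs: [172.100.0.0/16]
+```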
+
+## Antrea Flexible IPAM
+
+Antrea supports flexible control over Pod IP addressing since version 1.4. Pod
+IP addresses can be allocated from an `IPPool`. When a Pod's IP is allocated
+from an IPPool, the traffic from the Pod to Pods on another Node, or from the
+Pod to the external network, will be sent to the underlay network through the
+Node's transport network interface, and will be forwarded/routed by the underlay
+network. We also call this forwarding mode `bridging mode`.
+
+The `IPPool` CRD defines a desired set of IP ranges and VLANs. An `IPPool` can
+be referenced, via annotations, by a Namespace, a Pod, or the PodTemplate of a
+StatefulSet/Deployment. Antrea will then manage IP address assignment for the
+corresponding Pods according to the `IPPool` spec. Note that the IP pool
+annotation cannot be updated or deleted without recreating the resource. An
+`IPPool` can be extended, but cannot be shrunk if already assigned to a
+resource. The IP ranges of IPPools must not overlap, otherwise it would lead to
+undefined behavior.
+
+Regular `Subnet per Node` IPAM will continue to be used for resources without the
+IPPool annotation, or when the `AntreaIPAM` feature is disabled.
+
+### Usage
+
+#### Enable AntreaIPAM feature gate and bridging mode
+
+To enable flexible IPAM, you need to enable the `AntreaIPAM` feature gate for
+both `antrea-controller` and `antrea-agent`, and set the `enableBridgingMode`
+configuration parameter of `antrea-agent` to `true`.
+
+When Antrea is installed from YAML, the needed changes in the Antrea
+ConfigMap `antrea-config` YAML are as below:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ AntreaIPAM: true
+ antrea-agent.conf: |
+ featureGates:
+ AntreaIPAM: true
+ enableBridgingMode: true
+ trafficEncapMode: "noEncap"
+ noSNAT: true
+```
+
+Alternatively, you can use the following helm install/upgrade command to configure
+the above options:
+
+```bash
+helm upgrade --install antrea antrea/antrea --namespace kube-system \
+  --set enableBridgingMode=true,featureGates.AntreaIPAM=true,trafficEncapMode=noEncap,noSNAT=true
+```
+
+#### Create IPPool CR
+
+The following example YAML manifest creates an IPPool CR.
+
+```yaml
+apiVersion: "crd.antrea.io/v1alpha2"
+kind: IPPool
+metadata:
+ name: pool1
+spec:
+ ipVersion: 4
+ ipRanges:
+ - start: "10.2.0.12"
+ end: "10.2.0.20"
+ gateway: "10.2.0.1"
+ prefixLength: 24
+ vlan: 2 # Default is 0 (untagged). Valid value is 0~4095.
+```
+
+#### IPPool Annotations on Namespace
+
+The following example YAML manifest creates a Namespace to allocate Pod IPs from the IP pool.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: namespace1
+ annotations:
+ ipam.antrea.io/ippools: 'pool1'
+```
+
+#### IPPool Annotations on Pod (available since Antrea 1.5)
+
+Since Antrea v1.5.0, the Pod IPPool annotation is supported and has a higher
+priority than the Namespace IPPool annotation. This annotation can be added to
+the `PodTemplate` of a controller resource such as a StatefulSet or Deployment.
+
+A Pod IP annotation is also supported, to specify a fixed IP for a single Pod.
+
+Examples of annotations on a Pod or PodTemplate:
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: statefulset1
+spec:
+ replicas: 1 # Do not increase replicas if there is pod-ips annotation in PodTemplate
+ template:
+ metadata:
+ annotations:
+ ipam.antrea.io/ippools: 'sts-ip-pool1' # This annotation will be set automatically on all Pods managed by this resource
+        ipam.antrea.io/pod-ips: '<ip-in-sts-ip-pool1>'
+```
+
+```yaml
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: statefulset1
+spec:
+ replicas: 4
+ template:
+ metadata:
+ annotations:
+ ipam.antrea.io/ippools: 'sts-ip-pool1' # This annotation will be set automatically on all Pods managed by this resource
+ # Do not add pod-ips annotation to PodTemplate if there is more than 1 replica
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+ ipam.antrea.io/ippools: 'pod-ip-pool1'
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+ ipam.antrea.io/ippools: 'pod-ip-pool1'
+    ipam.antrea.io/pod-ips: '<ip-in-pod-ip-pool1>'
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: pod1
+ annotations:
+    ipam.antrea.io/pod-ips: '<pod-ip>'
+```
+
+#### Persistent IP for StatefulSet Pod (available since Antrea 1.5)
+
+A StatefulSet Pod's IP will be kept after Pod restarts, when the IP is allocated from the
+annotated IPPool.
+
+### Data path behaviors
+
+When `AntreaIPAM` is enabled, `antrea-agent` will connect the Node's network interface
+to the OVS bridge at startup, and it will detach the interface from the OVS bridge and
+restore its configuration at exit. The Node may lose network connectivity when
+`antrea-agent` or the OVS daemons are stopped unexpectedly; this can be recovered by
+rebooting the Node. `AntreaIPAM` Pods' traffic will not be routed by the local Node's
+network stack.
+
+Traffic from `AntreaIPAM` Pods without VLAN, regular `Subnet per Node` IPAM Pods, and K8s
+Nodes is recognized as VLAN 0 (untagged).
+
+Traffic to a local Pod in the Pod's VLAN will be sent to the Pod's OVS port directly,
+after the destination MAC is rewritten to the Pod's MAC address. This includes
+`AntreaIPAM` Pods and regular `Subnet per Node` IPAM Pods, even when they are not in the
+same subnet. Traffic to a Pod in a different VLAN will be sent to the underlay network,
+where the underlay router will route the traffic to the destination VLAN.
+
+### Requirements for this Feature
+
+As of now, this feature is supported on Linux Nodes, with IPv4, `system` OVS datapath
+type, `noEncap`, `noSNAT` traffic mode, and `AntreaProxy` feature enabled. Configuration
+with `ProxyAll` feature enabled is not verified.
+
+The IPs in the `IPPools` without VLAN must be in the same underlay subnet as the Node
+IP, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network.
+`IPPools` with VLAN must not overlap with other network subnets, and the underlay network
+router should provide the network connectivity for these VLANs. Only a single IP pool can
+be included in the Namespace annotation. In the future, annotation of up to two pools for
+IPv4 and IPv6 respectively will be supported.
+
+### Flexible IPAM design
+
+When the `AntreaIPAM` feature gate is enabled, `antrea-controller` will watch IPPool CRs and
+StatefulSets from `kube-apiserver`.
+
+#### On IPPool CR create/update event
+
+`antrea-controller` will update IPPool counters, and periodically clean up stale IP addresses.
+
+#### On StatefulSet create event
+
+`antrea-controller` will check the Antrea IPAM annotations on the StatefulSet, and preallocate
+IPs from the specified IPPool for the StatefulSet Pods.
+
+#### On StatefulSet delete event
+
+`antrea-controller` will clean up IP allocations for this StatefulSet.
+
+#### On Pod create
+
+`antrea-agent` will receive a CNI add request, and it will then check the Antrea IPAM annotations
+and allocate an IP for the Pod, which can be a pre-allocated StatefulSet IP, a user-specified
+IP, or the next available IP in the specified IPPool.
+
+#### On Pod delete
+
+`antrea-agent` will receive a CNI del request and release the IP allocation from the IPPool.
+If the IP is a pre-allocated StatefulSet IP, it will remain pre-allocated, so the Pod will
+get the same IP after it is recreated.
+
+## IPAM for Secondary Network
+
+With the AntreaIPAM feature, Antrea can allocate IPs for Pod secondary networks. At the
+moment, AntreaIPAM supports secondary networks managed by [Multus](https://github.com/k8snetworkplumbingwg/multus-cni);
+we will add support for [secondary networks managed by Antrea](feature-gates.md#secondarynetwork)
+in the future.
+
+### Prerequisites
+
+The IPAM capability for secondary network was added in Antrea version 1.7. It
+requires the `AntreaIPAM` feature gate to be enabled on both `antrea-controller`
+and `antrea-agent`, as `AntreaIPAM` is still an alpha feature at this moment and
+is not enabled by default.
+
+### CNI IPAM configuration
+
+To configure Antrea IPAM, `antrea` should be specified as the IPAM plugin in
+the CNI IPAM configuration, and at least one Antrea IPPool should be specified
+in the `ippools` field. IPs will be allocated from the specified IPPool(s) for
+the secondary network.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "ipv4-net-1",
+ "type": "macvlan",
+ "master": "eth0",
+ "mode": "bridge",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ]
+ }
+}
+```
+
+Multiple IPPools can be specified to allocate multiple IPs from each IPPool for
+the secondary network. For example, you can specify one IPPool to allocate an
+IPv4 address and another IPPool to allocate an IPv6 address in the dual-stack
+case.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "dual-stack-net-1",
+ "type": "macvlan",
+ "master": "eth0",
+ "mode": "bridge",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1", "ipv6-pool-1" ]
+ }
+}
+```
+
+Additionally, Antrea IPAM supports the same configuration for static IP
+addresses, static routes, and DNS settings as the
+[static IPAM plugin](https://www.cni.dev/plugins/current/ipam/static). The
+following example requests an IP from an IPPool and also specifies two
+additional static IP addresses. It also includes static routes and DNS settings.
+
+```json
+{
+ "cniVersion": "0.3.0",
+ "name": "pool-and-static-net-1",
+ "type": "bridge",
+ "bridge": "br0",
+ "ipam": {
+ "type": "antrea",
+ "ippools": [ "ipv4-pool-1" ],
+ "addresses": [
+ {
+ "address": "10.10.0.1/24",
+ "gateway": "10.10.0.254"
+ },
+ {
+ "address": "3ffe:ffff:0:01ff::1/64",
+ "gateway": "3ffe:ffff:0::1"
+ }
+ ],
+ "routes": [
+ { "dst": "0.0.0.0/0" },
+ { "dst": "192.168.0.0/16", "gw": "10.10.5.1" },
+ { "dst": "3ffe:ffff:0:01ff::1/64" }
+ ],
+ "dns": {
+ "nameservers" : ["8.8.8.8"],
+ "domain": "example.com",
+ "search": [ "example.com" ]
+ }
+ }
+}
+```
+
+The CNI IPAM configuration can include only static addresses without IPPools, if
+only static IP addresses are needed.
+
+#### Configuration with `NetworkAttachmentDefinition` CRD
+
+CNI and IPAM configuration of a secondary network is typically defined with the
+`NetworkAttachmentDefinition` CRD. For example:
+
+```yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: ipv4-net-1
+spec:
+  config: '{
+    "cniVersion": "0.3.0",
+    "type": "macvlan",
+    "master": "eth0",
+    "mode": "bridge",
+    "ipam": {
+      "type": "antrea",
+      "ippools": [ "ipv4-pool-1" ]
+    }
+  }'
+```
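+
+A Pod can then request an attachment to this secondary network through the
+standard Multus network annotation (a minimal sketch; the Pod name and image
+are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sample-pod
+  annotations:
+    k8s.v1.cni.cncf.io/networks: ipv4-net-1
+spec:
+  containers:
+  - name: app
+    image: busybox
+    command: ["sleep", "3600"]
+```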
+
+#### `IPPool` CRD
+
+Antrea IP pools are defined with the `IPPool` CRD. The following two examples
+define an IPv4 and an IPv6 IP pool respectively.
+
+```yaml
+apiVersion: "crd.antrea.io/v1alpha2"
+kind: IPPool
+metadata:
+ name: ipv4-pool-1
+spec:
+ ipVersion: 4
+ ipRanges:
+ - cidr: "10.10.1.0/26"
+ gateway: "10.10.1.1"
+ prefixLength: 24
+```
+
+```yaml
+apiVersion: "crd.antrea.io/v1alpha2"
+kind: IPPool
+metadata:
+ name: ipv6-pool-1
+spec:
+ ipVersion: 6
+ ipRanges:
+ - start: "3ffe:ffff:1:01ff::0100"
+ end: "3ffe:ffff:1:01ff::0200"
+ gateway: "3ffe:ffff:1:01ff::1"
+ prefixLength: 64
+```
+
+VLAN ID in the IP range subnet definition of `IPPool` CRD is not supported for
+secondary network IPAM.
+
+### Secondary Network creation with Multus
+
+To leverage Antrea for secondary network IPAM, Antrea must be used as the CNI
+for the Pods' primary network, while the secondary networks are implemented by
+other CNIs which are managed by Multus. The [Antrea + Multus guide](cookbooks/multus)
+talks about how to use Antrea with Multus, including the option of using Antrea
+IPAM for secondary networks.
diff --git a/content/docs/v1.15.0/docs/antrea-l7-network-policy.md b/content/docs/v1.15.0/docs/antrea-l7-network-policy.md
new file mode 100644
index 00000000..3b1aaa2c
--- /dev/null
+++ b/content/docs/v1.15.0/docs/antrea-l7-network-policy.md
@@ -0,0 +1,367 @@
+# Antrea Layer 7 NetworkPolicy
+
+## Table of Contents
+
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [HTTP](#http)
+ - [More examples](#more-examples)
+ - [TLS](#tls)
+ - [More examples](#more-examples-1)
+ - [Logs](#logs)
+- [Limitations](#limitations)
+
+
+## Introduction
+
+NetworkPolicy was initially used to restrict network access at layer 3 (Network) and 4 (Transport) in the OSI model,
+based on IP address, transport protocol, and port. Securing applications at IP and port level provides limited security
+capabilities, as the service an application provides is either entirely exposed to a client or not accessible by that
+client at all. Starting with v1.10, Antrea introduces support for layer 7 NetworkPolicy, an application-aware policy
+which provides fine-grained control over the network traffic beyond IP, transport protocol, and port. It enables users
+to protect their applications by specifying how they are allowed to communicate with others, taking into account
+application context. For example, you can enforce policies to:
+
+- Grant access to privileged URLs to specific clients while making other URLs
+publicly accessible.
+- Prevent applications from accessing unauthorized domains.
+- Block network traffic using an unauthorized application protocol, regardless of
+the port used.
+
+This guide demonstrates how to configure layer 7 NetworkPolicy.
+
+## Prerequisites
+
+Layer 7 NetworkPolicy was introduced in v1.10 as an alpha feature and is disabled by default. A feature gate,
+`L7NetworkPolicy`, must be enabled in antrea-controller.conf and antrea-agent.conf in the `antrea-config` ConfigMap.
+Additionally, due to a constraint of the application detection engine, TX checksum offloading must be disabled via the
+`disableTXChecksumOffload` option in antrea-agent.conf for the feature to work. An example configuration is shown below:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ disableTXChecksumOffload: true
+ featureGates:
+ L7NetworkPolicy: true
+ antrea-controller.conf: |
+ featureGates:
+ L7NetworkPolicy: true
+```
+
+Alternatively, you can use the following helm installation command to configure the above options:
+
+```bash
+helm install antrea antrea/antrea --namespace kube-system --set featureGates.L7NetworkPolicy=true,disableTXChecksumOffload=true
+```
+
+## Usage
+
+There isn't a separate resource type for layer 7 NetworkPolicy; it is a kind of Antrea-native policy with the
+`l7Protocols` field specified in its rules. Like layer 3 and layer 4 policies, the `l7Protocols` field can be specified
+for ingress and egress rules in Antrea ClusterNetworkPolicy and Antrea NetworkPolicy. It can be used together with the
+`from` or `to` field to select the network peer, and with `ports` to select the transport protocol and/or port for
+which the layer 7 rule applies. The `action` of a layer 7 rule can only be `Allow`.
+
+**Note**: Any traffic matching the layer 3/4 criteria (specified by `from`, `to`, and `ports`) of a layer 7 rule will
+be forwarded to an application-aware engine for protocol detection and rule enforcement. The traffic will be allowed if
+the layer 7 criteria are also matched, and dropped otherwise. Therefore, any rules after a layer 7 rule will not be
+enforced for traffic that matches the layer 7 rule's layer 3/4 criteria.
+
+As of now, the supported layer 7 protocols are HTTP and TLS. Support for more protocols may be added in the future and
+we welcome feature requests for protocols that you are interested in.
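+
+For instance, `ports` can be combined with `l7Protocols` so that the layer 7 match only applies to a specific transport
+port. Below is a minimal sketch (the workload labels and the port number are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: allow-http-get-on-8080-only
+spec:
+  priority: 5
+  tier: application
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web            # illustrative label
+  ingress:
+    - name: allow-http-8080
+      action: Allow
+      ports:
+        - protocol: TCP       # layer 4 match: TCP port 8080 only
+          port: 8080
+      l7Protocols:
+        - http:
+            method: "GET"     # layer 7 match: HTTP GET requests only
+```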
+
+### HTTP
+
+An example layer 7 NetworkPolicy for the HTTP protocol is shown below:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-http-request-to-api-v2
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-http # Allow inbound HTTP GET requests to "/api/v2" from Pods with label "app=client".
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - http:
+ path: "/api/v2/*"
+ host: "foo.bar.com"
+ method: "GET"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without label "app=client" or from external clients).
+ action: Drop
+```
+
+**path**: The `path` field represents the URI path to match. Both exact matches and wildcards are supported, e.g.
+`/api/v2/*`, `*/v2/*`, `/index.html`. If not set, the rule matches all URI paths.
+
+**host**: The `host` field represents the hostname present in the URI or the HTTP Host header to match. It does not
+contain the port associated with the host. Both exact matches and wildcards are supported, e.g. `*.foo.com`, `*.foo.*`,
+`foo.bar.com`. If not set, the rule matches all hostnames.
+
+**method**: The `method` field represents the HTTP method to match. It could be GET, POST, PUT, HEAD, DELETE, TRACE,
+OPTIONS, CONNECT and PATCH. If not set, the rule matches all methods.
+
+#### More examples
+
+The following NetworkPolicy grants access to privileged URLs to specific clients while making other URLs publicly
+accessible:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-privileged-url-to-admin-role
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+  - name: for-admin   # Allow inbound HTTP requests to "/admin" and "/public" from Pods with label "role=admin".
+ action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: admin
+ l7Protocols:
+ - http:
+ path: "/admin/*"
+ - http:
+ path: "/public/*"
+  - name: for-public  # Allow inbound HTTP requests to "/public" from any client.
+ action: Allow # All other inbound traffic will be automatically dropped.
+ l7Protocols:
+ - http:
+ path: "/public/*"
+```
+
+The following NetworkPolicy prevents applications from accessing unauthorized domains:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-web-access-to-internal-domain
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-domain-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+ - name: allow-http-only # Allow outbound HTTP requests towards "*.bar.com".
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other
+ - http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will
+ host: "*.bar.com" # not be considered.
+```
+
+The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-http-only
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: http-only # Allow inbound HTTP requests only.
+ action: Allow # As the rule's "from" and "ports" are empty, which means it selects traffic from any network
+ l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-HTTP requests will be
+ - http: {} # automatically dropped, and subsequent rules will not be considered.
+```
+
+### TLS
+
+An example layer 7 NetworkPolicy for the TLS protocol is shown below:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-tls-handshake
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-tls # Allow inbound TLS/SSL handshake packets to server name "foo.bar.com" from Pods with label "app=client".
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - tls:
+ sni: "foo.bar.com"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without label "app=client" or from external clients).
+ action: Drop
+```
+
+**sni**: The `sni` field matches the TLS/SSL Server Name Indication (SNI) field in the TLS/SSL handshake process. Both
+exact matches and wildcards are supported, e.g. `*.foo.com`, `*.foo.*`, `foo.bar.com`. If not set, the rule matches all names.
+
+#### More examples
+
+The following NetworkPolicy prevents applications from accessing unauthorized SSL/TLS server names:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-tls-handshake-to-internal
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-tls-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+ - name: allow-tls-only # Allow outbound SSL/TLS handshake packets towards "*.bar.com".
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port of any transport protocol, all outbound SSL/TLS handshake packets towards
+ - tls: # other server names and non-SSL/non-TLS handshake packets will be automatically dropped,
+ sni: "*.bar.com" # and subsequent rules will not be considered.
+```
+
+The following NetworkPolicy blocks network traffic using an unauthorized application protocol regardless of the port used.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: allow-tls-only
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: tls-only # Allow inbound SSL/TLS handshake packets only.
+ action: Allow # As the rule's "from" and "ports" are empty, which means it selects traffic from any network
+ l7Protocols: # peer to any port of the Pods this policy applies to, all inbound non-SSL/non-TLS handshake
+ - tls: {} # packets will be automatically dropped, and subsequent rules will not be considered.
+```
+
+### Logs
+
+Layer 7 traffic that matches the NetworkPolicy will be logged in an
+event-triggered log file (`/var/log/antrea/networkpolicy/l7engine/eve-YEAR-MONTH-DAY.json`).
+The event type for this log is `alert`. If `enableLogging` is set for the rule,
+packets that match the rule will also be logged, with event type `packet`, in
+addition to the `alert` event. Below is an example of the two event types.
+
+Deny ingress from the client Pod (10.10.1.5) to "/admin" on the web Pod (10.10.1.4):
+
+```json
+{
+ "timestamp": "2023-03-09T20:00:28.210821+0000",
+ "flow_id": 627175734391745,
+ "in_iface": "antrea-l7-tap0",
+ "event_type": "alert",
+ "vlan": [
+ 1
+ ],
+ "src_ip": "10.10.1.5",
+ "src_port": 43352,
+ "dest_ip": "10.10.1.4",
+ "dest_port": 80,
+ "proto": "TCP",
+ "alert": {
+ "action": "blocked",
+ "gid": 1,
+ "signature_id": 1,
+ "rev": 0,
+ "signature": "Reject by AntreaClusterNetworkPolicy:test-l7-ingress",
+ "category": "",
+ "severity": 3,
+ "tenant_id": 1
+ },
+ "http": {
+ "hostname": "10.10.1.4",
+ "url": "/admin",
+ "http_user_agent": "curl/7.74.0",
+ "http_method": "GET",
+ "protocol": "HTTP/1.1",
+ "length": 0
+ },
+ "app_proto": "http",
+ "flow": {
+ "pkts_toserver": 3,
+ "pkts_toclient": 1,
+ "bytes_toserver": 284,
+ "bytes_toclient": 74,
+ "start": "2023-03-09T20:00:28.209857+0000"
+ }
+}
+```
+
+```json
+{
+ "timestamp": "2023-03-09T20:00:28.225016+0000",
+ "flow_id": 627175734391745,
+ "in_iface": "antrea-l7-tap0",
+ "event_type": "packet",
+ "vlan": [
+ 1
+ ],
+ "src_ip": "10.10.1.4",
+ "src_port": 80,
+ "dest_ip": "10.10.1.5",
+ "dest_port": 43352,
+ "proto": "TCP",
+ "packet": "/lhtPRglzmQvxnJoCABFAAAoUGYAAEAGFE4KCgEECgoBBQBQqVhIGzbi/odenlAUAfsR7QAA",
+ "packet_info": {
+ "linktype": 1
+ }
+}
+```
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux.
diff --git a/content/docs/v1.15.0/docs/antrea-network-policy.md b/content/docs/v1.15.0/docs/antrea-network-policy.md
new file mode 100644
index 00000000..5fdd9a2d
--- /dev/null
+++ b/content/docs/v1.15.0/docs/antrea-network-policy.md
@@ -0,0 +1,1841 @@
+# Antrea Network Policy CRDs
+
+## Table of Contents
+
+
+- [Summary](#summary)
+- [Tier](#tier)
+ - [Tier CRDs](#tier-crds)
+ - [Static tiers](#static-tiers)
+ - [kubectl commands for Tier](#kubectl-commands-for-tier)
+- [Antrea ClusterNetworkPolicy](#antrea-clusternetworkpolicy)
+ - [The Antrea ClusterNetworkPolicy resource](#the-antrea-clusternetworkpolicy-resource)
+ - [ACNP with stand-alone selectors](#acnp-with-stand-alone-selectors)
+ - [ACNP with ClusterGroup reference](#acnp-with-clustergroup-reference)
+ - [ACNP for complete Pod isolation in selected Namespaces](#acnp-for-complete-pod-isolation-in-selected-namespaces)
+ - [ACNP for strict Namespace isolation](#acnp-for-strict-namespace-isolation)
+ - [ACNP for default zero-trust cluster security posture](#acnp-for-default-zero-trust-cluster-security-posture)
+ - [ACNP for toServices rule](#acnp-for-toservices-rule)
+ - [ACNP for ICMP traffic](#acnp-for-icmp-traffic)
+ - [ACNP for IGMP traffic](#acnp-for-igmp-traffic)
+ - [ACNP for multicast egress traffic](#acnp-for-multicast-egress-traffic)
+ - [ACNP for HTTP traffic](#acnp-for-http-traffic)
+ - [ACNP for Kubernetes Node traffic](#acnp-for-kubernetes-node-traffic)
+ - [ACNP with log settings](#acnp-with-log-settings)
+ - [Behavior of to and from selectors](#behavior-of-to-and-from-selectors)
+ - [Key differences from K8s NetworkPolicy](#key-differences-from-k8s-networkpolicy)
+ - [kubectl commands for Antrea ClusterNetworkPolicy](#kubectl-commands-for-antrea-clusternetworkpolicy)
+- [Antrea NetworkPolicy](#antrea-networkpolicy)
+ - [The Antrea NetworkPolicy resource](#the-antrea-networkpolicy-resource)
+ - [Key differences from Antrea ClusterNetworkPolicy](#key-differences-from-antrea-clusternetworkpolicy)
+ - [Antrea NetworkPolicy with Group reference](#antrea-networkpolicy-with-group-reference)
+ - [kubectl commands for Antrea NetworkPolicy](#kubectl-commands-for-antrea-networkpolicy)
+- [Antrea-native Policy ordering based on priorities](#antrea-native-policy-ordering-based-on-priorities)
+ - [Ordering based on Tier priority](#ordering-based-on-tier-priority)
+ - [Ordering based on policy priority](#ordering-based-on-policy-priority)
+ - [Rule enforcement based on priorities](#rule-enforcement-based-on-priorities)
+- [Advanced peer selection mechanisms of Antrea-native Policies](#advanced-peer-selection-mechanisms-of-antrea-native-policies)
+ - [Selecting Namespace by Name](#selecting-namespace-by-name)
+ - [K8s clusters with version 1.21 and above](#k8s-clusters-with-version-121-and-above)
+ - [K8s clusters with version 1.20 and below](#k8s-clusters-with-version-120-and-below)
+ - [Selecting Pods in the same Namespace with Self](#selecting-pods-in-the-same-namespace-with-self)
+ - [FQDN based filtering](#fqdn-based-filtering)
+ - [Node Selector](#node-selector)
+ - [toServices egress rules](#toservices-egress-rules)
+ - [ServiceAccount based selection](#serviceaccount-based-selection)
+ - [Apply to NodePort Service](#apply-to-nodeport-service)
+- [ClusterGroup](#clustergroup)
+ - [ClusterGroup CRD](#clustergroup-crd)
+ - [kubectl commands for ClusterGroup](#kubectl-commands-for-clustergroup)
+- [Group](#group)
+ - [Group CRD](#group-crd)
+ - [Restrictions and Key differences from ClusterGroup](#restrictions-and-key-differences-from-clustergroup)
+ - [kubectl commands for Group](#kubectl-commands-for-group)
+- [RBAC](#rbac)
+- [Notes and constraints](#notes-and-constraints)
+
+
+## Summary
+
+Antrea supports standard K8s NetworkPolicies to secure ingress/egress traffic for
+Pods. These NetworkPolicies are written from an application developer's perspective,
+and hence lack the finer-grained control over security policies that a cluster
+administrator would require. This document describes a few new CRDs supported by
+Antrea that provide the administrator with more control over security within the
+cluster, and which are meant to co-exist with and complement the K8s
+NetworkPolicy.
+
+Starting with Antrea v1.0, Antrea-native policies are enabled by default, which
+means that no additional configuration is required in order to use the
+Antrea-native policy CRDs.
+
+## Tier
+
+Antrea supports grouping Antrea-native policy CRDs together in a tiered fashion
+to provide a hierarchy of security policies. This is achieved by setting the
+`tier` field when defining an Antrea-native policy CRD (e.g. an Antrea
+ClusterNetworkPolicy object) to the appropriate Tier name. Each Tier has a
+priority associated with it, which determines its relative order among other Tiers.
+
+**Note**: K8s NetworkPolicies will be enforced once all policies in all Tiers (except
+for the baseline Tier) have been enforced. For more information, refer to the
+[Static tiers](#static-tiers) section below.
+
+### Tier CRDs
+
+Creating Tiers as CRDs allows users the flexibility to create and delete
+Tiers as per their preference, i.e. they are not bound to the 5 static
+tiering options that were available initially.
+
+An example Tier might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Tier
+metadata:
+ name: mytier
+spec:
+ priority: 10
+ description: "my custom tier"
+```
+
+Tiers have the following characteristics:
+
+- Policies can associate themselves with an existing Tier by setting the `tier`
+  field in an Antrea-native policy CRD spec to the Tier's name (see the example
+  after this list).
+- A Tier must exist before an Antrea-native policy can reference it.
+- Policies associated with higher ordered (lower `priority` value) Tiers are
+  enforced first.
+- No two Tiers can be created with the same priority.
+- Updating a Tier's `priority` field is unsupported.
+- Deleting a Tier that is still referenced by policies is not allowed.
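+
+For example, an Antrea NetworkPolicy can reference the custom Tier defined
+above by name. Below is a minimal sketch (the selectors are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+  name: anp-in-custom-tier
+  namespace: default
+spec:
+  priority: 5
+  tier: mytier              # must be the name of an existing Tier
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web          # illustrative label
+  ingress:
+    - action: Allow
+      from:
+        - podSelector:
+            matchLabels:
+              app: client   # illustrative label
+      name: AllowFromClient
+```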
+
+### Static tiers
+
+On startup, antrea-controller will create 5 static, read-only Tier CRD resources
+corresponding to the static tiers for default consumption, as well as a "baseline"
+Tier CRD object, which is enforced after developer-created K8s NetworkPolicies.
+The details for these Tiers are shown below:
+
+```text
+ Emergency -> Tier name "emergency" with priority "50"
+ SecurityOps -> Tier name "securityops" with priority "100"
+ NetworkOps -> Tier name "networkops" with priority "150"
+ Platform -> Tier name "platform" with priority "200"
+ Application -> Tier name "application" with priority "250"
+ Baseline -> Tier name "baseline" with priority "253"
+```
+
+Any Antrea-native policy CRD referencing a static tier in its spec will internally
+reference the corresponding Tier resource, thus maintaining the order of enforcement.
+
+The static Tier CRD Resources are created as follows in the relative order of
+precedence compared to K8s NetworkPolicies:
+
+```text
+ Emergency > SecurityOps > NetworkOps > Platform > Application > K8s NetworkPolicy > Baseline
+```
+
+Thus, all Antrea-native Policy resources associated with the "emergency" Tier will be
+enforced before any Antrea-native Policy resource associated with any other
+Tiers, until a match occurs, in which case the policy rule's `action` will be
+applied. **Any Antrea-native Policy resource without a `tier` name set in its spec
+will be associated with the "application" Tier.** Policies associated with the first
+5 static, read-only Tiers, as well as with all the custom Tiers created with a priority
+value lower than 250 (priority values greater than or equal to 250 are not allowed
+for custom Tiers), will be enforced before K8s NetworkPolicies.
+
+Policies created in the "baseline" Tier, on the other hand, will have lower precedence
+than developer-created K8s NetworkPolicies, which comes in handy when administrators
+want to enforce baseline policies like "default-deny inter-namespace traffic" for some
+specific Namespace, while still allowing individual developers to lift the restriction
+if needed using K8s NetworkPolicies.
+
+Note that baseline policies cannot counteract the isolated Pod behavior provided by
+K8s NetworkPolicies. To read more about this Pod isolation behavior, refer to [this
+document](https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-two-sorts-of-pod-isolation).
+If a Pod becomes isolated because a K8s NetworkPolicy is applied to it, and the policy
+does not explicitly allow communications with another Pod, this behavior cannot be changed
+by creating an Antrea-native policy with an "allow" action in the "baseline" Tier.
+For this reason, it generally does not make sense to create policies in the "baseline"
+Tier with the "allow" action.
+
+### *kubectl* commands for Tier
+
+The following `kubectl` commands can be used to retrieve Tier resources:
+
+```bash
+ # Use long name
+ kubectl get tiers
+
+ # Use long name with API Group
+ kubectl get tiers.crd.antrea.io
+
+ # Use short name
+ kubectl get tr
+
+ # Use short name with API Group
+ kubectl get tr.crd.antrea.io
+
+ # Sort output by Tier priority
+ kubectl get tiers --sort-by=.spec.priority
+```
+
+All the above commands produce output similar to what is shown below:
+
+```text
+ NAME PRIORITY AGE
+ emergency 50 27h
+ securityops 100 27h
+ networkops 150 27h
+ platform 200 27h
+ application 250 27h
+```
+
+## Antrea ClusterNetworkPolicy
+
+Antrea ClusterNetworkPolicy (ACNP), one of the two Antrea-native policy CRDs
+introduced, is a specification of how workloads within a cluster communicate
+with each other and with external endpoints. The ClusterNetworkPolicy is
+intended to help cluster admins configure the security policy for the
+cluster, unlike K8s NetworkPolicy, which is aimed at developers securing
+their apps and affects Pods within the Namespace in which the K8s NetworkPolicy
+is created. Rules belonging to ClusterNetworkPolicies are enforced before any
+rule belonging to a K8s NetworkPolicy.
+
+### The Antrea ClusterNetworkPolicy resource
+
+Example ClusterNetworkPolicies might look like these:
+
+#### ACNP with stand-alone selectors
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-stand-alone-selectors
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: db
+ - namespaceSelector:
+ matchLabels:
+ env: prod
+ ingress:
+ - action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: frontend
+ - podSelector:
+ matchLabels:
+ role: nondb
+ namespaceSelector:
+ matchLabels:
+ role: db
+ ports:
+ - protocol: TCP
+ port: 8080
+ endPort: 9000
+ - protocol: TCP
+ port: 6379
+ name: AllowFromFrontend
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 10.0.10.0/24
+ ports:
+ - protocol: TCP
+ port: 5978
+ name: DropToThirdParty
+```
+
+#### ACNP with ClusterGroup reference
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-cluster-groups
+spec:
+ priority: 8
+ tier: securityops
+ appliedTo:
+ - group: "test-cg-with-db-selector" # defined separately with a ClusterGroup resource
+ ingress:
+ - action: Allow
+ from:
+ - group: "test-cg-with-frontend-selector" # defined separately with a ClusterGroup resource
+ ports:
+ - protocol: TCP
+ port: 8080
+ endPort: 9000
+ - protocol: TCP
+ port: 6379
+ name: AllowFromFrontend
+ egress:
+ - action: Drop
+ to:
+ - group: "test-cg-with-ip-block" # defined separately with a ClusterGroup resource
+ ports:
+ - protocol: TCP
+ port: 5978
+ name: DropToThirdParty
+```
+
+#### ACNP for complete Pod isolation in selected Namespaces
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: isolate-all-pods-in-namespace
+spec:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - namespaceSelector:
+ matchLabels:
+ app: no-network-access-required
+ ingress:
+  - action: Drop    # For all Pods in those Namespaces, drop all ingress traffic from anywhere
+ name: drop-all-ingress
+ egress:
+  - action: Drop    # For all Pods in those Namespaces, drop all egress traffic towards anywhere
+ name: drop-all-egress
+```
+
+#### ACNP for strict Namespace isolation
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: strict-ns-isolation
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - namespaceSelector: # Selects all non-system Namespaces in the cluster
+ matchExpressions:
+ - {key: kubernetes.io/metadata.name, operator: NotIn, values: [kube-system]}
+ ingress:
+ - action: Pass
+ from:
+ - namespaces:
+ match: Self # Skip ACNP evaluation for traffic from Pods in the same Namespace
+ name: PassFromSameNS
+ - action: Drop
+ from:
+      - namespaceSelector: {}   # Drop traffic from Pods in all other Namespaces
+ name: DropFromAllOtherNS
+ egress:
+ - action: Pass
+ to:
+ - namespaces:
+ match: Self # Skip ACNP evaluation for traffic to Pods in the same Namespace
+ name: PassToSameNS
+ - action: Drop
+ to:
+      - namespaceSelector: {}   # Drop traffic to Pods in all other Namespaces
+ name: DropToAllOtherNS
+```
+
+#### ACNP for default zero-trust cluster security posture
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: default-cluster-deny
+spec:
+ priority: 1
+ tier: baseline
+ appliedTo:
+ - namespaceSelector: {} # Selects all Namespaces in the cluster
+ ingress:
+ - action: Drop
+ egress:
+ - action: Drop
+```
+
+#### ACNP for toServices rule
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-drop-to-services
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: client
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ egress:
+ - action: Drop
+ toServices:
+ - name: svcName
+ namespace: svcNamespace
+ name: DropToServices
+```
+
+#### ACNP for ICMP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-reject-ping-request
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: server
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ egress:
+ - action: Reject
+ protocols:
+ - icmp:
+ icmpType: 8
+ icmpCode: 0
+ name: DropPingRequest
+```
+
+#### ACNP for IGMP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-igmp-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: mcjoin6
+ ingress:
+ - action: Drop
+ protocols:
+ - igmp:
+ igmpType: 0x11
+ groupAddress: 224.0.0.1
+ name: dropIGMPQuery
+ egress:
+ - action: Drop
+ protocols:
+ - igmp:
+ igmpType: 0x16
+ groupAddress: 225.1.2.3
+ name: dropIGMPReport
+```
+
+#### ACNP for multicast egress traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-multicast-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: mcjoin6
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 225.1.2.3/32
+ name: dropMcastUDPTraffic
+```
+
+#### ACNP for HTTP traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: ingress-allow-http-request-to-api-v2
+spec:
+ priority: 5
+ tier: application
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: web
+ ingress:
+ - name: allow-http # Allow inbound HTTP GET requests to "/api/v2" from Pods with app=client label.
+ action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered.
+ from:
+ - podSelector:
+ matchLabels:
+ app: client
+ l7Protocols:
+ - http:
+ path: "/api/v2/*"
+ host: "foo.bar.com"
+ method: "GET"
+ - name: drop-other # Drop all other inbound traffic (i.e., from Pods without the app=client label or from external clients).
+ action: Drop
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: allow-web-access-to-internal-domain
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ egress-restriction: internal-domain-only
+ egress:
+ - name: allow-dns # Allow outbound DNS requests.
+ action: Allow
+ ports:
+ - protocol: TCP
+ port: 53
+ - protocol: UDP
+ port: 53
+  - name: allow-http-only  # Allow outbound HTTP requests towards "*.bar.com".
+ action: Allow # As the rule's "to" and "ports" are empty, which means it selects traffic to any network
+ l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other
+ - http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will
+ host: "*.bar.com" # not be considered.
+```
+
+Please refer to [Antrea Layer 7 NetworkPolicy](antrea-l7-network-policy.md) for more information.
+
+#### ACNP for Kubernetes Node traffic
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-node-egress-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - nodeSelector:
+ matchLabels:
+ kubernetes.io/os: linux
+ egress:
+ - action: Drop
+ to:
+ - ipBlock:
+ cidr: 192.168.1.0/24
+ ports:
+ - protocol: TCP
+ port: 80
+ name: dropHTTPTrafficToCIDR
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-node-ingress-traffic-drop
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - nodeSelector:
+ matchLabels:
+ kubernetes.io/os: linux
+ ingress:
+ - action: Drop
+ from:
+ - ipBlock:
+ cidr: 192.168.1.0/24
+ ports:
+ - protocol: TCP
+ port: 22
+ name: dropSSHTrafficFromCIDR
+```
+
+Please refer to [Antrea Node NetworkPolicy](antrea-node-network-policy.md) for more information.
+
+#### ACNP with log settings
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-with-log-setting
+spec:
+ priority: 5
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: db
+ - namespaceSelector:
+ matchLabels:
+ env: prod
+ ingress:
+ - action: Allow
+ from:
+ - podSelector:
+ matchLabels:
+ role: frontend
+ namespaceSelector:
+ matchLabels:
+ role: db
+ name: AllowFromFrontend
+ enableLogging: true
+ logLabel: "frontend-allowed"
+```
+
+**spec**: The ClusterNetworkPolicy `spec` has all the information needed to
+define a cluster-wide security policy.
+
+**appliedTo**: The `appliedTo` field at the policy level specifies the
+grouping criteria of Pods to which the policy applies. Pods can be
+selected cluster-wide using `podSelector`. If set with a `namespaceSelector`,
+all Pods from Namespaces selected by the namespaceSelector will be selected.
+Specific Pods from specific Namespaces can be selected by providing both a
+`podSelector` and a `namespaceSelector` in the same `appliedTo` entry.
+The `appliedTo` field can also reference a ClusterGroup resource by setting
+the ClusterGroup's name in the `group` field in place of the stand-alone selectors.
+It can also reference a Service by setting the Service's name and Namespace in
+the `service` field in place of the stand-alone selectors. Only a NodePort
+Service can be referenced by this field, as sketched below. More details can be
+found in the [ApplyToNodePortService](#apply-to-nodeport-service) section.
+IPBlock cannot be set in the `appliedTo` field.
+An IPBlock ClusterGroup referenced in an `appliedTo` field will be ignored,
+and the policy will have no effect.
+The policy-level `appliedTo` field must not be set if per-rule `appliedTo` is
+used.
+
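+A policy that applies directly to a NodePort Service might look like the
+following minimal sketch (the Service name and the CIDR are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-applied-to-nodeport-service
+spec:
+  priority: 5
+  tier: securityops
+  appliedTo:
+    - service:
+        name: my-nodeport-svc   # illustrative NodePort Service
+        namespace: default
+  ingress:
+    - action: Drop
+      from:
+        - ipBlock:
+            cidr: 1.1.1.0/24    # illustrative external CIDR
+      name: DropFromExternalCIDR
+```
+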
+In the [first example](#acnp-with-stand-alone-selectors), the policy applies to Pods which either match the labels
+"role=db" in all Namespaces, or are from Namespaces which match the
+label "env=prod".
+The [second example](#acnp-with-clustergroup-reference) policy applies to all network endpoints selected by the
+"test-cg-with-db-selector" ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy applies to all Pods in the
+Namespaces that match the label "app=no-network-access-required".
+`appliedTo` also supports ServiceAccount based selection, which allows users to select Pods by ServiceAccount.
+More details can be found in the [ServiceAccountSelector](#serviceaccount-based-selection) section.
+
+**priority**: The `priority` field determines the relative priority of the
+policy among all ClusterNetworkPolicies in the given cluster. This field is
+mandatory. A lower priority value indicates higher precedence. Priority values
+can range from 1.0 to 10000.0.
+**Note**: Policies with the same priority value will be enforced in a
+nondeterministic order. Users should therefore take care to use distinct
+priorities to ensure the behavior they expect.
+
+**tier**: The `tier` field associates an ACNP to an existing Tier. The `tier`
+field can be set with the name of the Tier CRD to which this policy must be
+associated with. If not set, the ACNP is associated with the lowest priority
+default tier i.e. the "application" Tier.
+
+**action**: Each ingress or egress rule of a ClusterNetworkPolicy must have the
+`action` field set. As of now, the available actions are ["Allow", "Drop", "Reject", "Pass"].
+When the rule action is "Allow" or "Drop", Antrea will allow or drop traffic which
+matches the `from`/`to`, `ports` and `protocols` sections of that rule, provided that
+the traffic does not match a higher precedence rule in the cluster (ACNP rules created
+in higher ordered Tiers, or policy instances in the same Tier with a lower priority
+number). If a "Reject" rule is matched, the client initiating the traffic will receive
+an `ICMP host administratively prohibited` code for ICMP, UDP and SCTP requests, or an
+explicit reject response for TCP requests, instead of a timeout. A "Pass" rule, on the
+other hand, skips the packet for further ACNP rule evaluations (all ACNP rules that
+have lower priority than the current "Pass" rule will be skipped, except for the
+Baseline Tier rules), and delegates the decision to developer-created namespaced
+NetworkPolicies. If no NetworkPolicy matches this traffic, the Baseline Tier rules
+will still be matched against.
+Note that the "Pass" action does not make sense when configured in Baseline Tier
+ACNP rules, and such configurations will be rejected by the admission controller.
+**Note**: "Pass" and "Reject" actions are not supported for rules applied to multicast
+traffic.
+
+**ingress**: Each ClusterNetworkPolicy may consist of an ordered set of zero or
+more ingress rules. Under `ports`, the optional field `endPort` can only be set when a
+numerical `port` is set, to represent a range of ports from `port` to `endPort` inclusive.
+`protocols` defines additional protocols that are not supported by `ports`.
+Currently, only the ICMP and IGMP protocols can be set under `protocols`. For the
+`ICMP` protocol, `icmpType` and `icmpCode` can be used to specify the ICMP traffic that
+the rule matches. For the `IGMP` protocol, `igmpType` and `groupAddress` can be
+used to specify the IGMP traffic that the rule matches. Currently, only the IGMP
+query is supported in ingress rules; other IGMP types and multicast data traffic
+are not supported for ingress rules. The valid `igmpType` is:
+
+message type | value
+-- | --
+Membership Query | 0x11
+
+The group address in IGMP query packets can only be 224.0.0.1. Group-specific
+IGMP queries, which encode the target group in the IGMP message, are not supported
+yet because OVS cannot recognize the address. The `IGMP` protocol cannot be used
+together with `ICMP` or with properties like `from`, `to`, `ports` and `toServices`.
+
+Each rule also has an optional `name` field, which describes the intention of
+the rule and should be unique within the policy. If `name` is not provided for
+a rule, it will be auto-generated by Antrea. The auto-generated name is
+of the format `[ingress/egress]-[action]-[uid]`, e.g. ingress-allow-2f0ed6e,
+where [uid] is the first 7 characters of the SHA-1 hash value of the rule.
+If a policy contains duplicate rules, or if a rule name is the same as the
+auto-generated name of another rule in the same policy, it will cause a conflict,
+and the policy will be rejected.
+A ClusterGroup name can be set in the `group` field of an ingress `from` section in place
+of stand-alone selectors to allow traffic from workloads/ipBlocks set in the ClusterGroup.
+
+The [first example](#acnp-with-stand-alone-selectors) policy contains a single rule, which allows matched traffic on a
+single port, from one of two sources: the first specified by a `podSelector`
+and the second specified by a combination of a `podSelector` and a
+`namespaceSelector`.
+The [second example](#acnp-with-clustergroup-reference) policy contains a single rule, which allows matched traffic on
+multiple TCP ports (8080 through 9000 inclusive, plus 6379) from all network endpoints
+selected by the "test-cg-with-frontend-selector" ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy contains a single rule,
+which drops all ingress traffic towards any Pod in Namespaces that have the label `app` set to
+`no-network-access-required`. Note that an empty `from` in an ingress rule means that
+the rule matches all ingress sources.
+The ingress `from` section also supports ServiceAccount based selection, which allows users to
+select Pods by ServiceAccount (see the sketch after this paragraph). More details can be found in the
+[ServiceAccountSelector](#serviceaccount-based-selection) section.
+**Note**: The order in which the ingress rules are specified matters, i.e., rules will
+be enforced in the order in which they are written.
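+
+A minimal sketch of an ingress rule selecting its peer by ServiceAccount (the
+ServiceAccount name and Namespace, as well as the labels, are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-ingress-from-service-account
+spec:
+  priority: 5
+  tier: securityops
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: web             # illustrative label
+  ingress:
+    - action: Allow
+      from:
+        - serviceAccount:
+            name: api-client   # illustrative ServiceAccount
+            namespace: default
+      name: AllowFromServiceAccount
+```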
+
+**egress**: Each ClusterNetworkPolicy may consist of an ordered set of zero or
+more egress rules. Each rule, depending on its `action` field, allows
+or drops traffic which matches the `to` and `ports` sections.
+Under `ports`, the optional field `endPort` can only be set when a numerical `port`
+is set, to represent a range of ports from `port` to `endPort` inclusive.
+`protocols` defines additional protocols that are not supported by `ports`. Currently,
+only the ICMP and IGMP protocols can be set under `protocols`. For the `ICMP` protocol,
+`icmpType` and `icmpCode` can be used to specify the ICMP traffic that the rule matches.
+For the `IGMP` protocol, `igmpType` and `groupAddress` can be used to specify the IGMP
+traffic that the rule matches. If `igmpType` is not set, all reports will be matched.
+If `groupAddress` is empty, all multicast group addresses will be matched.
+Only IGMP reports are supported in egress rules. The `IGMP` protocol cannot be used
+together with `ICMP` or with properties like `from`, `to`, `ports` and `toServices`.
+The valid `igmpType` values are:
+
+message type | value
+-- | --
+IGMPv1 Membership Report | 0x12
+IGMPv2 Membership Report | 0x16
+IGMPv3 Membership Report | 0x22
+
+Each rule also has an optional `name` field, which describes the intention of
+the rule and should be unique within the policy. If `name` is not provided for
+a rule, it will be auto-generated by Antrea; the rule name auto-generation process
+is the same as for ingress rules.
+A ClusterGroup name can be set in the `group` field of an egress `to` section in place
+of stand-alone selectors to allow traffic to workloads/ipBlocks set in the ClusterGroup.
+The `toServices` field contains a list of combinations of Service Namespace and Service
+Name, to match traffic to these Services. More details can be found in the
+[toServices](#toservices-egress-rules) section.
+
+The [first example](#acnp-with-stand-alone-selectors) policy contains a single rule, which drops matched traffic on a
+single port, to the 10.0.10.0/24 subnet specified by the `ipBlock` field.
+The [second example](#acnp-with-clustergroup-reference) policy contains a single rule, which drops matched traffic on
+TCP port 5978 to all network endpoints selected by the "test-cg-with-ip-block"
+ClusterGroup.
+The [third example](#acnp-for-complete-pod-isolation-in-selected-namespaces) policy contains a single rule,
+which drops all egress traffic initiated by any Pod in Namespaces that have `app` set to
+`no-network-access-required`.
+The [sixth example](#acnp-for-toservices-rule) policy contains a single rule,
+which drops traffic from Pods labeled "role: client" in Namespaces labeled "env: prod" to the Service
+svcNamespace/svcName via its ClusterIP.
+Note that an empty `to` together with an empty `toServices` in an egress rule means that
+the rule matches all egress destinations.
+The egress `to` section also supports FQDN based filtering, applied to exact FQDNs or
+wildcard expressions (see the sketch after this paragraph); more details can be found
+in the [FQDN](#fqdn-based-filtering) section.
+The egress `to` section also supports ServiceAccount based selection; more details can
+be found in the [ServiceAccountSelector](#serviceaccount-based-selection) section.
+**Note**: The order in which the egress rules are specified matters, i.e., rules will
+be enforced in the order in which they are written.
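+
+A minimal sketch of an egress rule matching destinations by FQDN (the domain
+and the labels are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-drop-fqdn
+spec:
+  priority: 5
+  tier: securityops
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: client              # illustrative label
+  egress:
+    - action: Drop
+      to:
+        - fqdn: "*.example.com"    # wildcard FQDN match, illustrative domain
+      name: DropToExampleDomains
+```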
+
+**enableLogging** and **logLabel**: Antrea-native policy ingress or egress rules
+can be audited by setting their logging fields. When the `enableLogging` field is set
+to `true`, the first packet of any connection that matches the rule will be
+logged to a file (`/var/log/antrea/networkpolicy/np.log`) on the Node on which the
+rule is enforced. The log files can then be used for further analysis. If `logLabel`
+is provided, the label will be added to the log. For example, in the
+[ACNP with log settings](#acnp-with-log-settings), traffic that hits the
+"AllowFromFrontend" rule will be logged with the log label "frontend-allowed".
+
+For drop and reject rules, deduplication is applied to reduce duplicate
+log messages, and the deduplication buffer length is set to 1 second. When a rule
+does not have a name, an identifiable name will be generated for the rule and
+added to the log. For rules in a layer 7 NetworkPolicy, packets are logged with
+action `Redirect` prior to analysis by the layer 7 engine, and the layer 7 engine
+can log more information in its own logs.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| rules | | Rules is a list of rules to be applied to the selected GroupMembers. |
+| appliedToGroups | []string | AppliedToGroups is a list of names of AppliedToGroups to which this policy applies. Cannot be set in conjunction with any NetworkPolicyRule.AppliedToGroups in Rules. |
+| priority | float64 | Priority represents the relative priority of this Network Policy as compared to other Network Policies. Priority will be unset (nil) for K8s NetworkPolicy. |
+| tierPriority | int32 | TierPriority represents the priority of the Tier associated with this Network Policy. The TierPriority will remain nil for K8s NetworkPolicy. |
+
+HTTPProtocol matches HTTP requests with specific host, method, and path. All
+fields could be used alone or together. If all fields are not provided, it
+matches all HTTP requests.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| host | string | Host represents the hostname present in the URI or the HTTP Host header to match. It does not contain the port associated with the host. |
+| method | string | Method represents the HTTP method to match. It could be GET, POST, PUT, HEAD, DELETE, TRACE, OPTIONS, CONNECT and PATCH. |
+| path | string | Path represents the URI path to match (Ex. "/index.html", "/admin"). |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| action | | Action specifies the action to be applied on the rule, i.e. Allow/Drop. An empty action "nil" defaults to the Allow action, which would be the case for rules created for K8s Network Policy. |
+| enableLogging | bool | EnableLogging indicates whether or not to generate logs when rules are matched. Defaults to false. |
+| appliedToGroups | []string | AppliedToGroups is a list of names of AppliedToGroups to which this rule applies. Cannot be set in conjunction with NetworkPolicy.AppliedToGroups of the NetworkPolicy that this Rule is referred to. |
+| name | string | Name describes the intention of this rule. Name should be unique within the policy. |
+| port | | Port and EndPort can only be specified when the Protocol is TCP, UDP, or SCTP. Port defines the port name or number on the given protocol. If not specified and the Protocol is TCP, UDP, or SCTP, this matches all port numbers. |
+| endPort | int32 | (Optional) EndPort defines the end of the port range, the end being included within the range. It can only be specified when a numerical port is specified. |
+| icmpType | int32 | (Optional) ICMPType and ICMPCode can only be specified when the Protocol is ICMP. If both are not specified and the Protocol is ICMP, this matches all ICMP traffic. |
+| icmpCode | int32 | (Optional) |
+| igmpType | int32 | (Optional) IGMPType and GroupAddress can only be specified when the Protocol is IGMP. |
+| groupAddress | string | (Optional) |
+| srcPort | int32 | (Optional) SrcPort and SrcEndPort can only be specified when the Protocol is TCP, UDP, or SCTP. They restrict the source port of the traffic. |
+
+ClusterNetworkPolicySpec defines the desired state for ClusterNetworkPolicy.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| tier | string | Tier specifies the tier to which this ClusterNetworkPolicy belongs to. The ClusterNetworkPolicy order will be determined based on the combination of the Tier's Priority and the ClusterNetworkPolicy's own Priority. If not specified, this policy will be created in the Application Tier right above the K8s NetworkPolicy which resides at the bottom. |
+| priority | float64 | Priority specifies the order of the ClusterNetworkPolicy relative to other AntreaClusterNetworkPolicies. |
+| ingress | | Set of ingress rules evaluated based on the order in which they are set. Currently an Ingress rule supports setting the From field but not the To field within a Rule. |
+| egress | | Set of egress rules evaluated based on the order in which they are set. Currently an Egress rule supports setting the To field but not the From field within a Rule. |
+
+NetworkPolicySpec defines the desired state for NetworkPolicy.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| tier | string | Tier specifies the tier to which this NetworkPolicy belongs to. The NetworkPolicy order will be determined based on the combination of the Tier's Priority and the NetworkPolicy's own Priority. If not specified, this policy will be created in the Application Tier right above the K8s NetworkPolicy which resides at the bottom. |
+| priority | float64 | Priority specifies the order of the NetworkPolicy relative to other NetworkPolicies. |
+| ingress | | Set of ingress rules evaluated based on the order in which they are set. Currently an Ingress rule supports setting the From field but not the To field within a Rule. |
+| egress | | Set of egress rules evaluated based on the order in which they are set. Currently an Egress rule supports setting the To field but not the From field within a Rule. |
+
+Rule describes the traffic allowed to/from the workloads selected by
+Spec.AppliedTo. Based on the action specified in the rule, traffic that exactly
+matches the specified ports and protocol is either allowed or denied.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| l7Protocols | | Set of layer 7 protocols matched by the rule. If this field is set, action can only be Allow. When this field is used in a rule, any traffic matching the other layer 3/4 criteria of the rule (typically the 5-tuple) will be forwarded to an application-aware engine for protocol detection and rule enforcement, and the traffic will be allowed if the layer 7 criteria is also matched, otherwise it will be dropped. Therefore, any rules after a layer 7 rule will not be enforced for the traffic. |
+| to | | Rule is matched if traffic is intended for workloads selected by this field. This field can't be used with ToServices. If this field and ToServices are both empty or missing, this rule matches all destinations. |
+| toServices | | Rule is matched if traffic is intended for a Service listed in this field. Currently, only ClusterIP type Services are supported in this field. When scope is set to ClusterSet, it matches traffic intended for a multi-cluster Service listed in this field; the Service name and Namespace provided should match the original exported Service. This field can only be used when AntreaProxy is enabled. This field can't be used with To or Ports. If this field and To are both empty or missing, this rule matches all destinations. |
+| name | string | (Optional) Name describes the intention of this rule. Name should be unique within the policy. |
+| enableLogging | bool | (Optional) EnableLogging is used to indicate whether the agent should generate logs when rules are matched. Defaults to false. |
+| logLabel | string | (Optional) LogLabel is a user-defined arbitrary string which will be printed in the NetworkPolicy logs. |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| port | | The port on the given protocol. This can be either a numerical or named port on a Pod. If this field is not provided, this matches all port names and numbers. |
+| endPort | int32 | (Optional) EndPort defines the end of the port range, inclusive. It can only be specified when a numerical port is specified. |
+| sourcePort | int32 | (Optional) The source port on the given protocol. This can only be a numerical port. If this field is not provided, the rule matches all source ports. |
+| sourceEndPort | int32 | (Optional) SourceEndPort defines the end of the source port range, inclusive. It can only be specified when sourcePort is specified. |
+
+ICMPProtocol matches ICMP traffic with specific ICMPType and/or ICMPCode. All
+fields could be used alone or together. If all fields are not provided, this
+matches all ICMP traffic.
+
+IPBlock describes the IPAddresses/IPBlocks that are matched in to/from. IPBlock
+cannot be set as part of the AppliedTo field, and cannot be set with any other
+selector.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| podSelector | | Select Pods matching the labels set in the PodSelector in AppliedTo/To/From fields. If set with NamespaceSelector, Pods are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except NamespaceSelector. |
+| namespaceSelector | | Select all Pods from Namespaces matched by this selector, as workloads in AppliedTo/To/From fields. If set with PodSelector, Pods are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except PodSelector or ExternalEntitySelector. Cannot be set with Namespaces. |
+| namespaces | | Select Pod/ExternalEntity from Namespaces matched by specific criteria. The currently supported criteria is match: Self, which selects from the same Namespace as the appliedTo workloads. Cannot be set with any other selector except PodSelector or ExternalEntitySelector. This field can only be set when the NetworkPolicyPeer is created for ClusterNetworkPolicy ingress/egress rules. Cannot be set with NamespaceSelector. |
+| externalEntitySelector | | Select ExternalEntities from the NetworkPolicy's Namespace as workloads in AppliedTo/To/From fields. If set with NamespaceSelector, ExternalEntities are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except NamespaceSelector. |
+| group | string | Group is the name of the ClusterGroup which can be set within an Ingress or Egress rule, or as an AppliedTo, in place of a stand-alone selector. A Group cannot be set with any other selector. |
+| service | | Select a certain Service which matches the NamespacedName. A Service can only be set in either the policy level AppliedTo field in a policy that only has ingress rules, or a rule level AppliedTo field in an ingress rule. Only a NodePort Service can be referred to by this field. Cannot be set with any other selector. |
+| fqdn | string | Restrict egress access to the Fully Qualified Domain Names prescribed by name or by wildcard match patterns. This field can only be set for the NetworkPolicyPeer of egress rules. Supported formats are: exact FQDNs, e.g. "google.com", "db-svc.default.svc.cluster.local", and wildcard expressions, e.g. "*wayfair.com". |
+
+BundleFileServer specifies the bundle file server information.
+
+| Field | Type | Description |
+| --- | --- | --- |
+| url | string | The URL of the bundle file server. It is set with format: scheme://host[:port][/path], e.g. https://api.example.com:8443/v1/supportbundles/. If scheme is not set, https is used by default. |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| expirationMinutes | | ExpirationMinutes is the requested duration of validity of the SupportBundleCollection. A SupportBundleCollection will be marked as Failed if it does not finish before expiration. Default is 60. |
+| sinceTime | string | SinceTime specifies a relative time before the current time from which to collect logs. A valid value is like: 1d, 2h, 30m. |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| liveTraffic | | LiveTraffic indicates the Traceflow is to trace the live traffic rather than an injected packet, when set to true. The first packet of the first connection that matches the packet spec will be traced. |
+| droppedOnly | bool | DroppedOnly indicates only the dropped packet should be captured in a live-traffic Traceflow. |
+| timeout | uint16 | Timeout specifies the timeout of the Traceflow in seconds. Defaults to 20 seconds if not set. |
+| startTime | | StartTime is the time at which the Traceflow was started by the Antrea Controller. Before K8s v1.20, null values (field not set) are not pruned, and a CR where a metav1.Time field is not set would fail OpenAPI validation (type string). The recommendation seems to be to use a pointer instead, and the field will be omitted when serializing. See https://github.com/kubernetes/kubernetes/issues/86811 |
+| dataplaneTag | byte | DataplaneTag is a tag to identify a traceflow session across Nodes. |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| podSelector | | Select Pods matched by this selector. If set with NamespaceSelector, Pods are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except NamespaceSelector. |
+| namespaceSelector | | Select all Pods from Namespaces matched by this selector. If set with PodSelector, Pods are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except PodSelector. |
+| ipBlock | | IPBlock describes the IPAddresses/IPBlocks that are matched. Cannot be set as part of the AppliedTo field. Cannot be set with any other selector or ServiceReference. Cannot be set with IPBlocks. |
+| ipBlocks | | IPBlocks is a list of IPAddresses/IPBlocks that are matched. Cannot be set as part of the AppliedTo field. Cannot be set with any other selector or ServiceReference. Cannot be set with IPBlock. |
+| externalEntitySelector | | Select ExternalEntities from all Namespaces as workloads. If set with NamespaceSelector, ExternalEntities are matched from Namespaces matched by the NamespaceSelector. Cannot be set with any other selector except NamespaceSelector. |
+| childGroups | | Select other ClusterGroups by name. The ClusterGroups must already exist and must not contain ChildGroups themselves. Cannot be set with any selector/IPBlock/ServiceReference. |
+
+| Field | Type | Description |
+| --- | --- | --- |
+| appliedTo | | AppliedTo selects Pods to which the Egress will be applied. |
+| egressIP | string | EgressIP specifies the SNAT IP address for the selected workloads. If ExternalIPPool is empty, it must be specified manually. If ExternalIPPool is non-empty, it can be empty and will be assigned by Antrea automatically. If both ExternalIPPool and EgressIP are non-empty, the IP must be in the pool. |
+| egressIPs | []string | EgressIPs specifies multiple SNAT IP addresses for the selected workloads. Cannot be set with EgressIP. |
+| externalIPPool | string | ExternalIPPool specifies the IP Pool that the EgressIP should be allocated from. If it is empty, the specified EgressIP must be assigned to a Node manually. If it is non-empty, the EgressIP will be assigned to a Node specified by the pool automatically, and will failover to a different Node when the Node becomes unreachable. |
+| externalIPPools | []string | ExternalIPPools specifies multiple unique IP Pools that the EgressIPs should be allocated from. Entries with the same index in EgressIPs and ExternalIPPools are correlated. Cannot be set with ExternalIPPool. |
+
+ExternalNode is the opaque identifier of the agent/controller responsible for
+additional processing or handling of this external entity.
+
+ExternalIPPool defines one or multiple IP sets that can be used in the external
+network. For instance, the IPs can be allocated to the Egress resources as the
+Egress IPs.
+
+IPPool defines one or multiple IP sets that can be used for the flexible IPAM
+feature. For instance, the IPs can be allocated to Pods according to the IP
+pool specified in the Deployment annotation.
+
+TrafficControl allows mirroring or redirecting the traffic Pods send or
+receive. It enables users to monitor and analyze Pod traffic, and to enforce
+custom network protections for Pods with fine-grained control over network
+traffic.
+Pods are matched from Namespaces matched by the NamespaceSelector;
+otherwise, Pods are matched from all Namespaces.
Select all Pods from Namespaces matched by this selector. If set with
+PodSelector, Pods are matched from Namespaces matched by the
+NamespaceSelector.
AppliedTo selects Pods to which the Egress will be applied.
+
+
+
+
+egressIP
+
+string
+
+
+
+
EgressIP specifies the SNAT IP address for the selected workloads.
+If ExternalIPPool is empty, it must be specified manually.
+If ExternalIPPool is non-empty, it can be empty and will be assigned by Antrea automatically.
+If both ExternalIPPool and EgressIP are non-empty, the IP must be in the pool.
+
+
+
+
+egressIPs
+
+[]string
+
+
+
+
EgressIPs specifies multiple SNAT IP addresses for the selected workloads.
+Cannot be set with EgressIP.
+
+
+
+
+externalIPPool
+
+string
+
+
+
+
ExternalIPPool specifies the IP Pool that the EgressIP should be allocated from.
+If it is empty, the specified EgressIP must be assigned to a Node manually.
+If it is non-empty, the EgressIP will be assigned to a Node specified by the pool automatically and will failover
+to a different Node when the Node becomes unreachable.
+
+
+
+
+externalIPPools
+
+[]string
+
+
+
+
ExternalIPPools specifies multiple unique IP Pools that the EgressIPs should be allocated from. Entries with the
+same index in EgressIPs and ExternalIPPools are correlated.
+Cannot be set with ExternalIPPool.
EgressStatus represents the current status of an Egress.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+egressNode
+
+string
+
+
+
+
The name of the Node that holds the Egress IP.
+
+
+
+
+egressIP
+
+string
+
+
+
+
EgressIP indicates the effective Egress IP for the selected workloads. It could be empty if the Egress IP in spec
+is not assigned to any Node. It’s also useful when there are more than one Egress IP specified in spec.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlock describes the IPAddresses/IPBlocks that is matched in to/from.
+IPBlock cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
+Cannot be set with IPBlocks.
IPBlocks is a list of IPAddresses/IPBlocks that is matched in to/from.
+IPBlock cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
+Cannot be set with IPBlock.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Group can be used in AntreaNetworkPolicies. When used with AppliedTo, it cannot include NamespaceSelector,
+otherwise, Antrea will not realize the NetworkPolicy or rule, but will just update the NetworkPolicy
+Status as “Unrealizable”.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Specification of the desired behavior of ClusterNetworkPolicy.
+
+
+
+
+
+tier
+
+string
+
+
+
+
Tier specifies the tier to which this ClusterNetworkPolicy belongs to.
+The ClusterNetworkPolicy order will be determined based on the
+combination of the Tier’s Priority and the ClusterNetworkPolicy’s own
+Priority. If not specified, this policy will be created in the Application
+Tier right above the K8s NetworkPolicy which resides at the bottom.
+
+
+
+
+priority
+
+float64
+
+
+
+
Priority specfies the order of the ClusterNetworkPolicy relative to
+other AntreaClusterNetworkPolicies.
Set of ingress rules evaluated based on the order in which they are set.
+Currently Ingress rule supports setting the From field but not the To
+field within a Rule.
Set of egress rules evaluated based on the order in which they are set.
+Currently Egress rule supports setting the To field but not the From
+field within a Rule.
AppliedTo selects Pods to which the Egress will be applied.
+
+
+
+
+egressIP
+
+string
+
+
+
+
EgressIP specifies the SNAT IP address for the selected workloads.
+If ExternalIPPool is empty, it must be specified manually.
+If ExternalIPPool is non-empty, it can be empty and will be assigned by Antrea automatically.
+If both ExternalIPPool and EgressIP are non-empty, the IP must be in the pool.
+
+
+
+
+egressIPs
+
+[]string
+
+
+
+
EgressIPs specifies multiple SNAT IP addresses for the selected workloads.
+Cannot be set with EgressIP.
+
+
+
+
+externalIPPool
+
+string
+
+
+
+
ExternalIPPool specifies the IP Pool that the EgressIP should be allocated from.
+If it is empty, the specified EgressIP must be assigned to a Node manually.
+If it is non-empty, the EgressIP will be assigned to a Node specified by the pool automatically and will failover
+to a different Node when the Node becomes unreachable.
+
+
+
+
+externalIPPools
+
+[]string
+
+
+
+
ExternalIPPools specifies multiple unique IP Pools that the EgressIPs should be allocated from. Entries with the
+same index in EgressIPs and ExternalIPPools are correlated.
+Cannot be set with ExternalIPPool.
EgressStatus represents the current status of an Egress.
+
+
+
+
+
ExternalIPPool
+
+
+
ExternalIPPool defines one or multiple IP sets that can be used in the external network. For instance, the IPs can be
+allocated to the Egress resources as the Egress IPs.
The Subnet info of this IP pool. If set, all IP ranges in the IP pool should share the same subnet attributes.
+Currently, it’s only used when an IP is allocated from the pool for Egress, and is ignored otherwise.
Group can be used in AntreaNetworkPolicies. When used with AppliedTo, it cannot include NamespaceSelector,
+otherwise, Antrea will not realize the NetworkPolicy or rule, but will just update the NetworkPolicy
+Status as “Unrealizable”.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
Specification of the desired behavior of NetworkPolicy.
+
+
+
+
+
+tier
+
+string
+
+
+
+
Tier specifies the tier to which this NetworkPolicy belongs to.
+The NetworkPolicy order will be determined based on the combination of the
+Tier’s Priority and the NetworkPolicy’s own Priority. If not specified,
+this policy will be created in the Application Tier right above the K8s
+NetworkPolicy which resides at the bottom.
+
+
+
+
+priority
+
+float64
+
+
+
+
Priority specfies the order of the NetworkPolicy relative to other
+NetworkPolicies.
Set of ingress rules evaluated based on the order in which they are set.
+Currently Ingress rule supports setting the From field but not the To
+field within a Rule.
Set of egress rules evaluated based on the order in which they are set.
+Currently Egress rule supports setting the To field but not the From
+field within a Rule.
LiveTraffic indicates the Traceflow is to trace the live traffic
+rather than an injected packet, when set to true. The first packet of
+the first connection that matches the packet spec will be traced.
+
+
+
+
+droppedOnly
+
+bool
+
+
+
+
DroppedOnly indicates only the dropped packet should be captured in a
+live-traffic Traceflow.
+
+
+
+
+timeout
+
+int32
+
+
+
+
Timeout specifies the timeout of the Traceflow in seconds. Defaults
+to 20 seconds if not set.
Select Pods from NetworkPolicy’s Namespace as workloads in
+AppliedTo fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. Cannot be set with Namespaces.
Select ExternalEntities from NetworkPolicy’s Namespace as workloads
+in AppliedTo fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
+
+
+
+
+group
+
+string
+
+
+
+(Optional)
+
Group is the name of the ClusterGroup which can be set as an
+AppliedTo in place of a stand-alone selector. A Group cannot
+be set with any other selector.
Select a certain Service which matches the NamespacedName.
+A Service can only be set in either policy level AppliedTo field in a policy
+that only has ingress rules or rule level AppliedTo field in an ingress rule.
+Only a NodePort Service can be referred by this field.
+Cannot be set with any other selector.
ClusterNetworkPolicySpec defines the desired state for ClusterNetworkPolicy.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+tier
+
+string
+
+
+
+
Tier specifies the tier to which this ClusterNetworkPolicy belongs to.
+The ClusterNetworkPolicy order will be determined based on the
+combination of the Tier’s Priority and the ClusterNetworkPolicy’s own
+Priority. If not specified, this policy will be created in the Application
+Tier right above the K8s NetworkPolicy which resides at the bottom.
+
+
+
+
+priority
+
+float64
+
+
+
+
Priority specfies the order of the ClusterNetworkPolicy relative to
+other AntreaClusterNetworkPolicies.
Set of ingress rules evaluated based on the order in which they are set.
+Currently Ingress rule supports setting the From field but not the To
+field within a Rule.
Set of egress rules evaluated based on the order in which they are set.
+Currently Egress rule supports setting the To field but not the From
+field within a Rule.
AppliedTo selects Pods to which the Egress will be applied.
+
+
+
+
+egressIP
+
+string
+
+
+
+
EgressIP specifies the SNAT IP address for the selected workloads.
+If ExternalIPPool is empty, it must be specified manually.
+If ExternalIPPool is non-empty, it can be empty and will be assigned by Antrea automatically.
+If both ExternalIPPool and EgressIP are non-empty, the IP must be in the pool.
+
+
+
+
+egressIPs
+
+[]string
+
+
+
+
EgressIPs specifies multiple SNAT IP addresses for the selected workloads.
+Cannot be set with EgressIP.
+
+
+
+
+externalIPPool
+
+string
+
+
+
+
ExternalIPPool specifies the IP Pool that the EgressIP should be allocated from.
+If it is empty, the specified EgressIP must be assigned to a Node manually.
+If it is non-empty, the EgressIP will be assigned to a Node specified by the pool automatically and will failover
+to a different Node when the Node becomes unreachable.
+
+
+
+
+externalIPPools
+
+[]string
+
+
+
+
ExternalIPPools specifies multiple unique IP Pools that the EgressIPs should be allocated from. Entries with the
+same index in EgressIPs and ExternalIPPools are correlated.
+Cannot be set with ExternalIPPool.
EgressStatus represents the current status of an Egress.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+egressNode
+
+string
+
+
+
+
The name of the Node that holds the Egress IP.
+
+
+
+
+egressIP
+
+string
+
+
+
+
EgressIP indicates the effective Egress IP for the selected workloads. It could be empty if the Egress IP in spec
+is not assigned to any Node. It’s also useful when there are more than one Egress IP specified in spec.
The Subnet info of this IP pool. If set, all IP ranges in the IP pool should share the same subnet attributes.
+Currently, it’s only used when an IP is allocated from the pool for Egress, and is ignored otherwise.
Select Pods matching the labels set in the PodSelector in
+AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in AppliedTo/To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector.
IPBlocks describe the IPAddresses/IPBlocks that are matched in to/from.
+IPBlocks cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector or ServiceReference.
Select ExternalEntities from all Namespaces as workloads
+in AppliedTo/To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select other ClusterGroups by name. The ClusterGroups must already
+exist and must not contain ChildGroups themselves.
+Cannot be set with any selector/IPBlock/ServiceReference.
HTTPProtocol matches HTTP requests with specific host, method, and path. All fields could be used alone or together.
+If all fields are not provided, it matches all HTTP requests.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+host
+
+string
+
+
+
+
Host represents the hostname present in the URI or the HTTP Host header to match.
+It does not contain the port associated with the host.
+
+
+
+
+method
+
+string
+
+
+
+
Method represents the HTTP method to match.
+It could be GET, POST, PUT, HEAD, DELETE, TRACE, OPTIONS, CONNECT and PATCH.
+
+
+
+
+path
+
+string
+
+
+
+
Path represents the URI path to match (Ex. “/index.html”, “/admin”).
ICMPProtocol matches ICMP traffic with specific ICMPType and/or ICMPCode. All
+fields could be used alone or together. If all fields are not provided, this
+matches all ICMP traffic.
IPBlock describes the IPAddresses/IPBlocks that is matched in to/from.
+IPBlock cannot be set as part of the AppliedTo field.
+Cannot be set with any other selector.
Select Pods from NetworkPolicy’s Namespace as workloads in
+To/From fields. If set with NamespaceSelector, Pods are
+matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
Select all Pods from Namespaces matched by this selector, as
+workloads in To/From fields. If set with PodSelector,
+Pods are matched from Namespaces matched by the NamespaceSelector.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. Cannot be set with Namespaces.
Select Pod/ExternalEntity from Namespaces matched by specific criteria.
+Current supported criteria is match: Self, which selects from the same
+Namespace of the appliedTo workloads.
+Cannot be set with any other selector except PodSelector or
+ExternalEntitySelector. This field can only be set when NetworkPolicyPeer
+is created for ClusterNetworkPolicy ingress/egress rules.
+Cannot be set with NamespaceSelector.
Select ExternalEntities from NetworkPolicy’s Namespace as workloads
+in To/From fields. If set with NamespaceSelector,
+ExternalEntities are matched from Namespaces matched by the
+NamespaceSelector.
+Cannot be set with any other selector except NamespaceSelector.
+
+
+
+
+group
+
+string
+
+
+
+
Group is the name of the ClusterGroup which can be set within
+an Ingress or Egress rule in place of a stand-alone selector.
+A Group cannot be set with any other selector.
+
+
+
+
+fqdn
+
+string
+
+
+
+
Restrict egress access to the Fully Qualified Domain Names prescribed
+by name or by wildcard match patterns. This field can only be set for
+NetworkPolicyPeer of egress rules.
+Supported formats are:
+Exact FQDNs, i.e. “google.com”, “db-svc.default.svc.cluster.local”
+Wildcard expressions, i.e. “*wayfair.com”.
The port on the given protocol. This can be either a numerical
+or named port on a Pod. If this field is not provided, this
+matches all port names and numbers.
+
+
+
+
+endPort
+
+int32
+
+
+
+(Optional)
+
EndPort defines the end of the port range, inclusive.
+It can only be specified when a numerical port is specified.
+
+
+
+
+sourcePort
+
+int32
+
+
+
+(Optional)
+
The source port on the given protocol. This can only be a numerical port.
+If this field is not provided, rule matches all source ports.
+
+
+
+
+sourceEndPort
+
+int32
+
+
+
+(Optional)
+
SourceEndPort defines the end of the source port range, inclusive.
+It can only be specified when sourcePort is specified.
NetworkPolicySpec defines the desired state for NetworkPolicy.
+
+
+
+
+
Field
+
Description
+
+
+
+
+
+tier
+
+string
+
+
+
+
Tier specifies the tier to which this NetworkPolicy belongs to.
+The NetworkPolicy order will be determined based on the combination of the
+Tier’s Priority and the NetworkPolicy’s own Priority. If not specified,
+this policy will be created in the Application Tier right above the K8s
+NetworkPolicy which resides at the bottom.
+
+
+
+
+priority
+
+float64
+
+
+
+
Priority specfies the order of the NetworkPolicy relative to other
+NetworkPolicies.
Set of ingress rules evaluated based on the order in which they are set.
+Currently Ingress rule supports setting the From field but not the To
+field within a Rule.
Set of egress rules evaluated based on the order in which they are set.
+Currently Egress rule supports setting the To field but not the From
+field within a Rule.
Rule describes the traffic allowed to/from the workloads selected by
+Spec.AppliedTo. Based on the action specified in the rule, traffic is either
+allowed or denied which exactly match the specified ports and protocol.
Set of layer 7 protocols matched by the rule. If this field is set, action can only be Allow.
+When this field is used in a rule, any traffic matching the other layer 3⁄4 criteria of the rule (typically the
+5-tuple) will be forwarded to an application-aware engine for protocol detection and rule enforcement, and the
+traffic will be allowed if the layer 7 criteria is also matched, otherwise it will be dropped. Therefore, any
+rules after a layer 7 rule will not be enforced for the traffic.
Rule is matched if traffic is intended for workloads selected by
+this field. This field can’t be used with ToServices. If this field
+and ToServices are both empty or missing this rule matches all destinations.
Rule is matched if traffic is intended for a Service listed in this field.
+Currently, only ClusterIP types Services are supported in this field.
+When scope is set to ClusterSet, it matches traffic intended for a multi-cluster
+Service listed in this field. Service name and Namespace provided should match
+the original exported Service.
+This field can only be used when AntreaProxy is enabled. This field can’t be used
+with To or Ports. If this field and To are both empty or missing, this rule matches
+all destinations.
+
+
+
+
+name
+
+string
+
+
+
+(Optional)
+
Name describes the intention of this rule.
+Name should be unique within the policy.
+
+
+
+
+enableLogging
+
+bool
+
+
+
+(Optional)
+
EnableLogging is used to indicate if agent should generate logs
+when rules are matched. Should be default to false.
+
+
+
+
+logLabel
+
+string
+
+
+
+(Optional)
+
LogLabel is a user-defined arbitrary string which will be printed in the NetworkPolicy logs.
LiveTraffic indicates the Traceflow is to trace the live traffic
+rather than an injected packet, when set to true. The first packet of
+the first connection that matches the packet spec will be traced.
+
+
+
+
+droppedOnly
+
+bool
+
+
+
+
DroppedOnly indicates only the dropped packet should be captured in a
+live-traffic Traceflow.
+
+
+
+
+timeout
+
+int32
+
+
+
+
Timeout specifies the timeout of the Traceflow in seconds. Defaults
+to 20 seconds if not set.
StartTime is the time at which the Traceflow as started by the Antrea Controller.
+Before K8s v1.20, null values (field not set) are not pruned, and a CR where a
+metav1.Time field is not set would fail OpenAPI validation (type string). The
+recommendation seems to be to use a pointer instead, and the field will be omitted when
+serializing.
+See https://github.com/kubernetes/kubernetes/issues/86811
+
+
+
+
+dataplaneTag
+
+byte
+
+
+
+
DataplaneTag is a tag to identify a traceflow session across Nodes.
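+As a minimal illustration of the Egress fields documented above (the resource
+name, Pod label, IP address and pool name are hypothetical):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+  name: egress-web                  # hypothetical name
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: web                    # hypothetical label
+  egressIP: 10.10.0.10              # must be in the pool when both are set
+  externalIPPool: prod-external-ips # hypothetical ExternalIPPool name
+```
+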
+Generated with gen-crd-api-reference-docs
+on git commit 6507578.
+
diff --git a/content/docs/v1.15.0/docs/api-reference.md b/content/docs/v1.15.0/docs/api-reference.md
new file mode 100644
index 00000000..821072b4
--- /dev/null
+++ b/content/docs/v1.15.0/docs/api-reference.md
@@ -0,0 +1,5 @@
+
+---
+---
+
+{{% include-html "api-reference.html" %}}
diff --git a/content/docs/v1.15.0/docs/api.md b/content/docs/v1.15.0/docs/api.md
new file mode 100644
index 00000000..b0eceb22
--- /dev/null
+++ b/content/docs/v1.15.0/docs/api.md
@@ -0,0 +1,236 @@
+# Antrea API
+
+This document lists all the API resource versions currently or previously
+supported by Antrea, along with information related to their deprecation and
+removal when appropriate. It is kept up-to-date as we evolve the Antrea API.
+
+Starting with the v1.0 release, we decided to group all the Custom Resource
+Definitions (CRDs) defined by Antrea in a single API group, `crd.antrea.io`,
+instead of grouping CRDs logically in different API groups based on their
+purposes. The rationale for this change was to avoid proliferation of API
+groups. As a result, all resources in the `crd.antrea.io` are versioned
+individually, while before the v1.0 release, we used to have a single version
+number for all the CRDs in a given group: when introducing a new version of the
+API group, we would "move" all CRDs from the earlier version to the new version
+together. This explains why the tables below are presented differently for
+`crd.antrea.io` and for other API groups.
+
+For information about the Antrea API versioning policy, please refer to this
+[document](versioning.md).
+
+## Currently-supported
+
+### CRDs in `crd.antrea.io`
+
+These are the CRDs currently available in `crd.antrea.io`.
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+|---|---|---|---|---|
+| `AntreaAgentInfo` | v1beta1 | v1.0.0 | N/A | N/A |
+| `AntreaControllerInfo` | v1beta1 | v1.0.0 | N/A | N/A |
+| `ClusterGroup` | v1alpha2 | v1.0.0 | v1.1.0 | v2.0.0 |
+| `ClusterGroup` | v1alpha3 | v1.1.0 | v1.13.0 | N/A |
+| `ClusterGroup` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ClusterNetworkPolicy` | v1alpha1 | v1.0.0 | v1.13.0 | N/A |
+| `ClusterNetworkPolicy` | v1beta1 | v1.13.0 | N/A | N/A |
+| `Egress` | v1alpha2 | v1.0.0 | N/A | N/A |
+| `Egress` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ExternalEntity` | v1alpha2 | v1.0.0 | N/A | N/A |
+| `ExternalIPPool` | v1alpha2 | v1.2.0 | v1.13.0 | N/A |
+| `ExternalIPPool` | v1beta1 | v1.13.0 | N/A | N/A |
+| `ExternalNode` | v1alpha1 | v1.8.0 | N/A | N/A |
+| `IPPool` | v1alpha2 | v1.4.0 | N/A | N/A |
+| `Group` | v1alpha3 | v1.8.0 | v1.13.0 | N/A |
+| `Group` | v1beta1 | v1.13.0 | N/A | N/A |
+| `NetworkPolicy` | v1alpha1 | v1.0.0 | v1.13.0 | N/A |
+| `NetworkPolicy` | v1beta1 | v1.13.0 | N/A | N/A |
+| `SupportBundleCollection` | v1alpha1 | v1.10.0 | N/A | N/A |
+| `Tier` | v1alpha1 | v1.0.0 | v1.13.0 | v2.0.0 |
+| `Tier` | v1beta1 | v1.13.0 | N/A | N/A |
+| `Traceflow` | v1alpha1 | v1.0.0 | v1.13.0 | N/A |
+| `Traceflow` | v1beta1 | v1.13.0 | N/A | N/A |
+
+### Other API groups
+
+These are the API group versions which are currently available when using Antrea.
+
+| API group | API version | API Service? | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+|---|---|---|---|---|---|
+| `controlplane.antrea.io` | `v1beta2` | Yes | v1.0.0 | N/A | N/A |
+| `stats.antrea.io` | `v1alpha1` | Yes | v1.0.0 | N/A | N/A |
+| `system.antrea.io` | `v1beta1` | Yes | v1.0.0 | N/A | N/A |
+
+## Previously-supported
+
+### Previously-supported API groups
+
+| API group | API version | API Service? | Introduced in | Deprecated in | Removed in |
+|---|---|---|---|---|---|
+| `core.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v0.11.0 | v0.11.0 |
+| `networking.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.3.0 | v0.10.0 | v1.2.0 |
+| `controlplane.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.10.0 | v0.11.0 | v1.3.0 |
+| `clusterinformation.antrea.tanzu.vmware.com` | `v1beta1` | No | v0.3.0 | v1.0.0 | v1.6.0 |
+| `core.antrea.tanzu.vmware.com` | `v1alpha2` | No | v0.11.0 | v1.0.0 | v1.6.0 |
+| `controlplane.antrea.tanzu.vmware.com` | `v1beta2` | Yes | v0.11.0 | v1.0.0 | v1.6.0 |
+| `ops.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v1.0.0 | v1.6.0 |
+| `security.antrea.tanzu.vmware.com` | `v1alpha1` | No | v0.8.0 | v1.0.0 | v1.6.0 |
+| `stats.antrea.tanzu.vmware.com` | `v1alpha1` | Yes | v0.10.0 | v1.0.0 | v1.6.0 |
+| `system.antrea.tanzu.vmware.com` | `v1beta1` | Yes | v0.5.0 | v1.0.0 | v1.6.0 |
+
+### Previously-supported CRDs
+
+| CRD | CRD version | Introduced in | Deprecated in | Removed in |
+|---|---|---|---|---|
+
+## API renaming from `*.antrea.tanzu.vmware.com` to `*.antrea.io`
+
+For the v1.0 release, we undertook to rename all Antrea APIs to use the
+`antrea.io` suffix instead of the `antrea.tanzu.vmware.com` suffix. For more
+information about the motivations behind this undertaking, please refer to
+[Github issue #1715](https://github.com/antrea-io/antrea/issues/1715).
+
+From the v1.6 release, all legacy APIs (ending with the
+`antrea.tanzu.vmware.com` suffix) have been completely removed. If you are
+running an Antrea version older than v1.0 and you want to upgrade to Antrea v1.6
+or greater and migrate your API resources, you will first need to do an
+intermediate upgrade to an Antrea version >= v1.0 and <= v1.5. You will then be
+able to migrate all your API resources to the new (`*.antrea.io`) API, by
+following the steps below. Finally, you will be able to upgrade to your desired
+Antrea version (>= v1.6).
+
+As part of the API renaming, and to avoid proliferation of API groups, we have
+decided to group all the Custom Resource Definitions (CRDs) defined by Antrea in
+a single API group: `crd.antrea.io`.
+
+To avoid disruptions to existing Antrea users, our requirements for this
+renaming process were as follows:
+
+1. As per our [upgrade
+ policy](versioning.md#antrea-upgrade-and-supported-version-skew), older
+ Agents need to be able to communicate with a new upgraded Controller, using
+ the old `controlplane.antrea.tanzu.vmware.com` API. Once both the Controller
+ and the Agent are upgraded, they communicate using `controlplane.antrea.io`.
+2. API Services can be accessed using either API version.
+3. After upgrade, Custom Resources can be managed using either API
+ version. Resources created using the old API (before or after upgrade) can be
+ accessed using the new API (or the old one).
+4. For each resource in each API group, the new resource type should be
+ backward-compatible with the old resource type, and, whenever possible,
+ forward-compatible. This simplifies the upgrade of existing client
+ applications which leverage the Antrea API. These applications can be easily
+ upgraded to use the new API version, with no change to the business
+ logic. Custom Resources created before upgrading the application can be
+ accessed through the new API with no loss of information.
+
+To achieve our 3rd goal, we introduced a new Kubernetes controller in the Antrea
+Controller, in charge of mirroring "old" Custom Resources (created using the
+`*.antrea.tanzu.vmware.com` API groups) to the new (`*.antrea.io`) API. This new
+mirroring controller is enabled by default, but can be disabled by setting
+`legacyCRDMirroring` to `false` in the `antrea-controller` configuration
+options. Thanks to this controller, the Antrea components (Agent and Controller)
+only need to watch Custom Resources created with the new API group. If any
+client still uses the old (or "legacy") API groups, these Custom Resources will
+be mirrored to the new API group and handled as expected.
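+
+As a sketch, disabling the mirroring controller only requires flipping that
+flag in the `antrea-controller` configuration:
+
+```yaml
+# antrea-controller.conf excerpt; all other options omitted
+legacyCRDMirroring: false
+```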
+
+The mirroring controller behaves as follows:
+
+* If a Custom Resource is created with the legacy API, it will create a new
+ Custom Resource with the same `Spec` and `Labels` as the legacy one.
+* Any update to the `Spec` and / or `Labels` of the legacy Custom Resource will
+ be reflected identically in the new Custom Resource.
+* Any update to the `Status` of the new mirrored Custom Resource (assuming it
+ has a `Status` field) will be reflected back identically in the legacy Custom
+ Resource.
+* If the legacy Custom Resource is deleted, the mirrored one will be deleted
+ automatically as well.
+* Manual updates to new mirrored Custom Resources will be overwritten by the
+ controller.
+* If a legacy Custom Resource is annotated with `"crd.antrea.io/stop-mirror"`,
+ it will then be ignored, and updates to the corresponding new Custom
+ Resource will no longer be overwritten.
+
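+For example, to check whether a given new Custom Resource is still managed by
+the mirroring controller, one can print the annotation directly (the resource
+name `my-policy` is hypothetical); the command should output
+`crdmirroring-controller` while mirroring is active:
+
+```bash
+kubectl get anp.crd.antrea.io my-policy -o jsonpath='{.metadata.annotations.crd\.antrea\.io/managed-by}'
+```
+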
+This gives us the following upgrade sequence for a client application which uses
+the legacy Antrea CRDs:
+
+1. Ensure that Antrea has been upgraded in the cluster to a version greater than
+ or equal to v1.0, and that legacy CRD mirroring is enabled (this is the case
+ by default).
+
+2. Check that all Custom Resources have been mirrored. All the new ones should
+ be annotated with `"crd.antrea.io/managed-by":
+ "crdmirroring-controller"`. The first command below will display all the
+ legacy AntreaNetworkPolicies (ANPs). The second one will display all the ones
+ which exist in the new `crd.antrea.io` API group. You can then compare the
+ two lists.
+
+ ```bash
+ kubectl get lanp.security.antrea.tanzu.vmware.com -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
+ kubectl get anp.crd.antrea.io -o jsonpath='{range .items[?(@.metadata.annotations.crd\.antrea\.io/managed-by=="crdmirroring-controller")]}{.metadata.name}{"\n"}{end}'
+ ```
+
+3. Stop the old version of the application, which uses the legacy CRDs.
+
+4. Annotate all existing Custom Resources managed by the application with
+ `"crd.antrea.io/stop-mirror"`. From now on, the mirroring controller will
+ ignore these legacy resources: updates to the legacy resources (including
+ deletions) are not applied to the corresponding new resource any more, and
+ changes to the new resources are now possible (they will not be overwritten
+ by the controller). As an example, the command below will annotate *all* ANPs
+ in the current Namespace with `"crd.antrea.io/stop-mirror"`.
+
+ ```bash
+ kubectl annotate lanp.security.antrea.tanzu.vmware.com --all crd.antrea.io/stop-mirror=''
+ ```
+
+5. Check that none of the new Custom Resources still have the
+ `"crd.antrea.io/managed-by": "crdmirroring-controller"` annotation. Running
+ the same command as before should return an empty list:
+
+ ```bash
+ kubectl get anp.crd.antrea.io -o jsonpath='{range .items[?(@.metadata.annotations.crd\.antrea\.io/managed-by=="crdmirroring-controller")]}{.metadata.name}{"\n"}{end}'
+ ```
+
+ If you remove the filter, all your ANPs should still exist:
+
+ ```bash
+ kubectl get anp.crd.antrea.io -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
+ ```
+
+6. Safely delete all legacy CRDs previously managed by the application. As an
+ example, the command below will delete *all* legacy ANPs in the current
+ Namespace:
+
+ ```bash
+ kubectl delete lanp.security.antrea.tanzu.vmware.com
+ ```
+
+ Once again, all new ANPs should still exist, which can be confirmed with:
+
+ ```bash
+ kubectl get anp.crd.antrea.io -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
+ ```
+
+7. Start the new version of the application, which uses the new CRDs. All
+ mirrored Custom Resources should be available for the application to access.
+
+8. At this stage, if all applications have been updated, legacy CRD mirroring
+ can be disabled in the Antrea Controller configuration.
+
+Note that for CRDs which are "owned" by Antrea, `AntreaAgentInfo` and
+`AntreaControllerInfo`, resources are automatically created by the Antrea
+components using both API versions.
+
+### Deleting legacy Kubernetes resources after an upgrade
+
+After a successful upgrade from Antrea < v1.6 to Antrea >= v1.6, you may want to
+manually clean up legacy Kubernetes resources which were created by an old
+Antrea version but are no longer needed. Note that keeping these resources will
+not impact any Antrea functions.
+
+To delete these legacy resources (CRDs and webhooks), run:
+
+```bash
+kubectl get crds -o=name --no-headers=true | grep "antrea\.tanzu\.vmware\.com" | xargs -r kubectl delete
+kubectl get mutatingwebhookconfigurations -o=name --no-headers=true | grep "antrea\.tanzu\.vmware\.com" | xargs -r kubectl delete
+kubectl get validatingwebhookconfigurations -o=name --no-headers=true | grep "antrea\.tanzu\.vmware\.com" | xargs -r kubectl delete
+```
diff --git a/content/docs/v1.15.0/docs/assets/README.md b/content/docs/v1.15.0/docs/assets/README.md
new file mode 100644
index 00000000..61dc9ea7
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/README.md
@@ -0,0 +1,9 @@
+# Assets
+
+## SVG images
+
+The SVG images / diagrams in this directory have been created using
+[Inkscape](https://inkscape.org/) and exported as PNG files, which can be embedded in Markdown
+files. If you edit these images, please re-export them as PNG with a 300 dpi resolution. If you
+create new SVG images / diagrams for documentation, please check in both the SVG source and the
+exported PNG file.
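+
+For example, one way to do the export from the command line (assuming a
+recent Inkscape 1.x and a hypothetical `diagram.svg`):
+
+```bash
+inkscape --export-type=png --export-dpi=300 diagram.svg
+```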
diff --git a/content/docs/v1.15.0/docs/assets/adopters/glasnostic-logo.png b/content/docs/v1.15.0/docs/assets/adopters/glasnostic-logo.png
new file mode 100644
index 00000000..52f96a48
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/adopters/glasnostic-logo.png differ
diff --git a/content/docs/v1.15.0/docs/assets/adopters/terasky-logo.png b/content/docs/v1.15.0/docs/assets/adopters/terasky-logo.png
new file mode 100644
index 00000000..d26875f4
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/adopters/terasky-logo.png differ
diff --git a/content/docs/v1.15.0/docs/assets/adopters/transwarp-logo.png b/content/docs/v1.15.0/docs/assets/adopters/transwarp-logo.png
new file mode 100644
index 00000000..07254111
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/adopters/transwarp-logo.png differ
diff --git a/content/docs/v1.15.0/docs/assets/antrea_overview.svg b/content/docs/v1.15.0/docs/assets/antrea_overview.svg
new file mode 100644
index 00000000..4a3b1da7
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/antrea_overview.svg
@@ -0,0 +1,913 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/antrea_overview.svg.png b/content/docs/v1.15.0/docs/assets/antrea_overview.svg.png
new file mode 100644
index 00000000..9aff76e7
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/antrea_overview.svg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/arch.svg b/content/docs/v1.15.0/docs/assets/arch.svg
new file mode 100644
index 00000000..a549e33f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/arch.svg
@@ -0,0 +1,2076 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/developer-workflow-opaque-bg.png b/content/docs/v1.15.0/docs/assets/developer-workflow-opaque-bg.png
new file mode 100644
index 00000000..191e9d50
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/developer-workflow-opaque-bg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/developer-workflow.graffle b/content/docs/v1.15.0/docs/assets/developer-workflow.graffle
new file mode 100644
index 00000000..725d99a8
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/developer-workflow.graffle differ
diff --git a/content/docs/v1.15.0/docs/assets/flow_visibility.svg b/content/docs/v1.15.0/docs/assets/flow_visibility.svg
new file mode 100644
index 00000000..fdbb990a
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/flow_visibility.svg
@@ -0,0 +1,2538 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/hns_integration.svg b/content/docs/v1.15.0/docs/assets/hns_integration.svg
new file mode 100644
index 00000000..172b49e2
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/hns_integration.svg
@@ -0,0 +1,856 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/logo/README.md b/content/docs/v1.15.0/docs/assets/logo/README.md
new file mode 100644
index 00000000..c1233bae
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/logo/README.md
@@ -0,0 +1,28 @@
+# Antrea Logos
+
+We provide the following 2 logos ("regular" and "stacked") in both SVG and PNG
+format. Use the one that works for you!
+
+## Regular SVG
+
+![Regular SVG](antrea_logo.svg)
+
+## Regular PNG Large
+
+![Regular PNG Large](antrea_logo_lrg.png)
+
+## Regular PNG Small
+
+![Regular PNG Small](antrea_logo_sml.png)
+
+## Stacked SVG
+
+![Stacked SVG](antrea_logo_stacked.svg)
+
+## Stacked PNG Large
+
+![Stacked PNG Large](antrea_logo_stacked_lrg.png)
+
+## Stacked PNG Small
+
+![Stacked PNG Small](antrea_logo_stacked_sml.png)
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo.svg b/content/docs/v1.15.0/docs/assets/logo/antrea_logo.svg
new file mode 100644
index 00000000..55a22cc5
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/logo/antrea_logo.svg
@@ -0,0 +1,70 @@
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo_lrg.png b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_lrg.png
new file mode 100644
index 00000000..cc09b97d
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_lrg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo_sml.png b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_sml.png
new file mode 100644
index 00000000..43ee4286
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_sml.png differ
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked.svg b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked.svg
new file mode 100644
index 00000000..194bd7d6
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked.svg
@@ -0,0 +1,71 @@
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_lrg.png b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_lrg.png
new file mode 100644
index 00000000..e4577bf1
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_lrg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_sml.png b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_sml.png
new file mode 100644
index 00000000..d7009f44
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/logo/antrea_logo_stacked_sml.png differ
diff --git a/content/docs/v1.15.0/docs/assets/node.svg b/content/docs/v1.15.0/docs/assets/node.svg
new file mode 100644
index 00000000..bdab8f9d
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/node.svg
@@ -0,0 +1,406 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/node.svg.png b/content/docs/v1.15.0/docs/assets/node.svg.png
new file mode 100644
index 00000000..e8b8b0ce
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/node.svg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/ovs-pipeline-antrea-proxy.svg b/content/docs/v1.15.0/docs/assets/ovs-pipeline-antrea-proxy.svg
new file mode 100644
index 00000000..7016a665
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/ovs-pipeline-antrea-proxy.svg
@@ -0,0 +1,4835 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/ovs-pipeline-external-node.svg b/content/docs/v1.15.0/docs/assets/ovs-pipeline-external-node.svg
new file mode 100644
index 00000000..63fee6dc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/ovs-pipeline-external-node.svg
@@ -0,0 +1,1216 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/ovs-pipeline.svg b/content/docs/v1.15.0/docs/assets/ovs-pipeline.svg
new file mode 100644
index 00000000..c60576a1
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/ovs-pipeline.svg
@@ -0,0 +1,4524 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/policy-only-cni.svg b/content/docs/v1.15.0/docs/assets/policy-only-cni.svg
new file mode 100644
index 00000000..3adb5746
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/policy-only-cni.svg
@@ -0,0 +1,138 @@
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/service_walk.svg b/content/docs/v1.15.0/docs/assets/service_walk.svg
new file mode 100644
index 00000000..85c13f2f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/service_walk.svg
@@ -0,0 +1,828 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/service_walk.svg.png b/content/docs/v1.15.0/docs/assets/service_walk.svg.png
new file mode 100644
index 00000000..54bc058c
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/service_walk.svg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/traffic_external_node.svg b/content/docs/v1.15.0/docs/assets/traffic_external_node.svg
new file mode 100644
index 00000000..19e547ef
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/traffic_external_node.svg
@@ -0,0 +1,438 @@
+
+
diff --git a/content/docs/v1.15.0/docs/assets/traffic_walk.svg b/content/docs/v1.15.0/docs/assets/traffic_walk.svg
new file mode 100644
index 00000000..a40396fc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/traffic_walk.svg
@@ -0,0 +1,976 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/assets/traffic_walk.svg.png b/content/docs/v1.15.0/docs/assets/traffic_walk.svg.png
new file mode 100644
index 00000000..43064a68
Binary files /dev/null and b/content/docs/v1.15.0/docs/assets/traffic_walk.svg.png differ
diff --git a/content/docs/v1.15.0/docs/assets/windows_external_traffic.svg b/content/docs/v1.15.0/docs/assets/windows_external_traffic.svg
new file mode 100644
index 00000000..1339dcc2
--- /dev/null
+++ b/content/docs/v1.15.0/docs/assets/windows_external_traffic.svg
@@ -0,0 +1,386 @@
+
+
diff --git a/content/docs/v1.15.0/docs/configuration.md b/content/docs/v1.15.0/docs/configuration.md
new file mode 100644
index 00000000..4c6d78fd
--- /dev/null
+++ b/content/docs/v1.15.0/docs/configuration.md
@@ -0,0 +1,90 @@
+# Configuration
+
+## antrea-agent
+
+### Command line options
+
+```text
+--config string The path to the configuration file
+--v Level number for the log level verbosity
+```
+
+Use `antrea-agent -h` to see complete options.
+
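+For example, a manual invocation could look like this (the config path is
+hypothetical; in a standard deployment the manifest sets it for you):
+
+```bash
+antrea-agent --config /etc/antrea/antrea-agent.conf --v=2
+```
+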
+### Configuration
+
+The `antrea-agent` configuration file specifies the agent configuration
+parameters. For all the agent configuration parameters of a Linux Node, refer to
+this [base configuration file](https://github.com/antrea-io/antrea/blob/v1.15.0/build/charts/antrea/conf/antrea-agent.conf).
+For all the configuration parameters of a Windows Node, refer to this [base
+configuration file](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/windows/base/conf/antrea-agent.conf).
+
+## antrea-controller
+
+### Command line options
+
+```text
+--config string The path to the configuration file
+--v Level number for the log level verbosity
+```
+
+Use `antrea-controller -h` to see complete options.
+
+### Configuration
+
+The `antrea-controller` configuration file specifies the controller
+configuration parameters. For all the controller configuration parameters,
+refer to this [base configuration file](https://github.com/antrea-io/antrea/blob/v1.15.0/build/charts/antrea/conf/antrea-controller.conf).
+
+## CNI configuration
+
+A typical CNI configuration looks like this:
+
+```json
+ {
+ "cniVersion":"0.3.0",
+ "name": "antrea",
+ "plugins": [
+ {
+ "type": "antrea",
+ "ipam": {
+ "type": "host-local"
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ },
+ {
+ "type": "bandwidth",
+ "capabilities": {
+ "bandwidth": true
+ }
+ }
+ ]
+ }
+```
+
+You can also set the MTU (for the Pod's network interface) in the CNI
+configuration using `"mtu": <MTU>`. When using an `antrea.yml` manifest, the
+MTU should be set with the `antrea-agent` `defaultMTU` configuration parameter,
+which will apply to all Pods and the host gateway interface on every Node. It is
+strongly discouraged to set the `"mtu"` field in the CNI configuration to a
+value that does not match the `defaultMTU` parameter, as it may lead to
+performance degradation or packet drops.
+
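+For instance, the `antrea` plugin entry shown above could carry an explicit
+MTU (the value 1450 is purely illustrative, and should match `defaultMTU`):
+
+```json
+{
+    "type": "antrea",
+    "mtu": 1450,
+    "ipam": {
+        "type": "host-local"
+    }
+}
+```
+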
+Antrea enables the portmap and bandwidth CNI plugins by default to support
+`hostPort` and traffic shaping functionality for Pods respectively. To disable
+them, remove the corresponding section from `antrea-cni.conflist` in the Antrea
+manifest. For example, removing the following section disables the portmap
+plugin:
+
+```json
+{
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+}
+```
diff --git a/content/docs/v1.15.0/docs/contributors/cherry-picks.md b/content/docs/v1.15.0/docs/contributors/cherry-picks.md
new file mode 100644
index 00000000..5e2c2db4
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/cherry-picks.md
@@ -0,0 +1,64 @@
+# Cherry-picks to release branches
+
+Some Pull Requests (PRs) which fix bugs in the main branch of Antrea can be
+identified as good candidates for backporting to currently maintained release
+branches (using a Git [cherry-pick](https://git-scm.com/docs/git-cherry-pick)),
+so that they can be included in subsequent patch releases. If you have authored
+such a PR (thank you!!!), one of the Antrea maintainers may comment on your PR
+to ask for your assistance with that process. This document provides the steps
+you can use to cherry-pick your change to one or more release branches, with the
+help of the [cherry-pick script][cherry-pick-script].
+
+For information about which changes are good candidates for cherry-picking,
+please refer to our [versioning
+policy](../versioning.md#minor-releases-and-patch-releases).
+
+## Prerequisites
+
+* A PR which was approved and merged into the main branch.
+* The PR was identified as a good candidate for backporting by an Antrea
+ maintainer: they will label the PR with `action/backport` and comment a list
+ of release branches to which the patch should be backported (example:
+ [`release-1.0`](https://github.com/antrea-io/antrea/tree/release-1.0)).
+* Have the [Github CLI](https://cli.github.com/) installed (version >= 1.3) and
+  make sure you authenticate yourself by running `gh auth login`.
+* Your own fork of the Antrea repository, and a clone of this fork with two
+ remotes: the `origin` remote tracking your fork and the `upstream` remote
+ tracking the upstream Antrea repository. If you followed our recommended
+ [Github Workflow], this should already be the case.
+
+## Cherry-pick your changes
+
+* Set the `GITHUB_USER` environment variable.
+* _Optional_ If your remote names do not match our recommended [Github
+ Workflow], you must set the `UPSTREAM_REMOTE` and `FORK_REMOTE` environment
+ variables.
+* Run the [cherry-pick script][cherry-pick-script]
+
+ This example applies a main branch PR #2134 to the remote branch
+ `upstream/release-1.0`:
+
+ ```shell
+ hack/cherry-pick-pull.sh upstream/release-1.0 2134
+ ```
+
+ If the cherry-picked PR does not apply cleanly against an old release branch,
+ the script will let you resolve conflicts manually. This is one of the reasons
+ why we ask contributors to backport their own bug fixes, as their
+ participation is critical in case of such a conflict.
+
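+Putting it all together, a full run could look like this (the Github username
+is hypothetical, and the two remote variables only need to be exported if
+your remote names differ from the recommended workflow):
+
+```shell
+export GITHUB_USER=my-github-user
+export UPSTREAM_REMOTE=upstream
+export FORK_REMOTE=origin
+hack/cherry-pick-pull.sh upstream/release-1.0 2134
+```
+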
+The script will create a PR on Github for you, which will automatically be
+labelled with `kind/cherry-pick`. This PR will go through the normal testing
+process, although it should be very quick given that the original PR was already
+approved and merged into the main branch. The PR should also go through normal
+CI testing. In some cases, a few CI tests may fail because we do not have
+dedicated CI infrastructure for past Antrea releases. If this happens, the PR
+will be merged despite the presence of CI test failures.
+
+You will need to run the cherry pick script separately for each release branch
+you need to cherry-pick to. Typically, cherry-picks should be applied to all
+[maintained](../versioning.md#release-cycle) release branches for which the fix
+is applicable.
+
+[cherry-pick-script]: ../../hack/cherry-pick-pull.sh
+[Github Workflow]: ../../CONTRIBUTING.md#github-workflow
diff --git a/content/docs/v1.15.0/docs/contributors/code-generation.md b/content/docs/v1.15.0/docs/contributors/code-generation.md
new file mode 100644
index 00000000..23a3c188
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/code-generation.md
@@ -0,0 +1,44 @@
+# Code and Documentation Generation
+
+## CNI
+
+Antrea uses [protoc](https://github.com/protocolbuffers/protobuf) and [protoc-gen-go](
+https://github.com/golang/protobuf) to generate CNI gRPC service code.
+
+If you make any change to [cni.proto](https://github.com/antrea-io/antrea/blob/v1.15.0/pkg/apis/cni/v1beta1/cni.proto), you can
+re-generate the code by invoking `make codegen`.
+
+## Extension API Resources and Custom Resource Definitions
+
+Antrea extends Kubernetes API with an extension APIServer and Custom Resource Definitions, and uses
+[k8s.io/code-generator
+(release-1.18)](https://github.com/kubernetes/code-generator/tree/release-1.18) to generate clients,
+informers, conversions, protobuf codecs and other helpers. The resource definitions and their
+generated code are located in the conventional paths: `pkg/apis/<group>` for
+internal types, `pkg/apis/<group>/<version>` for versioned types, and
+`pkg/client/clientset` for clients.
+
+If you make any change to any `types.go`, you can re-generate the code by invoking `make codegen`.
+
+## Mocks
+
+Antrea uses the [GoMock](https://github.com/uber-go/mock) framework for its unit tests.
+
+If you add or modify interfaces that need to be mocked, please add or update `MOCKGEN_TARGETS` in
+[update-codegen-dockerized.sh](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/update-codegen-dockerized.sh) accordingly. All the mocks for a
+given package will typically be generated in a sub-package called `testing`. For example, the mock
+code for the interface `Baz` defined in the package `pkg/foo/bar` will be generated to
+`pkg/foo/bar/testing/mock_bar.go`, and you can import it via `pkg/foo/bar/testing`.
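+
+As an illustration, a hypothetical entry for the `Baz` example above might look
+like the following (a sketch; check the script for the exact format it uses):
+
+```bash
+# In hack/update-codegen-dockerized.sh: "<package> <interface(s)> <mock package>"
+MOCKGEN_TARGETS=(
+  "pkg/foo/bar Baz testing"
+)
+```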
+
+Same as above, you can re-generate the mock source code (with `mockgen`) by invoking `make codegen`.
+
+## Generated Documentation
+
+[Prometheus integration document](../prometheus-integration.md) contains a list
+of supported metrics, which could be affected by third party component
+changes. The collection of metrics is done from a running Kind deployment, in
+order to reflect the current list of metrics which is exposed by Antrea
+Controller and Agents.
+
+To regenerate the metrics list within the document, use [make-metrics-doc.sh](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/make-metrics-doc.sh)
+with the document location as a parameter.
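+
+For example, assuming the document lives at `docs/prometheus-integration.md` in
+your local checkout:
+
+```bash
+hack/make-metrics-doc.sh docs/prometheus-integration.md
+```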
diff --git a/content/docs/v1.15.0/docs/contributors/docker-desktop-alternatives.md b/content/docs/v1.15.0/docs/contributors/docker-desktop-alternatives.md
new file mode 100644
index 00000000..97b20b0f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/docker-desktop-alternatives.md
@@ -0,0 +1,53 @@
+# Docker Desktop Alternatives
+
+The Antrea build system relies on Docker to build container images, which can
+then be used to test Antrea locally. As an Antrea developer, if you run `make`,
+`docker build` will be invoked to build the `antrea-ubuntu` container image. On
+Linux, Docker Engine (based on moby) runs natively, but if you use macOS or
+Windows for Antrea development, Docker needs to run inside a Linux Virtual
+Machine (VM). This VM is typically managed by [Docker
+Desktop](https://www.docker.com/products/docker-desktop). Starting January 31,
+2022, Docker Desktop requires a per-user paid subscription for professional use
+in "large" companies (more than 250 employees or more than $10 million in annual
+revenue). See Docker's announcement of these subscription changes for details. For developers
+who contribute to Antrea as an employee of such a company (and not in their own
+individual capacity), it is no longer possible to use Docker Desktop to build
+(and possibly run) Antrea Docker images locally, unless they have a Docker
+subscription.
+
+For contributors who do not have a Docker subscription, we recommend the
+following Docker Desktop alternatives.
+
+## Colima (macOS)
+
+[Colima](https://github.com/abiosoft/colima) is a UI built with
+[Lima](https://github.com/lima-vm/lima). It supports running a container runtime
+(docker, containerd or kubernetes) on macOS, inside a Lima VM. Major benefits
+of Colima include its ability to be used as a drop-in replacement for Docker
+Desktop and its ability to coexist with Docker Desktop on the same macOS
+machine.
+
+To install and run Colima, follow these steps:
+
+* `brew install colima`
+* `colima start` to start Colima (the Linux VM) with the default
+ configuration. Check the Colima documentation for configuration options. By
+ default, Colima will use the Docker runtime. This means that you can keep
+ using the `docker` CLI and that no changes are required to build Antrea.
+* `docker context list` and check that the `colima` context is selected. You can
+ use `docker context use desktop-linux` to go back to Docker Desktop.
+* `make` to build Antrea locally. Check that the `antrea-ubuntu` image is
+ available by listing all images with `docker images`.
+
+TODO: validate that Kind can be used with Colima without any issue.
+
+## Rancher Desktop (macOS and Windows)
+
+Rancher Desktop is another possible alternative to Docker Desktop, which
+supports Windows in addition to macOS. On macOS, it also uses Lima as the Linux
+VM. Two major differences with Colima are that Rancher Desktop will always run
+Kubernetes, and that Rancher Desktop uses the
+[`nerdctl`](https://github.com/containerd/nerdctl) UI for container management
+instead of `docker`. However, the `nerdctl` and `docker` UIs are supposed to be
+compatible, so in theory it should be possible to alias `docker` to `nerdctl`
+and keep using the Antrea build system as is (to be tested).
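+
+Note that a plain shell alias is not visible to the commands spawned by `make`,
+so a small wrapper script on the `PATH` is a more robust way to try this
+(an untested sketch):
+
+```bash
+# Create a "docker" shim that forwards to nerdctl, and put it first in PATH
+mkdir -p ~/bin
+printf '#!/bin/sh\nexec nerdctl "$@"\n' > ~/bin/docker
+chmod +x ~/bin/docker
+export PATH="$HOME/bin:$PATH"
+make
+```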
diff --git a/content/docs/v1.15.0/docs/contributors/eks-terraform.md b/content/docs/v1.15.0/docs/contributors/eks-terraform.md
new file mode 100644
index 00000000..eb2a2b86
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/eks-terraform.md
@@ -0,0 +1,62 @@
+# Deploying EKS with Antrea
+
+Antrea may run in networkPolicyOnly mode in AKS and EKS clusters. This document
+describes the steps to create an EKS cluster with Antrea using Terraform.
+
+## Common Prerequisites
+
+1. To run an EKS cluster, install and configure the AWS CLI (either version 1
+   or version 2); see the AWS CLI installation and configuration guides.
+2. Install aws-iam-authenticator; see its installation guide.
+3. Install Terraform; see the Terraform installation guide.
+4. You must already have an SSH key pair created. This key pair will be used to
+   access the worker Nodes via SSH.
+
+```bash
+ls ~/.ssh/
+id_rsa id_rsa.pub
+```
+
+## Create an EKS cluster via terraform
+
+Ensure that you have permission to create an EKS cluster, and that you have
+already created the EKS cluster role as well as the worker Node profile.
+
+```bash
+export TF_VAR_eks_cluster_iam_role_name=YOUR_EKS_ROLE
+export TF_VAR_eks_iam_instance_profile_name=YOUR_EKS_WORKER_NODE_PROFILE
+export TF_VAR_eks_key_pair_name=YOUR_KEY_PAIR_TO_ACCESS_WORKER_NODE
+```
+
+Where
+
+- TF_VAR_eks_cluster_iam_role_name may be created by following these
+ [instructions](https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html#create-service-role)
+- TF_VAR_eks_iam_instance_profile_name may be created by following these
+ [instructions](https://docs.aws.amazon.com/eks/latest/userguide/worker_node_IAM_role.html#create-worker-node-role)
+- TF_VAR_eks_key_pair_name is the AWS key pair name you have configured by following these
+  [instructions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#how-to-generate-your-own-key-and-import-it-to-aws),
+  using the SSH key pair created in Prerequisites item 4
+
+Create the EKS cluster:
+
+```bash
+./hack/terraform-eks.sh create
+```
+
+Interact with the EKS cluster:
+
+```bash
+./hack/terraform-eks.sh kubectl ... # issue kubectl commands to EKS cluster
+./hack/terraform-eks.sh load ... # load local built images to EKS cluster
+./hack/terraform-eks.sh destroy # destroy EKS cluster
+```
+
+Worker Nodes can be accessed via SSH using their external IPs.
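+
+For example, a minimal sketch, assuming the `id_rsa` key pair from the
+prerequisites and an Amazon Linux AMI (whose default login user is `ec2-user`;
+adjust the user for other AMIs):
+
+```bash
+ssh -i ~/.ssh/id_rsa ec2-user@<worker-node-external-ip>
+```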
+
+Apply Antrea to the EKS cluster:
+
+```bash
+./hack/generate-manifest.sh --encap-mode networkPolicyOnly | ./hack/terraform-eks.sh kubectl apply -f -
+```
diff --git a/content/docs/v1.15.0/docs/contributors/github-labels.md b/content/docs/v1.15.0/docs/contributors/github-labels.md
new file mode 100644
index 00000000..9a2ceb4f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/github-labels.md
@@ -0,0 +1,131 @@
+# GitHub Label List
+
+We use GitHub labels to perform issue triage, track and report on development
+progress, plan roadmaps, and automate issue grooming.
+
+To ensure that contributing new issues and PRs remains straightforward, we would
+like to keep the labels required for submission to a minimum. The remaining
+labels will be added either by automation or manual grooming by other
+contributors and maintainers.
+
+The labels in this list originated within the Kubernetes project.
+
+## Labels that apply to issues or PRs
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| api-review | Categorizes an issue or PR as actively needing an API review. | Any |
+| area/api | Issues or PRs related to an API | Any |
+| area/blog | Issues or PRs related to blog entries | Any |
+| area/build-release | Issues or PRs related to building and releasing | Any |
+| area/component/antctl | Issues or PRs related to the command line interface component | Any |
+| area/component/agent | Issues or PRs related to the Agent component | Any |
+| area/component/cni | Issues or PRs related to the cni component | Any |
+| area/component/controller | Issues or PRs related to the Controller component | Any |
+| area/component/flow-aggregator | Issues or PRs related to the Flow Aggregator component | Any |
+| area/dependency | Issues or PRs related to dependency changes | Any |
+| area/endpoint/identity | Issues or PRs related to endpoint identity | Any |
+| area/endpoint/selection | Issues or PRs related to endpoint selection | Any |
+| area/endpoint/type | Issues or PRs related to endpoint type | Any |
+| area/flow-visibility | Issues or PRs related to flow visibility support in Antrea | Any |
+| area/flow-visibility/aggregation | Issues or PRs related to flow aggregation | Any |
+| area/flow-visibility/export | Issues or PRs related to the Flow Exporter functions in the Agent | Any |
+| area/github-membership | Categorizes an issue as a membership request to join the antrea-io Github organization | Any |
+| area/grouping | Issues or PRs related to ClusterGroup, Group API | Any |
+| area/ipam | Issues or PRs related to IP address management (IPAM) | Any |
+| area/interface | Issues or PRs related to network interfaces | Any |
+| area/licensing | Issues or PRs related to Antrea licensing | Any |
+| area/monitoring/auditing | Issues or PRs related to auditing | Any |
+| area/monitoring/health-performance | Issues or PRs related to health and performance monitoring | Any |
+| area/monitoring/logging | Issues or PRs related to logging | Any |
+| area/monitoring/mirroring | Issues or PRs related to mirroring | Any |
+| area/monitoring/traffic-analysis | Issues or PRs related to traffic analysis | Any |
+| area/multi-cluster | Issues or PRs related to multi cluster | Any |
+| area/network-policy | Issues or PRs related to network policy | Any |
+| area/network-policy/action | Issues or PRs related to network policy actions | Any |
+| area/network-policy/agent | Issues or PRs related to the network policy agents | Any |
+| area/network-policy/api | Issues or PRs related to the network policy API | Any |
+| area/network-policy/controller | Issues or PRs related to the network policy controller | Any |
+| area/network-policy/lifecycle | Issues or PRs related to the network policy lifecycle | Any |
+| area/network-policy/match | Issues or PRs related to matching packets | Any |
+| area/network-policy/precedence | Issues or PRs related to network policy precedence | Any |
+| area/ops | Issues or PRs related to features which support network operations and troubleshooting | Any |
+| area/ops/traceflow | Issues or PRs related to the Traceflow feature | Any |
+| area/ovs/openflow | Issues or PRs related to Open vSwitch OpenFlow | Any |
+| area/ovs/ovsdb | Issues or PRs related to Open vSwitch database | Any |
+| area/OS/linux | Issues or PRs related to the Linux operating system | Any |
+| area/OS/windows | Issues or PRs related to the Windows operating system | Any |
+| area/provider/aws | Issues or PRs related to aws provider | Any |
+| area/provider/azure | Issues or PRs related to azure provider | Any |
+| area/provider/gcp | Issues or PRs related to gcp provider | Any |
+| area/provider/vmware | Issues or PRs related to vmware provider | Any |
+| area/proxy | Issues or PRs related to proxy functions in Antrea | Any |
+| area/proxy/clusterip | Issues or PRs related to the implementation of ClusterIP Services | Any |
+| area/proxy/nodeport | Issues or PRs related to the implementation of NodePort Services | Any |
+| area/proxy/nodeportlocal | Issues or PRs related to the NodePortLocal feature | Any |
+| area/secondary-network | Issues or PRs related to support for secondary networks in Antrea | Any |
+| area/security/access-control | Issues or PRs related to access control | Any |
+| area/security/controlplane | Issues or PRs related to controlplane security | Any |
+| area/security/dataplane | Issues or PRs related to dataplane security | Any |
+| area/test | Issues or PRs related to unit and integration tests. | Any |
+| area/test/community | Issues or PRs related to community testing | Any |
+| area/test/e2e | Issues or PRs related to Antrea specific end-to-end testing. | Any |
+| area/test/infra | Issues or PRs related to test infrastructure (Jenkins configuration, Ansible playbook, Kind wrappers, ...) | Any |
+| area/transit/addressing | Issues or PRs related to IP addressing category (unicast, multicast, broadcast, anycast) | Any |
+| area/transit/egress | Issues or PRs related to Egress (SNAT for traffic egressing the cluster) | Any |
+| area/transit/encapsulation | Issues or PRs related to encapsulation | Any |
+| area/transit/encryption | Issues or PRs related to transit encryption (IPsec, SSL) | Any |
+| area/transit/ipv6 | Issues or PRs related to IPv6 | Any |
+| area/transit/qos | Issues or PRs related to transit qos or policing | Any |
+| area/transit/routing | Issues or PRs related to routing and forwarding | Any |
+| kind/api-change | Categorizes issue or PR as related to adding, removing, or otherwise changing an API. | Any |
+| kind/bug | Categorizes issue or PR as related to a bug. | Any |
+| kind/cherry-pick | Categorizes issue or PR as related to the cherry-pick of a bug fix from the main branch to a release branch | Any |
+| kind/cleanup | Categorizes issue or PR as related to cleaning up code, process, or technical debt | Any |
+| kind/deprecation | Categorizes issue or PR as related to feature marked for deprecation | Any |
+| kind/design | Categorizes issue or PR as related to design | Any |
+| kind/documentation | Categorizes issue or PR as related to documentation. | Any |
+| kind/failing-test | Categorizes issue or PR as related to a consistently or frequently failing test | Any |
+| kind/feature | Categorizes issue or PR as related to a new feature. | Any |
+| kind/release | Categorizes a PR used to create a new release (with CHANGELOG and VERSION updates) | Maintainers |
+| kind/support | Categorizes issue or PR as related to a support question. | Any |
+| kind/task | Categorizes issue or PR as related to a routine task that needs to be performed. | Any |
+| lifecycle/active | Indicates that an issue or PR is actively being worked on by a contributor. | Any |
+| lifecycle/frozen | Indicates that an issue or PR should not be auto-closed due to staleness. | Any |
+| lifecycle/stale | Denotes an issue or PR has remained open with no activity and has become stale. | Any |
+| priority/awaiting-more-evidence | Lowest priority. Possibly useful, but not yet enough support to actually get it done. | Any |
+| priority/backlog | Higher priority than priority/awaiting-more-evidence. | Any |
+| priority/critical-urgent | Highest priority. Must be actively worked on as someone's top priority right now. | Any |
+| priority/important-longterm | Important over the long term, but may not be staffed and/or may need multiple releases to complete. | Any |
+| priority/important-soon | Must be staffed and worked on either currently, or very soon, ideally in time for the next release. | Any |
+| ready-to-work | Indicates that an issue or PR has been sufficiently triaged and prioritized and is now ready to work. | Any |
+| size/L | Denotes a PR that changes 100-499 lines, ignoring generated files. | Any |
+| size/M | Denotes a PR that changes 30-99 lines, ignoring generated files. | Any |
+| size/S | Denotes a PR that changes 10-29 lines, ignoring generated files. | Any |
+| size/XL | Denotes a PR that changes 500+ lines, ignoring generated files. | Any |
+| size/XS | Denotes a PR that changes 0-9 lines, ignoring generated files. | Any |
+| triage/duplicate | Indicates an issue is a duplicate of another open issue. | Humans |
+| triage/needs-information | Indicates an issue needs more information in order to work on it. | Humans |
+| triage/not-reproducible | Indicates an issue can not be reproduced as described. | Humans |
+| triage/unresolved | Indicates an issue that can not or will not be resolved. | Humans |
+| action/backport | Indicates a PR that requires backports. | Humans |
+| action/release-note | Indicates a PR that should be included in release notes. | Humans |
+
+## Labels that apply only to issues
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| good first issue | Denotes an issue ready for a new contributor, according to the "help wanted" [guidelines](issue-management.md#good-first-issues-and-help-wanted). | Anyone |
+| help wanted | Denotes an issue that needs help from a contributor. Must meet "help wanted" [guidelines](issue-management.md#good-first-issues-and-help-wanted). | Anyone |
+
+## Labels that apply only to PRs
+
+| Label | Description | Added By |
+|-------|-------------|----------|
+| approved | Indicates a PR has been approved by owners in accordance with [GOVERNANCE.md](../../GOVERNANCE.md) guidelines. | Maintainers |
+| vmware-cla: no | Indicates the PR's author has not signed the [VMware CLA](https://cla.vmware.com/faq) | VMware CLA Bot |
+| vmware-cla: yes | Indicates the PR's author has signed the [VMware CLA](https://cla.vmware.com/faq) | VMware CLA Bot |
+| do-not-merge/hold | Indicates a PR should not be merged because someone has issued a /hold command | Merge Bot |
+| do-not-merge/work-in-progress | Indicates that a PR should not be merged because it is a work in progress. | Merge Bot |
+| lgtm | Indicates that a PR is ready to be merged. | Merge Bot |
diff --git a/content/docs/v1.15.0/docs/contributors/issue-management.md b/content/docs/v1.15.0/docs/contributors/issue-management.md
new file mode 100644
index 00000000..f7ff5546
--- /dev/null
+++ b/content/docs/v1.15.0/docs/contributors/issue-management.md
@@ -0,0 +1,406 @@
+# Issue Management
+
+This document further describes the developer workflow and how issues are
+managed as introduced in [CONTRIBUTING.md](../../CONTRIBUTING.md). Please read
+[CONTRIBUTING.md](../../CONTRIBUTING.md) first before proceeding.
+
+
+- [Developer Workflow Overview](#developer-workflow-overview)
+- [Creating New Issues and PRs](#creating-new-issues-and-prs)
+- [Good First Issues and Help Wanted](#good-first-issues-and-help-wanted)
+- [Issue and PR Triage Process](#issue-and-pr-triage-process)
+ - [Issue Triage](#issue-triage)
+ - [PR Triage](#pr-triage)
+- [Working an Issue](#working-an-issue)
+- [Issue and PR Labels](#issue-and-pr-labels)
+ - [Issue Kinds](#issue-kinds)
+ - [API Change](#api-change)
+ - [Bug](#bug)
+ - [Cleanup](#cleanup)
+ - [Feature](#feature)
+ - [Deprecation](#deprecation)
+ - [Task](#task)
+ - [Design](#design)
+ - [Documentation](#documentation)
+ - [Failing Test](#failing-test)
+ - [Support](#support)
+ - [Area](#area)
+ - [Size](#size)
+ - [Triage](#triage)
+ - [Lifecycle](#lifecycle)
+ - [Priority](#priority)
+
+
+## Developer Workflow Overview
+
+The purpose of this workflow is to formalize a lightweight set of processes that
+will optimize issue triage and management which will lead to better release
+predictability and community responsiveness for support and feature
+enhancements. Additionally, Antrea must prioritize issues to ensure alignment
+and compatibility with other projects, including Kubernetes. The
+processes described here will aid in accomplishing these goals.
+
+![developer workflow overview](../assets/developer-workflow-opaque-bg.png)
+
+## Creating New Issues and PRs
+
+Creating new issues and PRs is covered in detail in
+[CONTRIBUTING.md](../../CONTRIBUTING.md).
+
+## Good First Issues and Help Wanted
+
+We use `good first issue` and `help wanted` labels to indicate issues we would
+like contribution on. These two labels were borrowed from the Kubernetes project
+and represent the same context as described in [Help Wanted and Good First Issue
+Labels](https://www.kubernetes.dev/docs/guide/help-wanted/).
+
+We do not yet support the automation mentioned in the Kubernetes help guide.
+
+To summarize:
+
+* `good first issue` -- issues intended for first time contributors. Members
+  should keep an eye out for these pull requests and shepherd them through our
+ processes.
+* `help wanted` -- issues that represent clearly laid out tasks that are
+ generally tractable for new contributors. The solution has already been
+ designed and requires no further discussion from the community. This label
+ indicates we need additional contributors to help move this task along.
+
+## Issue and PR Triage Process
+
+When new issues or PRs are created, the maintainers must triage the issue
+to ensure the information is valid, complete, and properly categorized and
+prioritized.
+
+### Issue Triage
+
+An issue is triaged in the following way:
+
+1. Ensure the issue is not a duplicate. Do a quick search against existing
+ issues to determine if the issue has been or is currently being worked on. If
+ you suspect the issue is a duplicate, apply the [`triage/duplicate`](#triage) label.
+2. Ensure that the issue has captured all the information required for the given
+ issue [`kind/`](#issue-kinds). If information or context is needed, apply the
+  `triage/needs-information` label.
+3. Apply any missing [`area/`](#area) labels. An issue can relate to more
+ than one area.
+4. Apply a [`priority/`](#priority) label. This may require further
+ discussion during the community meeting if the priority cannot be determined.
+ If undetermined, do not apply a priority. Issues with unassigned priorities
+ will be selected for review.
+5. Apply a [`size/`](#size) label if known. This may require further
+ discussion, a research spike or review by the assigned contributor who will
+ be working on this issue. This is only an estimate of the complexity and size
+ of the issue.
+
+Once an issue has been triaged, a comment should be left for the original
+submitter to respond to any applied triage labels.
+
+If all triage labels have been addressed and the issue is ready to be worked,
+apply the label `ready-to-work` so the issue can be assigned to a milestone and
+worked by a contributor.
+
+If it is determined that an issue will not be resolved or fixed, apply the
+`triage/unresolved` label and leave a reason in a comment for the original
+submitter. Unresolved issues can be closed after giving the original submitter
+an opportunity to appeal the reason supplied.
+
+### PR Triage
+
+A PR is triaged in the following way:
+
+1. Automation will ensure that the submitter has signed the [CLA](../../CONTRIBUTING.md#cla).
+2. Automation will run CI tests against the submission to ensure compliance.
+3. Apply [`size/`](#size) label to the submission. (TODO: we plan to
+ automate this with a GitHub action and apply size based on lines of code).
+4. Ensure that the PR references an existing issue (exceptions to this should be
+ rare). If the PR is missing this or needs any additional information, note it
+ in the comment and apply the `triage/needs-information` label.
+5. The PR should have the same `area/`, `kind/`, and `lifecycle/` labels as that of
+ the referenced issue. (TODO: we plan to automate this with a GitHub action
+ and apply labels automatically)
+
+## Working an Issue
+
+When starting work on an issue, assign the issue to yourself if it has not
+already been assigned and apply the `lifecycle/active` label to signal that the
+issue is actively being worked on.
+
+Making code changes is covered in detail in
+[CONTRIBUTING.md](../../CONTRIBUTING.md#github-workflow).
+
+If the issue kind is a `kind/bug`, ensure that the issue can be reproduced. If
+not, apply the `triage/not-reproducible` label and request feedback from the original
+submitter.
+
+## Issue and PR Labels
+
+This section describes the label metadata we use to track issues and PRs. For a
+definitive list of all GitHub labels used within this project, please see
+[github-labels.md](github-labels.md).
+
+### Issue Kinds
+
+An issue kind describes the kind of contribution being requested or submitted.
+In some cases, the kind will also influence how the issue or PR is triaged and
+worked.
+
+#### API Change
+
+A `kind/api-change` label categorizes an issue or PR as related to adding, removing,
+or otherwise changing an API.
+
+All API changes must be reviewed by maintainers in addition to the standard code
+review and approval workflow.
+
+To create an API change issue or PR:
+
+* label your issue or PR with `kind/api-change`
+* describe in the issue or PR body which API you are changing, making sure to include
+ * API endpoint and schema (endpoint, Version, APIGroup, etc.)
+ * Is this a breaking change?
+ * Can new or older clients opt-in to this API?
+ * Is there a fallback? What are the implications of not supporting this API version?
+ * How is an upgrade handled? If automatically, we need to ensure proper tests
+ are created. If we require a manual upgrade procedure, this needs to be
+ noted so that the release notes and docs can be updated appropriately.
+
+Before starting any work on an API change it is important that you have proper
+review and approval from the project maintainers.
+
+#### Bug
+
+A `kind/bug` label categorizes an issue or PR as related to a bug.
+
+Any problem encountered when building, configuring, or running Antrea could be a
+potential case for submitting a bug.
+
+To create a bug issue or bug fix PR:
+
+* label your issue or PR with `kind/bug`
+* describe your bug in the issue or PR body making sure to include:
+ * version of Antrea
+ * version of Kubernetes
+ * version of OS and any relevant environment or system configuration
+ * steps and/or configuration to reproduce the bug
+ * any tests that demonstrate the presence of the bug
+* please attach any relevant logs or diagnostic output
+
+#### Cleanup
+
+A `kind/cleanup` label categorizes an issue or PR as related to cleaning up
+code, process, or technical debt.
+
+To create a cleanup issue or PR:
+
+* label your issue or PR with `kind/cleanup`
+* describe your cleanup in the issue or PR body being sure to include
+ * what is being cleaned
+ * for what reason it is being cleaned (technical debt, deprecation, etc.)
+
+Examples of a cleanup include:
+
+* Adding comments to describe code execution
+* Making code easier to read and follow
+* Removing dead code related to deprecated features or implementations
+
+#### Feature
+
+A `kind/feature` label categorizes an issue or PR as related to a new feature.
+
+To create a feature issue or PR:
+
+* label your issue or PR with `kind/feature`
+* describe your proposed feature in the issue or PR body being sure to include
+ * a use case for the new feature
+ * list acceptance tests for the new feature
+ * describe any dependencies for the new feature
+* depending on the size and impact of the feature
+ * a design proposal may need to be submitted
+ * the feature may need to be discussed in the community meeting
+
+Before you begin work on your feature, it is important to ensure that you have
+proper review and approval from the project maintainers.
+
+Examples of a new feature include:
+
+* Adding a new set of metrics for enabling additional telemetry.
+* Adding additional supported transport layer protocol options for network policy.
+* Adding support for IPsec.
+
+#### Deprecation
+
+A `kind/deprecation` label categorizes an issue or PR as related to feature
+marked for deprecation.
+
+To create a deprecation issue or PR:
+
+* label your issue or PR with `kind/deprecation`
+* title the issue or PR with the feature you are deprecating
+* describe the deprecation in the issue or PR body making sure to:
+ * explain why the feature is being deprecated
+ * discuss time-to-live for the feature and when deprecation will take place
+ * discuss any impacts to existing APIs
+
+#### Task
+
+A `kind/task` label categorizes an issue or PR as related to a "routine"
+maintenance task for the project, e.g. upgrading a software dependency or
+enabling a new CI job.
+
+To create a task issue or PR:
+
+* label your issue or PR with `kind/task`
+* describe your task in the issue or PR body, being sure to include the reason
+ for the task and the possible impacts of the change
+
+#### Design
+
+A `kind/design` label categorizes an issue or PR as related to design.
+
+A design issue or PR is for discussing larger architectural and design proposals.
+Approval of a design proposal may result in multiple additional feature,
+api-change, or cleanup issues being created to implement the design.
+
+To create a design issue:
+
+* label your issue or PR with `kind/design`
+* describe the design in the issue or PR body
+
+Before creating additional issues or PRs that implement the proposed design it is
+important to get feedback and approval from the maintainers. Design feedback
+could include some of the following:
+
+* needs additional detail
+* no, this problem should be solved in another way
+* this is desirable but we need help completing other issues or PRs first; then we will
+ consider this design
+
+#### Documentation
+
+A `kind/documentation` label categorizes an issue or PR as related to
+documentation.
+
+To create a documentation issue or PR:
+
+* label your issue or PR with `kind/documentation`
+* title the issue with a short description of what you are documenting
+* provide a brief summary in the issue or PR body of what you are documenting. In some
+ cases, it might be useful to include a checklist of changed documentation
+ files to indicate your progress.
+
+#### Failing Test
+
+A `kind/failing-test` label categorizes an issue or PR as related to a consistently
+or frequently failing test.
+
+To create a failing test issue or PR:
+
+* label your issue or PR with `kind/failing-test`
+
+TODO: As more automation is used in the continuous integration pipeline, we will
+be able to automatically generate an issue for failing tests.
+
+#### Support
+
+A `kind/support` label categorizes an issue as related to a support request.
+
+To create a support issue or PR:
+
+* label your issue or PR with `kind/support`
+* title the issue or PR with a short description of your support request
+* answer all of the questions in the support issue template
+* to provide comprehensive information about your cluster that will be useful in
+ identifying and resolving the issue, you may want to consider producing a
+ ["support bundle"](../antctl.md/#collecting-support-information) and uploading it
+ to a publicly-accessible location. **Be aware that the generated support
+ bundle includes a lot of information, including logs, so please ensure that
+ you do not share anything sensitive.**
+
+### Area
+
+Area labels begin with `area/` and identify areas of interest or functionality
+to which an issue relates. An issue or PR could have multiple areas. These labels are
+used to sort issues and PRs into categories such as:
+
+* operating systems
+* cloud platforms
+* functional areas
+* operational or legal areas (e.g., licensing)
+* etc.
+
+A list of areas is maintained in [`github-labels.md`](github-labels.md).
+
+An area may be changed, added or deleted during issue or PR triage.
+
+### Size
+
+Size labels begin with `size/` and estimate the relative complexity or work
+required to resolve an issue or PR.
+
+TODO: For submitted PRs, the size can be automatically calculated and the
+appropriate label assigned.
+
+Size labels are specified according to lines of code; however, some issues,
+such as documentation issues, may not relate to lines of code. In those cases,
+use the labels to apply an equivalent complexity or size to the task at hand.
+
+Size labels include:
+
+* `size/XS` -- denotes an extra small issue or PR that changes 0-9 lines, ignoring generated files
+* `size/S` -- denotes a small issue or PR that changes 10-29 lines, ignoring generated files
+* `size/M` -- denotes a medium issue or PR that changes 30-99 lines, ignoring generated files
+* `size/L` -- denotes a large issue or PR that changes 100-499 lines, ignoring generated files
+* `size/XL` -- denotes a very large issue or PR that changes 500+ lines, ignoring generated files
+
+Size labels are defined in [`github-labels.md`](github-labels.md).
+
+### Triage
+
+As soon as new issues are submitted, they must be triaged until they are ready to
+work. The maintainers may apply the following labels during the issue triage
+process:
+
+* `triage/duplicate` -- indicates an issue is a duplicate of another open issue
+* `triage/needs-information` -- indicates an issue needs more information in order to work on it
+* `triage/not-reproducible` -- indicates an issue can not be reproduced as described
+* `triage/unresolved` -- indicates an issue that can not or will not be resolved
+
+Triage labels are defined in [`github-labels.md`](github-labels.md).
+
+### Lifecycle
+
+To track the state of an issue, the following labels will be assigned.
+
+* `lifecycle/active` -- indicates that an issue or PR is actively being worked on by a contributor
+* `lifecycle/frozen` -- indicates that an issue or PR should not be auto-closed due to staleness
+* `lifecycle/stale` -- denotes an issue or PR has remained open with no activity and has become stale
+
+The following schedule will be used to determine an issue's lifecycle:
+
+* after 180 days of inactivity, an issue will be automatically marked as `lifecycle/stale`
+* after an extra 180 days of inactivity, an issue will be automatically closed
+* any issue marked as `lifecycle/frozen` will prevent automatic transitions to
+ stale and prevent auto-closure
+* commenting on an issue will remove the `lifecycle/stale` label
+
+Issue lifecycle management ensures that the project backlog remains fresh and
+relevant. Project maintainers and contributors will need to revisit issues to
+periodically assess their relevance and progress.
+
+TODO: Additional CI automation (GitHub actions) will be used to automatically
+apply and manage some of these lifecycle labels.
+
+Lifecycle labels are defined in [`github-labels.md`](github-labels.md).
+
+### Priority
+
+A priority label signifies the overall priority that should be given to an
+issue or PR. Priorities are considered during backlog grooming and help to
+determine the number of features included in a milestone.
+
+* `priority/awaiting-more-evidence` -- lowest priority. Possibly useful, but not yet enough support to actually get it done.
+* `priority/backlog` -- higher priority than priority/awaiting-more-evidence.
+* `priority/critical-urgent` -- highest priority. Must be actively worked on as someone's top priority right now.
+* `priority/important-longterm` -- important over the long term, but may not be staffed and/or may need multiple releases to complete.
+* `priority/important-soon` -- must be staffed and worked on either currently, or very soon, ideally in time for the next release.
diff --git a/content/docs/v1.15.0/docs/cookbooks/fluentd/README.md b/content/docs/v1.15.0/docs/cookbooks/fluentd/README.md
new file mode 100644
index 00000000..8419bae9
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/fluentd/README.md
@@ -0,0 +1,147 @@
+# Using Antrea with Fluentd
+
+This guide will describe how to use Project Antrea with
+[Fluentd](https://github.com/fluent/fluentd-kubernetes-daemonset),
+to enable efficient audit logging.
+In this scenario, Antrea is used for the default network,
+[Elasticsearch](https://www.elastic.co/) is used for the default storage,
+and a [Kibana](https://www.elastic.co/kibana/) dashboard is used for visualization.
+
+
+- [Prerequisites](#prerequisites)
+- [Practical steps](#practical-steps)
+ - [Step 1: Deploying Antrea](#step-1-deploying-antrea)
+ - [Step 2: Deploy Elasticsearch and Kibana Dashboard](#step-2-deploy-elasticsearch-and-kibana-dashboard)
+ - [Step 3: Configure Custom Fluentd Plugins](#step-3-configure-custom-fluentd-plugins)
+ - [Step 4: Deploy Fluentd DaemonSet](#step-4-deploy-fluentd-daemonset)
+ - [Step 5: Visualize with Kibana Dashboard](#step-5-visualize-with-kibana-dashboard)
+- [Email Alerting](#email-alerting)
+
+
+## Prerequisites
+
+The only prerequisites are:
+
+* a K8s cluster (Linux Nodes) running a K8s version supported by Antrea.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+## Practical steps
+
+### Step 1: Deploying Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to
+[getting-started.md](../../getting-started.md). To deploy the latest version of
+Antrea, use:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+You may also choose a [released Antrea
+version](https://github.com/antrea-io/antrea/releases).
+
+### Step 2: Deploy Elasticsearch and Kibana Dashboard
+
+Fluentd supports multiple [output plugins](https://www.fluentd.org/plugins).
+Details will be discussed in [Step 4](#step-4-deploy-fluentd-daemonset), but
+by default, log records are collected by Fluentd DaemonSet and sent to Elasticsearch.
+A Kibana Dashboard can then be used to visualize the data. The YAML file for
+deployment is included in the `resources` directory. To deploy Elasticsearch
+and Kibana, run:
+
+```bash
+kubectl apply -f docs/cookbooks/fluentd/resources/kibana-elasticsearch.yml
+```
+
+### Step 3: Configure Custom Fluentd Plugins
+
+The architecture of Fluentd is a pipeline: input -> parser -> buffer ->
+output -> formatter. Many of these stages are implemented by plugins that can
+be configured to fit different use cases.
+
+To specify custom input plugins and parsers, modify `./resources/kubernetes.conf`
+and create a ConfigMap with the following command. Later, direct the Fluentd
+DaemonSet to refer to that ConfigMap. To see more variations of custom
+configuration, refer to [Fluentd inputs](https://docs.fluentd.org/input).
+This cookbook uses the [tail](https://docs.fluentd.org/input/tail)
+input plugin to monitor the audit logging files for Antrea-native policies
+on every K8s Node.
+
+```bash
+kubectl create configmap fluentd-conf --from-file=docs/cookbooks/fluentd/resources/kubernetes.conf --namespace=kube-logging
+```
+
+### Step 4: Deploy Fluentd DaemonSet
+
+The Fluentd deployment includes RBAC resources and a DaemonSet. Fluentd will collect logs
+from cluster components, so permissions need to be granted first through
+RBAC. In `fluentd.yml`, we create a ServiceAccount, and use a ClusterRole
+and a ClusterRoleBinding to grant it permissions to read, list and watch
+Pods in cluster scope.
+
+In the DaemonSet configuration, specify Elasticsearch host, port and scheme,
+as they are required by the Elasticsearch output plugin.
+In [Fluentd official documentation](https://github.com/fluent/fluentd-kubernetes-daemonset),
+output plugins are specified in `fluent.conf` depending on the chosen image.
+To change output plugins, choose a different image and specify it in `./resources/fluentd.yml`.
+When choosing the image version, note that the current Elasticsearch version
+specified in `resources/kibana-elasticsearch.yml` is 7.8.0 and that the major
+Elasticsearch version must match between the two files.
+
+```bash
+kubectl apply -f docs/cookbooks/fluentd/resources/fluentd.yml
+```
+
+### Step 5: Visualize with Kibana Dashboard
+
+Navigate to `http://[NodeIP]:30007` and create an index pattern with "fluentd-*".
+Go to `http://[NodeIP]:30007/app/kibana#/discover` to see the results as below.
+
+{{< img src="https://downloads.antrea.io/static/07062023/audit-logging-fluentd-kibana.png" width="900" alt="Audit Logging Fluentd Kibana" >}}
+
+## Email Alerting
+
+Kibana dashboard supports creating alerts with the logs in this
+[guide](https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html).
+This [documentation](https://docs.fluentd.org/how-to-guides/splunk-like-grep-and-alert-email)
+also provides a detailed guide for email alerting when using td-agent
+(the stable, preconfigured distribution of Fluentd).
+
+For this cookbook with custom Fluentd configuration, modify and add the following
+code to `./resources/kubernetes.conf`, then update ConfigMap in
+[Step 3: Configure Custom Fluentd Plugins](#step-3-configure-custom-fluentd-plugins).
+
+```editorconfig
+<filter antrea-networkpolicy>
+  @type grepcounter
+  count_interval 3  # The time window for counting errors (in secs)
+  input_key code    # The field to apply the regular expression
+  regexp ^5\d\d$    # The regular expression to be applied
+  threshold 1       # The minimum number of errors to trigger an alert
+  add_tag_prefix error_ANPxx  # Generate tags like "error_ANPxx.antrea-networkpolicy"
+</filter>
+
+<match error_ANPxx.*>
+  @type copy
+  <store>
+    @type stdout  # Print to stdout for debugging
+  </store>
+  <store>
+    @type mail
+    host smtp.gmail.com       # Change this to your SMTP server host
+    port 587                  # Normally 25/587/465 are used for submission
+    user USERNAME             # Use your username to log in
+    password PASSWORD         # Use your login password
+    enable_starttls_auto true # Use this option to enable STARTTLS
+    from example@antrea.com   # Set the sender address
+    to alert@example.com      # Set the recipient address
+    subject 'Antrea Native Policy Error'
+    message Total ANPxx error count: %s\n\nPlease check Antrea Native Policy feature ASAP
+    message_out_keys count    # Use the "count" field to replace "%s" above
+  </store>
+</match>
+```
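+
+To push the modified configuration to the cluster, you can recreate the
+ConfigMap in place. A minimal sketch, assuming the ConfigMap name and namespace
+used in Step 3:
+
+```bash
+kubectl create configmap fluentd-conf \
+  --from-file=docs/cookbooks/fluentd/resources/kubernetes.conf \
+  --namespace=kube-logging --dry-run=client -o yaml | kubectl apply -f -
+```
+
+Restarting the Fluentd DaemonSet Pods may be required for the new configuration
+to take effect.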
diff --git a/content/docs/v1.15.0/docs/cookbooks/fluentd/_index.md b/content/docs/v1.15.0/docs/cookbooks/fluentd/_index.md
new file mode 100644
index 00000000..15878bff
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/fluentd/_index.md
@@ -0,0 +1,4 @@
+---
+---
+
+{{% include-md README.md %}}
diff --git a/content/docs/v1.15.0/docs/cookbooks/ids/README.md b/content/docs/v1.15.0/docs/cookbooks/ids/README.md
new file mode 100644
index 00000000..52427bbc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/ids/README.md
@@ -0,0 +1,175 @@
+# Using Antrea with IDS
+
+This guide will describe how to use Project Antrea with threat detection
+engines, in order to provide network-based intrusion detection service to your
+Pods. In this scenario, Antrea is used for the default Pod network. For the sake
+of this guide, we will use [Suricata](https://suricata.io/) as the threat
+detection engine, but similar steps should apply for other engines as well.
+
+The solution works by configuring a TrafficControl resource applying to specific
+Pods. Traffic originating from the Pods or destined for the Pods is mirrored,
+and then inspected by Suricata to provide threat detection. Suricata is
+configured with IDS mode in this example, but it can also be configured with
+IPS/inline mode to proactively drop the traffic determined to be malicious.
+
+
+- [Prerequisites](#prerequisites)
+- [Practical steps](#practical-steps)
+ - [Step 1: Deploy Antrea](#step-1-deploy-antrea)
+ - [Step 2: Configure TrafficControl resource](#step-2-configure-trafficcontrol-resource)
+ - [Step 3: Deploy Suricata as a DaemonSet](#step-3-deploy-suricata-as-a-daemonset)
+- [Testing](#testing)
+
+
+## Prerequisites
+
+The general prerequisites are:
+
+* a K8s cluster running a K8s version supported by Antrea.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+The [TrafficControl](../../traffic-control.md) capability was added in Antrea
+version 1.7. Therefore, an Antrea version >= v1.7.0 should be used to configure
+Pod traffic mirroring.
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+## Practical steps
+
+### Step 1: Deploy Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to [getting-started.md](../../getting-started.md).
+As of now, the `TrafficControl` feature gate is disabled by default, so you
+will need to enable it, as in the following command.
+
+To deploy the latest version of Antrea, use:
+
+```bash
+curl -s https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml | \
+ sed "s/.*TrafficControl:.*/ TrafficControl: true/" | \
+ kubectl apply -f -
+```
+
+You may also choose a [released Antrea
+version](https://github.com/antrea-io/antrea/releases).
+
+### Step 2: Configure TrafficControl resource
+
+To replicate Pod traffic to Suricata for analysis, create a TrafficControl with
+the `Mirror` action, and set the `targetPort` to an OVS internal port that
+Suricata will capture traffic from. This cookbook uses `tap0` as the port name
+and performs intrusion detection for Pods with the `app=web` label:
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+  name: mirror-web-app
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: web
+  direction: Both
+  action: Mirror
+  targetPort:
+    ovsInternal:
+      name: tap0
+EOF
+```
+
+### Step 3: Deploy Suricata as a DaemonSet
+
+The example manifest for deploying Suricata as a DaemonSet is provided at
+`resources/suricata.yml` in this cookbook's directory.
+
+As the TrafficControl resource configured in the second step mirrors traffic to
+`tap0`, we run Suricata in the host network and specify the network interface to
+`tap0`.
+
+```yaml
+spec:
+ hostNetwork: true
+ containers:
+ - name: suricata
+ image: jasonish/suricata:latest
+ command:
+ - /usr/bin/suricata
+ - -i
+ - tap0
+```
+
+Suricata uses Signatures (rules) to trigger alerts. We use the default ruleset
+installed at `/var/lib/suricata/rules` of the image `jasonish/suricata`.
+
+The directory `/var/log/suricata` contains alert events. We mount the directory
+as a `hostPath` volume to expose and persist them on the host:
+
+```yaml
+spec:
+ containers:
+ - name: suricata
+ volumeMounts:
+ - name: host-var-log-suricata
+ mountPath: /var/log/suricata
+ volumes:
+ - name: host-var-log-suricata
+ hostPath:
+ path: /var/log/suricata
+ type: DirectoryOrCreate
+```
+
+To deploy Suricata, run:
+
+```bash
+kubectl apply -f docs/cookbooks/ids/resources/suricata.yml
+```
+
+## Testing
+
+To test the IDS functionality, you can create a Pod with the `app=web` label,
+using the following command:
+
+```bash
+kubectl create deploy web --image nginx:1.21.6
+```
+
+Let's log into the Node that the test Pod runs on and start `tail` to see
+updates to the alert log `/var/log/suricata/fast.log`:
+
+```bash
+tail -f /var/log/suricata/fast.log
+```
+
+You can then generate malicious requests to trigger alerts. For ingress traffic,
+you can fake a web application attack against the Pod with the following command
+(assuming that the Pod IP is 10.10.2.3):
+
+```bash
+curl http://10.10.2.3/dlink/hwiz.html
+```
+
+The following output should now be seen in the log:
+
+```text
+05/17/2022-04:29:51.717452 [**] [1:2008942:8] ET POLICY Dlink Soho Router Config Page Access Attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} 10.10.2.1:48600 -> 10.10.2.3:80
+```
+
+For egress traffic, you can `kubectl exec` into the Pods and generate malicious
+requests against an external web server with the following command:
+
+```bash
+kubectl exec deploy/web -- curl -s http://testmynids.org/uid/index.html
+```
+
+The following output should now be seen in the log:
+
+```text
+05/17/2022-04:36:46.706373 [**] [1:2013028:6] ET POLICY curl User-Agent Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 10.10.2.3:55132 -> 65.8.161.92:80
+05/17/2022-04:36:46.708833 [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 65.8.161.92:80 -> 10.10.2.3:55132
+```
diff --git a/content/docs/v1.15.0/docs/cookbooks/ids/_index.md b/content/docs/v1.15.0/docs/cookbooks/ids/_index.md
new file mode 100644
index 00000000..15878bff
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/ids/_index.md
@@ -0,0 +1,4 @@
+---
+---
+
+{{% include-md README.md %}}
diff --git a/content/docs/v1.15.0/docs/cookbooks/multus/README.md b/content/docs/v1.15.0/docs/cookbooks/multus/README.md
new file mode 100644
index 00000000..31ae41f1
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/multus/README.md
@@ -0,0 +1,381 @@
+# Using Antrea with Multus
+
+This guide will describe how to use Project Antrea with
+[Multus](https://github.com/k8snetworkplumbingwg/multus-cni), in order to attach multiple
+network interfaces to Pods. In this scenario, Antrea is used for the default
+network, i.e. it is the CNI plugin which provisions the "primary" network
+interface ("eth0") for each Pod. For the sake of this guide, we will use the
+[macvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan)
+CNI plugin to provision secondary network interfaces for selected Pods, but
+similar steps should apply for other plugins as well,
+e.g. [ipvlan](https://github.com/containernetworking/plugins/tree/master/plugins/main/ipvlan).
+
+## Prerequisites
+
+The general prerequisites are:
+
+* a K8s cluster (Linux Nodes) running a K8s version supported by Antrea. At the
+ time of writing, we recommend version 1.16 or later. Typically the cluster
+ needs to be running on a network infrastructure that you control. For example,
+ using macvlan networking will not work on public clouds like AWS.
+* [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+
+The Antrea IPAM capability for secondary network was added in Antrea version
+1.7. To leverage Antrea IPAM for IP assignment of secondary networks, an Antrea
+version >= 1.7.0 should be used. There is no Antrea version requirement for
+other IPAM options.
+
+All the required software will be deployed using YAML manifests, and the
+corresponding container images will be downloaded from public registries.
+
+For the sake of this guide, we will use macvlan in "bridge" mode, which supports
+the creation of multiple subinterfaces on one parent interface, and connects
+them all using a bridge. Macvlan in "bridge" mode requires the network to be
+able to handle "promiscuous mode", as the same physical interface / virtual
+adapter ends up being assigned multiple MAC addresses. When using a virtual
+network for the Nodes, some configuration changes are usually required, which
+depend on the virtualization technology. For example:
+
+* when using VirtualBox and [Internal
+ Networking](https://www.virtualbox.org/manual/ch06.html#network_internal), set
+ the `Promiscuous Mode` to `Allow All`
+* when using VMware Fusion, enable "promiscuous mode" in the guest (Node) for
+ the appropriate interface (e.g. using `ifconfig`); this may prompt for your
+ password on the host unless you uncheck `Require authentication to enter
+ promiscuous mode` in `Preferences ... > Network`
+
+This needs to be done for every Node VM, so it's best if you can automate this
+when provisioning your VMs.
+
+### Suggested test cluster
+
+If you need to create a K8s cluster to test this guide, we suggest you create
+one by following [these
+steps](https://github.com/antrea-io/antrea/tree/main/test/e2e#creating-the-test-kubernetes-cluster-with-vagrant). You
+will need to use a slightly modified Vagrantfile, which you can find
+[here](test/Vagrantfile). Note that this Vagrantfile will create 3 VMs on your
+machine, and each VM will be allocated 2GB of memory, so make sure you have
+enough memory available. You can create the cluster with the following steps:
+
+```bash
+git clone https://github.com/antrea-io/antrea.git
+cd antrea
+cp docs/cookbooks/multus/test/Vagrantfile test/e2e/infra/vagrant/
+cd test/e2e/infra/vagrant
+./provision.sh
+```
+
+The last command will take around 10 to 15 minutes to complete. After that, your
+cluster is ready and you can set the `KUBECONFIG` environment variable in order
+to use `kubectl`:
+
+```bash
+export KUBECONFIG=`pwd`/playbook/kube/config
+kubectl cluster-info
+```
+
+The cluster that you have created by following these steps is the one we will
+use as an example in this guide.
+
+## Practical steps
+
+### Step 1: Deploying Antrea
+
+For detailed information on the Antrea requirements and instructions on how to
+deploy Antrea, please refer to [getting-started.md](../../getting-started.md).
+You can deploy the latest version of Antrea with
+[the manifest](https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml).
+You may also choose a [released Antrea version](https://github.com/antrea-io/antrea/releases).
+
+To leverage Antrea IPAM to assign IP addresses for the secondary network, you
+need to edit the Antrea deployment manifest and enable the `AntreaIPAM` feature
+gate for both `antrea-controller` and `antrea-agent`, and then deploy Antrea
+with:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
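+
+If you would rather not edit the YAML by hand, a one-liner in the style of the
+other cookbooks can toggle the feature gate on the fly (a sketch; it assumes
+the `AntreaIPAM` feature gate lines in the manifest use the default
+indentation, so verify the resulting YAML before applying it):
+
+```bash
+curl -s https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml | \
+  sed "s/.*AntreaIPAM:.*/      AntreaIPAM: true/" | \
+  kubectl apply -f -
+```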
+
+If you choose other IPAM options like DHCP or Whereabouts, you can just deploy
+Antrea with the Antrea deployment manifest without modification:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+### Step 2: Deploy Multus as a DaemonSet
+
+```bash
+git clone https://github.com/k8snetworkplumbingwg/multus-cni && cd multus-cni
+cat ./deployments/multus-daemonset-thick-plugin.yml | kubectl apply -f -
+```
+
+### Step 3: Create an `IPPool` and a `NetworkAttachmentDefinition`
+
+With Antrea IPAM, the subnet and IP ranges for the secondary network are defined
+with an Antrea `IPPool` CR. To learn more information about Antrea IPAM for
+secondary network, please refer to the [Antrea IPAM documentation](../../antrea-ipam.md#ipam-for-secondary-network).
+
+Once the `IPPool` and `NetworkAttachmentDefinition` have been created, and
+sample Pods ("samplepod") requesting a secondary network interface have been
+deployed, you can list the Pods:
+
+```bash
+$ kubectl get pods -o wide
+NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
+samplepod-7956c4498-65v6m   1/1     Running   0          68s   10.10.2.10   k8s-node-worker-2
+samplepod-7956c4498-9dz98   1/1     Running   0          68s   10.10.1.12   k8s-node-worker-1
+samplepod-7956c4498-ghrdg   1/1     Running   0          68s   10.10.1.13   k8s-node-worker-1
+samplepod-7956c4498-n65bn   1/1     Running   0          68s   10.10.2.12   k8s-node-worker-2
+samplepod-7956c4498-q6vp2   1/1     Running   0          68s   10.10.1.11   k8s-node-worker-1
+samplepod-7956c4498-xztf4   1/1     Running   0          68s   10.10.2.11   k8s-node-worker-2
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-65v6m -- ip addr
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+3: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+    link/ether c2:ce:36:6b:ba:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 10.10.2.10/24 brd 10.10.2.255 scope global eth0
+       valid_lft forever preferred_lft forever
+4: net1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+    link/ether be:a0:35:f2:08:2d brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 192.168.78.205/24 brd 192.168.78.255 scope global net1
+       valid_lft forever preferred_lft forever
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-9dz98 -- ip addr
+1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+    inet 127.0.0.1/8 scope host lo
+       valid_lft forever preferred_lft forever
+3: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
+    link/ether 92:8f:8a:1d:a0:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 10.10.1.12/24 brd 10.10.1.255 scope global eth0
+       valid_lft forever preferred_lft forever
+4: net1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
+    link/ether 22:6e:b1:0a:f3:ab brd ff:ff:ff:ff:ff:ff link-netnsid 0
+    inet 192.168.78.202/24 brd 192.168.78.255 scope global net1
+       valid_lft forever preferred_lft forever
+```
+
+```bash
+$ kubectl exec samplepod-7956c4498-9dz98 -- ping -c 3 192.168.78.205
+PING 192.168.78.205 (192.168.78.205) 56(84) bytes of data.
+64 bytes from 192.168.78.205: icmp_seq=1 ttl=64 time=0.846 ms
+64 bytes from 192.168.78.205: icmp_seq=2 ttl=64 time=0.410 ms
+64 bytes from 192.168.78.205: icmp_seq=3 ttl=64 time=0.507 ms
+
+--- 192.168.78.205 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2013ms
+rtt min/avg/max/mdev = 0.410/0.587/0.846/0.186 ms
+```
+
+## Overview of a test cluster Node
+
+The diagram below shows an overview of a K8s Node in the [test cluster], when
+using [DHCP] for IPAM and following all the steps above. For the sake of
+completeness, we show the DHCP server running on that Node; however, because
+the server is deployed as a Deployment with a single replica, it may be
+running on any worker Node in the cluster.
+
+{{< img src="assets/testbed-multus-macvlan.svg" width="900" alt="Test cluster Node" >}}
+
+## Using [Whereabouts] for IPAM
+
+If you do not already have a DHCP server for the underlying parent network and
+you find that deploying one in-cluster is impractical, you may want to consider
+using [Whereabouts] to assign IP addresses to the secondary interfaces. When
+using [Whereabouts], follow steps 1 and 2 above, along with step 4 if you want
+the Nodes to be able to communicate with the Pods using the secondary
+network.
+
+The next step is to install the [Whereabouts] plugin as follows:
+
+```bash
+git clone https://github.com/dougbtv/whereabouts && cd whereabouts
+kubectl apply -f ./doc/daemonset-install.yaml -f ./doc/whereabouts.cni.cncf.io_ippools.yaml
+```
+
+Then create a NetworkAttachmentDefinition like the one below, after ensuring
+that `"master"` matches the name of the parent interface on the Nodes, and that
+the `range` and `exclude` configuration parameters are correct for your cluster
+(in particular, make sure that you exclude IP addresses assigned to Nodes). If
+you are using our [test cluster], you can use the NetworkAttachmentDefinition
+below as is.
+
+```bash
+cat <<EOF | kubectl apply -f -
+...
+EOF
+```
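+
+As a hedged sketch, such a definition could look like the following for the
+[test cluster] (the name, parent interface, and excluded addresses are
+illustrative assumptions; the Node IPs on the secondary network are
+192.168.78.100-102 and should be excluded):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+  name: macvlan-whereabouts
+spec:
+  config: '{
+    "cniVersion": "0.3.0",
+    "type": "macvlan",
+    "master": "enp0s9",
+    "mode": "bridge",
+    "ipam": {
+      "type": "whereabouts",
+      "range": "192.168.78.0/24",
+      "exclude": [
+        "192.168.78.100/32",
+        "192.168.78.101/32",
+        "192.168.78.102/32"
+      ]
+    }
+  }'
+EOF
+```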
diff --git a/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile b/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile
new file mode 100644
index 00000000..beea7b02
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/Dockerfile
@@ -0,0 +1,33 @@
+# Copyright 2022 Antrea Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+FROM ubuntu:22.04 as cni-binary
+
+LABEL maintainer="Antrea <projectantrea-dev@googlegroups.com>"
+LABEL description="A Docker image which runs the DHCP daemon from the containernetworking project."
+
+RUN apt-get update && \
+ apt-get install -y --no-install-recommends wget ca-certificates
+
+# Leading dot is required for the tar command below
+ENV CNI_PLUGINS="./dhcp"
+
+RUN mkdir -p /opt/cni/bin && \
+ wget -q -O - https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz | tar xz -C /opt/cni/bin $CNI_PLUGINS
+
+FROM ubuntu:22.04
+
+COPY --from=cni-binary /opt/cni/bin/* /usr/local/bin
+
+ENTRYPOINT ["dhcp", "daemon"]
diff --git a/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md b/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md
new file mode 100644
index 00000000..88aa0fcc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/multus/build/cni-dhcp-daemon/README.md
@@ -0,0 +1,16 @@
+# cni-dhcp-daemon
+
+This Docker image can be used to run the [DHCP daemon from the
+containernetworking
+project](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp).
+
+If you need to build a new version of the image and push it to Docker Hub, you
+can run the following:
+
+```bash
+docker build -t antrea/cni-dhcp-daemon:latest .
+docker push antrea/cni-dhcp-daemon:latest
+```
+
+The `docker push` command will fail if you do not have permission to push to the
+`antrea` Docker Hub repository.
diff --git a/content/docs/v1.15.0/docs/cookbooks/multus/test/Vagrantfile b/content/docs/v1.15.0/docs/cookbooks/multus/test/Vagrantfile
new file mode 100644
index 00000000..e16b3564
--- /dev/null
+++ b/content/docs/v1.15.0/docs/cookbooks/multus/test/Vagrantfile
@@ -0,0 +1,70 @@
+VAGRANTFILE_API_VERSION = "2"
+
+NUM_WORKERS = 2
+
+MODE = "v4"
+K8S_POD_NETWORK_CIDR = "10.10.0.0/16"
+K8S_SERVICE_NETWORK_CIDR = "10.96.0.0/12"
+K8S_NODE_CP_GW_IP = "10.10.0.1"
+
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+ config.vm.box = "ubuntu/bionic64"
+
+ config.vm.provider "virtualbox" do |v|
+ v.memory = 2048
+ # 2 CPUS required to initialize K8s cluster with "kubeadm init"
+ v.cpus = 2
+
+ v.customize [
+ "modifyvm", :id,
+ "--nicpromisc3", "allow-all"
+ ]
+ end
+
+ groups = {
+ "controlplane" => ["k8s-node-control-plane"],
+ "workers" => ["k8s-node-worker-[1:#{NUM_WORKERS}]"],
+ }
+
+ config.vm.define "k8s-node-control-plane" do |node|
+ node.vm.hostname = "k8s-node-control-plane"
+ node_ip = "192.168.77.100"
+ node.vm.network "private_network", ip: node_ip
+ node.vm.network "private_network", ip: "192.168.78.100", virtualbox__intnet: true
+
+ node.vm.provision :ansible do |ansible|
+ ansible.playbook = "playbook/k8s.yml"
+ ansible.groups = groups
+ ansible.extra_vars = {
+ # Ubuntu bionic does not ship with python2
+ ansible_python_interpreter:"/usr/bin/python3",
+ node_ip: node_ip,
+ node_name: "k8s-node-control-plane",
+ k8s_pod_network_cidr: K8S_POD_NETWORK_CIDR,
+ k8s_service_network_cidr: K8S_SERVICE_NETWORK_CIDR,
+ k8s_api_server_ip: node_ip,
+ k8s_ip_family: MODE,
+ k8s_antrea_gw_ip: K8S_NODE_CP_GW_IP,
+ }
+ end
+ end
+
+ (1..NUM_WORKERS).each do |node_id|
+ config.vm.define "k8s-node-worker-#{node_id}" do |node|
+ node.vm.hostname = "k8s-node-worker-#{node_id}"
+ node_ip = "192.168.77.#{100 + node_id}"
+ node.vm.network "private_network", ip: node_ip
+ node.vm.network "private_network", ip: "192.168.78.#{100 + node_id}", virtualbox__intnet: true
+
+ node.vm.provision :ansible do |ansible|
+ ansible.playbook = "playbook/k8s.yml"
+ ansible.groups = groups
+ ansible.extra_vars = {
+ ansible_python_interpreter:"/usr/bin/python3",
+ node_ip: node_ip,
+ node_name: "k8s-node-worker-#{node_id}",
+ }
+ end
+ end
+ end
+end
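+
+# Assuming Vagrant (with the VirtualBox provider) and Ansible are installed on
+# the host, the cluster defined above would typically be brought up with
+# "vagrant up" and destroyed with "vagrant destroy -f".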
diff --git a/content/docs/v1.15.0/docs/design/architecture.md b/content/docs/v1.15.0/docs/design/architecture.md
new file mode 100644
index 00000000..a7df3db6
--- /dev/null
+++ b/content/docs/v1.15.0/docs/design/architecture.md
@@ -0,0 +1,391 @@
+# Antrea Architecture
+
+Antrea is designed to be Kubernetes-centric and Kubernetes-native. It focuses on
+and is optimized for networking and security of a Kubernetes cluster. Its
+implementation leverages Kubernetes and Kubernetes native solutions as much as
+possible.
+
+Antrea leverages Open vSwitch as the networking data plane. Open vSwitch is a
+high-performance programmable virtual switch that supports both Linux and
+Windows. Open vSwitch enables Antrea to implement Kubernetes Network Policies
+in a high-performance and efficient manner. Thanks to the "programmable"
+characteristic of Open vSwitch, Antrea is able to implement an extensive set
+of networking and security features and services on top of Open vSwitch.
+
+Some information in this document and in particular when it comes to the Antrea
+Agent is specific to running Antrea on Linux Nodes. For information about how
+Antrea is run on Windows Nodes, please refer to the [Windows design document](windows-design.md).
+
+## Components
+
+In a Kubernetes cluster, Antrea creates a Deployment that runs Antrea
+Controller, and a DaemonSet that includes two containers, running Antrea Agent
+and the OVS daemons respectively, on every Node. The DaemonSet also includes an
+init container that installs the CNI plugin - `antrea-cni` - on the Node,
+ensures that the OVS kernel module is loaded, and chains `antrea-cni` with the
+portmap and bandwidth CNI plugins. All Antrea Controller, Agent, OVS daemons, and
+`antrea-cni` bits are included in a single Docker image. Antrea also has a
+command-line tool called `antctl`.
+
+{{< img src="../assets/arch.svg" width="600" alt="Antrea Architecture Overview" >}}
+
+### Antrea Controller
+
+Antrea Controller watches NetworkPolicy, Pod, and Namespace resources from the
+Kubernetes API, computes NetworkPolicies and distributes the computed policies
+to all Antrea Agents. Right now Antrea Controller supports only a single
+replica. At the moment, Antrea Controller mainly exists for NetworkPolicy
+implementation. If you only care about connectivity between Pods but not
+NetworkPolicy support, you may choose not to deploy Antrea Controller at all.
+However, in the future, Antrea might support more features that require Antrea
+Controller.
+
+Antrea Controller leverages the [Kubernetes apiserver library](https://github.com/kubernetes/apiserver)
+to implement the communication channel to Antrea Agents. Each Antrea Agent
+connects to the Controller API server and watches the computed NetworkPolicy
+objects. Controller also exposes a REST API for `antctl` on the same HTTP
+endpoint. See more information about the Controller API server implementation
+in the [Controller API server section](#controller-api-server).
+
+#### Controller API server
+
+Antrea Controller leverages the Kubernetes apiserver library to implement its
+own API server. The API server implementation is customized and optimized for
+publishing the computed NetworkPolicies to Agents:
+
+- The API server keeps all the state in in-memory caches and does not require a
+datastore to persist the data.
+- It sends the NetworkPolicy objects to only those Nodes that need to apply the
+NetworkPolicies locally. A Node receives a NetworkPolicy if and only if the
+NetworkPolicy is applied to at least one Pod on the Node.
+- It supports sending incremental updates to the NetworkPolicy objects to
+Agents.
+- Messages between Controller and Agent are serialized using the Protobuf format
+for reduced size and higher efficiency.
+
+The Antrea Controller API server also leverages Kubernetes Service for:
+
+- Service discovery
+- Authentication and authorization
+
+The Controller API endpoint is exposed through a Kubernetes ClusterIP type
+Service. Antrea Agent gets the Service's ClusterIP from the Service environment
+variable and connects to the Controller API server using the ClusterIP. The
+Controller API server delegates authentication and authorization to the
+Kubernetes API - the Antrea Agent uses a Kubernetes ServiceAccount token to
+authenticate to the Controller, and the Controller API server validates the
+token and whether the ServiceAccount is authorized for the API request with the
+Kubernetes API.
+
+Antrea Controller also exposes a REST API for `antctl` using the API server HTTP
+endpoint. It leverages [Kubernetes API aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
+to enable `antctl` to reach the Antrea Controller API through the Kubernetes
+API - `antctl` connects and authenticates to the Kubernetes API, which will
+proxy the `antctl` API requests to the Antrea Controller. In this way, `antctl`
+can be executed on any machine that can reach the Kubernetes API, and it can
+also leverage the `kubectl` configuration (`kubeconfig` file) to discover the
+Kubernetes API and authentication information. See also the [antctl section](#antctl).
+
+### Antrea Agent
+
+Antrea Agent manages the OVS bridge and Pod interfaces and implements Pod
+networking with OVS on every Kubernetes Node.
+
+Antrea Agent exposes a gRPC service (`Cni` service) which is invoked by the
+`antrea-cni` binary to perform CNI operations. For each new Pod to be created on
+the Node, after getting the CNI `ADD` call from `antrea-cni`, the Agent creates
+the Pod's network interface, allocates an IP address, connects the interface to
+the OVS bridge and installs the necessary flows in OVS. To learn more about the
+OVS flows, check out the [OVS pipeline doc](ovs-pipeline.md).
+
+Antrea Agent includes two Kubernetes controllers:
+
+- The Node controller watches the Kubernetes API server for new Nodes, and
+creates an OVS (Geneve / VXLAN / GRE / STT) tunnel to each remote Node.
+- The NetworkPolicy controller watches the computed NetworkPolicies from the
+Antrea Controller API, and installs OVS flows to implement the NetworkPolicies
+for the local Pods.
+
+Antrea Agent also exposes a REST API on a local HTTP endpoint for `antctl`.
+
+### OVS daemons
+
+The two OVS daemons - `ovsdb-server` and `ovs-vswitchd` - run in a separate
+container, called `antrea-ovs`, of the Antrea Agent DaemonSet.
+
+### antrea-cni
+
+`antrea-cni` is the [CNI](https://github.com/containernetworking/cni) plugin
+binary of Antrea. It is executed by `kubelet` for each CNI command. It is a
+simple gRPC client which issues an RPC to Antrea Agent for each CNI command. The
+Agent performs the actual work (sets up networking for the Pod) and returns the
+result or an error to `antrea-cni`.
+
+### antctl
+
+`antctl` is a command-line tool for Antrea. At the moment, it can show basic
+runtime information for both Antrea Controller and Antrea Agent, for debugging
+purposes.
+
+When accessing the Controller, `antctl` invokes the Controller API to query the
+required information. As described earlier, `antctl` can reach the Controller
+API through the Kubernetes API, and have the Kubernetes API authenticate,
+authorize, and proxy the API requests to the Controller. `antctl` can be
+executed through `kubectl` as a `kubectl` plugin as well.
+
+When accessing the Agent, `antctl` connects to the Agent's local REST endpoint,
+and can only be executed locally in the Agent's container.
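+
+For illustration, here are a few representative invocations; treat them as a
+sketch, since the exact set of subcommands depends on the Antrea version (the
+Pod name below is a placeholder):
+
+```bash
+# From any machine with a kubeconfig, query the Controller through the K8s API:
+antctl version
+antctl get networkpolicy
+
+# From inside the antrea-agent container, query the local Agent API:
+kubectl exec -n kube-system antrea-agent-bpskv -c antrea-agent -- antctl get podinterface
+```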
+
+### Antrea web UI
+
+Antrea also comes with a web UI, which can show the Controller and Agent's
+health and basic runtime information. The UI gets the Controller and Agent's
+information from the `AntreaControllerInfo` and `AntreaAgentInfo` CRDs (Custom
+Resource Definition) in the Kubernetes API. The CRDs are created by the Antrea
+Controller and each Antrea Agent to populate their health and runtime
+information.
+
+The Antrea web UI provides additional capabilities. Please refer to the [Antrea
+UI repository](https://github.com/antrea-io/antrea-ui) for more information.
+
+## Pod Networking
+
+### Pod interface configuration and IPAM
+
+On every Node, Antrea Agent creates an OVS bridge (named `br-int` by default),
+and creates a veth pair for each Pod, with one end being in the Pod's network
+namespace and the other connected to the OVS bridge. On the OVS bridge, Antrea
+Agent also creates an internal port - `antrea-gw0` by default - to be the gateway
+of the Node's subnet, and a tunnel port - `antrea-tun0` - which is used to create
+overlay tunnels to other Nodes.
+
+{{< img src="../assets/node.svg.png" width="300" alt="Antrea Node Network" >}}
+
+By default, Antrea leverages Kubernetes' `NodeIPAMController` to allocate a
+single subnet for each Kubernetes Node, and Antrea Agent on a Node allocates an
+IP for each Pod on the Node from the Node's subnet. `NodeIPAMController` sets
+the `podCIDR` field of the Kubernetes Node spec to the allocated subnet. Antrea
+Agent retrieves the subnets of Nodes from the `podCIDR` field. It reserves the
+first IP of the local Node's subnet to be the gateway IP and assigns it to the
+`antrea-gw0` port, and invokes the [host-local IPAM plugin](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local)
+to allocate IPs from the subnet to all Pods. A local Pod is assigned an IP
+when the CNI ADD command is received for that Pod.
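+
+For example, you can check the subnet allocated to a Node by inspecting the
+`podCIDR` field (the Node name is illustrative):
+
+```bash
+kubectl get node k8s-node-worker-1 -o jsonpath='{.spec.podCIDR}'
+# example output: 10.10.1.0/24
+```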
+
+`NodeIPAMController` can run in `kube-controller-manager` context, or within
+the context of Antrea Controller.
+
+For every remote Node, Antrea Agent adds an OVS flow to send the traffic to that
+Node through the appropriate tunnel. The flow matches the packets' destination
+IP against each Node's subnet.
+
+In addition to Kubernetes NodeIPAM, Antrea also implements its own IPAM feature,
+which can allocate IPs for Pods from user-defined IP pools. For more
+information, please refer to the [Antrea IPAM documentation](../antrea-ipam.md).
+
+### Traffic walk
+
+{{< img src="../assets/traffic_walk.svg.png" width="600" alt="Antrea Traffic Walk" >}}
+
+* ***Intra-node traffic*** Packets between two local Pods will be forwarded by
+the OVS bridge directly.
+
+* ***Inter-node traffic*** Packets to a Pod on another Node will be first
+forwarded to the `antrea-tun0` port, encapsulated, and sent to the destination Node
+through the tunnel; then they will be decapsulated, injected through the `antrea-tun0`
+port to the OVS bridge, and finally forwarded to the destination Pod.
+
+* ***Pod to external traffic*** Packets sent to an external IP or the Nodes'
+network will be forwarded to the `antrea-gw0` port (as it is the gateway of the local
+Pod subnet), and will be routed (based on routes configured on the Node) to the
+appropriate network interface of the Node (e.g. a physical network interface for
+a baremetal Node) and sent out to the Node network from there. Antrea Agent
+creates an iptables (MASQUERADE) rule to perform SNAT on the packets from Pods,
+so their source IP will be rewritten to the Node's IP before going out.
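+
+As a rough sketch, the effect is comparable to a rule like the one below (the
+actual rules installed by Antrea Agent live in dedicated chains and match on
+additional marks, so do not treat this as the literal rule):
+
+```bash
+iptables -t nat -A POSTROUTING -s 10.10.1.0/24 ! -o antrea-gw0 -j MASQUERADE
+```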
+
+### ClusterIP Service
+
+Antrea supports two ways to implement Services of type ClusterIP - leveraging
+`kube-proxy`, or AntreaProxy that implements load balancing for ClusterIP
+Service traffic with OVS.
+
+When leveraging `kube-proxy`, Antrea Agent adds OVS flows to forward the packets
+from a Pod to a Service's ClusterIP to the `antrea-gw0` port, then `kube-proxy`
+will intercept the packets and select one Service endpoint to be the
+connection's destination and DNAT the packets to the endpoint's IP and port. If
+the destination endpoint is a local Pod, the packets will be forwarded to the
+Pod directly; if it is on another Node the packets will be sent to that Node via
+the tunnel.
+
+{{< img src="../assets/service_walk.svg.png" width="600" alt="Antrea Service Traffic Walk" >}}
+
+`kube-proxy` can be used in any supported mode: iptables, IPVS or nftables.
+See the [Kubernetes Service Proxies documentation](https://kubernetes.io/docs/reference/networking/virtual-ips)
+for more details.
+
+When AntreaProxy is enabled, Antrea Agent will add OVS flows that implement
+load balancing and DNAT for the ClusterIP Service traffic. In this way, Service
+traffic load balancing is done inside OVS together with the rest of the
+forwarding, and it can achieve better performance than using `kube-proxy`, as
+there is no extra overhead of forwarding Service traffic to the host's network
+stack and iptables processing. The AntreaProxy implementation in Antrea Agent
+leverages some `kube-proxy` packages to watch and process Service Endpoints.
+
+### NetworkPolicy
+
+An important design choice Antrea took regarding the NetworkPolicy
+implementation is centralized policy computation. Antrea Controller watches
+NetworkPolicy, Pod, and Namespace resources from the Kubernetes API. It
+processes podSelectors, namespaceSelectors, and ipBlocks as follows:
+
+- PodSelectors directly under the NetworkPolicy spec (which define the Pods to
+which the NetworkPolicy is applied) will be translated to member Pods.
+- Selectors (podSelectors and namespaceSelectors) and ipBlocks in rules (which
+define the ingress and egress traffic allowed by this policy) will be mapped to
+Pod IP addresses / IP address ranges.
+
+Antrea Controller also computes which Nodes need to receive a NetworkPolicy.
+Each Antrea Agent receives only the computed policies which affect Pods running
+locally on its Node, and directly uses the IP addresses computed by the
+Controller to create OVS flows enforcing the specified NetworkPolicies.
+
+We see the following major benefits of the centralized computation approach:
+
+* Only one Antrea Controller instance needs to receive and process all
+NetworkPolicy, Pod, and Namespace updates, and compute podSelectors and
+namespaceSelectors. This has a much lower overall cost compared to watching
+these updates and performing the same complex policy computation on all Nodes.
+
+* It could enable scale-out of Controllers, with multiple Controllers working
+together on the NetworkPolicy computation, each one being responsible for a
+subset of NetworkPolicies (though at the moment Antrea supports only a single
+Controller instance).
+
+* Antrea Controller is the single source of NetworkPolicy computation. It is
+much easier to achieve consistency among Nodes and easier to debug the
+NetworkPolicy implementation.
+
+As described earlier, Antrea Controller leverages the Kubernetes apiserver
+library to build the API and communication channel to Agents.
+
+### Hybrid, NoEncap, NetworkPolicyOnly TrafficEncapMode
+
+Besides the default `Encap` mode, which always creates overlay tunnels among
+Nodes and encapsulates inter-Node Pod traffic, Antrea also supports other
+TrafficEncapModes including `Hybrid`, `NoEncap`, `NetworkPolicyOnly` modes. This
+section introduces these modes.
+
+* ***Hybrid*** When two Nodes are in two different subnets, Pod traffic between
+the two Nodes is encapsulated; when the two Nodes are in the same subnet, Pod
+traffic between them is not encapsulated, instead the traffic is routed from one
+Node to another. Antrea Agent adds routes on the Node to enable the routing
+within the same Node subnet. For every remote Node in the same subnet as the
+local Node, Agent adds a static route entry that uses the remote Node IP as the
+next hop of its Pod subnet.
+
+`Hybrid` mode requires the Node network to allow packets with Pod IPs to be sent
+out from the Nodes' NICs.
+
+* ***NoEncap*** Pod traffic is never encapsulated. Antrea just assumes the Node
+network can handle routing of Pod traffic across Nodes. Typically this is
+achieved by the Kubernetes Cloud Provider implementation which adds routes for
+Pod subnets to the Node network routers. Antrea Agent still creates static
+routes on each Node for remote Nodes in the same subnet, which is an optimization
+that routes Pod traffic directly to the destination Node without going through
+the extra hop of the Node network router. Antrea Agent also creates the iptables
+(MASQUERADE) rule for SNAT of Pod-to-external traffic.
+
+[Antrea supports GKE](../gke-installation.md) with `NoEncap` mode.
+
+* ***NetworkPolicyOnly*** Inter-Node Pod traffic is neither tunneled nor routed
+by Antrea. Antrea just implements NetworkPolicies for Pod traffic, but relies on
+another cloud CNI and cloud network to implement Pod IPAM and cross-Node traffic
+forwarding. Refer to the [NetworkPolicyOnly mode design document](policy-only.md)
+for more information.
+
+[Antrea for AKS
+Engine](https://github.com/Azure/aks-engine/blob/master/docs/topics/features.md#feat-antrea)
+and [Antrea EKS support](../eks-installation.md) work in `NetworkPolicyOnly`
+mode.
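+
+The mode is selected through the Antrea Agent configuration. As a minimal
+sketch (the ConfigMap name may differ between releases, and Agent Pods must be
+restarted to pick up the change):
+
+```bash
+kubectl -n kube-system edit configmap antrea-config
+# Under the antrea-agent.conf key, set for example:
+#   trafficEncapMode: hybrid    # one of: encap, noEncap, hybrid, networkPolicyOnly
+kubectl -n kube-system rollout restart daemonset/antrea-agent
+```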
+
+## Features
+
+### Antrea Network Policy
+
+Besides Kubernetes NetworkPolicy, Antrea supports two extra types of
+Network Policies available as CRDs - Antrea Namespaced NetworkPolicy and
+ClusterNetworkPolicy. The former is scoped to a specific Namespace, while the
+latter is scoped to the whole cluster. These two types of Network Policies
+extend Kubernetes NetworkPolicy with advanced features including: policy
+priority, tiering, deny action, external entity, and policy statistics. For more
+information about Antrea network policies, refer to the [Antrea Network Policy document](../antrea-network-policy.md).
+
+Just like for Kubernetes NetworkPolicies, Antrea Controller transforms Antrea
+NetworkPolicies and ClusterNetworkPolicies to internal NetworkPolicy,
+AddressGroup and AppliedToGroup objects, and disseminates them to Antrea
+Agents. Antrea Agents create OVS flows to enforce the NetworkPolicies applied
+to the local Pods on their Nodes.
+
+### IPsec encryption
+
+Antrea supports encrypting Pod traffic across Linux Nodes with IPsec ESP. The
+IPsec implementation leverages [OVS
+IPsec](https://docs.openvswitch.org/en/latest/tutorials/ipsec/) and uses
+[strongSwan](https://www.strongswan.org) as the IKE daemon. By default, GRE
+tunnels are used, but other tunnel types are also supported.
+
+To enable IPsec, an extra container - `antrea-ipsec` - must be added to the
+Antrea Agent DaemonSet, which runs the `ovs-monitor-ipsec` and strongSwan
+daemons. Antrea currently supports only pre-shared key (PSK) authentication for
+IKE, and the PSK string must be passed to Antrea Agent using an
+environment variable - `ANTREA_IPSEC_PSK`. The PSK string can be specified in
+the [Antrea IPsec deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-ipsec.yml), which creates
+a Kubernetes Secret to save the PSK value and populates it to the
+`ANTREA_IPSEC_PSK` environment variable of the Antrea Agent container.
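+
+For example, assuming the usual layout of Antrea release assets, deploying with
+IPsec could look like this (remember to replace the default PSK value in the
+manifest before applying it):
+
+```bash
+curl -sLo antrea-ipsec.yml https://github.com/antrea-io/antrea/releases/download/v1.15.0/antrea-ipsec.yml
+# Edit the PSK value in antrea-ipsec.yml, then:
+kubectl apply -f antrea-ipsec.yml
+```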
+
+When IPsec is enabled, Antrea Agent will create a separate tunnel port on
+the OVS bridge for each remote Node, and write the PSK string and the remote
+Node IP address to two OVS interface options of the tunnel interface. Then
+`ovs-monitor-ipsec` can detect the tunnel and create IPsec Security Policies
+with PSK for the remote Node, and strongSwan can create the IPsec Security
+Associations based on the Security Policies. These additional tunnel ports are
+not used to send traffic to a remote Node - the tunnel traffic is still output
+to the default tunnel port (`antrea-tun0`) with OVS flow based tunneling.
+However, the traffic from a remote Node will be received from the Node's IPsec
+tunnel port.
+
+### Network flow visibility
+
+Antrea supports exporting network flow information with Kubernetes context
+using IPFIX. The exported network flows can be visualized using Elastic Stack
+and Kibana dashboards. For more information, refer to the [network flow
+visibility document](../network-flow-visibility.md).
+
+### Prometheus integration
+
+Antrea supports exporting metrics to Prometheus. Both Antrea Controller and
+Antrea Agent implement the `/metrics` API endpoint on their API server to expose
+various metrics generated by Antrea components or 3rd party components used by
+Antrea. Prometheus can be configured to collect metrics from the API endpoints.
+For more information, please refer to the [Prometheus integration document](../prometheus-integration.md).
+
+### Windows Node
+
+On a Windows Node, Antrea acts very much like it does on a Linux Node. Antrea
+Agent and OVS are still run on the Node, Windows Pods are still connected to the
+OVS bridge, and Pod networking is still mostly implemented with OVS flows. Even
+the OVS flows are mostly the same as those on a Linux Node. The main differences
+in the Antrea implementation for Windows Nodes are: how Antrea Agent and OVS
+daemons are run and managed, how the OVS bridge is configured and Pod network
+interfaces are connected to the bridge, and how host network routing and SNAT
+are implemented. For more information about the Antrea Windows implementation,
+refer to the [Windows design document](windows-design.md).
+
+### Antrea Multi-cluster
+
+Antrea Multi-cluster implements Multi-cluster Service API, which allows users to
+create multi-cluster Services that can be accessed cross clusters in a
+ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy
+replication. Multi-cluster admins can define ClusterNetworkPolicies to be
+replicated across the entire ClusterSet, and enforced in all member clusters.
+To learn more information about the Antrea Multi-cluster architecture, please
+refer to the [Antrea Multi-cluster architecture document](../multicluster/architecture.md).
diff --git a/content/docs/v1.15.0/docs/design/ovs-pipeline.md b/content/docs/v1.15.0/docs/design/ovs-pipeline.md
new file mode 100644
index 00000000..5188cfe4
--- /dev/null
+++ b/content/docs/v1.15.0/docs/design/ovs-pipeline.md
@@ -0,0 +1,1197 @@
+# Antrea OVS Pipeline
+
+## Terminology
+
+* *Node Route Controller*: the [K8s
+ controller](https://kubernetes.io/docs/concepts/architecture/controller/)
+ which is part of the Antrea Agent and watches for updates to Nodes. When a
+ Node is added, it updates the local networking configuration (e.g. configure
+ the tunnel to the new Node). When a Node is deleted, it performs the necessary
+ clean-ups.
+* *peer Node*: this is how we refer to other Nodes in the cluster, to which the
+ local Node is connected through a Geneve, VXLAN, GRE, or STT tunnel.
+* *Global Virtual MAC*: a virtual MAC address which is used as the destination
+ MAC for all tunnelled traffic across all Nodes. This simplifies networking by
+ enabling all Nodes to use this MAC address instead of the actual MAC address
+ of the appropriate remote gateway. This enables each vSwitch to act as a
+ "proxy" for the local gateway when receiving tunnelled traffic and directly
+  take care of the packet forwarding. At the moment, we use a hard-coded value
+ of `aa:bb:cc:dd:ee:ff`.
+* *Antrea-native Policies*: Antrea ClusterNetworkPolicy and Antrea NetworkPolicy
+ CRDs, as documented [here](../antrea-network-policy.md).
+* *`normal` action*: OpenFlow defines this action to submit a packet to "the
+ traditional non-OpenFlow pipeline of the switch". That is, if a flow uses this
+ action, then the packets in the flow go through the switch in the same way
+ that they would if OpenFlow was not configured on the switch. Antrea uses this
+ action to process ARP traffic as a regular learning L2 switch would.
+* *table-miss flow entry*: a "catch-all" entry in an OpenFlow table, which is
+ used if no other flow is matched. If the table-miss flow entry does not exist,
+ by default packets unmatched by flow entries are dropped (discarded).
+* *conjunctive match fields*: an efficient way in OVS to implement conjunctive
+  matches, that is, a match for which we have multiple fields, each one with a
+ set of acceptable values. See [OVS
+ fields](http://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) for
+ more information.
+* *conntrack*: a connection tracking module that can be used by OVS to match on
+ the state of a TCP, UDP, ICMP, etc., connection. See the [OVS Conntrack
+ tutorial](https://docs.openvswitch.org/en/latest/tutorials/ovs-conntrack/) for
+ more information.
+* *dmac table*: a traditional L2 switch has a "dmac" table which maps
+  learned destination MAC addresses to the appropriate egress ports. It is often
+  the same physical table as the "smac" table (which matches on the source MAC
+  address and initiates MAC learning if the address is unknown).
+* *group action*: an action which is used to process forwarding decisions
+ on multiple OVS ports. Examples include: load-balancing, multicast, and active/standby.
+ See [OVS group action](https://docs.openvswitch.org/en/latest/ref/ovs-actions.7/#the-group-action)
+ for more information.
+* *IN_PORT action*: an action to output the packet to the port on which it was
+ received. This is the only standard way to output the packet to the input port.
+* *session affinity*: a load balancer feature that always selects the same backend
+ Pod for connections from a particular client. For a K8s Service, session
+ affinity can be enabled by setting `service.spec.sessionAffinity` to `ClientIP`
+ (default is `None`). See [K8s Service](https://kubernetes.io/docs/concepts/services-networking/service/)
+ for more information about session affinity.
+
+**This document currently makes the following assumptions:**
+
+* Antrea is used in encap mode (an overlay network is created between all Nodes)
+* All the Nodes are Linux Nodes
+* IPv6 is disabled
+* AntreaProxy is enabled
+* AntreaPolicy is enabled
+
+## Dumping the Flows
+
+This guide includes a representative flow dump for every table in the pipeline,
+in order to illustrate the function of each table. If you have a cluster running
+Antrea, you can dump the flows for a given Node as follows:
+
+```bash
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-ofctl dump-flows <bridge-name> [--no-stats] [--names]
+```
+
+where `<antrea-agent-pod-name>` is the name of the Antrea Agent Pod running on
+that Node and `<bridge-name>` is the name of the bridge created by Antrea
+(`br-int` by default).
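+
+For example (the Pod name is a placeholder for an actual antrea-agent Pod on
+the Node you are inspecting):
+
+```bash
+kubectl exec -n kube-system antrea-agent-hzpw4 -c antrea-ovs -- ovs-ofctl dump-flows br-int --no-stats --names
+```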
+
+## Registers
+
+We use several 32-bit OVS registers to carry information throughout the pipeline:
+
+* reg0 (NXM_NX_REG0):
+ - bits [0..3] are used to store the traffic source (from tunnel: 0, from
+ local gateway: 1, from local Pod: 2). It is set in [ClassifierTable].
+ - bit 16 is used to indicate whether the destination MAC address of a packet
+ is "known", i.e. corresponds to an entry in [L2ForwardingCalcTable], which
+ is essentially a "dmac" table.
+ - bit 18 is used to indicate whether the packet should be output to the port
+ on which it was received. It is consumed in [L2ForwardingOutTable]
+ to output the packet with action `IN_PORT`.
+ - bit 19 is used to indicate whether the destination and source MACs of the
+    packet should be rewritten in [L3ForwardingTable]. The bit is set for
+ packets received from the tunnel port in [ClassifierTable]. The
+ destination MAC of such packets is the Global Virtual MAC and should be
+ rewritten to the destination port's MAC before output to the port. When such
+ a packet is destined to a Pod, its source MAC should be rewritten to the
+ local gateway port's MAC too.
+* reg1 (NXM_NX_REG1): it is used to store the egress OF port for the packet. It
+ is set in [DNATTable] for traffic destined to Services and in
+ [L2ForwardingCalcTable] otherwise. It is consumed in [L2ForwardingOutTable] to
+ output each packet to the correct port.
+* reg3 (NXM_NX_REG3): it is used to store the selected Service Endpoint IPv4
+  address; it is set in the OVS group entry and consumed in [EndpointDNATTable].
+* reg4 (NXM_NX_REG4):
+  - bits [0..15] are used to store the selected Service Endpoint port number in
+    the OVS group entry. They are consumed in [EndpointDNATTable].
+  - bits [16..18] are used to store the state of a Service request packet.
+    Marks in this field include:
+    - 0b001: packet needs to do Endpoint selection.
+    - 0b010: packet has done Endpoint selection.
+    - 0b011: packet has done Endpoint selection and the selection result needs
+      to be cached.
+
+## Network Policy Implementation
+
+Several tables of the pipeline are dedicated to [K8s Network
+Policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+implementation ([EgressRuleTable], [EgressDefaultTable], [IngressRuleTable] and
+[IngressDefaultTable]).
+
+The Antrea implementation of K8s Network Policy, including the communication
+channel between the Controller and Agents, and how a Network Policy is mapped to
+OVS flows at each Node, will be described in detail in a separate document. For
+the present document, we will use the Network Policy example below, and explain
+how these simple ingress and egress rules map to individual flows as we describe
+the relevant tables of our pipeline.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: test-network-policy
+ namespace: default
+spec:
+ podSelector:
+ matchLabels:
+ app: nginx
+ policyTypes:
+ - Ingress
+ - Egress
+ ingress:
+ - from:
+ - podSelector:
+ matchLabels:
+ app: nginx
+ ports:
+ - protocol: TCP
+ port: 80
+ egress:
+ - to:
+ - podSelector:
+ matchLabels:
+ app: nginx
+ ports:
+ - protocol: TCP
+ port: 80
+```
+
+This Network Policy is applied to all Pods with the `nginx` app label in the
+`default` Namespace. For these Pods, it only allows TCP traffic on port 80 from
+and to Pods which also have the `nginx` app label. Because Antrea will only
+install OVS flows for this Network Policy on Nodes for which some of the Pods
+are the target of the policy, we have scheduled 2 `nginx` Pods on the same
+Node. They received IP addresses 10.10.1.2 and 10.10.1.3 from the Antrea CNI, so
+you will see these addresses show up in the OVS flows.
+
+## Antrea-native Policies Implementation
+
+In addition to the above tables created for K8s NetworkPolicy, Antrea creates
+additional dedicated tables to support the [Antrea-native policies](../antrea-network-policy.md)
+([AntreaPolicyEgressRuleTable] and [AntreaPolicyIngressRuleTable]).
+
+Consider the following Antrea ClusterNetworkPolicy (ACNP) in the Application tier as an
+example for the remainder of this document.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: cnp0
+spec:
+ priority: 10
+ tier: application # defaults to application tier if not specified
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: server
+ ingress:
+ - action: Drop
+ from:
+ - podSelector:
+ matchLabels:
+ app: notClient
+ ports:
+ - protocol: TCP
+ port: 80
+ egress:
+ - action: Allow
+ to:
+ - podSelector:
+ matchLabels:
+ app: dns
+ ports:
+ - protocol: UDP
+ port: 53
+```
+
+This ACNP is applied to all Pods with the `app: server` label in all
+Namespaces. For these Pods, it drops TCP traffic on port 80 from all
+Pods which have the `app: notClient` label. In addition to the ingress rules,
+this policy also allows egress UDP traffic on port 53 to all Pods with the
+label `app: dns`. Similar to K8s NetworkPolicy, Antrea will only install OVS
+flows for this ACNP on Nodes for which some of the Pods are the target of the
+policy. Thus, we have scheduled three Pods (appServer, appDns, appNotClient)
+on the same Node and they have the following IP addresses:
+
+- appServer: 10.10.1.6
+- appNotClient: 10.10.1.7
+- appDns: 10.10.1.8
+
+## Tables
+
+![OVS pipeline](../assets/ovs-pipeline-antrea-proxy.svg)
+
+### ClassifierTable (0)
+
+This table is used to determine which "category" of traffic (tunnel, local
+gateway or local Pod) the packet belongs to. This is done by matching on the
+ingress port for the packet. The appropriate value is then written to bits
+[0..3] in NXM_NX_REG0: 0 for tunnel, 1 for local gateway and 2 for local Pod.
+This information is used by matches in subsequent tables. For a packet received
+from the tunnel port, bit 19 in NXM_NX_REG0 is set to 1, to indicate MAC rewrite
+should be performed for the packet in [L3ForwardingTable].
+
+If you dump the flows for this table, you may see the following:
+
+```text
+1. table=0, priority=200,in_port=2 actions=set_field:0x1/0xf->reg0,resubmit(,10)
+2. table=0, priority=200,in_port=1 actions=set_field:0/0xf->reg0,load:0x1->NXM_NX_REG0[19],resubmit(,30)
+3. table=0, priority=190,in_port=4 actions=set_field:0x2/0xf->reg0,resubmit(,10)
+4. table=0, priority=190,in_port=3 actions=set_field:0x2/0xf->reg0,resubmit(,10)
+5. table=0, priority=0 actions=drop
+```
+
+Flow 1 is for traffic coming in on the local gateway. Flow 2 is for traffic
+coming in through an overlay tunnel (i.e. from another Node). The next two
+flows (3 and 4) are for local Pods.
+
+Local traffic then goes to [SpoofGuardTable], while tunnel traffic from other
+Nodes goes to [ConntrackTable]. The table-miss flow entry will drop all
+unmatched packets (in practice this flow entry should almost never be used).
+
+### SpoofGuardTable (10)
+
+This table prevents IP and ARP
+[spoofing](https://en.wikipedia.org/wiki/Spoofing_attack) from local Pods. For
+each Pod (as identified by the ingress port), we ensure that:
+
+* for IP traffic, the source IP and MAC addresses are correct, i.e. match the
+  values configured on the interface when Antrea set up networking for the Pod.
+* for ARP traffic, the advertised IP and MAC addresses are correct, i.e. match
+  the values configured on the interface when Antrea set up networking for the
+  Pod.
+
+Because Antrea currently relies on kube-proxy to load-balance traffic destined
+to Services, implementing that kind of IP spoofing check for traffic coming in
+on the local gateway port is not as trivial. Traffic from local Pods destined to
+Services will first go through the gateway, get load-balanced by the kube-proxy
+datapath (DNAT) then sent back through the gateway. This means that legitimate
+traffic can be received on the gateway port with a source IP belonging to a
+local Pod. We may add some fine-grained rules in the future to accommodate
+this, but for now we just allow all IP traffic received from the gateway. We do
+have an ARP spoofing check for the gateway however, since there is no reason for
+the host to advertise a different MAC address on antrea-gw0.
+
+If you dump the flows for this table, you may see the following:
+
+```text
+1. table=10, priority=200,ip,in_port=2 actions=resubmit(,23)
+2. table=10, priority=200,arp,in_port=2,arp_spa=10.10.0.1,arp_sha=3a:dd:79:0f:55:4c actions=resubmit(,20)
+3. table=10, priority=200,arp,in_port=4,arp_spa=10.10.0.2,arp_sha=ce:99:ca:bd:62:c5 actions=resubmit(,20)
+4. table=10, priority=200,arp,in_port=3,arp_spa=10.10.0.3,arp_sha=3a:41:49:42:98:69 actions=resubmit(,20)
+5. table=10, priority=200,ip,in_port=4,dl_src=ce:99:ca:bd:62:c5,nw_src=10.10.0.2 actions=resubmit(,23)
+6. table=10, priority=200,ip,in_port=3,dl_src=3a:41:49:42:98:69,nw_src=10.10.0.3 actions=resubmit(,23)
+7. table=10, priority=0 actions=drop
+```
+
+After this table, ARP traffic goes to [ARPResponderTable], while IP
+traffic goes to [ServiceHairpinTable]. Traffic which does not match
+any of the rules described above will be dropped by the table-miss flow entry.
+
+### ARPResponderTable (20)
+
+The main purpose of this table is to reply to ARP requests from the local
+gateway asking for the MAC address of a remote peer gateway (another Node's
+gateway). This ensures that the local Node can reach any remote Pod, which in
+particular is required for Service traffic which has been load-balanced to a
+remote Pod backend by kube-proxy. Note that the table is programmed to reply to
+such ARP requests with a "Global Virtual MAC" ("Global" because it is used by
+all Antrea OVS bridges), and not with the actual MAC address of the remote
+gateway. This ensures that once the traffic is received by the remote OVS
+bridge, it can be directly forwarded to the appropriate Pod without actually
+going through the gateway. The Virtual MAC is used as the destination MAC
+address for all the traffic being tunnelled.
+
+If you dump the flows for this table, you may see the following:
+
+```text
+1. table=20, priority=200,arp,arp_tpa=10.10.1.1,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:aa:bb:cc:dd:ee:ff,set_field:2->arp_op,move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],load:0xaabbccddeeff->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xa0a0101->NXM_OF_ARP_SPA[],IN_PORT
+2. table=20, priority=190,arp actions=NORMAL
+3. table=20, priority=0 actions=drop
+```
+
+Flow 1 is the "ARP responder" for the peer Node whose local Pod subnet is
+10.10.1.0/24. If we were to look at the routing table for the local Node, we
+would see the following "onlink" route:
+
+```text
+10.10.1.0/24 via 10.10.1.1 dev antrea-gw0 onlink
+```
+
+A similar route is installed on the gateway (antrea-gw0) interface every time the
+Antrea Node Route Controller is notified that a new Node has joined the
+cluster. The route must be marked as "onlink" since the kernel does not have a
+route to the peer gateway 10.10.1.1: we trick the kernel into believing that
+10.10.1.1 is directly connected to the local Node, even though it is on the
+other side of the tunnel.
+
+Flow 2 ensures that OVS handles the remainder of ARP traffic as a regular L2
+learning switch would (using the `normal` action). In particular, this takes care
+of forwarding ARP requests and replies between local Pods.
+
+The table-miss flow entry (flow 3) will drop all other packets. This flow should
+never be used because only ARP traffic should go to this table, and
+ARP traffic will either match flow 1 or flow 2.
+
+### ServiceHairpinTable (23)
+
+When a backend Pod of a Service accesses the Service and the Pod itself is selected
+as the destination, we have the hairpin case, in which the source IP should be
+SNAT'd with a virtual hairpin IP in [hairpinSNATTable]: the source and destination
+IP addresses cannot be the same, otherwise the connection would be broken. This is
+explained in detail in [hairpinSNATTable]. For response packets, the destination IP
+is the virtual hairpin IP, so it should be changed back to the IP of the backend
+Pod; the response packets can then be forwarded back correctly.
+
+If you dump the flows for this table, you should see the flows:
+
+```text
+1. table=23, priority=200,ip,nw_dst=169.254.169.252 actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],load:0x1->NXM_NX_REG0[18],resubmit(,30)
+2. table=23, priority=0 actions=resubmit(,24)
+```
+
+Flow 1 matches packets whose destination IP is the virtual hairpin IP, and
+rewrites the destination IP of the matched packets by moving register `NXM_OF_IP_SRC`
+into `NXM_OF_IP_DST`. Bit 18 in NXM_NX_REG0 is set to 0x1, which indicates that the
+packet should be output to the port on which it was received, which is done in
+[L2ForwardingOutTable].
+
+### ConntrackTable (30)
+
+The sole purpose of this table is to invoke the `ct` action on all packets and
+set the `ct_zone` (connection tracking context) to a hard-coded value, then
+forward traffic to [ConntrackStateTable]. If you dump the flows for this table,
+you should only see 1 flow:
+
+```text
+1. table=30, priority=200,ip actions=ct(table=31,zone=65520)
+```
+
+A `ct_zone` is simply used to isolate connection tracking rules. It is similar
+in spirit to the more generic Linux network namespaces, but `ct_zone` is
+specific to conntrack and has less overhead.
+
+After invoking the ct action, packets will be in the "tracked" (`trk`) state and
+all [connection tracking
+fields](https://www.openvswitch.org/support/dist-docs/ovs-fields.7.txt) will be
+set to the correct value. Packets will then move on to [ConntrackStateTable].
+
+Refer to [this
+document](https://docs.openvswitch.org/en/latest/tutorials/ovs-conntrack/) for
+more information on connection tracking in OVS.
+
+### ConntrackStateTable (31)
+
+This table handles "tracked" packets (packets which were moved to the tracked
+state by the previous table, [ConntrackTable]) and "untracked" packets (packets
+which are not in the tracked state).
+
+This table serves the following purposes:
+
+* For tracked Service packets, bit 19 in NXM_NX_REG0 will be set to 0x1, then
+  the tracked packet will be forwarded to [EgressRuleTable] directly.
+* Packets reported as invalid by conntrack are dropped.
+* Tracked non-Service packets go to [EgressRuleTable] directly.
+* Untracked packets go to [SessionAffinityTable] and [ServiceLBTable].
+
+If you dump the flows for this table, you should see the following:
+
+```text
+1. table=31, priority=200,ct_state=-new+trk,ct_mark=0x21,ip actions=load:0x1->NXM_NX_REG0[19],resubmit(,50)
+2. table=31, priority=190,ct_state=+inv+trk,ip actions=drop
+3. table=31, priority=190,ct_state=-new+trk,ip actions=resubmit(,50)
+4. table=31, priority=0 actions=resubmit(,40),resubmit(,41)
+```
+
+Flow 1 is used to forward tracked Service packets to [EgressRuleTable] directly,
+without passing through [SessionAffinityTable], [ServiceLBTable] and [EndpointDNATTable].
+The flow also sets bit 19 in NXM_NX_REG0 to 0x1, which indicates that the destination
+and source MACs of the matched packets should be rewritten in [L3ForwardingTable].
+
+Flow 2 is used to drop packets which are reported as invalid by conntrack.
+
+Flow 3 is used to forward tracked non-Service packets to [EgressRuleTable] directly,
+without passing through [SessionAffinityTable], [ServiceLBTable] and [EndpointDNATTable].
+
+Flow 4 is used to match the first packet of untracked connections and forward it to
+[SessionAffinityTable] and [ServiceLBTable].
+
+### SessionAffinityTable (40)
+
+If `service.spec.sessionAffinity` of a Service is `None`, this table sets the value
+of bits [16..18] in NXM_NX_REG4 to 0b001, which indicates that the Service needs to do
+Endpoint selection. If you dump the flows, you should see the following flow:
+
+```text
+table=40, priority=0 actions=load:0x1->NXM_NX_REG4[16..18]
+```
+
+If `service.spec.sessionAffinity` of a Service is `ClientIP`, when a client accesses
+the Service for the first time, a learned flow with a hard timeout equal to
+`service.spec.sessionAffinityConfig.clientIP.timeoutSeconds` of the Service will be
+generated in this table. This is explained in detail in the [ServiceLBTable] section.
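+
+For reference, a Service requesting session affinity could look like the
+following sketch (the name and selector are illustrative; the port and timeout
+happen to line up with the example flows shown in this section and the next):
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+spec:
+  selector:
+    app: nginx
+  ports:
+  - protocol: TCP
+    port: 443
+    targetPort: 80
+  sessionAffinity: ClientIP
+  sessionAffinityConfig:
+    clientIP:
+      timeoutSeconds: 300
+EOF
+```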
+
+### ServiceLBTable (41)
+
+This table is used to implement Service Endpoint selection. Note that, currently, only
+ClusterIP Service requests from Pods are supported. NodePort, LoadBalancer, and ClusterIP
+requests originating from K8s Nodes will be supported in the future.
+
+When a ClusterIP Service is created with `service.spec.sessionAffinity` set to `None`, if you
+dump the flows, you should see the following flow:
+
+```text
+1. table=41, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.107.100.231,tp_dst=443 actions=load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[19],group:5
+```
+
+Among the match conditions of the above flow:
+
+* `reg4=0x10000/0x70000`: the value of bits [16..18] in NXM_NX_REG4 is 0b001, which
+  matches Service packets that still need to do Endpoint selection. The value of
+  bits [16..18] in NXM_NX_REG4 is set in [SessionAffinityTable] by flow `table=40, priority=0 actions=load:0x1->NXM_NX_REG4[16..18]`.
+
+The actions of the above flow:
+
+* `load:0x2->NXM_NX_REG4[16..18]` is used to set the value of bits [16..18] in NXM_NX_REG4
+  to 0b010, which indicates that Endpoint selection has been performed. Note that
+  Endpoint selection has not actually been done yet at this point - it will be done by the
+  group action. Ideally, this field would be set in the target OVS group entry after
+  Endpoint selection; we set the bits here instead, in order to support more Endpoints
+  in an OVS group. Please check PR [#2101](https://github.com/antrea-io/antrea/pull/2101)
+  for more information.
+* `load:0x1->NXM_NX_REG0[19]` is used to set the value of bit 19 in NXM_NX_REG0 to 0x1,
+  which means that the source and destination MACs need to be rewritten.
+* `group:5` is used to set the target OVS group. Note that the target group needs to be
+  created before the flow is created.
+
+If you dump the group entry with the command `ovs-ofctl dump-groups br-int 5`, you
+should see the following:
+
+```text
+group_id=5,type=select,\
+bucket=bucket_id:0,weight:100,actions=load:0xa0a0002->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,42),\
+bucket=bucket_id:1,weight:100,actions=load:0xa0a0003->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,42),\
+bucket=bucket_id:2,weight:100,actions=load:0xa0a0004->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,42)
+```
+
+The above OVS group has three buckets with the same weight, so every bucket has the
+same chance of being selected. The selected bucket loads the Endpoint IPv4 address
+into NXM_NX_REG3 and the Endpoint port number into bits [0..15] of NXM_NX_REG4.
+The matched packet is then resubmitted to [EndpointDNATTable].
+
+When a ClusterIP Service is created with `service.spec.sessionAffinity` set to `ClientIP`, you may
+see the following flows:
+
+```text
+1. table=41, priority=200,tcp,reg4=0x10000/0x70000,nw_dst=10.107.100.231,tp_dst=443 actions=load:0x3->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[19],group:5
+2. table=41, priority=190,tcp,reg4=0x30000/0x70000,nw_dst=10.107.100.231,tp_dst=443 actions=\
+ learn(table=40,hard_timeout=300,priority=200,delete_learned,cookie=0x2040000000008, \
+ eth_type=0x800,nw_proto=6,NXM_OF_TCP_DST[],NXM_OF_IP_DST[],NXM_OF_IP_SRC[],\
+ load:NXM_NX_REG3[]->NXM_NX_REG3[],load:NXM_NX_REG4[0..15]->NXM_NX_REG4[0..15],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[19]),\
+ load:0x2->NXM_NX_REG4[16..18],\
+ resubmit(,42)
+```
+
+When a client (assume its source IP is 10.10.0.2) accesses the ClusterIP for the first
+time, the first packet of the connection will be matched by flow 1. Note that the action
+`load:0x3->NXM_NX_REG4[16..18]` indicates that the Service Endpoint selection result needs
+to be cached.
+
+If you dump the group entry with the command `ovs-ofctl dump-groups br-int 5`, you
+should see the following:
+
+```text
+group_id=5,type=select,\
+bucket=bucket_id:0,weight:100,actions=load:0xa0a0002->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,41),\
+bucket=bucket_id:1,weight:100,actions=load:0xa0a0003->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,41),\
+bucket=bucket_id:2,weight:100,actions=load:0xa0a0004->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],resubmit(,41)
+```
+
+Note that the action `resubmit(,41)` resubmits the first packet of a ClusterIP Service
+connection back to [ServiceLBTable], rather than to [EndpointDNATTable]. The packet
+is then matched by flow 2, since the value of bits [16..18] in NXM_NX_REG4 is 0b011.
+One action of that flow generates a learned flow in [SessionAffinityTable]; the other
+action resubmits the packet to [EndpointDNATTable].
+
+Now, if you dump the flows of table [SessionAffinityTable], you may see the following flows:
+
+```text
+1. table=40, hard_timeout=300, priority=200,tcp,nw_src=10.10.0.2,nw_dst=10.107.100.231,tp_dst=443 \
+ actions=load:0xa0a0002->NXM_NX_REG3[],load:0x23c1->NXM_NX_REG4[0..15],load:0x2->NXM_NX_REG4[16..18],load:0x1->NXM_NX_REG0[19]
+2. table=40, priority=0 actions=load:0x1->NXM_NX_REG4[16..18]
+```
+
+Note that flow 1 (the generated learned flow) has a higher priority than flow 2 in table
+[SessionAffinityTable]. When this particular client accesses the ClusterIP again, the
+first packet of the new connection will be matched by flow 1, due to the match condition
+`nw_src=10.10.0.2`.
+The actions of flow 1:
+
+* `load:0xa0a0002->NXM_NX_REG3[]` is used to load the selected Endpoint IPv4 address into
+  NXM_NX_REG3.
+* `load:0x23c1->NXM_NX_REG4[0..15]` is used to load the selected Endpoint port number into
+  bits [0..15] of NXM_NX_REG4.
+* `load:0x2->NXM_NX_REG4[16..18]` is used to set the value of bits [16..18] in NXM_NX_REG4 to
+ 0b010, which indicates that the Service has done Endpoint selection.
+* `load:0x1->NXM_NX_REG0[19]` is used to set the value of bit 19 in NXM_NX_REG0 to 0x1, which
+ indicates that the source and destination MACs need to be rewritten.
+
+Note that if the value of bits [16..18] in NXM_NX_REG4 is 0b010 (set by the action
+`load:0x2->NXM_NX_REG4[16..18]` in table [SessionAffinityTable]), the packet will not be
+matched by any flow in table [ServiceLBTable] except the last one. The last one just
+forwards the packet to table [EndpointDNATTable] without selecting a target OVS group.
+Consequently, connections from a particular client will always access the same backend Pod,
+within the session timeout set by `service.spec.sessionAffinityConfig.clientIP.timeoutSeconds`.
+
+### EndpointDNATTable (42)
+
+This table implements DNAT for Service traffic, after Endpoint selection has been
+performed for the first packet of a Service connection.
+
+If you dump the flows for this table, you should see flows like the following:
+
+```text
+1. table=42, priority=200,tcp,reg3=0xc0a84d64,reg4=0x2192b/0x7ffff actions=ct(commit,table=45,zone=65520,nat(dst=192.168.77.100:6443),exec(load:0x21->NXM_NX_CT_MARK[]))
+2. table=42, priority=200,tcp,reg3=0xc0a84d65,reg4=0x2286d/0x7ffff actions=ct(commit,table=45,zone=65520,nat(dst=192.168.77.101:10349),exec(load:0x21->NXM_NX_CT_MARK[]))
+3. table=42, priority=200,tcp,reg3=0xa0a0004,reg4=0x20050/0x7ffff actions=ct(commit,table=45,zone=65520,nat(dst=10.10.0.4:80),exec(load:0x21->NXM_NX_CT_MARK[]))
+4. table=42, priority=200,tcp,reg3=0xa0a0102,reg4=0x20050/0x7ffff actions=ct(commit,table=45,zone=65520,nat(dst=10.10.1.2:80),exec(load:0x21->NXM_NX_CT_MARK[]))
+5. table=42, priority=200,udp,reg3=0xa0a0002,reg4=0x20035/0x7ffff actions=ct(commit,table=45,zone=65520,nat(dst=10.10.0.2:53),exec(load:0x21->NXM_NX_CT_MARK[]))
+6. table=42, priority=190,reg4=0x20000/0x70000 actions=load:0x1->NXM_NX_REG4[16..18],resubmit(,41)
+7. table=42, priority=0 actions=resubmit(,45)
+```
+
+For flows 1-5, DNAT is performed by the `ct` commit action, using the IPv4 address stored in
+NXM_NX_REG3 and the port number stored in bits [0..15] of NXM_NX_REG4. Note that the match
+condition `reg4=0x2192b/0x7ffff` is a union value: bits [0..15] hold the Endpoint port number
+(here 0x192b, i.e. 6443), and bits [16..18] hold 0b010, which indicates that the Service has
+done Endpoint selection. The Service ct_mark `0x21` is also committed.
+
+If none of the flows described above are hit, flow 6 is used to forward the packet back to
+table [ServiceLBTable], to select an Endpoint again.
+
+Flow 7 is used to match non-Service packets.
+
+### AntreaPolicyEgressRuleTable (45)
+
+For this table, you will need to keep in mind the ACNP
+[specification](#antrea-native-policies-implementation)
+that we are using.
+
+This table is used to implement the egress rules across all Antrea-native policies,
+except for policies that are created in the Baseline Tier. Antrea-native policies
+created in the Baseline Tier will be enforced after K8s NetworkPolicies, and their
+egress rules are installed in the [EgressDefaultTable] and [EgressRuleTable]
+respectively, i.e.
+
+```text
+Baseline Tier -> EgressDefaultTable(60)
+K8s NetworkPolicy -> EgressRuleTable(50)
+All other Tiers -> AntreaPolicyEgressRuleTable(45)
+```
+
+Since the example ACNP resides in the Application tier, if you dump the flows for
+table 45, you should see something like this:
+
+```text
+1. table=45, priority=64990,ct_state=-new+est,ip actions=resubmit(,61)
+2. table=45, priority=14000,conj_id=1,ip actions=load:0x1->NXM_NX_REG5[],ct(commit,table=61,zone=65520,exec(load:0x1->NXM_NX_CT_LABEL[32..63]))
+3. table=45, priority=14000,ip,nw_src=10.10.1.6 actions=conjunction(1,1/3)
+4. table=45, priority=14000,ip,nw_dst=10.10.1.8 actions=conjunction(1,2/3)
+5. table=45, priority=14000,udp,tp_dst=53 actions=conjunction(1,3/3)
+6. table=45, priority=0 actions=resubmit(,50)
+```
+
+Similar to [K8s NetworkPolicy implementation](#egressruletable-50),
+AntreaPolicyEgressRuleTable also relies on the OVS built-in `conjunction` action to
+implement policies efficiently.
+
+The above example flows read as follows: if the source IP address is in the set
+{10.10.1.6}, and the destination IP address is in the set {10.10.1.8}, and the
+destination UDP port is in the set {53}, then use the `conjunction` action with
+id 1, which stores the `conj_id` 1 in `ct_label[32..63]` for egress metrics collection
+purposes, and forwards the packet to EgressMetricsTable, then [L3ForwardingTable].
+Otherwise, go to [EgressRuleTable] if no conjunctive flow above priority 0 is matched.
+This corresponds to the case where the packet is not matched by any of the Antrea-native
+policy egress rules in any tier (except for the "baseline" tier).
+
+If the `conjunction` action is matched, packets are "allowed" or "dropped"
+based on the `action` field of the policy rule. If allowed, they follow a similar
+path as described in the following [EgressRuleTable] section.
+
+Unlike K8s NetworkPolicies, Antrea-native policies do not have an implicit
+default "deny" rule. Hence, they are evaluated as-is, and there is no need for
+an AntreaPolicyEgressDefaultTable.
+
+### EgressRuleTable (50)
+
+For this table, you will need to keep in mind the Network Policy
+[specification](#network-policy-implementation) that we are using. We have 2
+Pods running on the same Node, with IP addresses 10.10.1.2 and 10.10.1.3. They
+are allowed to talk to each other using TCP on port 80, but nothing else.
+
+This table is used to implement the egress rules across all Network Policies. If
+you dump the flows for this table, you should see something like this:
+
+```text
+1. table=50, priority=210,ct_state=-new+est,ip actions=goto_table:70
+2. table=50, priority=200,ip,nw_src=10.10.1.2 actions=conjunction(2,1/3)
+3. table=50, priority=200,ip,nw_src=10.10.1.3 actions=conjunction(2,1/3)
+4. table=50, priority=200,ip,nw_dst=10.10.1.2 actions=conjunction(2,2/3)
+5. table=50, priority=200,ip,nw_dst=10.10.1.3 actions=conjunction(2,2/3)
+6. table=50, priority=200,tcp,tp_dst=80 actions=conjunction(2,3/3)
+7. table=50, priority=190,conj_id=2,ip actions=load:0x2->NXM_NX_REG5[],ct(commit,table=61,zone=65520,exec(load:0x2->NXM_NX_CT_LABEL[32..63]))
+8. table=50, priority=0 actions=goto_table:60
+```
+
+Notice how we use the OVS built-in `conjunction` action to implement policies
+efficiently. This enables us to do a conjunctive match across multiple
+dimensions (source IP, destination IP, port) efficiently without "exploding" the
+number of flows. By definition of a conjunctive match, we have at least 2
+dimensions. For our use-case we have at most 3 dimensions.
+
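+To see why the conjunctive match matters at scale, consider a hypothetical rule
+with much larger sets; the sketch below compares the flow counts:
+
+```bash
+# Hypothetical set sizes for one egress rule.
+SRC=100 DST=100 PORTS=10
+# Naive cross-product: one flow per (src, dst, port) combination.
+echo "cross-product flows: $(( SRC * DST * PORTS ))"       # 100000
+# Conjunction: one flow per set member, plus the conj_id flow itself.
+echo "conjunction flows:   $(( SRC + DST + PORTS + 1 ))"   # 211
+```
+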
+The only requirement on `conj_id` is for it to be a unique 32-bit integer
+within the table. At the moment we use a single custom allocator, which is
+common to all tables that can have NetworkPolicy flows installed (45, 50,
+60, 85, 90 and 100). This is why `conj_id` is set to 2 in the above example
+(1 was allocated for the egress rule of our Antrea-native NetworkPolicy example
+in the previous section).
+
+The above example flows read as follows: if the source IP address is in the set
+{10.10.1.2, 10.10.1.3}, and the destination IP address is in the set {10.10.1.2,
+10.10.1.3}, and the destination TCP port is in the set {80}, then use the
+`conjunction` action with id 2, which goes to [EgressMetricsTable], and then
+[L3ForwardingTable]. Otherwise, the packet goes to [EgressDefaultTable].
+
+If the Network Policy specification includes exceptions (`except` field), then
+the table will include multiple flows with conjunctive match, corresponding to
+each CIDR that is present in `from` or `to` fields, but not in `except` field.
+Network Policy implementation details are not covered in this document.
+
+If the `conjunction` action is matched, packets are "allowed" and forwarded
+directly to [L3ForwardingTable]. Other packets go to [EgressDefaultTable]. If a
+connection is established - as a reminder all connections are committed in
+[ConntrackCommitTable] - its packets go straight to [L3ForwardingTable], with no
+other match required (see flow 1 above, which has the highest priority). In
+particular, this ensures that reply traffic is never dropped because of a
+Network Policy rule. However, this also means that ongoing connections are not
+affected if the K8s Network Policies are updated.
+
+One thing to keep in mind is that for Service traffic, these rules are applied
+after the packets have gone through the local gateway and through kube-proxy. At
+this point the ingress port is no longer the Pod port, but the local gateway
+port. Therefore we cannot use the ingress port as the match condition to identify
+whether a Network Policy has been applied to the Pod - which is what we do for the
+[IngressRuleTable] - but instead have to use the source IP address.
+
+### EgressDefaultTable (60)
+
+This table complements [EgressRuleTable] for Network Policy egress rule
+implementation. In K8s, when a Network Policy is applied to a set of Pods, the
+default behavior for these Pods becomes "deny" (each one becomes an [isolated Pod](
+https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)).
+This table is in charge of dropping traffic originating from Pods to which a Network
+Policy (with an egress rule) is applied, and which did not match any of the
+allowlist rules.
+
+Accordingly, based on our Network Policy example, we would expect to see flows
+to drop traffic originating from our 2 Pods (10.10.1.2 and 10.10.1.3), which is
+confirmed by dumping the flows:
+
+```text
+1. table=60, priority=200,ip,nw_src=10.10.1.2 actions=drop
+2. table=60, priority=200,ip,nw_src=10.10.1.3 actions=drop
+3. table=60, priority=0 actions=goto_table:61
+```
+
+This table is also used to implement Antrea-native policy egress rules that are
+created in the Baseline Tier. Since the Baseline Tier is meant to be enforced
+after K8s NetworkPolicies, the corresponding flows will be created at a lower
+priority than K8s default drop flows. For example, a baseline rule to drop
+egress traffic to 10.0.10.0/24 for a Namespace will look like the following:
+
+```text
+1. table=60, priority=80,ip,nw_src=10.10.1.11 actions=conjunction(5,1/2)
+2. table=60, priority=80,ip,nw_src=10.10.1.10 actions=conjunction(5,1/2)
+3. table=60, priority=80,ip,nw_dst=10.0.10.0/24 actions=conjunction(5,2/2)
+4. table=60, priority=80,conj_id=5,ip actions=load:0x3->NXM_NX_REG5[],load:0x1->NXM_NX_REG0[20],resubmit(,61)
+```
+
+The table-miss flow entry, which is used for non-isolated Pods, forwards
+traffic to the next table, EgressMetricsTable, and then to [L3ForwardingTable].
+
+### L3ForwardingTable (70)
+
+This is the L3 routing table. It implements the following functionality:
+
+* Tunnelled traffic coming in from a peer Node and destined to a local Pod is
+ directly forwarded to the Pod. This requires setting the source MAC to the MAC
+ of the local gateway interface and setting the destination MAC to the Pod's
+ MAC address. Then the packets will go to [L3DecTTLTable] for decrementing
+ the IP TTL value. Such packets can be identified by bit 19 of the NXM_NX_REG0
+ register (which was set to 1 in the [ClassifierTable]) and the destination IP
+ address (which should match the IP address of a local Pod). We therefore
+ install one flow for each Pod created locally on the Node. For example:
+
+```text
+table=70, priority=200,ip,reg0=0x80000/0x80000,nw_dst=10.10.0.2 actions=mod_dl_src:e2:e5:a4:9b:1c:b1,mod_dl_dst:12:9e:a6:47:d0:70,goto_table:72
+```
+
+* All tunnelled traffic destined to the local gateway (i.e. for which the
+ destination IP matches the local gateway's IP) is forwarded to the gateway
+ port by rewriting the destination MAC (from the Global Virtual MAC to the
+ local gateway's MAC).
+
+```text
+table=70, priority=200,ip,reg0=0x80000/0x80000,nw_dst=10.10.0.1 actions=mod_dl_dst:e2:e5:a4:9b:1c:b1,goto_table:80
+```
+
+* All reply traffic of connections initiated through the gateway port, i.e. for
+  which the first packet of the connection (SYN packet for TCP) was received
+  through the gateway, is sent back through the gateway port. Such packets can
+  be identified by the packet's direction in `ct_state` and the `ct_mark` value
+  `0x20`, which is committed in [ConntrackCommitTable] when the first packet of
+  the connection was handled. A flow will overwrite the destination MAC to the
+  local gateway MAC to ensure that such packets get forwarded through the
+  gateway port. This is required to handle the following cases:
+ - reply traffic for connections from a local Pod to a ClusterIP Service, which
+ are handled by kube-proxy and go through DNAT. In this case the destination
+ IP address of the reply traffic is the Pod which initiated the connection to
+ the Service (no SNAT by kube-proxy). We need to make sure that these packets
+ are sent back through the gateway so that the source IP can be rewritten to
+ the ClusterIP ("undo" DNAT). If we do not use connection tracking and do not
+ rewrite the destination MAC, reply traffic from the backend will go directly
+ to the originating Pod without going first through the gateway and
+ kube-proxy. This means that the reply traffic will arrive at the originating
+ Pod with the incorrect source IP (it will be set to the backend's IP instead
+ of the Service IP).
+ - when hair-pinning is involved, i.e. connections between 2 local Pods, for
+ which NAT is performed. One example is a Pod accessing a NodePort Service
+ for which `externalTrafficPolicy` is set to `Local` using the local Node's
+ IP address, as there will be no SNAT for such traffic. Another example could
+ be `hostPort` support, depending on how the feature is implemented.
+
+```text
+table=70, priority=210,ct_state=+rpl+trk,ct_mark=0x20,ip actions=mod_dl_dst:e2:e5:a4:9b:1c:b1,goto_table:80
+```
+
+* All traffic destined to a remote Pod is forwarded through the appropriate
+ tunnel. This means that we install one flow for each peer Node, each one
+ matching the destination IP address of the packet against the Pod subnet for
+ the Node. In case of a match the source MAC is set to the local gateway MAC,
+ the destination MAC is set to the Global Virtual MAC and we set the OF
+ `tun_dst` field to the appropriate value (i.e. the IP address of the remote
+ gateway). Traffic then goes to [L3DecTTLTable].
+ For a given peer Node, the flow may look like this:
+
+```text
+table=70, priority=200,ip,nw_dst=10.10.1.0/24 actions=mod_dl_src:e2:e5:a4:9b:1c:b1,mod_dl_dst:aa:bb:cc:dd:ee:ff,load:0x1->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,load:0xc0a80102->NXM_NX_TUN_IPV4_DST[],goto_table:72
+```
+
+If none of the flows described above are hit, traffic goes directly to
+[L2ForwardingCalcTable]. This is the case for external traffic, whose
+destination is outside the cluster (such traffic has already been
+forwarded to the local gateway by the local source Pod, and only L2 switching
+is required), as well as for local Pod-to-Pod traffic.
+
+```text
+table=70, priority=0 actions=goto_table:80
+```
+
+When the Egress feature is enabled, extra flows will be added to
+[L3ForwardingTable], which send the egress traffic from Pods to external network
+to [SNATTable]. The following two flows match traffic to local Pods and traffic
+to the local Node IP respectively, and keep them in the normal forwarding path
+(to [L2ForwardingCalcTable]), so they will not be sent to [SNATTable]:
+
+```text
+table=70, priority=200,ip,reg0=0/0x80000,nw_dst=10.10.1.0/24 actions=goto_table:80
+table=70, priority=200,ip,reg0=0x2/0xffff,nw_dst=192.168.1.1 actions=goto_table:80
+```
+
+The following two flows send the traffic not matched by other flows to
+[SNATTable]. One of the flows is for egress traffic from local Pods; another
+one is for egress traffic from remote Pods, which is tunnelled to this Node to
+be SNAT'd with a SNAT IP configured on the Node. In the latter case, the flow
+also rewrites the destination MAC to the local gateway interface MAC.
+
+```text
+table=70, priority=190,ip,reg0=0x2/0xf actions=goto_table:71
+table=70, priority=190,ip,reg0=0/0xf actions=mod_dl_dst:e2:e5:a4:9b:1c:b1,goto_table:71
+```
+
+### SNATTable (71)
+
+This table is created only when the Egress feature is enabled. It includes flows
+to implement Egresses and select the right SNAT IPs for egress traffic from Pods
+to external network.
+
+When no Egress applies to Pods on the Node, and no SNAT IP is configured on the
+Node, [SNATTable] just has two flows: one drops egress traffic tunnelled from
+remote Nodes that does not match any SNAT IP configured on this Node, and the
+other is the default flow that sends egress traffic from local Pods, which do
+not have any Egress applied, to [L2ForwardingCalcTable]. Such traffic will be
+SNAT'd with the default SNAT IP (by an iptables masquerade rule).
+
+```text
+table=71, priority=190,ct_state=+new+trk,ip,reg0=0/0xf actions=drop
+table=71, priority=0 actions=goto_table:80
+```
+
+When there is an Egress applied to a Pod on the Node, a flow will be added for
+the Pod's egress traffic. If the SNAT IP of the Egress is configured on the
+local Node, the flow sets the 8-bit ID allocated for the SNAT IP to pkt_mark.
+The ID allows iptables SNAT rules to match the packets and perform SNAT with
+the right SNAT IP (the Antrea Agent adds an iptables SNAT rule for each local
+SNAT IP, matching on the ID).
+
+```text
+table=71, priority=200,ct_state=+new+trk,ip,in_port="pod1-7e503a" actions=set_field:0x1/0xff->pkt_mark,goto_table:80
+```
+
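+The pkt_mark set by this flow is then consumed outside of OVS. As an
+illustration only (the mark value and the SNAT IP below are assumptions taken
+from this example, and Antrea installs its rules in dedicated chains), the
+matching iptables rule would look something like:
+
+```bash
+# SNAT packets carrying the 8-bit mark allocated for this SNAT IP.
+iptables -t nat -A POSTROUTING -m mark --mark 0x1/0xff -j SNAT --to-source 192.168.10.101
+```
+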
+When the SNAT IP of the Egress is on a remote Node, the flow will tunnel the
+packets to the remote Node with the tunnel's destination IP to be the SNAT IP.
+The packets will be SNAT'd on the remote Node. As with a normal tunnel flow
+in [L3ForwardingTable], the flow will rewrite the packets' source and
+destination MAC addresses, load the SNAT IP to NXM_NX_TUN_IPV4_DST, and send the
+packets to [L3DecTTLTable].
+
+```text
+table=71, priority=200,ct_state=+new+trk,ip,in_port="pod2-357c21" actions=mod_dl_src:e2:e5:a4:9b:1c:b1,mod_dl_dst:aa:bb:cc:dd:ee:ff,load:0x1->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,load:0xc0a80a66->NXM_NX_TUN_IPV4_DST[],goto_table:72
+```
+
+Last, when a SNAT IP configured for Egresses is on the local Node, an additional
+flow is added in [SNATTable] for egress traffic from remote Nodes that should
+use the SNAT IP. The flow matches the tunnel destination IP (which should be
+equal to the SNAT IP), and sets the 8-bit ID of the SNAT IP to pkt_mark.
+
+```text
+table=71, priority=200,ct_state=+new+trk,ip,tun_dst="192.168.10.101" actions=set_field:0x1/0xff->pkt_mark,goto_table:80
+```
+
+### L3DecTTLTable (72)
+
+This table decrements the TTL of IP packets destined to remote Nodes through a
+tunnel, or received from a tunnel. However, for packets that enter the OVS
+pipeline from the local gateway and are destined to a remote Node, TTL should
+not be decremented in OVS on the source Node, because the host IP stack should
+have already decremented it if needed.
+
+If you dump the flows for this table, you should see flows like the following:
+
+```text
+1. table=72, priority=210,ip,reg0=0x1/0xf, actions=goto_table:80
+2. table=72, priority=200,ip, actions=dec_ttl,goto_table:80
+3. table=72, priority=0, actions=goto_table:80
+```
+
+The first flow is to bypass the TTL decrement for the packets from the gateway
+port.
+
+### L2ForwardingCalcTable (80)
+
+This is essentially the "dmac" table of the switch. We program one flow for each
+port (tunnel port, gateway port, and local Pod ports), as you can see if you
+dump the flows:
+
+```text
+1. table=80, priority=200,dl_dst=aa:bb:cc:dd:ee:ff actions=set_field:0x1->reg1,set_field:0x10000/0x10000->reg0,goto_table:105
+2. table=80, priority=200,dl_dst=e2:e5:a4:9b:1c:b1 actions=set_field:0x2->reg1,set_field:0x10000/0x10000->reg0,goto_table:105
+3. table=80, priority=200,dl_dst=12:9e:a6:47:d0:70 actions=set_field:0x3->reg1,set_field:0x10000/0x10000->reg0,goto_table:90
+4. table=80, priority=200,dl_dst=ba:a8:13:ca:ed:cf actions=set_field:0x4->reg1,set_field:0x10000/0x10000->reg0,goto_table:90
+5. table=80, priority=0 actions=goto_table:105
+```
+
+For each port flow (1 through 4 in the example above), we set bit 16 of the
+NXM_NX_REG0 register to indicate that there was a matching entry for the
+destination MAC address and that the packet must be forwarded. In the last table
+of the pipeline ([L2ForwardingOutTable]), we will drop all packets for which
+this bit is not set. We also use the NXM_NX_REG1 register to store the egress
+port for the packet, which will be used as a parameter to the `output` OpenFlow
+action in [L2ForwardingOutTable].
+
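+To map the NXM_NX_REG1 values above back to actual interfaces, you can list the
+OVS ports with their OpenFlow port numbers; a sketch, assuming an antrea-agent
+Pod name:
+
+```bash
+# Each line of the output shows "<ofport>(<interface name>)".
+kubectl exec -n kube-system antrea-agent-xxxxx -c antrea-ovs -- \
+  ovs-ofctl show br-int
+```
+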
+The packets that match local Pods' MAC entries will go to the first table
+([AntreaPolicyIngressRuleTable] when AntreaPolicy is enabled, or
+[IngressRuleTable] when AntreaPolicy is not enabled) for NetworkPolicy ingress
+rules. Other packets will go to [ConntrackCommitTable]. Specifically, packets
+to the gateway port or the tunnel port will also go to [ConntrackCommitTable]
+and bypass the NetworkPolicy ingress rule tables, as NetworkPolicy ingress rules
+are not enforced for these packets on the source Node.
+
+What about L2 multicast / broadcast traffic? ARP requests will never reach this
+table, as they will be handled by the OpenFlow `normal` action in the
+[ArpResponderTable]. As for the rest, if it is IP traffic, it will hit the
+"last" flow in this table and go to [ConntrackCommitTable]; and finally the last
+table of the pipeline ([L2ForwardingOutTable]), and get dropped there since bit
+16 of the NXM_NX_REG0 will not be set. Traffic which is non-ARP and non-IP
+(assuming any can be received by the switch) is actually dropped much earlier in
+the pipeline ([SpoofGuardTable]). In the future, we may need to support more
+cases for L2 multicast / broadcast traffic.
+
+### AntreaPolicyIngressRuleTable (85)
+
+This table is very similar to [AntreaPolicyEgressRuleTable], but implements
+the ingress rules of Antrea-native policies. Depending on the tier to which a policy
+belongs, its rules will be installed in the table corresponding to that tier.
+The ingress table-to-tier mapping is as follows:
+
+```text
+Baseline Tier -> IngressDefaultTable(100)
+K8s NetworkPolicy -> IngressRuleTable(90)
+All other Tiers -> AntreaPolicyIngressRuleTable(85)
+```
+
+Again for this table, you will need to keep in mind the ACNP
+[specification](#antrea-native-policies-implementation) that we are using.
+Since the example ACNP resides in the Application tier, if you dump the flows
+for table 85, you should see something like this:
+
+```text
+1. table=85, priority=64990,ct_state=-new+est,ip actions=resubmit(,105)
+2. table=85, priority=14000,conj_id=4,ip actions=load:0x4->NXM_NX_REG3[],load:0x1->NXM_NX_REG0[20],resubmit(,101)
+3. table=85, priority=14000,ip,nw_src=10.10.1.7 actions=conjunction(4,1/3)
+4. table=85, priority=14000,ip,reg1=0x19c actions=conjunction(4,2/3)
+5. table=85, priority=14000,tcp,tp_dst=80 actions=conjunction(4,3/3)
+6. table=85, priority=0 actions=resubmit(,90)
+```
+
+As for [AntreaPolicyEgressRuleTable], flow 1 (highest priority) ensures that for
+established connections packets go straight to IngressMetricsTable,
+then [L2ForwardingOutTable], with no other match required.
+
+The rest of the flows read as follows: if the source IP address is in the set
+{10.10.1.7}, and the destination OF port is in the set {412} (which
+corresponds to the IP address set {10.10.1.6}), and the destination TCP port
+is in the set {80}, then use the `conjunction` action with id 4, which loads
+the `conj_id` 4 into NXM_NX_REG3, a register used by Antrea internally to
+indicate that the disposition of the packet is Drop, and forwards the packet to
+IngressMetricsTable for it to be dropped.
+
+Otherwise, go to [IngressRuleTable] if no conjunctive flow above priority 0 is matched.
+This corresponds to the case where the packet is not matched by any of the Antrea-native
+policy ingress rules in any tier (except for the "baseline" tier).
+One notable difference is how we use OF ports to identify the destination of
+the traffic, while we use IP addresses in [AntreaPolicyEgressRuleTable] to
+identify the source of the traffic. More details regarding this can be found
+in the following [IngressRuleTable] section.
+
+As seen in [AntreaPolicyEgressRuleTable], the default action is to evaluate the K8s
+Network Policy rules in [IngressRuleTable], and an AntreaPolicyIngressDefaultTable does not exist.
+
+### IngressRuleTable (90)
+
+This table is very similar to [EgressRuleTable], but implements ingress rules
+for Network Policies. Once again, you will need to keep in mind the Network Policy
+[specification](#network-policy-implementation) that we are using. We have 2
+Pods running on the same Node, with IP addresses 10.10.1.2 and 10.10.1.3. They
+are allowed to talk to each other using TCP on port 80, but nothing else.
+
+If you dump the flows for this table, you should see something like this:
+
+```text
+1. table=90, priority=210,ct_state=-new+est,ip actions=goto_table:101
+2. table=90, priority=210,pkt_mark=0x1/0x1 actions=goto_table:105
+3. table=90, priority=200,ip,nw_src=10.10.1.2 actions=conjunction(3,1/3)
+4. table=90, priority=200,ip,nw_src=10.10.1.3 actions=conjunction(3,1/3)
+5. table=90, priority=200,ip,reg1=0x3 actions=conjunction(3,2/3)
+6. table=90, priority=200,ip,reg1=0x4 actions=conjunction(3,2/3)
+7. table=90, priority=200,tcp,tp_dst=80 actions=conjunction(3,3/3)
+8. table=90, priority=190,conj_id=3,ip actions=load:0x3->NXM_NX_REG6[],ct(commit,table=101,zone=65520,exec(load:0x3->NXM_NX_CT_LABEL[0..31]))
+9. table=90, priority=0 actions=goto_table:100
+```
+
+As for [EgressRuleTable], flow 1 (highest priority) ensures that for established
+connections - as a reminder all connections are committed in
+[ConntrackCommitTable] - packets go straight to IngressMetricsTable,
+then [L2ForwardingOutTable], with no other match required.
+
+Flow 2 ensures that the traffic initiated from the host network namespace cannot
+be dropped because of Network Policies. This ensures that K8s [liveness
+probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
+can go through. An iptables rule in the mangle table of the host network
+namespace is responsible for marking the locally-generated packets with the
+`0x1/0x1` mark. Note that the flow will be different for Windows worker Nodes or
+when the OVS userspace (netdev) datapath is used. This is because either there is
+no way to add a mark for particular traffic (i.e. Windows) or matching the mark
+in OVS is not properly supported (i.e. netdev datapath). As a result, the flow
+will match the source IP instead; however, NodePort Service access by external
+clients will be masqueraded as the local gateway IP in order to bypass Network
+Policies. This may be fixed once AntreaProxy can serve NodePort traffic.
+
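+As an illustration only, a rule achieving this marking could look like the
+following (Antrea installs its own variant in a dedicated chain, so treat this
+as a sketch rather than the exact rule):
+
+```bash
+# Mark locally-generated packets leaving through the gateway interface.
+iptables -t mangle -A OUTPUT -o antrea-gw0 -j MARK --set-xmark 0x1/0x1
+```
+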
+The rest of the flows read as follows: if the source IP address is in the set
+{10.10.1.2, 10.10.1.3}, and the destination OF port is in the set {3, 4} (which
+correspond to IP addresses {10.10.1.2, 10.10.1.3}), and the destination TCP port
+is in the set {80}, then use the `conjunction` action with id 3, which stores the
+`conj_id` 3 in `ct_label[0..31]` for ingress metrics collection purposes, and forwards
+the packet to IngressMetricsTable, then [L2ForwardingOutTable]. Otherwise, go to
+[IngressDefaultTable]. One notable difference is how we use OF ports to identify
+the destination of the traffic, while we use IP addresses in [EgressRuleTable]
+to identify the source of the traffic. We do this as an increased security measure
+in case a local Pod is misbehaving and trying to access another local Pod using
+the correct destination MAC address but a different destination IP address to bypass
+an egress Network Policy rule. This is also why the Network Policy ingress rules
+are enforced after the egress port has been determined.
+
+### IngressDefaultTable (100)
+
+This table is similar in its purpose to [EgressDefaultTable], and it complements
+[IngressRuleTable] for Network Policy ingress rule implementation. In K8s, when
+a Network Policy is applied to a set of Pods, the default behavior for these
+Pods becomes "deny" (each one becomes an [isolated
+Pod](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)). This
+table is in charge of dropping traffic destined to Pods to which a Network
+Policy (with an ingress rule) is applied, and which did not match any of the
+allowlist rules.
+
+Accordingly, based on our Network Policy example, we would expect to see flows
+to drop traffic destined to our 2 Pods (OF ports 3 and 4), which is confirmed by dumping
+the flows:
+
+```text
+1. table=100, priority=200,ip,reg1=0x3 actions=drop
+2. table=100, priority=200,ip,reg1=0x4 actions=drop
+3. table=100, priority=0 actions=goto_table:105
+```
+
+Similar to the [EgressDefaultTable], this table is also used to implement
+Antrea-native policy ingress rules that are created in the Baseline Tier.
+Since the Baseline Tier is meant to be enforced after K8s NetworkPolicies, the
+corresponding flows will be created at a lower priority than K8s default drop flows.
+For example, a baseline rule to isolate ingress traffic for a Namespace will look
+like the following:
+
+```text
+table=100, priority=80,ip,reg1=0xb actions=conjunction(6,2/3)
+table=100, priority=80,ip,reg1=0xc actions=conjunction(6,2/3)
+table=100, priority=80,ip,nw_src=10.10.1.9 actions=conjunction(6,1/3)
+table=100, priority=80,ip,nw_src=10.10.1.7 actions=conjunction(6,1/3)
+table=100, priority=80,tcp,tp_dst=8080 actions=conjunction(6,3/3)
+table=100, priority=80,conj_id=6,ip actions=load:0x6->NXM_NX_REG3[],load:0x1->NXM_NX_REG0[20],resubmit(,101)
+```
+
+The table-miss flow entry, which is used for non-isolated Pods, forwards
+traffic to the next table ([ConntrackCommitTable]).
+
+### ConntrackCommitTable (105)
+
+As mentioned before, this table is in charge of committing all new connections
+which are not dropped because of Network Policies. If you dump the flows for this
+table, you should see something like this:
+
+```text
+1. table=105, priority=200,ct_state=+new+trk,ip,reg0=0x1/0xf actions=ct(commit,table=108,zone=65520,exec(load:0x20->NXM_NX_CT_MARK[]))
+2. table=105, priority=190,ct_state=+new+trk,ip actions=ct(commit,table=108,zone=65520)
+3. table=105, priority=0 actions=goto_table:108
+```
+
+Flow 1 ensures that we commit connections initiated through the gateway
+interface and mark them with a `ct_mark` of `0x20`. This ensures that
+[ConntrackStateTable] can perform its functions correctly and rewrite the
+destination MAC address to the gateway's MAC address for connections which
+require it. Such connections include Pod-to-ClusterIP traffic. Note that the
+`0x20` mark is applied to *all* connections initiated through the gateway
+(i.e. for which the first packet of the connection was received through the
+gateway) and that [ConntrackStateTable] will perform the destination MAC address
+for the reply traffic of *all* such connections. In some cases (the ones
+described for [ConntrackStateTable]), this rewrite is necessary. For others
+(e.g. a connection from the host to a local Pod), this rewrite is not necessary
+but is also harmless, as the destination MAC is already correct.
+
+Flow 2 commits all other new connections.
+
+All traffic then goes to [HairpinSNATTable].
+
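+If you want to verify which connections were committed with the gateway
+`ct_mark`, you can inspect the conntrack entries in the Antrea ct zone; a
+sketch, assuming an antrea-agent Pod name and an OVS version whose
+`dpctl/dump-conntrack` command supports the `zone` argument:
+
+```bash
+# 0x20 is printed as "mark=32" in the conntrack dump.
+kubectl exec -n kube-system antrea-agent-xxxxx -c antrea-ovs -- \
+  ovs-appctl dpctl/dump-conntrack zone=65520 | grep 'mark=32'
+```
+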
+### HairpinSNATTable (108)
+
+This table handles the Service hairpin case, in which the packet needs to be
+output to the port on which it was received.
+
+If you dump the flows for this table, you should see the flows:
+
+```text
+1. table=108, priority=200,ip,nw_src=10.10.0.4,nw_dst=10.10.0.4 actions=mod_nw_src:169.254.169.252,load:0x1->NXM_NX_REG0[18],resubmit(,110)
+2. table=108, priority=200,ip,nw_src=10.10.0.2,nw_dst=10.10.0.2 actions=mod_nw_src:169.254.169.252,load:0x1->NXM_NX_REG0[18],resubmit(,110)
+3. table=108, priority=200,ip,nw_src=10.10.0.3,nw_dst=10.10.0.3 actions=mod_nw_src:169.254.169.252,load:0x1->NXM_NX_REG0[18],resubmit(,110)
+4. table=108, priority=0 actions=resubmit(,110)
+```
+
+Flows 1-3 match hairpin Service packets from Pods. The source IP of the packets
+matched by flows 1-3 must be SNAT'd with a virtual hairpin IP, since the source and
+destination IP addresses should not be the same. Without SNAT, response packets from
+a Pod would not be forwarded back to the OVS pipeline, as the destination IP is the Pod's
+own IP, and the connection would be interrupted because the conntrack state is only stored
+in the OVS ct zone, not in the Pod. With SNAT, the destination IP of the response will be
+the virtual hairpin IP, so the packets are forwarded back to the OVS pipeline. Note that
+bit 18 in NXM_NX_REG0 is set to 0x1; it is consumed in [L2ForwardingOutTable] to output the
+packet to the port on which it was received, with action `IN_PORT`.
+
+### L2ForwardingOutTable (110)
+
+This is a simple table; if you dump the flows for this table, you should only
+see 2 flows:
+
+```text
+1. table=110, priority=200,ip,reg0=0x10000/0x10000 actions=output:NXM_NX_REG1[]
+2. table=110, priority=0, actions=drop
+```
+
+The first flow outputs all unicast packets to the correct port (the port was
+resolved by the "dmac" table, [L2ForwardingCalcTable]). IP packets for which
+[L2ForwardingCalcTable] did not set bit 16 of NXM_NX_REG0 will be dropped.
+
+## Tables (AntreaProxy is disabled)
+
+![OVS pipeline](../assets/ovs-pipeline.svg)
+
+### DNATTable (40)
+
+This table is created only when AntreaProxy is disabled. Its only job is to
+send traffic destined to Services through the local gateway interface, without any
+modifications. kube-proxy will then take care of load-balancing the connections
+across the different backends for each Service.
+
+If you dump the flows for this table, you should see something like this:
+
+```text
+1. table=40, priority=200,ip,nw_dst=10.96.0.0/12 actions=set_field:0x2->reg1,load:0x1->NXM_NX_REG0[16],goto_table:105
+2. table=40, priority=0 actions=goto_table:45
+```
+
+In the example above, 10.96.0.0/12 is the Service CIDR (this is the default
+value used by `kubeadm init`). This flow is not actually required for
+forwarding, but to bypass [EgressRuleTable] and [EgressDefaultTable] for Service
+traffic on its way to kube-proxy through the gateway. If we omitted this flow,
+such traffic would be unconditionally dropped if a Network Policy is applied on
+the originating Pod. For such traffic, we instead enforce Network Policy egress
+rules when packets come back through the gateway and the destination IP has been
+rewritten by kube-proxy (DNAT to a backend for the Service). We cannot output
+the Service traffic to the gateway port directly as we haven't committed the
+connection yet; instead we store the port in NXM_NX_REG1 - similarly to how we
+process non-Service traffic in [L2ForwardingCalcTable] - and forward it to
+[ConntrackCommitTable]. By committing the connection we ensure that reply
+traffic (traffic from the Service backend which has already gone through
+kube-proxy for source IP rewrite) will not be dropped because of Network
+Policies.
+
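+If you are unsure which Service CIDR your cluster uses, one way to confirm it
+on a kubeadm cluster (where kube-apiserver runs as a static Pod) is:
+
+```bash
+# The Service CIDR is passed to kube-apiserver via this flag.
+kubectl -n kube-system get pod -l component=kube-apiserver \
+  -o yaml | grep service-cluster-ip-range
+```
+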
+The table-miss flow entry (flow 2) for this table forwards all non-Service
+traffic to the next table, [AntreaPolicyEgressRuleTable].
+
+[ClassifierTable]: #classifiertable-0
+[SpoofGuardTable]: #spoofguardtable-10
+[ARPResponderTable]: #arprespondertable-20
+[ServiceHairpinTable]: #servicehairpintable-23
+[ConntrackTable]: #conntracktable-30
+[ConntrackStateTable]: #conntrackstatetable-31
+[DNATTable]: #dnattable-40
+[SessionAffinityTable]: #sessionaffinitytable-40
+[ServiceLBTable]: #servicelbtable-41
+[EndpointDNATTable]: #endpointdnattable-42
+[AntreaPolicyEgressRuleTable]: #antreapolicyegressruletable-45
+[EgressRuleTable]: #egressruletable-50
+[EgressDefaultTable]: #egressdefaulttable-60
+[L3ForwardingTable]: #l3forwardingtable-70
+[SNATTable]: #snattable-71
+[L3DecTTLTable]: #l3decttltable-72
+[L2ForwardingCalcTable]: #l2forwardingcalctable-80
+[AntreaPolicyIngressRuleTable]: #antreapolicyingressruletable-85
+[IngressRuleTable]: #ingressruletable-90
+[IngressDefaultTable]: #ingressdefaulttable-100
+[ConntrackCommitTable]: #conntrackcommittable-105
+[HairpinSNATTable]: #hairpinsnattable-108
+[L2ForwardingOutTable]: #l2forwardingouttable-110
diff --git a/content/docs/v1.15.0/docs/design/policy-only.md b/content/docs/v1.15.0/docs/design/policy-only.md
new file mode 100644
index 00000000..228b52ea
--- /dev/null
+++ b/content/docs/v1.15.0/docs/design/policy-only.md
@@ -0,0 +1,54 @@
+# Running Antrea in `networkPolicyOnly` Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as a NetworkPolicy plug-in, together with a routed CNI.
+As long as a CNI implementation fits into this model, Antrea may be inserted to enforce
+NetworkPolicy in that CNI's environment using Open vSwitch (OVS).
+
+In addition, when working as a NetworkPolicy plug-in, Antrea automatically enables
+AntreaProxy, because AntreaProxy is required to load-balance Pod-to-Service traffic.
+
+{{< img src="../assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI" >}}
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point (PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with the Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a hub-and-spoke model, where Pod
+traffic, even within the same worker Node, must first traverse the host network and be
+routed by it.
+
+When the container runtime instantiates a Pod, it first calls the primary CNI to configure the Pod's
+IP address, route table, DNS, etc., and then connects the Pod to the host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, the container runtime then calls the
+Antrea Agent, and the Antrea Agent attaches the Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod from the PtP device to the local host gateway (`antrea-gw0`) interface. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy the following requirements:
+
+1. All IP packets, sent on `antrea-gw0` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. All IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if the OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses, as all MAC addresses stay within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
+
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is to allow a Pod to
+resolve its neighbors, so that the Pod can generate traffic to these neighbors.
+1. An L3 flow for each local Pod that routes IP packets to that Pod if the packets' destination IP
+ matches that of the Pod.
+1. An L3 flow that routes all other IP packets to the host network via the `antrea-gw0` interface.
+
+These flows together handle all Pod traffic patterns.
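+
+As an illustration of the last two flow types, the sketch below programs them by
+hand with `ovs-ofctl`; the table numbers, Pod IP/MAC, gateway MAC, and OF port
+numbers are all assumptions, and the ARP responder flow is omitted because its
+action list is much longer:
+
+```bash
+# Route IP packets destined to a local Pod (assumed IP/MAC, OF port 3) to it.
+ovs-ofctl add-flow br-int "table=70,priority=200,ip,nw_dst=10.10.1.2 actions=mod_dl_dst:12:9e:a6:47:d0:70,output:3"
+# Route all other IP packets to the host network via antrea-gw0 (assumed OF port 2).
+ovs-ofctl add-flow br-int "table=70,priority=0,ip actions=mod_dl_dst:e2:e5:a4:9b:1c:b1,output:2"
+```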
diff --git a/content/docs/v1.15.0/docs/design/windows-design.md b/content/docs/v1.15.0/docs/design/windows-design.md
new file mode 100644
index 00000000..fd79a02f
--- /dev/null
+++ b/content/docs/v1.15.0/docs/design/windows-design.md
@@ -0,0 +1,271 @@
+# Running Antrea on Windows
+
+Antrea supports running on Windows worker Nodes. On Windows Nodes, Antrea sets up an overlay
+network to forward packets between Nodes and implements NetworkPolicies.
+
+## Design
+
+On Windows, the Host Networking Service (HNS) is a necessary component to support container
+networking. For Antrea on Windows, "Transparent" mode is chosen for the HNS network. In this
+mode, containers will be directly connected to the physical network through an **external**
+Hyper-V switch.
+
+OVS works as a forwarding extension for the external Hyper-V switch created by
+HNS. Hence, the packets that are sent from/to the containers can be processed by OVS.
+The network adapter used in the HNS Network is also added to the OVS bridge as the uplink
+interface. An internal interface for the OVS bridge is created, and the original networking
+configuration (e.g., IP, MAC and routing entries) on the host network adapter is moved to
+this new interface. Some extra OpenFlow entries are needed to ensure the host traffic can be
+forwarded correctly.
+
+{{< img src="../assets/hns_integration.svg" width="600" alt="HNS Integration" >}}
+
+Windows NetNat is configured to make sure the Pods can access external addresses. The packet
+from a Pod to an external address is first output to antrea-gw0, and then SNAT is performed on the
+Windows host. The SNATed packet enters OVS from the OVS bridge interface and leaves the Windows host
+from the uplink interface directly.
+
+Antrea implements the Kubernetes ClusterIP Service leveraging OVS. Pod-to-ClusterIP-Service traffic
+is load-balanced and forwarded directly inside the OVS pipeline, while kube-proxy runs
+on each Windows Node to implement the Kubernetes NodePort Service. Kube-proxy captures NodePort Service
+traffic and sets up a connection to a backend Pod to forward the request using this connection.
+The forwarded request enters the OVS pipeline through "antrea-gw0" and is then forwarded to the
+Pod. To be compatible with OVS, kube-proxy on Windows must be configured to run in **userspace**
+mode, and a specific network adapter is required, on which Service IP addresses will be configured
+by kube-proxy.
+
+### HNS Network configuration
+
+HNS Network is created during the Antrea Agent initialization phase, and it should be created before
+the OVS bridge is created. This is because OVS is working as the Hyper-V Switch Extension, and the
+ovs-vswitchd process cannot work correctly until the OVS Extension is enabled on the newly created
+Hyper-V Switch.
+
+When creating the HNS Network, the local subnet CIDR and the uplink network adapter are required.
+Antrea Agent finds the network adapter from the Windows host using the Node's internal IP as a filter,
+and retrieves the local Subnet CIDR from the Node spec.
+
+After the HNS Network is created, the OVS extension should be enabled immediately on the Hyper-V Switch.
+
+### Container network configuration
+
+[**host-local**](https://github.com/containernetworking/plugins/tree/master/plugins/ipam/host-local)
+plugin is used to provide IPAM for containers, and the address is allocated from the subnet CIDR
+configured on the HNS Network.
+
+Windows HNS Endpoint is leveraged as the vNIC for each container. A single HNS Endpoint with the
+IP allocated by the IPAM plugin is created for each Pod. The HNS Endpoint should be attached to all
+containers in the same Pod to ensure that the network configuration can be correctly accessed (this
+operation is to make sure the DNS configuration is readable from all containers).
+
+One OVS internal port with the same name as the HNS Endpoint is also needed, in order to handle
+container traffic with OpenFlow rules. OpenFlow entries are installed to implement Pod-to-Pod,
+Pod-to-external and Pod-to-ClusterIP-Service connectivity.
+
+The CNIAdd request might be called multiple times for a given Pod. This is because kubelet on Windows
+assumes CNIAdd is an idempotent operation, and it uses this event to query the Pod networking status.
+Antrea needs to identify the container type (sandbox or workload) from the CNIAdd request:
+
+* we create the HNS Endpoint only when the request is for the sandbox container
+* we attach the HNS Endpoint no matter whether it is a sandbox container or a workload container.
+
+### Gateway port configuration
+
+The gateway port is created during the Antrea Agent initialization phase, and the address of the interface
+should be the first IP in the subnet. The port is an OVS internal port and its default name is "antrea-gw0".
+
+The gateway port is used to help implement L3 connectivity for the containers, including Pod-to-external,
+and Node-to-Pod. For the Pod-to-external case, OpenFlow entries are
+installed in order to output these packets to the host on the gateway port. To ensure the packet is forwarded
+correctly on the host, the IP-Forwarding feature should be enabled on the network adapter of the
+gateway port.
+
+A routing entry for traffic from the Node to the local Pod subnet is needed on the Windows host to ensure
+that the packet can enter the OVS pipeline on the gateway port. This routing entry is added when "antrea-gw0"
+is created.
+
+Every time a new Node joins the cluster, a host routing entry on the gateway port is required, and the
+remote subnet CIDR should be routed with the remote gateway address as the nexthop.
+
+### Tunnel port configuration
+
+Tunnel port configuration should be similar to Antrea on Linux:
+
+* tunnel port is added after OVS bridge is created;
+* a flow-based tunnel with the appropriate remote address is created for each Node in the cluster with OpenFlow.
+
+The only difference with Antrea on Linux is that the tunnel local address is required when creating the tunnel
+port (provided with `local_ip` option). This local address is the one configured on the OVS bridge.
+
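+As a sketch (the bridge name, port name, and local IP below are assumptions),
+such a tunnel port can be created with:
+
+```bash
+# Flow-based Geneve tunnel with an explicit local address, as required on Windows.
+ovs-vsctl add-port br-int antrea-tun0 -- \
+  set interface antrea-tun0 type=geneve \
+  options:remote_ip=flow options:local_ip=192.168.77.100
+```
+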
+### OVS bridge interface configuration
+
+Since OVS also takes charge of the host network, an interface for the OVS bridge
+is required on which the host network settings are configured. The virtual network adapter which is created
+when creating the HNS Network is used as the OVS bridge interface. The virtual network adapter is renamed to
+the expected OVS bridge name before the OVS bridge port is created, so that OVS can find the virtual network
+adapter by name and attach it directly. When creating the HNS Network, the Windows host configured the virtual
+network adapter with the IP, MAC and route entries which were originally on the uplink interface; as a
+result, no extra manual IP/MAC/route configuration on the OVS bridge is needed.
+
+The packets that are sent to/from the Windows host should be forwarded on this interface. So the OVS bridge
+is also a valid entry point into the OVS pipeline. The OpenFlow spec reserves a special ofport number,
+65534 (named LOCAL), for the OVS bridge.
+
+In the OVS `Classifier` table, new OpenFlow entries are needed to match the packets from this interface.
+Packets entering OVS from this interface are output directly to the uplink interface.
+
+### OVS uplink interface configuration
+
+After the OVS bridge is created, the original physical adapter is added to the OVS bridge as the uplink interface.
+The uplink interface is used to support traffic from Pods accessing the world outside the current host.
+
+Traffic entering OVS from the uplink interface is differentiated in the OVS `Classifier`
+table. In encap mode, packets entering OVS from the uplink interface are output to the bridge interface directly.
+In noEncap mode, there are three kinds of packets entering OVS from the uplink interface:
+
+ 1) Traffic that is sent to local Pods from a Pod on a different Node
+ 2) Traffic that is sent to local Pods from a different Node according to the routing configuration
+ 3) Traffic on the host network
+
+For 1 and 2, the packet enters the OVS pipeline, and the `macRewriteMark` is set to ensure the destination MAC can be
+modified.
+For 3, the packet is output to the OVS bridge interface directly.
+
+A packet is always output to the uplink interface if it entered OVS from the bridge interface. In
+noEncap mode, we also output Pod traffic to the uplink interface if the destination is a Pod on a
+different Node, or if it is a reply packet to a request sent from a different Node. This avoids the
+cost of having the packet enter OVS twice (OVS -> Windows host -> OVS).
+
+The following are the OpenFlow entries for the uplink interface in encap mode.
+
+```text
+Classifier Table: 0
+table=0, priority=200, in_port=$uplink actions=LOCAL
+table=0, priority=200, in_port=LOCAL actions=output:$uplink
+```
+
+The following is an example of the OpenFlow entries related to the uplink interface in noEncap mode.
+
+```text
+Classifier Table: 0
+table=0, priority=210, ip, in_port=$uplink, nw_dst=$localPodSubnet, actions=load:0x4->NXM_NX_REG0[0..15],load:0x1->NXM_NX_REG0[19],resubmit(,29)
+table=0, priority=200, in_port=$uplink actions=LOCAL
+table=0, priority=200, in_port=LOCAL actions=output:$uplink
+
+L3Forwarding Table: 70
+// Rewrite the destination MAC with the Node's MAC on which target Pod is located.
+table=70, priority=200,ip,nw_dst=$peerPodSubnet actions=mod_dl_dst:$peerNodeMAC,resubmit(,80)
+// Rewrite the destination MAC with the Node's MAC if it is a reply for the access from the Node.
+table=70, priority=200,ct_state=+rpl+trk,ip,nw_dst=$peerNodeIP actions=mod_dl_dst:$peerNodeMAC,resubmit(,80)
+
+L2ForwardingCalcTable: 80
+table=80, priority=200,dl_dst=$peerNodeMAC actions=load:$uplink->NXM_NX_REG1[],set_field:0x10000/0x10000->reg0,resubmit(,105)
+```
+
+### SNAT configuration
+
+SNAT is an important feature of the Antrea Agent on Windows Nodes, required to support Pods accessing external
+addresses. It is implemented using the NAT capability of the Windows host.
+
+To support this feature, we configure NetNat on the Windows host for the Pod subnet:
+
+```text
+New-NetNat -Name antrea-nat -InternalIPInterfaceAddressPrefix $localPodSubnet
+```
+
+A packet that is sent from a local Pod to an external address leaves OVS from `antrea-gw0` and enters the
+Windows host, where the SNAT action is performed. The SNAT address is chosen by the Windows host according
+to the routing configuration. As for the reply packet of the Pod-to-external traffic, it enters the Windows
+host and is de-SNAT'd first; then the packet enters OVS from `antrea-gw0` and is finally forwarded to the Pod.
+
+### Using Windows named pipe for internal connections
+
+Named pipe is used for local connections on Windows Nodes instead of Unix Domain Socket (UDS). It is used in
+these scenarios:
+
+* OVSDB connection
+* OpenFlow connection
+* The connection between CNI plugin and CNI server
+
+## Antrea and OVS Management on Windows
+
+While we provide different installation methods for Windows, the recommended one starting with
+Antrea v1.13 is to use the `antrea-windows-containerd-with-ovs.yml` manifest. With this method, the
+antrea-agent process and the OVS daemons (ovsdb-server and ovs-vswitchd) run as a Pod on Windows
+worker Nodes, and are managed by a DaemonSet. This installation method relies on
+[Windows HostProcess Pod](https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/)
+support.
+
+## Traffic walkthrough
+
+### Pod-to-Pod Traffic
+
+The intra-Node Pod-to-Pod traffic and inter-Node Pod-to-Pod traffic are the same as Antrea on Linux.
+It is processed and forwarded by OVS, and controlled with OpenFlow entries.
+
+### Service Traffic
+
+Kube-proxy userspace mode is configured to provide NodePort Service function. A specific Network adapter named
+"HNS Internal NIC" is provided to kube-proxy to configure Service addresses. The OpenFlow entries for the
+NodePort Service traffic on Windows are the same as those on Linux.
+
+AntreaProxy implements the ClusterIP Service function. Antrea Agent installs routes to send ClusterIP Service
+traffic from host network to the OVS bridge. For each Service, it adds a route that routes the traffic via a
+virtual IP (169.254.0.253), and it also adds a route to indicate that the virtual IP is reachable via
+antrea-gw0. The reason to add a virtual IP, rather than routing the traffic directly to antrea-gw0, is that
+then only one neighbor cache entry needs to be added, which resolves the virtual IP to a virtual MAC.
+
+When a Service's endpoints are in hostNetwork or in the external network, a request packet will have its
+destination IP DNAT'd to the selected endpoint IP, and its source IP will be SNAT'd to the
+virtual IP (169.254.0.253). Such SNAT is needed so that the reply packets, whose destination IP is the
+Node IP until it is de-SNAT'd back to the virtual IP, can be sent back to the OVS pipeline from the host
+network. Check the packet forwarding path described below.
+
+A request packet from the host enters the OVS pipeline via antrea-gw0 and exits via antrea-gw0
+as well, back to the host network. On the Windows host, with the help of NetNat, the request packet's
+source IP will be SNAT'd again to the Node IP.
+
+The reply packets follow the reverse path in both situations, regardless of whether the endpoint is in
+the ClusterCIDR or not.
+
+The following path is an example of host accessing a Service whose endpoint is a hostNetwork Pod on
+another Node. The request packet is like:
+
+```text
+host -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host NetNat -> br-int -> OVS pipeline -> peer Node
+ | |
+ DNAT(peer Node IP) SNAT(Node IP)
+ SNAT(virtual IP)
+```
+
+The forwarding path of a reply packet is like:
+
+```text
+peer Node -> OVS pipeline -> br-int -> host NetNat -> antrea-gw0 -> OVS pipeline -> antrea-gw0 -> host
+ | |
+ d-SNAT(virtual IP) d-SNAT(antrea-gw0 IP)
+ d-DNAT(Service IP)
+```
+
+### External Traffic
+
+The Pod-to-external traffic leaves the OVS pipeline from the gateway interface, and is then SNAT'd on the
+Windows host. If the packet should leave the Windows host from the OVS uplink interface according to the
+routing configuration on the Windows host, it is first forwarded to the OVS bridge interface, on which the
+host IP is configured, and then output to the uplink interface by the OVS pipeline.
+
+The corresponding reply traffic will enter OVS from the uplink interface first, and then enter the host from the
+OVS bridge interface. It is de-SNAT'd on the host, re-enters OVS from `antrea-gw0`, and is finally
+forwarded to the Pod.
+
+{{< img src="../assets/windows_external_traffic.svg" width="600" alt="Traffic to external" >}}
+
+### Host Traffic
+
+In "Transparent" mode, the Antrea Agent should also support the host traffic when necessary, which includes
+packets sent from the host to external addresses, and the ones sent from external addresses to the host.
+
+Host traffic enters the OVS bridge and is output to the uplink interface if the destination is reachable
+from the network adapter which is plugged into OVS as the uplink. For the reverse path, the packet enters
+OVS from the uplink interface first, and is then directly output to the bridge interface, entering the
+Windows host. Traffic that goes through Windows network adapters other than the OVS uplink interface is
+managed by the Windows host.
diff --git a/content/docs/v1.15.0/docs/egress.md b/content/docs/v1.15.0/docs/egress.md
new file mode 100644
index 00000000..f3c263c3
--- /dev/null
+++ b/content/docs/v1.15.0/docs/egress.md
@@ -0,0 +1,452 @@
+# Egress
+
+## Table of Contents
+
+
+- [What is Egress?](#what-is-egress)
+- [Prerequisites](#prerequisites)
+- [The Egress resource](#the-egress-resource)
+ - [AppliedTo](#appliedto)
+ - [EgressIP](#egressip)
+ - [ExternalIPPool](#externalippool)
+ - [Bandwidth](#bandwidth)
+- [The ExternalIPPool resource](#the-externalippool-resource)
+ - [IPRanges](#ipranges)
+ - [SubnetInfo](#subnetinfo)
+ - [NodeSelector](#nodeselector)
+- [Usage examples](#usage-examples)
+ - [Configuring High-Availability Egress](#configuring-high-availability-egress)
+ - [Configuring static Egress](#configuring-static-egress)
+- [Configuration options](#configuration-options)
+- [Egress on Cloud](#egress-on-cloud)
+ - [AWS](#aws)
+- [Limitations](#limitations)
+
+
+## What is Egress?
+
+`Egress` is a CRD API that manages external access from the Pods in a cluster.
+It supports specifying which egress (SNAT) IP the traffic from the selected Pods
+to the external network should use. When a selected Pod accesses the external
+network, the egress traffic will be tunneled to the Node that hosts the egress
+IP if it's different from the Node that the Pod runs on and will be SNATed to
+the egress IP when leaving that Node.
+
+You may be interested in using this capability if any of the following apply:
+
+- A consistent IP address is desired when specific Pods connect to services
+ outside of the cluster, for source tracing in audit logs, or for filtering
+ by source IP in external firewall, etc.
+
+- You want to force outgoing external connections to leave the cluster via
+ certain Nodes, for security controls, or due to network topology restrictions.
+
+This guide demonstrates how to configure `Egress` to achieve the above result.
+
+## Prerequisites
+
+Egress was introduced in v1.0 as an alpha feature, and was graduated to beta in
+v1.6, at which time it was enabled by default. Prior to v1.6, the `Egress`
+feature gate must be enabled on both the antrea-controller and the antrea-agent
+in the `antrea-config` ConfigMap, as shown below, for the feature to work:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Egress: true
+ antrea-controller.conf: |
+ featureGates:
+ Egress: true
+```
+
+## The Egress resource
+
+A typical Egress resource example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ podSelector:
+ matchLabels:
+ role: web
+ egressIP: 10.10.0.8 # can be populated by Antrea after assigning an IP from the pool below
+ externalIPPool: prod-external-ip-pool
+status:
+ egressNode: node01
+```
+
+### AppliedTo
+
+The `appliedTo` field specifies the grouping criteria of Pods to which the
+Egress applies. Pods can be selected cluster-wide using `podSelector`. If set
+with a `namespaceSelector`, all Pods from Namespaces selected by the
+`namespaceSelector` will be selected. Specific Pods from specific Namespaces can
+be selected by providing both a `podSelector` and a `namespaceSelector`. Empty
+`appliedTo` selects nothing. The field is mandatory.
+
+### EgressIP
+
+The `egressIP` field specifies the egress (SNAT) IP the traffic from the
+selected Pods to the external network should use. **The IP must be reachable
+from all Nodes.** The IP can be specified when creating the Egress. Starting
+with Antrea v1.2, it can be allocated from an `ExternalIPPool` automatically.
+
+- If `egressIP` is not specified, `externalIPPool` must be specified. An IP will
+ be allocated from the pool by the antrea-controller. The IP will be assigned
+ to a Node selected by the `nodeSelector` of the `externalIPPool` automatically.
+- If both `egressIP` and `externalIPPool` are specified, the IP must be in the
+ range of the pool. Similarly, the IP will be assigned to a Node selected by
+ the `externalIPPool` automatically.
+- If only `egressIP` is specified, Antrea will not manage the assignment of the
+  IP, and it must be manually assigned to an interface of one Node.
+
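+For the third case, a minimal sketch of assigning the Egress IP manually on the
+chosen Node; the interface name (`eth0`) and the prefix length are assumptions
+for illustration and must match your Node network:
+
+```bash
+# Assign the Egress IP to a Node interface so that Antrea can discover it there.
+sudo ip addr add 10.10.0.8/24 dev eth0
+```
+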
+**Starting with Antrea v1.2, high availability is provided automatically when
+the `egressIP` is allocated from an `externalIPPool`**, i.e. when the
+`externalIPPool` is specified. If the Node hosting the `egressIP` fails, another
+Node will be elected (from among the remaining Nodes selected by the
+`nodeSelector` of the `externalIPPool`) as the new egress Node of this Egress.
+It will take over the IP and send layer 2 advertisement (for example, Gratuitous
+ARP for IPv4) to notify the other hosts and routers on the network that the MAC
+address associated with the IP has changed.
+
+**Note**: If more than one Egress applies to a Pod and they specify different
+`egressIP` values, the effective egress IP will be selected randomly.
+
+### ExternalIPPool
+
+The `externalIPPool` field specifies the name of the `ExternalIPPool` that the
+`egressIP` should be allocated from. It also determines which Nodes the IP can
+be assigned to. It can be empty, which means users should assign the `egressIP`
+to one Node manually.
+
+### Bandwidth
+
+The `bandwidth` field enables traffic shaping for an Egress, by limiting the
+bandwidth for all egress traffic belonging to this Egress. `rate` specifies
+the maximum transmission rate. `burst` specifies the maximum burst size when
+traffic exceeds the rate. The user-provided values for `rate` and `burst` must
+follow the Kubernetes [Quantity](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/quantity/) format,
+e.g. 300k, 100M, 2G. All backend workloads selected by a rate-limited Egress share the
+same bandwidth while sending egress traffic via this Egress. If these limits are exceeded,
+the traffic will be dropped.
+
+**Note**: Traffic shaping is currently an alpha feature. To use it, users should
+enable the `EgressTrafficShaping` feature gate. Only one bandwidth can be
+applied to each Egress IP. If multiple Egresses use the same IP but configure
+different bandwidths, the effective bandwidth will be selected randomly from
+the set of configured bandwidths. The effective use of the `bandwidth`
+function requires the OVS datapath to support meters.
+
+An Egress with traffic shaping example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ env: prod
+ podSelector:
+ matchLabels:
+ role: web
+ egressIP: 10.10.0.8
+ bandwidth:
+ rate: 800M
+ burst: 2G
+status:
+ egressNode: node01
+```
+
+## The ExternalIPPool resource
+
+ExternalIPPool defines one or multiple IP ranges that can be used in the
+external network. The IPs in the pool can be allocated to the Egress resources
+as the Egress IPs. A typical ExternalIPPool resource example:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: prod-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ - cidr: 10.10.1.0/28
+ nodeSelector:
+ matchLabels:
+ network-role: egress-gateway
+```
+
+### IPRanges
+
+The `ipRanges` field contains a list of IP ranges representing the available IPs
+of this IP pool. Each IP range may consist of a `cidr` or a pair of `start` and
+`end` IPs (which are themselves included in the range).
+
+### SubnetInfo
+
+By default, it's assumed that the IPs allocated from an ExternalIPPool are in
+the same subnet as the Node IPs. Starting with Antrea v1.15, IPs can be
+allocated from a subnet different from the Node IPs.
+
+The optional `subnetInfo` field contains the subnet attributes of the IPs in
+this pool. When using a different subnet:
+
+* `gateway` and `prefixLength` must be set. Antrea will route Egress traffic to
+the specified gateway when the destination is not in the same subnet as the
+Egress IP, and route it to the destination directly otherwise.
+
+* Optionally, you can specify `vlan` if the underlying network expects it.
+Once set, Antrea will tag Egress traffic leaving the Egress Node with the
+specified VLAN ID. Correspondingly, it's expected that reply traffic towards
+these Egress IPs is also tagged with the specified VLAN ID when arriving at the
+Egress Node.
+
+An example of an ExternalIPPool using a non-default subnet:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: prod-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ subnetInfo:
+ gateway: 10.10.0.1
+ prefixLength: 24
+ vlan: 10
+ nodeSelector:
+ matchLabels:
+ network-role: egress-gateway
+```
+
+**Note**: Specifying different subnets is currently in alpha version. To use
+this feature, users should enable the `EgressSeparateSubnet` feature gate.
+Currently, the maximum number of different subnets that can be supported in a
+cluster is 20, which should be sufficient for most cases. If you need to have
+more subnets, please raise an issue with your use case, and we will consider
+revising the limit based on that.
+
+### NodeSelector
+
+The `nodeSelector` field specifies which Nodes the IPs in this pool can be
+assigned to. It's useful when you want to limit egress traffic to certain Nodes.
+The semantics of the selector are the same as elsewhere in Kubernetes,
+i.e. both `matchLabels` and `matchExpressions` are supported. It can be empty,
+which means all Nodes can be selected.
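+
+For example, a sketch of a `nodeSelector` using `matchExpressions`; the label
+key and the values are illustrative assumptions:
+
+```yaml
+nodeSelector:
+  matchExpressions:
+  - key: network-role
+    operator: In
+    values: [egress-gateway, edge]
+```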
+
+## Usage examples
+
+### Configuring High-Availability Egress
+
+In this example, we will make web apps in different Namespaces use different
+egress IPs to access the external network.
+
+First, create an `ExternalIPPool` with a list of external routable IPs on the
+network.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.11 # 10.10.0.11-10.10.0.20 can be used as Egress IPs
+ end: 10.10.0.20
+ nodeSelector: {} # All Nodes can be Egress Nodes
+```
+
+Then create two `Egress` resources, each of which applies to web apps in one
+Namespace.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ podSelector:
+ matchLabels:
+ app: web
+ externalIPPool: external-ip-pool
+---
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-staging-web
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: staging
+ podSelector:
+ matchLabels:
+ app: web
+ externalIPPool: external-ip-pool
+```
+
+List the `Egress` resources with kubectl. The output shows that each Egress got
+one IP from the IP pool and one Node assigned as its Egress Node.
+
+```bash
+# kubectl get egress
+NAME EGRESSIP AGE NODE
+egress-prod-web 10.10.0.11 1m node-4
+egress-staging-web 10.10.0.12 1m node-6
+```
+
+Now, the packets from the Pods with label `app=web` in the `prod` Namespace to
+the external network will be redirected to the `node-4` Node and SNATed to
+`10.10.0.11` while the packets from the Pods with label `app=web` in the
+`staging` Namespace to the external network will be redirected to the `node-6`
+Node and SNATed to `10.10.0.12`.
+
+Finally, if the `node-4` Node powers off, `10.10.0.11` will be re-assigned to
+another available Node quickly, and the packets from the Pods with label
+`app=web` in the `prod` Namespace will be redirected to the new Node, minimizing
+egress connection disruption without manual intervention.
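+
+To observe the failover as it happens, you can watch the Egress resources; this
+is just a convenience sketch, and the Node names will differ in your cluster:
+
+```bash
+# Watch Egress resources; the NODE column changes when the IP fails over.
+kubectl get egress -w
+```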
+
+### Configuring static Egress
+
+In this example, we will make Pods in different Namespaces use specific Node
+IPs (or any IPs that are configured on the interfaces of the Nodes) to access
+the external network.
+
+Since the Egress IPs have been configured to the Nodes, we can create `Egress`
+resources with specific IPs directly.
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-prod
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ egressIP: 10.10.0.104 # node-4's IP
+---
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+ name: egress-staging
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: staging
+ egressIP: 10.10.0.105 # node-5's IP
+```
+
+List the `Egress` resources with kubectl. The output shows that `10.10.0.104`
+is discovered on the `node-4` Node, while `10.10.0.105` is discovered on `node-5`.
+
+```bash
+# kubectl get egress
+NAME EGRESSIP AGE NODE
+egress-prod 10.10.0.104 1m node-4
+egress-staging 10.10.0.105 1m node-5
+```
+
+Now, the packets from the Pods in the `prod` Namespace to the external
+network will be redirected to the `node-4` Node and SNATed to `10.10.0.104`
+while the packets from the Pods in the `staging` Namespace to the external
+network will be redirected to the `node-5` Node and SNATed to `10.10.0.105`.
+
+In this configuration, if the `node-4` Node powers off, re-configuring
+`10.10.0.104` to another Node or updating the `egressIP` of `egress-prod` to
+another Node's IP can recover the egress connection. Antrea will detect the
+configuration change and redirect the packets from the Pods in the `prod`
+Namespace to the new Node.
+
+## Configuration options
+
+There are several options that can be configured for Egress, according to your
+use case.
+
+- `egress.exceptCIDRs` - The CIDR ranges to which outbound Pod traffic will not
+ be SNAT'd by Egresses. The option was added in Antrea v1.4.0.
+- `egress.maxEgressIPsPerNode` - The maximum number of Egress IPs that can be
+ assigned to a Node. It's useful when the Node network restricts the number of
+ secondary IPs a Node can have, e.g. in AWS EC2. The configured value must not
+ be greater than 255. The restriction applies to all Nodes in the cluster. If
+ you want to set different capacities for Nodes, the
+ `node.antrea.io/max-egress-ips` annotation of Node objects can be used to
+ specify different values for different Nodes, taking priority over the value
+ configured in the config file. The option and the annotation were added in
+ Antrea v1.11.0.
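+
+As a sketch, the corresponding fragment of `antrea-agent.conf` in the
+`antrea-config` ConfigMap might look as follows; the CIDR and the limit are
+illustrative values, not recommendations:
+
+```yaml
+antrea-agent.conf: |
+  egress:
+    # Outbound Pod traffic to these CIDRs keeps the Pod IP as source (no SNAT).
+    exceptCIDRs: ["10.100.0.0/16"]
+    # Cap the number of Egress IPs assignable to any one Node.
+    maxEgressIPsPerNode: 3
+```
+
+A per-Node override could then be applied with something like
+`kubectl annotate node node-4 node.antrea.io/max-egress-ips=5`.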
+
+## Egress on Cloud
+
+High-Availability Egress requires the Egress IPs to be able to float across
+Nodes. When assigning an Egress IP to a Node, Antrea assumes the responsibility
+of advertising the Egress IPs to the Node network via the ARP or NDP protocols.
+However, cloud networks usually apply SpoofGuard, which prevents the Nodes from
+using any IP that is not configured for them in the cloud's control plane, and
+may not even support multicast and broadcast. These restrictions mean that
+High-Availability Egress is not as readily available on some clouds as it is
+on on-premise networks, and some custom (i.e., cloud-specific) work is required
+in the cloud's control plane to assign the Egress IPs as secondary Node IPs.
+
+### AWS
+
+In Amazon VPC, ARP packets never hit the network, and traffic with an Egress IP
+as its source or destination IP isn't transmitted arbitrarily unless it is
+explicitly authorized (check [AWS VPC Whitepaper](https://docs.aws.amazon.com/whitepapers/latest/logical-separation/vpc-and-accompanying-features.html)
+for more information). To authorize an Egress IP, it must be configured as the
+secondary IP of the primary network interface of the Egress Node instance. You
+can refer to the [AWS doc](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html#assignIP-existing)
+to assign a secondary IP to a network interface.
+
+If you are using static Egress and managing the assignment of Egress IPs
+yourself: you should ensure the Egress IP is assigned as one of the IP
+addresses of the primary network interface of the Egress Node instance via
+Amazon EC2 console or AWS CLI.
+
+If you are using High-Availability Egress, letting Antrea manage the assignment
+of Egress IPs: at the moment Antrea can only assign the Egress IP to an Egress
+Node at the operating system level (i.e., add the IP to the interface), and you
+still need to ensure the Egress IP is assigned to the Node instance via Amazon
+EC2 console or AWS CLI. To automate it, you can build a Kubernetes Operator
+which watches the Egress API, gets the Egress IP and the Egress Node from the
+status fields, and configures the Egress IP as the secondary IP of the primary
+network interface of the Egress Node instance via the
+[AssignPrivateIpAddresses](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AssignPrivateIpAddresses.html)
+API.
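+
+For a one-off or scripted assignment, the same authorization can be done with
+the AWS CLI; the ENI ID and the IP address below are placeholders:
+
+```bash
+# Authorize 10.10.0.11 as a secondary private IP of the Egress Node's primary ENI.
+aws ec2 assign-private-ip-addresses \
+    --network-interface-id eni-0123456789abcdef0 \
+    --private-ip-addresses 10.10.0.11
+```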
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux and "encap"
+mode. The support for Windows and other traffic modes will be added in the
+future.
+
+Prior to Antrea v1.7.0, the Antrea Egress implementation did not work
+with the `strictARP` configuration of `kube-proxy` IPVS mode. The `strictARP`
+configuration is required by some Service load balancing solutions including:
+[Antrea Service external IP management, MetalLB](service-loadbalancer.md#interoperability-with-kube-proxy-ipvs-mode),
+and kube-vip. This meant Antrea Egress could not work together with these solutions
+in a cluster using `kube-proxy` IPVS. The issue was fixed in Antrea v1.7.0.
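+
+For reference, a sketch of the kube-proxy configuration fragment that enables
+`strictARP` in IPVS mode, which is safe to combine with Antrea Egress v1.7.0
+and later:
+
+```yaml
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+mode: ipvs
+ipvs:
+  # Reply only to ARP requests for addresses on the receiving interface.
+  strictARP: true
+```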
diff --git a/content/docs/v1.15.0/docs/eks-installation.md b/content/docs/v1.15.0/docs/eks-installation.md
new file mode 100644
index 00000000..fd8e336a
--- /dev/null
+++ b/content/docs/v1.15.0/docs/eks-installation.md
@@ -0,0 +1,146 @@
+# Deploying Antrea in AWS EKS
+
+This document describes steps to deploy Antrea in `networkPolicyOnly` mode or `encap` mode to an
+AWS EKS cluster.
+
+## Deploying Antrea in `networkPolicyOnly` mode
+
+In `networkPolicyOnly` mode, Antrea implements NetworkPolicy and other services for an EKS cluster,
+while Amazon VPC CNI takes care of IPAM and Pod traffic routing across Nodes. Refer to
+[the design document](design/policy-only.md) for more information about `networkPolicyOnly` mode.
+
+This document assumes you already have an EKS cluster, and have the `KUBECONFIG` environment variable
+point to the kubeconfig file of that cluster. You can follow [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create the cluster.
+
+Starting with the Antrea v0.9.0 release, you should apply `antrea-eks-node-init.yaml` before deploying Antrea.
+This will restart existing Pods (except those in the host network), so that Antrea can also manage them
+(i.e. enforce NetworkPolicies on them) once it is installed.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks-node-init.yml
+```
+
+To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that EKS support was added in release 0.5.0, which means you cannot
+pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`),
+you can deploy Antrea as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-eks.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-eks.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks.yml
+```
+
+Now Antrea should be plugged into the EKS CNI and is ready to enforce NetworkPolicy.
+
+## Deploying Antrea in `encap` mode
+
+In `encap` mode, Antrea acts as the primary CNI of an EKS cluster, and
+implements all Pod networking functionalities, including IPAM and routing across
+Nodes. The major benefit of Antrea as the primary CNI is that it removes the
+limit on Pods per Node imposed by Amazon VPC CNI. For example, the default mode of
+VPC CNI allocates a secondary IP for each Pod, and the maximum number of Pods
+that can be created on a Node is decided by the maximum number of elastic
+network interfaces and secondary IPs per interface that can be attached to an
+EC2 instance type. When Antrea is the primary CNI, Pods are connected to the
+Antrea overlay network and Pod IPs are allocated from the private CIDRs
+configured for an EKS cluster, and so the number of Pods per Node is no longer
+limited by the number of secondary IPs per instance.
+
+Note: as a general limitation when using custom CNIs with EKS, Antrea cannot be
+installed on the EKS control plane Nodes. As a result, the EKS control plane
+cannot initiate a connection to a Pod on the Antrea overlay network when Antrea
+runs in `encap` mode, and so applications that require control-plane-to-Pod
+connections might not work properly. For example, [Kubernetes API aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation),
+[apiserver proxy](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls),
+or [admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers),
+will not work with `encap` mode on EKS when the Services are provided
+by Pods on the overlay network. A workaround is to run such Pods in `hostNetwork`.
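+
+As a sketch of this workaround, such a backend Pod would set `hostNetwork` in
+its spec; the Pod name and image below are placeholders:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-webhook
+spec:
+  hostNetwork: true  # reachable from the EKS control plane
+  containers:
+  - name: webhook
+    image: registry.example.com/my-webhook:latest
+```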
+
+### 1. Create an EKS cluster without Nodes
+
+This guide uses `eksctl` to create an EKS cluster, but you can also follow the
+[EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create an EKS cluster. `eksctl` can be installed following the [eksctl guide](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html).
+
+Run the following `eksctl` command to create a cluster named `antrea-eks-cluster`:
+
+```bash
+eksctl create cluster --name antrea-eks-cluster --without-nodegroup
+```
+
+After the command runs successfully, you should be able to access the cluster
+using `kubectl`, for example:
+
+```bash
+kubectl get node
+```
+
+Note that, as the cluster does not have a node group configured yet, no Nodes
+will be returned by the command.
+
+### 2. Delete Amazon VPC CNI
+
+As Antrea is the primary CNI in `encap` mode, the VPC CNI (`aws-node` DaemonSet)
+installed with the EKS cluster needs to be deleted:
+
+```bash
+kubectl -n kube-system delete daemonset aws-node
+```
+
+### 3. Install Antrea
+
+First, download the Antrea deployment yaml. Note that `encap` mode support for
+EKS was added in release 1.4.0, which means you cannot pick a release older
+than 1.4.0. For any given release `<TAG>` (e.g. `v1.4.0`), get the Antrea
+deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+`encap` mode on EKS requires Antrea's built-in Node IPAM feature to be enabled.
+For information about how to configure Antrea Node IPAM, please refer to
+[Antrea Node IPAM guide](antrea-ipam.md#running-nodeipam-within-antrea-controller).
+
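+As a sketch, enabling Node IPAM in the `antrea-controller.conf` section of the
+deployment yaml could look like the following; the cluster CIDR is an
+assumption and must match the Pod CIDR configured for your EKS cluster:
+
+```yaml
+antrea-controller.conf: |
+  nodeIPAM:
+    # Run the Node IPAM controller within antrea-controller.
+    enableNodeIPAM: true
+    # CIDR ranges to allocate per-Node Pod CIDRs from (illustrative value).
+    clusterCIDRs: [10.2.0.0/16]
+```
+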
+After enabling Antrea Node IPAM in the deployment yaml, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea.yml
+```
+
+### 4. Create a node group for the EKS cluster
+
+For example, you can run the following command to create a node group of two
+Nodes:
+
+```bash
+eksctl create nodegroup --cluster antrea-eks-cluster --nodes 2
+```
+
+### 5. Validate Antrea installation
+
+After the EKS Nodes are successfully created and booted, you can verify that
+Antrea Controller and Agent Pods are running on the Nodes:
+
+```bash
+$ kubectl get pods --namespace kube-system -l app=antrea
+NAME READY STATUS RESTARTS AGE
+antrea-agent-bpj72 2/2 Running 0 40s
+antrea-agent-j2sjz 2/2 Running 0 40s
+antrea-controller-6f7468cbff-5sk4t 1/1 Running 0 43s
+```
diff --git a/content/docs/v1.15.0/docs/external-node.md b/content/docs/v1.15.0/docs/external-node.md
new file mode 100644
index 00000000..50f40e46
--- /dev/null
+++ b/content/docs/v1.15.0/docs/external-node.md
@@ -0,0 +1,664 @@
+# External Node
+
+## Table of Contents
+
+
+- [What is ExternalNode?](#what-is-externalnode)
+- [Prerequisites](#prerequisites)
+- [The ExternalNode resource](#the-externalnode-resource)
+ - [Name and Namespace](#name-and-namespace)
+ - [Interfaces](#interfaces)
+- [Install Antrea Agent on VM](#install-antrea-agent-on-vm)
+ - [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster)
+ - [Installation on Linux VM](#installation-on-linux-vm)
+ - [Prerequisites on Linux VM](#prerequisites-on-linux-vm)
+ - [Installation steps on Linux VM](#installation-steps-on-linux-vm)
+ - [Service Installation](#service-installation)
+ - [Container Installation](#container-installation)
+ - [Installation on Windows VM](#installation-on-windows-vm)
+ - [Prerequisites on Windows VM](#prerequisites-on-windows-vm)
+ - [Installation steps on Windows VM](#installation-steps-on-windows-vm)
+- [VM network configuration](#vm-network-configuration)
+- [RBAC for antrea-agent](#rbac-for-antrea-agent)
+- [Apply Antrea NetworkPolicy to ExternalNode](#apply-antrea-networkpolicy-to-externalnode)
+ - [Antrea NetworkPolicy configuration](#antrea-networkpolicy-configuration)
+ - [Bypass Antrea NetworkPolicy](#bypass-antrea-networkpolicy)
+- [OpenFlow pipeline](#openflow-pipeline)
+ - [Non-IP packet](#non-ip-packet)
+ - [IP packet](#ip-packet)
+- [Limitations](#limitations)
+
+
+## What is ExternalNode?
+
+`ExternalNode` is a CRD API that enables Antrea to manage the network connectivity
+and security on a non-Kubernetes Node (like a virtual machine or a bare-metal
+server). It supports specifying which network interfaces on the external Node
+are expected to be protected with Antrea NetworkPolicy rules. The virtual machine
+or bare-metal server represented by an `ExternalNode` resource can be either
+Linux or Windows. "External Node" will be used to designate such a virtual
+machine or bare-metal server in the rest of this document.
+
+Antrea NetworkPolicies are applied to an external Node by leveraging the
+`ExternalEntity` resource. `antrea-controller` creates an `ExternalEntity`
+resource for each network interface specified in the `ExternalNode` resource.
+
+`antrea-agent` is running on the external Node, and it controls network
+connectivity and security by attaching the network interface(s) to an OVS bridge.
+A [new OpenFlow pipeline](#openflow-pipeline) has been implemented, dedicated to
+the ExternalNode feature.
+
+You may be interested in using this capability in the following scenarios:
+
+- You want to apply Antrea NetworkPolicy to an external Node.
+- You want the same security configurations on external Nodes across all
+  Operating Systems.
+
+This guide demonstrates how to configure `ExternalNode` to achieve the above
+result.
+
+## Prerequisites
+
+`ExternalNode` was introduced in v1.8 as an alpha feature. The feature gate
+`ExternalNode` must be enabled in the `antrea-controller` and `antrea-agent`
+configuration. The configuration for `antrea-controller` is modified in the
+`antrea-config` ConfigMap as follows for the feature to work:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ ExternalNode: true
+```
+
+The `antrea-controller` implements the `antrea` Service, which accepts
+connections from each `antrea-agent` and is an important part of the
+NetworkPolicy implementation. By default, the `antrea` Service has type
+`ClusterIP`. Because external Nodes run outside of the Kubernetes cluster, they
+cannot directly access the `ClusterIP` address. Therefore, the `antrea` Service
+needs to become externally-accessible, by changing its type to `NodePort` or
+`LoadBalancer`.
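+
+For example, a minimal sketch of switching the Service type to `NodePort`,
+assuming Antrea is deployed in the `kube-system` Namespace:
+
+```bash
+kubectl -n kube-system patch service antrea -p '{"spec": {"type": "NodePort"}}'
+```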
+
+Since `antrea-agent` is running on an external Node which is not managed by
+Kubernetes, a configuration file needs to be present on each machine where the
+`antrea-agent` is running, and the path to this file will be provided to the
+`antrea-agent` as a command-line argument. Refer to the [sample configuration](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/conf/antrea-agent.conf)
+to learn the `antrea-agent` configuration options when running on an external Node.
+
+A further [section](#install-antrea-agent-on-vm) will provide detailed steps
+for running the `antrea-agent` on a VM.
+
+## The ExternalNode resource
+
+An example `ExternalNode` resource:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha1
+kind: ExternalNode
+metadata:
+ name: vm1
+ namespace: vm-ns
+ labels:
+ role: db
+spec:
+ interfaces:
+ - ips: [ "172.16.100.3" ]
+ name: ""
+```
+
+Note: **Only one interface is supported for Antrea v1.8**.
+
+### Name and Namespace
+
+The `name` field in an `ExternalNode` uniquely identifies an external Node. The
+`ExternalNode` name is provided to `antrea-agent` via the `NODE_NAME`
+environment variable; if `NODE_NAME` is not set, `antrea-agent` will fall back
+to using its hostname to find the `ExternalNode` resource.
+
+The `ExternalNode` resource is Namespace scoped. The Namespace is provided to
+`antrea-agent` with the option `externalNodeNamespace` in
+[antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/conf/antrea-agent.conf).
+
+```yaml
+externalNodeNamespace: vm-ns
+```
+
+### Interfaces
+
+The `interfaces` field specifies the list of the network interfaces expected to
+be guarded by Antrea NetworkPolicy. At least one interface is required. Interface
+`name` or `ips` is used to identify the target interface. **The field `ips`
+must be provided in the CRD**, but `name` is optional. Multiple IPs on a single
+interface are supported. In the case that multiple `interfaces` are configured,
+`name` must be specified for every `interface`.
+
+`antrea-controller` creates an `ExternalEntity` for each interface whenever an
+`ExternalNode` is created. The created `ExternalEntity` has the following
+characteristics:
+
+- It is configured within the same Namespace as the `ExternalNode`.
+- The `name` is generated according to the following principles:
+ - Use the `ExternalNode` name directly, if there is only one interface, and
+ interface name is not specified.
+ - Use the format `$ExternalNode.name-$hash($interface.name)[:5]` for other
+ cases.
+- The `externalNode` field is set with the `ExternalNode` name.
+- The `owner` is referring to the `ExternalNode` resource.
+- All labels added on `ExternalNode` are copied to the `ExternalEntity`.
+- Each IP address of the interface is added as an endpoint in the `endpoints`
+ list, and the interface name is used as the endpoint name if it is set.
+
+The `ExternalEntity` resource created for the above `ExternalNode` interface
+would look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: ExternalEntity
+metadata:
+ labels:
+ role: db
+ name: vm1
+ namespace: vm-ns
+ ownerReferences:
+ - apiVersion: v1alpha1
+ kind: ExternalNode
+ name: vm1
+ uid: 99b09671-72da-4c64-be93-17185e9781a5
+ resourceVersion: "5513"
+ uid: 5f360f32-7806-4d2d-9f36-80ce7db8de10
+spec:
+ endpoints:
+ - ip: 172.16.100.3
+ externalNode: vm1
+```
+
+## Install Antrea Agent on VM
+
+### Prerequisites on Kubernetes cluster
+
+1. Enable `ExternalNode` feature on the `antrea-controller`, and expose the
+ antrea Service externally (e.g., as a NodePort Service).
+2. Create a Namespace for `antrea-agent`. This document will use `vm-ns` as an
+ example Namespace for illustration.
+
+ ```bash
+ kubectl create ns vm-ns
+ ```
+
+3. Create a ServiceAccount, ClusterRole and ClusterRoleBinding for `antrea-agent`
+ as shown below. If you use a Namespace other than `vm-ns`, you need to update
+ the [VM RBAC manifest](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/vm-agent-rbac.yml) and
+ change `vm-ns` to the right Namespace.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/externalnode/vm-agent-rbac.yml
+ ```
+
+4. Create `antrea-agent.kubeconfig` file for `antrea-agent` to access the K8S
+ API server.
+
+ ```bash
+ CLUSTER_NAME="kubernetes"
+ SERVICE_ACCOUNT="vm-agent"
+ NAMESPACE="vm-ns"
+ KUBECONFIG="antrea-agent.kubeconfig"
+ APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
+ TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath="{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}"|base64 --decode)
+ kubectl config --kubeconfig=$KUBECONFIG set-cluster $CLUSTER_NAME --server=$APISERVER --insecure-skip-tls-verify=true
+ kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN
+ kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$CLUSTER_NAME --cluster=$CLUSTER_NAME --user=antrea-agent
+ kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$CLUSTER_NAME
+ # Copy antrea-agent.kubeconfig to the VM
+ ```
+
+5. Create `antrea-agent.antrea.kubeconfig` file for `antrea-agent` to access
+ the `antrea-controller` API server.
+
+ ```bash
+ # Specify the antrea-controller API server endpoint. Antrea-Controller needs
+ # to be exposed via the Node IP or a public IP that is reachable from the VM
+ ANTREA_API_SERVER="https://172.18.0.1:443"
+ ANTREA_CLUSTER_NAME="antrea"
+ NAMESPACE="vm-ns"
+ KUBECONFIG="antrea-agent.antrea.kubeconfig"
+ TOKEN=$(kubectl -n $NAMESPACE get secrets -o jsonpath="{.items[?(@.metadata.name=='${SERVICE_ACCOUNT}-service-account-token')].data.token}"|base64 --decode)
+ kubectl config --kubeconfig=$KUBECONFIG set-cluster $ANTREA_CLUSTER_NAME --server=$ANTREA_API_SERVER --insecure-skip-tls-verify=true
+ kubectl config --kubeconfig=$KUBECONFIG set-credentials antrea-agent --token=$TOKEN
+ kubectl config --kubeconfig=$KUBECONFIG set-context antrea-agent@$ANTREA_CLUSTER_NAME --cluster=$ANTREA_CLUSTER_NAME --user=antrea-agent
+ kubectl config --kubeconfig=$KUBECONFIG use-context antrea-agent@$ANTREA_CLUSTER_NAME
+ # Copy antrea-agent.antrea.kubeconfig to the VM
+ ```
+
+6. Create an `ExternalNode` resource for the VM.
+
+ After preparing the `ExternalNode` configuration yaml for the VM, we can
+ apply it in the cluster.
+
+ ```bash
+ cat << EOF | kubectl apply -f -
+ apiVersion: crd.antrea.io/v1alpha1
+ kind: ExternalNode
+ metadata:
+ name: vm1
+ namespace: vm-ns
+ labels:
+ role: db
+ spec:
+ interfaces:
+ - ips: [ "172.16.100.3" ]
+ name: ""
+ EOF
+ ```
+
+### Installation on Linux VM
+
+#### Prerequisites on Linux VM
+
+OVS needs to be installed on the VM. For more information about OVS installation
+please refer to the [getting-started guide](getting-started.md#open-vswitch).
+
+#### Installation steps on Linux VM
+
+`Antrea Agent` can be installed either as a native service or in a container.
+
+##### Service Installation
+
+1. Build `antrea-agent` binary in the root of the Antrea code tree and copy the
+ `antrea-agent` binary from the `bin` directory to the Linux VM.
+
+ ```bash
+ make docker-bin
+ ```
+
+2. Copy configuration files to the VM, including [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/conf/antrea-agent.conf),
+ which specifies agent configuration parameters;
+ `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were
+ generated in steps 4 and 5 of [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster).
+
+3. Bootstrap `antrea-agent` using one of these 2 methods:
+
+ 1. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/externalnode/install-vm.sh)
+ as shown below (Ubuntu 18.04 and 20.04, and Red Hat Enterprise Linux 8.4).
+
+ ```bash
+ ./install-vm.sh --ns vm-ns --bin ./antrea-agent --config ./antrea-agent.conf \
+ --kubeconfig ./antrea-agent.kubeconfig \
+ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --nodename vm1
+ ```
+
+ 2. Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file
+ to set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace`
+ to the correct values.
+
+ ```bash
+ AGENT_NAMESPACE="vm-ns"
+ AGENT_CONF_PATH="/etc/antrea"
+ mkdir -p $AGENT_CONF_PATH
+ # Copy antrea-agent kubeconfig files
+ cp ./antrea-agent.kubeconfig $AGENT_CONF_PATH
+ cp ./antrea-agent.antrea.kubeconfig $AGENT_CONF_PATH
+ # Update clientConnection and antreaClientConnection
+ sed -i "s|kubeconfig: |kubeconfig: $AGENT_CONF_PATH/|g" antrea-agent.conf
+ sed -i "s|#externalNodeNamespace: default|externalNodeNamespace: $AGENT_NAMESPACE|g" antrea-agent.conf
+ # Copy antrea-agent configuration file
+ cp ./antrea-agent.conf $AGENT_CONF_PATH
+ ```
+
+ Then create `antrea-agent` service. Below is a sample snippet to start
+ `antrea-agent` as a service on Ubuntu 18.04 or later:
+
+ Note: Environment variable `NODE_NAME` needs to be set in the service
+ configuration, if the VM's hostname is different from the name defined in
+ the `ExternalNode` resource.
+
+ ```bash
+ AGENT_BIN_PATH="/usr/sbin"
+ AGENT_LOG_PATH="/var/log/antrea"
+ mkdir -p $AGENT_BIN_PATH
+ mkdir -p $AGENT_LOG_PATH
+ cat << EOF > /etc/systemd/system/antrea-agent.service
+ [Unit]
+ Description="antrea-agent as a systemd service"
+ After=network.target
+ [Service]
+ Environment="NODE_NAME=vm1"
+ ExecStart=$AGENT_BIN_PATH/antrea-agent \
+ --config=$AGENT_CONF_PATH/antrea-agent.conf \
+ --logtostderr=false \
+ --log_file=$AGENT_LOG_PATH/antrea-agent.log
+ Restart=on-failure
+ [Install]
+ WantedBy=multi-user.target
+ EOF
+
+ sudo systemctl daemon-reload
+ sudo systemctl enable antrea-agent
+ sudo systemctl start antrea-agent
+ ```
+
+##### Container Installation
+
+1. `Docker` is used as the container runtime for Linux VMs. The Docker image can be built from source code
+ or downloaded from the Antrea repository.
+
+ 1. From Source
+
+ Build `antrea-ubuntu` Docker image in the root of the Antrea code tree.
+
+ ```bash
+ make
+ ```
+
+ Note: The image repository name should be `antrea/antrea-ubuntu` and the tag should be `latest`.
+
+ Copy the `antrea/antrea-ubuntu:latest` image to the target VM, following
+ the steps below.
+
+ ```bash
+ # Save it in a tar file
+ docker save -o <tar file name> antrea/antrea-ubuntu:latest
+
+ # Copy this tar file to the target VM.
+ # Then load that image on the target VM.
+ docker load -i <tar file name>
+ ```
+
+ 2. Docker Repository
+
+ Released versions of the `antrea-ubuntu` Docker image can be downloaded from the Antrea `Dockerhub`
+ repository. Pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases). For any given
+ release `<TAG>` (e.g. `v1.9.0`), download the `antrea-ubuntu` Docker image as follows:
+
+ ```bash
+ docker pull antrea/antrea-ubuntu:<TAG>
+ ```
+
+ The [installation script](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/externalnode/install-vm.sh) automatically downloads the specified released
+ version of the `antrea-ubuntu` Docker image to the VM when the installation argument `--antrea-version` is provided.
+ The script also automatically loads that image into Docker. For any given release `<TAG>` (e.g. `v1.9.0`), specify
+ it in the `--antrea-version` argument as follows.
+
+ ```bash
+ --antrea-version <TAG>
+ ```
+
+2. Copy configuration files to the VM, including [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/conf/antrea-agent.conf),
+ which specifies agent configuration parameters;
+ `antrea-agent.antrea.kubeconfig` and `antrea-agent.kubeconfig`, which were
+ generated in steps 4 and 5 of [Prerequisites on Kubernetes cluster](#prerequisites-on-kubernetes-cluster).
+
+3. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/externalnode/install-vm.sh)
+ as shown below (Ubuntu 18.04 and 20.04, and Red Hat Enterprise Linux 8.4).
+
+ ```bash
+ ./install-vm.sh --ns vm-ns --config ./antrea-agent.conf \
+ --kubeconfig ./antrea-agent.kubeconfig \
+ --antrea-kubeconfig ./antrea-agent.antrea.kubeconfig --containerize --antrea-version v1.9.0
+ ```
+
+### Installation on Windows VM
+
+#### Prerequisites on Windows VM
+
+1. Enable the Windows Hyper-V optional feature on Windows VM.
+
+ ```powershell
+ Install-WindowsFeature Hyper-V-Powershell
+ Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart
+ ```
+
+2. OVS needs to be installed on the VM. For more information about OVS
+ installation please refer to the [Antrea Windows documentation](windows.md#1-optional-install-ovs-provided-by-antrea-or-your-own).
+3. Download [nssm](https://nssm.cc/download) which will be used to create the
+ Windows service for `antrea-agent`.
+
+Note: at the moment, only Windows Server 2019 is supported.
+
+#### Installation steps on Windows VM
+
+1. Build the `antrea-agent` binary in the root of the Antrea code tree and copy the
+ `antrea-agent` binary from the `bin` directory to the Windows VM.
+
+ ```bash
+ #! /bin/bash
+ make docker-windows-bin
+ ```
+
+2. Copy [antrea-agent.conf](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/conf/antrea-agent.conf),
+ `antrea-agent.kubeconfig` and `antrea-agent.antrea.kubeconfig` files to the
+ VM. Please refer to the step 2 of [Installation on Linux VM](#installation-steps-on-linux-vm)
+ section for more information.
+
+ ```powershell
+ $WIN_AGENT_CONF_PATH="C:\antrea-agent\conf"
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_CONF_PATH
+ # Copy antrea-agent kubeconfig files
+ Copy-Item .\antrea-agent.kubeconfig $WIN_AGENT_CONF_PATH
+ Copy-Item .\antrea-agent.antrea.kubeconfig $WIN_AGENT_CONF_PATH
+ # Copy antrea-agent configuration file
+ Copy-Item .\antrea-agent.conf $WIN_AGENT_CONF_PATH
+ ```
+
+3. Bootstrap `antrea-agent` using one of these 2 methods:
+
+ 1. Bootstrap `antrea-agent` using the [installation script](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/externalnode/install-vm.ps1)
+ as shown below (only Windows Server 2019 is tested and supported).
+
+ ```powershell
+ .\Install-vm.ps1 -Namespace vm-ns -BinaryPath .\antrea-agent.exe `
+ -ConfigPath .\antrea-agent.conf -KubeConfigPath .\antrea-agent.kubeconfig `
+ -AntreaKubeConfigPath .\antrea-agent.antrea.kubeconfig `
+ -InstallDir C:\antrea-agent -NodeName vm1
+ ```
+
+ 2. Bootstrap `antrea-agent` manually. First edit the `antrea-agent.conf` file to
+ set `clientConnection`, `antreaClientConnection` and `externalNodeNamespace`
+ to the correct values.
+ Configure environment variable `NODE_NAME` if the VM's hostname is different
+ from the name defined in the `ExternalNode` resource.
+
+ ```powershell
+ [Environment]::SetEnvironmentVariable("NODE_NAME", "vm1")
+ [Environment]::SetEnvironmentVariable("NODE_NAME", "vm1", [System.EnvironmentVariableTarget]::Machine)
+ ```
+
+ Then create `antrea-agent` service using nssm. Below is a sample snippet to start
+ `antrea-agent` as a service:
+
+ ```powershell
+ $WIN_AGENT_BIN_PATH="C:\antrea-agent"
+ $WIN_AGENT_LOG_PATH="C:\antrea-agent\logs"
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_BIN_PATH
+ New-Item -ItemType Directory -Force -Path $WIN_AGENT_LOG_PATH
+ Copy-Item .\antrea-agent.exe $WIN_AGENT_BIN_PATH
+ nssm.exe install antrea-agent $WIN_AGENT_BIN_PATH\antrea-agent.exe --config $WIN_AGENT_CONF_PATH\antrea-agent.conf --log_file $WIN_AGENT_LOG_PATH\antrea-agent.log --logtostderr=false
+ nssm.exe start antrea-agent
+ ```
+
+## VM network configuration
+
+`antrea-agent` uses the interface IPs or name to find the network interface on
+the external Node, and then attaches it to the OVS bridge. The network interface
+is attached to OVS as the uplink, and a new OVS internal port is created to take
+over the uplink interface's IP/MAC and routing configurations. On Windows, the
+DNS configurations are also moved from the uplink to the OVS internal port.
+Before attaching the uplink to OVS, the network interface is renamed with a
+"~" suffix, and the OVS internal port is configured with the original name of
+the uplink. As a result, the IP/MAC/routing entries are seen on a network
+interface configured with the same name on the external Node.
+
+The outbound traffic sent from the external Node enters OVS from the internal
+port and is finally output via the uplink, while the inbound traffic enters OVS
+from the uplink and is output to the internal port. IP packets are processed by
+the OpenFlow pipeline, and non-IP packets are forwarded directly.
+
+The following diagram depicts the OVS bridge and traffic forwarding on an
+external Node:
+![Traffic On ExternalNode](assets/traffic_external_node.svg)
+
+## RBAC for antrea-agent
+
+An external Node is regarded as an untrusted entity on the network. To follow
+the least privilege principle, the RBAC configuration for `antrea-agent`
+running on an external Node is as follows:
+
+- Only `get`, `list` and `watch` permissions are given on resource `ExternalNode`
+- Only `update` permission is given on resource `antreaagentinfos`, and `create`
+ permission is moved to `antrea-controller`
+
+For more details please refer to [vm-agent-rbac.yml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/vm-agent-rbac.yml).
+
+`antrea-agent` reports its status by updating the `antreaagentinfo` resource
+which is created with the same name as the `ExternalNode`. `antrea-controller`
+creates an `antreaagentinfo` resource for each new `ExternalNode`, and then
+`antrea-agent` updates it every minute with its latest status. `antreaagentinfo`
+is deleted by `antrea-controller` when the `ExternalNode` is deleted.
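+
+You can check the reported agent status with kubectl; `vm1` is the example
+`ExternalNode` name used throughout this document:
+
+```bash
+kubectl get antreaagentinfos vm1 -o yaml
+```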
+
+## Apply Antrea NetworkPolicy to ExternalNode
+
+### Antrea NetworkPolicy configuration
+
+An Antrea NetworkPolicy is applied to an `ExternalNode` by providing an
+`externalEntitySelector` in the `appliedTo` field. The `ExternalEntity`
+resource is automatically created for each interface of an `ExternalNode`.
+`ExternalEntity` resources are used by `antrea-controller` to process the
+NetworkPolicies, and each `antrea-agent` (including those running on external
+Nodes) receives the appropriate internal AntreaNetworkPolicy objects.
+
+The following types of (from/to) network peers are supported in an Antrea
+NetworkPolicy applied to an external Node:
+
+- ExternalEntities selected by an `externalEntitySelector`
+- An `ipBlock`
+- A FQDN address in an egress rule
+
+The following actions are supported in an Antrea NetworkPolicy applied to an
+external Node:
+
+- Allow
+- Drop
+- Reject
+
+Below is an example of applying an Antrea NetworkPolicy to the external Nodes
+labeled with `role=db` to reject SSH connections from IP "172.16.100.5" or from
+other external Nodes labeled with `role=front`:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: annp1
+ namespace: vm-ns
+spec:
+ priority: 9000.0
+ appliedTo:
+ - externalEntitySelector:
+ matchLabels:
+ role: db
+ ingress:
+ - action: Reject
+ ports:
+ - protocol: TCP
+ port: 22
+ from:
+ - externalEntitySelector:
+ matchLabels:
+ role: front
+ - ipBlock:
+ cidr: 172.16.100.5/32
+```
+
+### Bypass Antrea NetworkPolicy
+
+In some cases, users may want some particular traffic to bypass Antrea
+NetworkPolicy rules on an external Node, e.g., the SSH connection from a special
+host to the external Node. `policyBypassRules` can be added in the agent
+configuration to define traffic that needs to bypass NetworkPolicy enforcement.
+Below is a configuration example:
+
+```yaml
+policyBypassRules:
+ - direction: ingress
+ protocol: tcp
+ cidr: 1.1.1.1/32
+ port: 22
+```
+
+The `direction` can be `ingress` or `egress`. The supported protocols are
+`tcp`, `udp`, `icmp` and `ip`. The `cidr` gives the peer address, which is the
+destination in an `egress` rule, and the source in an `ingress` rule. For `tcp`
+and `udp` protocols, the `port` is required to specify the destination port.
+
+## OpenFlow pipeline
+
+A new OpenFlow pipeline is implemented by `antrea-agent`, dedicated to the
+`ExternalNode` feature.
+
+![OVS pipeline](assets/ovs-pipeline-external-node.svg)
+
+### Non-IP packet
+
+`NonIPTable` is a new OpenFlow table introduced only on external Nodes,
+which is dedicated to all non-IP packets. A non-IP packet is forwarded directly
+between the paired ports, e.g., a non-IP packet entering OVS from the uplink
+interface is output to the paired internal port, and a packet from the internal
+port is output to the uplink.
+
+### IP packet
+
+A new OpenFlow pipeline is set up on external Nodes to process IP packets.
+Antrea NetworkPolicy enforcement is the major function in this new pipeline, and
+the OpenFlow tables used are similar to the Pod pipeline. No L3 routing is
+provided on an external Node, and a simple L2 forwarding policy is implemented.
+OVS connection tracking is used to assist the NetworkPolicy function; as a result
+only the first packet is validated by the OpenFlow entries, and the subsequent
+packets in an accepted connection are allowed directly.
+
+- Egress/Ingress Tables
+
+Table `XgressSecurityClassifierTable` is installed in both `stageEgressSecurity`
+and `stageIngressSecurity`, which is used to install the OpenFlow entries for
+the [`policyBypassRules`](#bypass-antrea-networkpolicy) in the agent configuration.
+
+This is an example of the OpenFlow entry for the above configuration:
+
+```text
+table=IngressSecurityClassifier, priority=200,ct_state=+new+trk,tcp,nw_src=1.1.1.1,tp_dst=22 actions=resubmit(,IngressMetric)
+```
+
+Other OpenFlow tables in `stageEgressSecurity` and `stageIngressSecurity` are
+the same as those installed on a Kubernetes worker Node. For more details about
+these tables, please refer to the general [introduction](design/ovs-pipeline.md)
+of Antrea OVS pipeline.
+
+- L2 Forwarding Tables
+
+`L2ForwardingCalcTable` is used to calculate the expected output port of an IP
+packet. As the paired ports (the internal port and the uplink) always exist on
+the OVS bridge, and both interfaces are configured with the same MAC address, the
+match condition of an OpenFlow entry in `L2ForwardingCalcTable` uses the input
+port number but not the MAC address of the packet. The flow actions are:
+
+1) set flag `OutputToOFPortRegMark`, and
+2) set the peer port as the `TargetOFPortField`, and
+3) force the packet to go to `stageIngressSecurity`.
+
+Below is an example OpenFlow entry in `L2ForwardingCalcTable`:
+
+```text
+table=L2ForwardingCalc, priority=200,ip,in_port=ens224 actions=load:0x1->NXM_NX_REG0[8],load:0x7->NXM_NX_REG1[],resubmit(,IngressSecurityClassifier)
+table=L2ForwardingCalc, priority=200,ip,in_port="ens224~" actions=load:0x1->NXM_NX_REG0[8],load:0x8->NXM_NX_REG1[],resubmit(,IngressSecurityClassifier)
+```
+
+## Limitations
+
+This feature currently supports only one interface per `ExternalNode` object,
+and `ips` must be set in the interface. The support for multiple network
+interfaces will be added in the future.
+
+The `ExternalNode` name must be unique cluster-wide, even though
+`ExternalNode` is a Namespaced resource.
diff --git a/content/docs/v1.15.0/docs/feature-gates.md b/content/docs/v1.15.0/docs/feature-gates.md
new file mode 100644
index 00000000..79008d60
--- /dev/null
+++ b/content/docs/v1.15.0/docs/feature-gates.md
@@ -0,0 +1,440 @@
+# Antrea Feature Gates
+
+This page contains an overview of the various features an administrator can turn on or off for Antrea components. We
+follow the same convention as the
+[Kubernetes feature gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/).
+
+In particular:
+
+* a feature in the Alpha stage will be disabled by default but can be enabled by editing the appropriate `.conf` entry
+ in the Antrea manifest.
+* a feature in the Beta stage will be enabled by default but can be disabled by editing the appropriate `.conf` entry in
+ the Antrea manifest.
+* a feature in the GA stage will be enabled by default and cannot be disabled.
+
+Some features are specific to the Agent, others are specific to the Controller, and some apply to both and should be
+enabled / disabled consistently in both
+`.conf` entries.
+
+To enable / disable a feature, edit the Antrea manifest appropriately. For example, to enable `FeatureGateFoo` on Linux,
+edit the Agent configuration in the
+`antrea-config` ConfigMap as follows:
+
+```yaml
+ antrea-agent.conf: |
+ # FeatureGates is a map of feature names to bools that enable or disable experimental features.
+ featureGates:
+ # Enable the feature gate.
+ FeatureGateFoo: true
+```
+
+## List of Available Features
+
+| Feature Name | Component | Default | Stage | Alpha Release | Beta Release | GA Release | Extra Requirements | Notes |
+| ----------------------------- | ------------------ | ------- | ----- | ------------- | ------------ | ---------- | ------------------ | --------------------------------------------- |
+| `AntreaProxy` | Agent | `true` | GA | v0.8 | v0.11 | v1.14 | Yes | Must be enabled for Windows. |
+| `EndpointSlice` | Agent | `true` | GA | v0.13.0 | v1.11 | v1.14 | Yes | |
+| `TopologyAwareHints` | Agent | `true` | Beta | v1.8 | v1.12 | N/A | Yes | |
+| `CleanupStaleUDPSvcConntrack` | Agent | `false` | Alpha | v1.13 | N/A | N/A | Yes | |
+| `LoadBalancerModeDSR` | Agent | `false` | Alpha | v1.13 | N/A | N/A | Yes | |
+| `AntreaPolicy` | Agent + Controller | `true` | Beta | v0.8 | v1.0 | N/A | No | Agent side config required from v0.9.0+. |
+| `Traceflow` | Agent + Controller | `true` | Beta | v0.8 | v0.11 | N/A | Yes | |
+| `FlowExporter` | Agent | `false` | Alpha | v0.9 | N/A | N/A | Yes | |
+| `NetworkPolicyStats` | Agent + Controller | `true` | Beta | v0.10 | v1.2 | N/A | No | |
+| `NodePortLocal` | Agent | `true` | GA | v0.13 | v1.4 | v1.14 | Yes | Important user-facing change in v1.2.0 |
+| `Egress` | Agent + Controller | `true` | Beta | v1.0 | v1.6 | N/A | Yes | |
+| `NodeIPAM` | Controller | `true` | Beta | v1.4 | v1.12 | N/A | Yes | |
+| `AntreaIPAM` | Agent + Controller | `false` | Alpha | v1.4 | N/A | N/A | Yes | |
+| `Multicast` | Agent + Controller | `true` | Beta | v1.5 | v1.12 | N/A | Yes | |
+| `SecondaryNetwork` | Agent | `false` | Alpha | v1.5 | N/A | N/A | Yes | |
+| `ServiceExternalIP` | Agent + Controller | `false` | Alpha | v1.5 | N/A | N/A | Yes | |
+| `TrafficControl` | Agent | `false` | Alpha | v1.7 | N/A | N/A | No | |
+| `Multicluster` | Agent + Controller | `false` | Alpha | v1.7 | N/A | N/A | Yes | Controller side feature gate added in v1.10.0 |
+| `IPsecCertAuth` | Agent + Controller | `false` | Alpha | v1.7 | N/A | N/A | No | |
+| `ExternalNode` | Agent | `false` | Alpha | v1.8 | N/A | N/A | Yes | |
+| `SupportBundleCollection` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | |
+| `L7NetworkPolicy` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | |
+| `AdminNetworkPolicy` | Controller | `false` | Alpha | v1.13 | N/A | N/A | Yes | |
+| `EgressTrafficShaping` | Agent | `false` | Alpha | v1.14 | N/A | N/A | Yes | OVS meters should be supported |
+| `EgressSeparateSubnet` | Agent | `false` | Alpha | v1.15 | N/A | N/A | No | |
+| `NodeNetworkPolicy` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | |
+| `L7FlowExporter` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | |
+
+## Description and Requirements of Features
+
+### AntreaProxy
+
+`AntreaProxy` implements Service load-balancing for ClusterIP Services as part of the OVS pipeline, as opposed to
+relying on kube-proxy. This only applies to traffic originating from Pods, and destined to ClusterIP Services. In
+particular, it does not apply to NodePort Services. Please note that due to some restrictions on the implementation of
+Services in Antrea, the maximum number of Endpoints that Antrea can support at the moment is 800. If the number of
+Endpoints for a given Service exceeds 800, extra Endpoints will be dropped.
+
+Note that this feature must be enabled for Windows. The Antrea Windows YAML manifest provided as part of releases
+enables this feature by default. If you edit the manifest, make sure you do not disable it, as it is needed for correct
+NetworkPolicy implementation for Pod-to-Service traffic.
+
+Please refer to this [document](antrea-proxy.md) for extra information on AntreaProxy and how it can be configured.
+
+#### Requirements for this Feature
+
+When using the OVS built-in kernel module (which is the most common case), your kernel version must be >= 4.6 (as
+opposed to >= 4.4 without this feature).
+
+### EndpointSlice
+
+`EndpointSlice` enables Service EndpointSlice support in AntreaProxy. The EndpointSlice API was introduced in Kubernetes
+1.16 (alpha), enabled by default in Kubernetes 1.17 (beta), and promoted to GA in Kubernetes 1.21. The EndpointSlice
+feature will have no effect if AntreaProxy is not enabled. Refer to this [link](https://kubernetes.io/docs/tasks/administer-cluster/enabling-endpointslices/)
+for more information about EndpointSlice. If this feature is enabled but the EndpointSlice v1 API is not available
+(Kubernetes version is lower than 1.21), Antrea Agent will log a message and fall back to the Endpoints API.
+
+#### Requirements for this Feature
+
+- EndpointSlice v1 API is available (Kubernetes version >=1.21).
+- Option `antreaProxy.enable` is set to true.
+
+### TopologyAwareHints
+
+`TopologyAwareHints` enables TopologyAwareHints support in AntreaProxy. For AntreaProxy, traffic can be routed to the
+Endpoint which is closer to where it originated when this feature is enabled. Refer to this [link](https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/)
+for more information about TopologyAwareHints.
+
+#### Requirements for this Feature
+
+- Option `antreaProxy.enable` is set to true.
+- EndpointSlice API version v1 is available in Kubernetes.
+
+### LoadBalancerModeDSR
+
+`LoadBalancerModeDSR` allows users to specify the load balancer mode as DSR (Direct Server Return). The load balancer
+mode determines how external traffic destined to LoadBalancerIPs and ExternalIPs of Services is processed when it's load
+balanced across Nodes. In DSR mode, external traffic is never SNAT'd and backend Pods running on Nodes that are not the
+ingress Node can reply to clients directly, bypassing the ingress Node. Therefore, DSR mode can preserve client IP of
+requests, and usually has lower latency and higher throughput. It's only meaningful to use this feature when AntreaProxy
+is enabled and configured to proxy external traffic (proxyAll=true). Refer to this [link](
+antrea-proxy.md#configuring-load-balancer-mode-for-external-traffic) for more information about load balancer mode.
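+
+As a sketch, assuming the per-Service annotation described in the linked
+AntreaProxy document, DSR mode could be selected for a particular Service like
+this (the Service name is a placeholder):
+
+```bash
+kubectl annotate service my-loadbalancer-svc service.antrea.io/load-balancer-mode=dsr
+```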
+
+#### Requirements for this Feature
+
+- Options `antreaProxy.enable` and `antreaProxy.proxyAll` are set to true.
+- IPv4 only.
+- Linux Nodes only.
+- Encap mode only.
+
+### CleanupStaleUDPSvcConntrack
+
+`CleanupStaleUDPSvcConntrack` enables support for cleaning up stale UDP Service conntrack connections in AntreaProxy.
+
+#### Requirements for this Feature
+
+Option `antreaProxy.enable` is set to true.
+
+### AntreaPolicy
+
+`AntreaPolicy` enables Antrea ClusterNetworkPolicy and Antrea NetworkPolicy CRDs to be handled by Antrea
+controller. `ClusterNetworkPolicy` is an Antrea-specific extension to K8s NetworkPolicies, which enables cluster admins
+to define security policies which apply to the entire cluster. `Antrea NetworkPolicy` also complements K8s
+NetworkPolicies by supporting policy priorities and rule actions. Refer to this [document](antrea-network-policy.md) for
+more information.
+
+#### Requirements for this Feature
+
+None
+
+### Traceflow
+
+`Traceflow` enables a CRD API for Antrea that supports generating tracing requests for traffic going through the
+Antrea-managed Pod network. This is useful for troubleshooting connectivity issues, e.g. determining if a NetworkPolicy
+is responsible for traffic drops between two Pods. Refer to this [document](traceflow-guide.md) for more information.
+
+#### Requirements for this Feature
+
+Until Antrea v0.11, this feature could only be used in "encap" mode, with the Geneve tunnel type (default configuration
+for both Linux and Windows). In v0.11, this feature was graduated to Beta (enabled by default) and this requirement was
+lifted.
+
+In order to support cluster Services as the destination for tracing requests, option `antreaProxy.enable` should be set
+to true to enable AntreaProxy.
+
+### Flow Exporter
+
+`Flow Exporter` is a feature that runs as part of the Antrea Agent, and enables network flow visibility into a
+Kubernetes cluster. The Flow Exporter sends IPFIX flow records that are built from connections observed in the Conntrack module
+to a flow collector. Refer to this [document](network-flow-visibility.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux. Windows support will be added in the future.
+
+### NetworkPolicyStats
+
+`NetworkPolicyStats` enables collecting NetworkPolicy statistics from antrea-agents and exposing them through the Antrea
+Stats API, which can be accessed with `kubectl get` commands, e.g. `kubectl get networkpolicystats`. The statistical data
+includes total number of sessions, packets, and bytes allowed or denied by a NetworkPolicy. It is collected
+asynchronously so there may be a delay of up to 1 minute for changes to be reflected in API responses. The feature
+supports K8s NetworkPolicies and Antrea native policies, the latter of which requires
+`AntreaPolicy` to be enabled. Usage examples:
+
+```bash
+# List stats of all K8s NetworkPolicies.
+> kubectl get networkpolicystats -A
+NAMESPACE NAME SESSIONS PACKETS BYTES CREATED AT
+default access-nginx 3 36 5199 2020-09-07T13:19:38Z
+kube-system access-dns 1 12 1221 2020-09-07T13:22:42Z
+
+# List stats of all Antrea ClusterNetworkPolicies.
+> kubectl get antreaclusternetworkpolicystats
+NAME SESSIONS PACKETS BYTES CREATED AT
+cluster-deny-egress 3 36 5199 2020-09-07T13:19:38Z
+cluster-access-dns 10 120 12210 2020-09-07T13:22:42Z
+
+# List stats of all Antrea NetworkPolicies.
+> kubectl get antreanetworkpolicystats -A
+NAMESPACE NAME SESSIONS PACKETS BYTES CREATED AT
+default access-http 3 36 5199 2020-09-07T13:19:38Z
+foo bar 1 12 1221 2020-09-07T13:22:42Z
+
+# List per-rule statistics for Antrea ClusterNetworkPolicy cluster-access-dns.
+# Both Antrea NetworkPolicy and Antrea ClusterNetworkPolicy support per-rule statistics.
+> kubectl get antreaclusternetworkpolicystats cluster-access-dns -o json
+{
+ "apiVersion": "stats.antrea.io/v1alpha1",
+ "kind": "AntreaClusterNetworkPolicyStats",
+ "metadata": {
+ "creationTimestamp": "2022-02-24T09:04:53Z",
+ "name": "cluster-access-dns",
+ "uid": "940cf76a-d836-4e76-b773-d275370b9328"
+ },
+ "ruleTrafficStats": [
+ {
+ "name": "rule1",
+ "trafficStats": {
+ "bytes": 392,
+ "packets": 4,
+ "sessions": 1
+ }
+ },
+ {
+ "name": "rule2",
+ "trafficStats": {
+ "bytes": 111,
+ "packets": 2,
+ "sessions": 1
+ }
+ }
+ ],
+ "trafficStats": {
+ "bytes": 503,
+ "packets": 6,
+ "sessions": 2
+ }
+}
+```
+
+#### Requirements for this Feature
+
+None
+
+### NodePortLocal
+
+`NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent, through which each port of a Service backend
+Pod can be reached from the external network using a port of the Node on which the Pod is running. NPL enables better
+integration with external Load Balancers which can take advantage of the feature: instead of relying on NodePort
+Services implemented by kube-proxy, external Load-Balancers can consume NPL port mappings published by the Antrea
+Agent (as K8s Pod annotations) and load-balance Service traffic directly to backend Pods. Refer to
+this [document](node-port-local.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux with IPv4 addresses. Only TCP & UDP Service ports are
+supported (not SCTP).
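+
+NPL is enabled per Service with an annotation; the Service below is a sketch using the `nodeportlocal.antrea.io/enabled`
+annotation documented in the linked NPL guide:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+  annotations:
+    nodeportlocal.antrea.io/enabled: "true"
+spec:
+  selector:
+    app: nginx
+  ports:
+    - port: 80
+      protocol: TCP
+      targetPort: 80
+```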
+
+### Egress
+
+`Egress` enables a CRD API for Antrea that supports specifying which egress
+(SNAT) IP the traffic from the selected Pods to the external network should use. When a selected Pod accesses the
+external network, the egress traffic will be tunneled to the Node that hosts the egress IP if it's different from the
+Node that the Pod runs on and will be SNATed to the egress IP when leaving that Node. Refer to
+this [document](egress.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux and "encap"
+mode. The support for Windows and other traffic modes will be added in the future.
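+
+As an illustration, an Egress resource pinning the SNAT IP for a set of Pods could be sketched as follows (the selector
+and IP are made up; depending on configuration, `egressIP` may instead be allocated from an `ExternalIPPool`):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+  name: egress-web
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: web
+  egressIP: 10.10.0.8
+```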
+
+### NodeIPAM
+
+`NodeIPAM` runs a Node IPAM Controller similar to the one in Kubernetes that allocates Pod CIDRs for Nodes. Running Node
+IPAM Controller with Antrea is useful in environments where Kubernetes Controller Manager does not run the Node IPAM
+Controller, and Antrea has to handle the CIDR allocation.
+
+#### Requirements for this Feature
+
+This feature requires the Node IPAM Controller to be disabled in Kubernetes Controller Manager. When Antrea and
+Kubernetes both run Node IPAM Controller there is a risk of conflicts in CIDR allocation between the two.
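+
+A sketch of enabling it in the Antrea Controller configuration (the `nodeIPAM` keys shown are assumptions based on
+`antrea-controller.conf`; check the configuration reference for your release):
+
+```yaml
+# antrea-controller.conf (excerpt)
+nodeIPAM:
+  # Enable the integrated Node IPAM controller.
+  enableNodeIPAM: true
+  # CIDR ranges from which per-Node Pod CIDRs are allocated.
+  clusterCIDRs: [10.10.0.0/16]
+```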
+
+### AntreaIPAM
+
+The `AntreaIPAM` feature allocates IP addresses from IPPools. It is required by bridging mode Pods. The bridging mode
+allows flexible control over Pod IP addressing. The desired set of IP ranges, optionally with VLANs, is defined with the
+`IPPool` CRD. An IPPool can be associated (through an annotation) with a Namespace, a Pod, or the PodTemplate of a
+StatefulSet/Deployment. Antrea will then manage IP address assignment for the corresponding Pods according to the
+`IPPool` spec. On a Node, cross-Node/VLAN traffic of AntreaIPAM Pods is sent to the underlay network, and
+forwarded/routed by the underlay network. For more information, please refer to the
+[Antrea IPAM document](antrea-ipam.md#antrea-flexible-ipam).
+
+This feature gate also needs to be enabled to use Antrea for IPAM when configuring secondary network interfaces with
+Multus, in which case Antrea works as an IPAM plugin and allocates IP addresses for Pods' secondary networks, again from
+the configured IPPools of a secondary network. Refer to the
+[secondary network IPAM document](antrea-ipam.md#ipam-for-secondary-network) for more information.
+
+#### Requirements for this Feature
+
+Both bridging mode and secondary network IPAM are supported only on Linux Nodes.
+
+The bridging mode works only with the `system` OVS datapath type, and the `noEncap`,
+`noSNAT` traffic mode. At the moment, it supports only IPv4. The IPs in an IP range without a VLAN must be in the same
+underlay subnet as the Node IPs, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network. IP
+ranges with a VLAN must not overlap with other network subnets, and the underlay network router should provide the
+network connectivity for these VLANs.
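+
+For instance, an `IPPool` and a Namespace annotated to use it could be sketched as follows (the field layout varies
+across IPPool API versions, so treat this as illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: IPPool
+metadata:
+  name: pool1
+spec:
+  ipVersion: 4
+  ipRanges:
+    - start: 10.2.0.12
+      end: 10.2.0.20
+      gateway: 10.2.0.1
+      prefixLength: 24
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: tenant-a
+  annotations:
+    ipam.antrea.io/ippools: pool1
+```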
+
+### Multicast
+
+The `Multicast` feature enables forwarding multicast traffic within the cluster network (i.e., between Pods) and between
+the external network and the cluster network. Refer to this [document](multicast-guide.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is only supported:
+
+* on Linux Nodes
+* for IPv4 traffic
+* in `noEncap` and `encap` traffic modes
+
+### SecondaryNetwork
+
+The `SecondaryNetwork` feature enables support for provisioning secondary network interfaces for Pods, by annotating
+them appropriately.
+
+More documentation will be coming in the future.
+
+#### Requirements for this Feature
+
+At the moment, Antrea can only create secondary network interfaces using SR-IOV VFs on bare-metal Linux Nodes.
+
+### ServiceExternalIP
+
+The `ServiceExternalIP` feature enables a controller which can allocate external IPs for Services with
+type `LoadBalancer`. External IPs are allocated from an
+`ExternalIPPool` resource and each IP gets assigned to a Node selected by the
+`nodeSelector` of the pool automatically. That Node will receive Service traffic destined to that IP and distribute it
+among the backend Endpoints for the Service (through kube-proxy). To enable external IP allocation for a
+`LoadBalancer` Service, you need to annotate the Service with
+`"service.antrea.io/external-ip-pool": ""` and define the appropriate `ExternalIPPool` resource.
+Refer to this [document](service-loadbalancer.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux.
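+
+A sketch of the two resources involved (names and IP ranges are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+  name: service-external-ip-pool
+spec:
+  ipRanges:
+    - start: 10.10.0.2
+      end: 10.10.0.10
+  nodeSelector:
+    matchLabels:
+      network-role: ingress
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: web
+  annotations:
+    service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+  type: LoadBalancer
+  selector:
+    app: web
+  ports:
+    - port: 80
+```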
+
+### TrafficControl
+
+`TrafficControl` enables a CRD API for Antrea that controls and manipulates the transmission of Pod traffic. It allows
+users to mirror or redirect traffic originating from specific Pods or destined for specific Pods to a local network
+device or a remote destination via a tunnel of various types. It enables a monitoring solution to get full visibility
+into network traffic, including both north-south and east-west traffic. Refer to this [document](traffic-control.md)
+for more information.
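+
+As an illustration, mirroring the traffic of selected Pods to an OVS internal port could be sketched like this (see the
+linked document for the exact schema; the port name is made up):
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+  name: mirror-web
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: web
+  direction: Both
+  action: Mirror
+  targetPort:
+    ovsInternal:
+      name: tap0
+```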
+
+### Multicluster
+
+The `Multicluster` feature gate of Antrea Agent enables [Antrea Multi-cluster Gateways](multicluster/user-guide.md#multi-cluster-gateway-configuration)
+which route Multi-cluster Service and Pod traffic through tunnels across clusters, and support for
+[Multi-cluster NetworkPolicy ingress rules](multicluster/user-guide.md#ingress-rule).
+The `Multicluster` feature gate of Antrea Controller enables support for [Multi-cluster NetworkPolicy](multicluster/user-guide.md#multi-cluster-networkpolicy).
+
+#### Requirements for this Feature
+
+Antrea Multi-cluster Controller must be deployed and the cluster must join a Multi-cluster ClusterSet to configure
+Antrea Multi-cluster features. Refer to [Antrea Multi-cluster user guide](multicluster/user-guide.md) for more
+information about Multi-cluster configuration. At the moment, Antrea Multi-cluster supports only IPv4.
+
+### IPsecCertAuth
+
+This feature enables certificate-based authentication for the IPsec tunnel.
+
+### ExternalNode
+
+The `ExternalNode` feature enables the Antrea Agent to run on a virtual machine or a bare-metal server which is not a
+Kubernetes Node, and to enforce Antrea NetworkPolicy for the VM/BM. Antrea Agent supports the `ExternalNode` feature on
+both Linux and Windows.
+
+Refer to this [document](external-node.md) for more information.
+
+#### Requirements for this Feature
+
+Since Antrea Agent is running on an unmanaged VM/BM when this feature is enabled, features designed for K8s Pods are
+disabled. As of now, this feature requires that `AntreaPolicy` and `NetworkPolicyStats` are also enabled.
+
+OVS is required to be installed on the virtual machine or the bare-metal server before running Antrea Agent, and the OVS
+version must be >= 2.13.0.
+
+### SupportBundleCollection
+
+The `SupportBundleCollection` feature enables a CRD API for Antrea to collect support bundle files on any Node or
+ExternalNode, and upload them to a user-defined file server.
+
+More documentation will be coming in the future.
+
+#### Requirements for this Feature
+
+Users should provide a file server when using this feature, and store its authentication credentials in a Secret. The
+Antrea Controller must be configured with the permission to read the Secret.
+
+### L7NetworkPolicy
+
+`L7NetworkPolicy` enables users to protect their applications by specifying how they are allowed to communicate with
+others, taking the application context into account, and provides fine-grained control over network traffic beyond IP,
+transport protocol, and port. Refer to this [document](antrea-l7-network-policy.md) for more information.
+
+#### Requirements for this Feature
+
+This feature is currently only supported for Nodes running Linux, and TX checksum offloading must be disabled. Refer to
+this [document](antrea-l7-network-policy.md#prerequisites) for more information, including how to configure it.
+
+### AdminNetworkPolicy
+
+The `AdminNetworkPolicy` API (which currently includes the AdminNetworkPolicy and BaselineAdminNetworkPolicy objects)
+complements the Antrea-native policies and helps cluster administrators set security postures in a portable manner.
+
+### NodeNetworkPolicy
+
+`NodeNetworkPolicy` allows users to apply ClusterNetworkPolicy to Kubernetes Nodes.
+
+#### Requirements for this Feature
+
+This feature is only supported for Linux Nodes at the moment.
+
+### EgressTrafficShaping
+
+The `EgressTrafficShaping` feature gate of Antrea Agent enables traffic shaping of Egress, which can limit the
+bandwidth for all egress traffic belonging to an Egress. Refer to this [document](egress.md#trafficshaping) for more
+information.
+
+#### Requirements for this Feature
+
+This feature leverages OVS meters to do the actual rate-limiting, therefore this feature requires OVS meters
+to be supported in the datapath.
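+
+For example, an Egress with a bandwidth limit could be sketched as follows (the `bandwidth` field is described in the
+linked Egress document; values are illustrative):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Egress
+metadata:
+  name: egress-limited
+spec:
+  appliedTo:
+    podSelector:
+      matchLabels:
+        app: batch
+  egressIP: 10.10.0.9
+  bandwidth:
+    rate: 800M   # rate limit for all traffic belonging to this Egress
+    burst: 200M  # permitted burst size
+```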
+
+### EgressSeparateSubnet
+
+`EgressSeparateSubnet` allows users to allocate Egress IPs from a subnet different from the default Node subnet.
+Refer to this [document](egress.md#subnetinfo) for more information.
+
+### L7FlowExporter
+
+`L7FlowExporter` enables users to export application-layer flow data using Pod or Namespace annotations.
+Refer to this [document](network-flow-visibility.md#l7-visibility) for more information.
+
+#### Requirements for this Feature
+
+- Linux Nodes only.
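+
+A sketch of requesting L7 flow export for a single Pod via annotation (the `visibility.antrea.io/l7-export` annotation
+and its accepted values are described in the linked document; treat this as an assumption to verify there):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web
+  annotations:
+    # "both" requests export of ingress and egress L7 flow data.
+    visibility.antrea.io/l7-export: "both"
+spec:
+  containers:
+    - name: nginx
+      image: nginx
+```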
diff --git a/content/docs/v1.15.0/docs/getting-started.md b/content/docs/v1.15.0/docs/getting-started.md
new file mode 100644
index 00000000..a0a1ceda
--- /dev/null
+++ b/content/docs/v1.15.0/docs/getting-started.md
@@ -0,0 +1,268 @@
+# Getting Started
+
+Antrea is super easy to install. All the Antrea components are
+containerized and can be installed using the Kubernetes deployment
+manifest.
+
+![antrea-demo](https://user-images.githubusercontent.com/2495809/94325574-e7876500-ff53-11ea-9ecd-6dedef339fac.gif)
+
+## Ensuring requirements are satisfied
+
+### NodeIPAM
+
+Antrea relies on `NodeIPAM` for per-Node CIDR allocation. `NodeIPAM` can run
+within the Kubernetes `kube-controller-manager`, or within the Antrea
+Controller.
+
+#### NodeIPAM within kube-controller-manager
+
+When using `kubeadm` to create the Kubernetes cluster, passing
+`--pod-network-cidr=<CIDR>` to `kubeadm init` will enable
+`NodeIpamController`. Clusters created with kubeadm will always have
+`CNI` plugins enabled. Refer to
+[Creating a cluster with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm)
+for more information about setting up a Kubernetes cluster with `kubeadm`.
+
+When the cluster is deployed by other means, then:
+
+* To enable `NodeIpamController`, `kube-controller-manager` should be started
+with the following flags:
+  - `--cluster-cidr=<CIDR>`
+ - `--allocate-node-cidrs=true`
+
+* To enable `CNI` network plugins, `kubelet` should be started with the
+`--network-plugin=cni` flag.
+
+* To enable masquerading of traffic for Service cluster IP via iptables,
+`kube-proxy` should be started with the `--cluster-cidr=<CIDR>`
+flag.
+
+#### NodeIPAM within Antrea Controller
+
+For further info about running NodeIPAM within Antrea Controller, see
+[Antrea IPAM Capabilities](antrea-ipam.md).
+
+### Open vSwitch
+
+As for OVS, when using the built-in kernel module, kernel version >= 4.6 is
+required. On the other hand, when building the kernel module from OVS sources, OVS
+version >= 2.6.0 is required.
+
+Red Hat Enterprise Linux and CentOS 7.x use kernel 3.10, but as changes to
+OVS kernel modules are regularly backported to these kernel versions, they
+should work with Antrea, starting with version 7.4.
+
+In case a Node does not have a supported OVS module installed,
+you can install it following the instructions at:
+[Installing Open vSwitch](https://docs.openvswitch.org/en/latest/intro/install/).
+Please be aware that the `vport-stt` module is not in the Linux tree and needs to be
+built from source; build and load it manually before enabling STT tunneling.
+
+Some experimental features are disabled by default and may have additional requirements;
+please refer to the [Feature Gates documentation](feature-gates.md) to determine
+whether they apply to you.
+
+Antrea will work out-of-the-box on most popular Operating Systems. Known issues
+encountered when running Antrea on specific OSes are documented
+[here](os-issues.md).
+
+There are also a few network prerequisites which need to be satisfied, and they depend
+on the tunnel mode you choose; please check the [network requirements](./network-requirements.md).
+
+## Installation / Upgrade
+
+To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases). For any
+given release `<TAG>` (e.g. `v0.1.0`), you can deploy Antrea as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+You can use the same `kubectl apply` command to upgrade to a more recent version
+of Antrea.
+
+Antrea supports some experimental features that can be enabled or disabled,
+please refer to the [Feature Gates documentation](feature-gates.md) for more
+information.
+
+### Windows support
+
+If you want to add Windows Nodes to your cluster, please refer to these
+[installation instructions](windows.md).
+
+### ARM support
+
+Starting with v1.0, Antrea supports arm64 and arm/v7 Nodes. The installation
+instructions do not change when some (or all) Linux Nodes in a cluster use an
+ARM architecture: the same deployment YAML can be used, as the
+`antrea/antrea-ubuntu` Docker image is actually a manifest list with support for
+the amd64, arm64 and arm/v7 architectures.
+
+Note that while we do run a subset of the Kubernetes conformance tests on both
+the arm/v7 and arm64 Docker images (using [k3s](https://k3s.io/) as the
+Kubernetes distribution), our testing is not as thorough as for the amd64
+image. However, we do not anticipate any issue.
+
+### Install with Helm
+
+Starting with v1.8, Antrea can be installed and updated with Helm. Please refer
+to these [installation instructions](helm.md).
+
+### Deploying Antrea on a Cluster with Existing CNI
+
+The instructions above only apply when deploying Antrea in a new cluster. If you
+need to migrate your existing cluster from another CNI plugin to Antrea, you
+will need to do the following:
+
+* Delete previous CNI, including all resources (K8s objects, iptables rules,
+interfaces, ...) created by that CNI.
+* Deploy Antrea.
+* Restart all Pods in the CNI network in order for Antrea to set up networking
+for them. This does not apply to Pods which use the Node's network namespace
+(i.e. Pods configured with `hostNetwork: true`). You may use `kubectl drain` to
+drain each Node or reboot all your Nodes.
+
+While this is in progress, networking will be disrupted in your cluster. After
+deleting the previous CNI, existing Pods may not be reachable anymore.
+
+For example, when migrating from Flannel to Antrea, you will need to do the
+following:
+
+1. Delete Flannel with `kubectl delete -f <path_to_flannel_yml>`.
+2. Delete the Flannel bridge and tunnel interface with `ip link delete flannel.1 &&
+ip link delete cni0` **on each Node**.
+3. Ensure [requirements](#ensuring-requirements-are-satisfied) are satisfied.
+4. [Deploy Antrea](#installation--upgrade).
+5. Drain and uncordon Nodes one-by-one. For each Node, run `kubectl drain
+<node name> --ignore-daemonsets && kubectl uncordon <node name>`. The
+`--ignore-daemonsets` flag will ignore DaemonSet-managed Pods, including the
+Antrea Agent Pods. If you have any other DaemonSet-managed Pods (besides the
+Antrea ones and system ones such as kube-proxy), they will be ignored and will
+not be drained from the Node. Refer to the [Kubernetes
+documentation](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/)
+for more information. Alternatively, you can also restart all the Pods yourself,
+or simply reboot your Nodes.
+
+To build the image locally, you can follow the instructions in the [Contributor
+Guide](../CONTRIBUTING.md#building-and-testing-your-change).
+
+### Deploying Antrea in Kind
+
+To deploy Antrea in a [Kind](https://github.com/kubernetes-sigs/kind) cluster,
+please refer to this [guide](kind.md).
+
+### Deploying Antrea in Minikube
+
+To deploy Antrea in a [Minikube](https://github.com/kubernetes/minikube) cluster,
+please refer to this [guide](minikube.md).
+
+### Deploying Antrea in Rancher Managed Cluster
+
+To deploy Antrea in a [Rancher](https://github.com/rancher/rancher) managed cluster,
+please refer to this [guide](kubernetes-installers.md#rancher).
+
+### Deploying Antrea in AKS, EKS, and GKE
+
+Antrea can work with cloud managed Kubernetes services, and can be deployed to
+AKS, EKS, and GKE clusters.
+
+* To deploy Antrea to an AKS or an AKS Engine cluster, please refer to [the AKS installation guide](aks-installation.md).
+* To deploy Antrea to an EKS cluster, please refer to [the EKS installation guide](eks-installation.md).
+* To deploy Antrea to a GKE cluster, please refer to [the GKE installation guide](gke-installation.md).
+
+### Deploying Antrea with Custom Certificates
+
+By default, Antrea generates the certificates needed for itself to run. To
+provide your own certificates, please refer to [Securing Control Plane](securing-control-plane.md).
+
+### Antctl: Installation and Usage
+
+To use antctl, the Antrea command-line tool, please refer to [this guide](antctl.md).
+
+## Features
+
+### Antrea Network Policy
+
+Besides Kubernetes NetworkPolicy, Antrea also implements its own Network Policy
+CRDs, which provide advanced features including: policy priority, tiering, deny
+action, external entity, and policy statistics. For more information on usage of
+Antrea Network Policies, refer to the [Antrea Network Policy document](antrea-network-policy.md).
+
+### Egress
+
+Antrea supports specifying which egress (SNAT) IP the traffic from the selected
+Pods to the external network should use and which Node the traffic should leave
+the cluster from. For more information, refer to the [Egress document](egress.md).
+
+### Network Flow Visibility
+
+Antrea supports exporting network flow information using IPFIX, and provides a
+reference cookbook on how to visualize the exported network flows using Elastic
+Stack and Kibana dashboards. For more information, refer to the [network flow
+visibility document](network-flow-visibility.md).
+
+### NoEncap and Hybrid Traffic Modes
+
+Besides the default `Encap` mode, in which Pod traffic across Nodes will be
+encapsulated and sent over tunnels, Antrea also supports `NoEncap` and `Hybrid`
+traffic modes. In `NoEncap` mode, Antrea does not encapsulate Pod traffic, but
+relies on the Node network to route the traffic across Nodes. In `Hybrid` mode,
+Antrea encapsulates Pod traffic when the source Node and the destination Node
+are in different subnets, but does not encapsulate when the source and the
+destination Nodes are in the same subnet. Refer to [this guide](noencap-hybrid-modes.md)
+to learn how to configure Antrea with `NoEncap` or `Hybrid` mode.
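+
+A sketch of selecting the mode in the Agent configuration (`trafficEncapMode` is the `antrea-agent.conf` key described
+in the linked guide):
+
+```yaml
+# antrea-agent.conf (excerpt)
+trafficEncapMode: noEncap  # one of: encap, noEncap, hybrid, networkPolicyOnly
+```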
+
+### Antrea Web UI
+
+Antrea comes with a web UI, which can show runtime information of Antrea
+components and perform Antrea Traceflow operations. Please refer to the [Antrea
+UI repository](https://github.com/antrea-io/antrea-ui) for installation
+instructions and more information.
+
+### OVS Hardware Offload
+
+Antrea can offload OVS flow processing to the NICs that support OVS kernel
+hardware offload using TC. The hardware offload can improve OVS performance
+significantly. For more information on how to configure OVS offload, refer to
+the [OVS hardware offload guide](ovs-offload.md).
+
+### Prometheus Metrics
+
+Antrea supports exporting metrics to Prometheus. For more information, refer to
+the [Prometheus integration document](prometheus-integration.md).
+
+### Support for Services of type LoadBalancer
+
+By leveraging Antrea's Service external IP management feature or configuring
+MetalLB to work with Antrea, Services of type LoadBalancer can be supported
+without requiring an external LoadBalancer. To learn more information, please
+refer to the [Service LoadBalancer document](service-loadbalancer.md).
+
+### Traceflow
+
+Traceflow is a very useful network diagnosis feature in Antrea. It can trace
+and report the forwarding path of a specified packet in the Antrea network.
+For usage of this feature, refer to the [Traceflow user guide](traceflow-guide.md).
+
+### Traffic Encryption
+
+Antrea supports encrypting traffic between Linux Nodes using IPsec or WireGuard.
+To deploy Antrea with traffic encryption enabled, please refer to [this guide](traffic-encryption.md).
+
+### Antrea Multi-cluster
+
+Antrea Multi-cluster implements Multi-cluster Service API, which allows users to
+create multi-cluster Services that can be accessed across clusters in a
+ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy
+replication. Multi-cluster admins can define ClusterNetworkPolicies to be
+replicated across the entire ClusterSet, and enforced in all member clusters.
+To learn more information about Antrea Multi-cluster, please refer to the
+[Antrea Multi-cluster user guide](multicluster/user-guide.md).
diff --git a/content/docs/v1.15.0/docs/gke-installation.md b/content/docs/v1.15.0/docs/gke-installation.md
new file mode 100644
index 00000000..01c4e78a
--- /dev/null
+++ b/content/docs/v1.15.0/docs/gke-installation.md
@@ -0,0 +1,133 @@
+# Deploying Antrea on a GKE cluster
+
+We support running Antrea inside of GKE clusters on Ubuntu Nodes. Antrea operates
+in NetworkPolicy-only mode, in which no encapsulation is required for any kind of traffic
+(intra-Node, inter-Node, etc.) and NetworkPolicies are enforced using OVS. Antrea is supported
+with VPC-native mode both enabled and disabled.
+
+## GKE Prerequisites
+
+1. Install the Google Cloud SDK (gcloud). Refer to [Google Cloud SDK installation guide](https://cloud.google.com/sdk/install)
+
+ ```bash
+ curl https://sdk.cloud.google.com | bash
+ ```
+
+2. Make sure you are authenticated to use the Google Cloud API
+
+ ```bash
+ export ADMIN_USER=user@email.com
+ gcloud auth login
+ ```
+
+3. Create a project or use an existing one
+
+ ```bash
+ export GKE_PROJECT=gke-clusters
+ gcloud projects create $GKE_PROJECT
+ ```
+
+## Creating the cluster
+
+You can use any method to create a GKE cluster (gcloud SDK, gcloud Console, etc). The example
+given here is using the Google Cloud SDK.
+
+**Note:** Antrea is supported on Ubuntu Nodes only for GKE clusters. When creating the cluster, you
+ must use the default network provider and must *not* enable "Dataplane V2".
+
+1. Create a GKE cluster
+
+ ```bash
+ export GKE_ZONE="us-west1"
+ export GKE_HOST="UBUNTU"
+ gcloud container --project $GKE_PROJECT clusters create cluster1 --image-type $GKE_HOST \
+ --zone $GKE_ZONE --enable-ip-alias
+ ```
+
+2. Access your cluster
+
+ ```bash
+ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+    gke-cluster1-default-pool-93d7da1c-61z4   Ready    <none>   3m11s   v1.25.7-gke.1000
+    gke-cluster1-default-pool-93d7da1c-rkbm   Ready    <none>   3m9s    v1.25.7-gke.1000
+ ```
+
+3. Create a cluster-admin ClusterRoleBinding
+
+ ```bash
+ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user user@email.com
+ ```
+
+    **Note:** To create a ClusterRoleBinding, the user must have the `container.clusterRoleBindings.create` permission.
+Use the following command to grant it, if the previous command fails due to a permission error. Only a cluster admin can
+assign this permission.
+
+ ```bash
+ gcloud projects add-iam-policy-binding $GKE_PROJECT --member user:user@email.com --role roles/container.admin
+ ```
+
+## Deploying Antrea
+
+1. Prepare the Cluster Nodes
+
+    Deploy the `antrea-node-init` DaemonSet to enable `kubelet` to operate in CNI mode.
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke-node-init.yml
+ ```
+
+2. Deploy Antrea
+
+ To deploy a released version of Antrea, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases).
+Note that GKE support was added in release 0.5.0, which means you cannot
+pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`),
+you can deploy Antrea as follows:
+
+ ```bash
+    kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-gke.yml
+ ```
+
+ To deploy the latest version of Antrea (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-gke.yml):
+
+ ```bash
+ kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-gke.yml
+ ```
+
+    The command will deploy a single replica of the Antrea Controller to the GKE
+cluster and deploy the Antrea Agent to every Node. After a successful deployment,
+you should be able to see these Pods running in your cluster:
+
+ ```bash
+ $ kubectl get pods --namespace kube-system -l app=antrea -o wide
+ NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+    antrea-agent-24vwr                  2/2     Running   0     46s   10.138.15.209   gke-cluster1-default-pool-93d7da1c-rkbm   <none>   <none>
+    antrea-agent-7dlcp                  2/2     Running   0     46s   10.138.15.206   gke-cluster1-default-pool-9ba12cea-wjzn   <none>   <none>
+    antrea-controller-5f9985c59-5crt6   1/1     Running   0     46s   10.138.15.209   gke-cluster1-default-pool-93d7da1c-rkbm   <none>   <none>
+ ```
+
+3. Restart remaining Pods
+
+ Once Antrea is up and running, restart all Pods in all Namespaces (kube-system, gmp-system, etc) so they can be managed by Antrea.
+
+ ```bash
+    $ for ns in $(kubectl get ns -o=jsonpath='{.items[*].metadata.name}' --no-headers=true); do \
+    pods=$(kubectl get pods -n $ns -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{ print $1 }'); \
+    [ -z "$pods" ] || kubectl delete pods -n $ns $pods; done
+ pod "alertmanager-0" deleted
+ pod "collector-4sfvd" deleted
+ pod "collector-gtlxf" deleted
+ pod "gmp-operator-67c4678f5c-ffktp" deleted
+ pod "rule-evaluator-85b8bb96dc-trnqj" deleted
+ pod "event-exporter-gke-7bf6c99dcb-4r62c" deleted
+ pod "konnectivity-agent-autoscaler-6dfdb49cf7-hfv9g" deleted
+ pod "konnectivity-agent-cc655669b-2cjc9" deleted
+ pod "konnectivity-agent-cc655669b-d79vf" deleted
+ pod "kube-dns-5bfd847c64-ksllw" deleted
+ pod "kube-dns-5bfd847c64-qv9tq" deleted
+ pod "kube-dns-autoscaler-84b8db4dc7-2pb2b" deleted
+ pod "l7-default-backend-64679d9c86-q69lm" deleted
+ pod "metrics-server-v0.5.2-6bf74b5d5f-22gqq" deleted
+ ```
diff --git a/content/docs/v1.15.0/docs/helm.md b/content/docs/v1.15.0/docs/helm.md
new file mode 100644
index 00000000..fab3ca9a
--- /dev/null
+++ b/content/docs/v1.15.0/docs/helm.md
@@ -0,0 +1,129 @@
+# Installing Antrea with Helm
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Charts](#charts)
+ - [Antrea chart](#antrea-chart)
+ - [Installation](#installation)
+ - [Upgrade](#upgrade)
+ - [An important note on CRDs](#an-important-note-on-crds)
+ - [Flow Aggregator chart](#flow-aggregator-chart)
+ - [Installation](#installation-1)
+ - [Upgrade](#upgrade-1)
+ - [Theia chart](#theia-chart)
+
+
+Starting with Antrea v1.8, Antrea can be installed and updated using
+[Helm](https://helm.sh/).
+
+We provide the following Helm charts:
+
+* `antrea/antrea`: the Antrea network plugin.
+* `antrea/flow-aggregator`: the Antrea Flow Aggregator; see
+ [here](network-flow-visibility.md) for more details.
+* `antrea/theia`: Theia, the Antrea network observability solution; refer to the
+ [Theia](https://github.com/antrea-io/theia) sub-project for more details.
+
+Note that these charts are the same charts that we use to generate the YAML
+manifests for the `kubectl apply` installation method.
+
+Helm installation is currently considered Alpha.
+
+## Prerequisites
+
+* Ensure that the necessary
+ [requirements](getting-started.md#ensuring-requirements-are-satisfied) for
+ running Antrea are met.
+* Ensure that Helm 3 is [installed](https://helm.sh/docs/intro/install/). We
+ recommend using a recent version of Helm if possible. Refer to the [Helm
+ documentation](https://helm.sh/docs/topics/version_skew/) for compatibility
+ between Helm and Kubernetes versions.
+* Add the Antrea Helm chart repository:
+
+ ```bash
+ helm repo add antrea https://charts.antrea.io
+ helm repo update
+ ```
+
+## Charts
+
+### Antrea chart
+
+#### Installation
+
+To install the Antrea Helm chart, use the following command:
+
+```bash
+helm install antrea antrea/antrea --namespace kube-system
+```
+
+This will install the latest available version of Antrea. You can also install a
+specific version of Antrea (>= v1.8.0) with `--version <TAG>`.
+
+#### Upgrade
+
+To upgrade the Antrea Helm chart, use the following commands:
+
+```bash
+# Upgrading CRDs requires an extra step; see explanation below
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml
+helm upgrade antrea antrea/antrea --namespace kube-system --version <TAG>
+```
+
+#### An important note on CRDs
+
+Helm 3 introduces "special treatment" for
+[CRDs](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/),
+with the ability to place CRD definitions (as plain YAML, not templated) in a
+special crds/ directory. When CRDs are defined this way, they will be installed
+before other resources (in case these other resources include CRs corresponding
+to these CRDs). CRDs defined this way will also never be deleted (to avoid
+accidental deletion of user-defined CRs) and will also never be upgraded (in
+case the chart author didn't ensure that the upgrade was
+backwards-compatible). The rationale for all of this is described in details in
+this [Helm community
+document](https://github.com/helm/community/blob/main/hips/hip-0011.md).
+
+Even though Antrea follows a [strict versioning policy](versioning.md), which
+reduces the likelihood of a serious issue when upgrading Antrea, we have decided
+to follow Helm best practices when it comes to CRDs. It means that an extra step
+is required for upgrading the chart:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-crds.yml
+```
+
+When upgrading CRDs in production, it is recommended to make a backup of your
+Custom Resources (CRs) first.
+
+### Flow Aggregator chart
+
+The Flow Aggregator is on the same release schedule as Antrea. Please ensure
+that you use the same released version for the Flow Aggregator chart as for the
+Antrea chart.
+
+#### Installation
+
+To install the Flow Aggregator Helm chart, use the following command:
+
+```bash
+helm install flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --create-namespace
+```
+
+This will install the latest available version of the Flow Aggregator. You can
+also install a specific version (>= v1.8.0) with `--version <TAG>`.
+
+#### Upgrade
+
+To upgrade the Flow Aggregator Helm chart, use the following command:
+
+```bash
+helm upgrade flow-aggregator antrea/flow-aggregator --namespace flow-aggregator --version <TAG>
+```
+
+### Theia chart
+
+Refer to the [Theia
+documentation](https://github.com/antrea-io/theia/blob/main/docs/getting-started.md).
diff --git a/content/docs/v1.15.0/docs/kind.md b/content/docs/v1.15.0/docs/kind.md
new file mode 100644
index 00000000..03ca92bd
--- /dev/null
+++ b/content/docs/v1.15.0/docs/kind.md
@@ -0,0 +1,189 @@
+# Deploying Antrea on a Kind cluster
+
+
+- [Create a Kind cluster and deploy Antrea in a few seconds](#create-a-kind-cluster-and-deploy-antrea-in-a-few-seconds)
+ - [Using the kind-setup.sh script](#using-the-kind-setupsh-script)
+ - [As an Antrea developer](#as-an-antrea-developer)
+ - [Create a Kind cluster manually](#create-a-kind-cluster-manually)
+ - [Deploy Antrea to your Kind cluster](#deploy-antrea-to-your-kind-cluster)
+ - [Deploy a local build of Antrea to your Kind cluster (for developers)](#deploy-a-local-build-of-antrea-to-your-kind-cluster-for-developers)
+ - [Check that everything is working](#check-that-everything-is-working)
+- [Run the Antrea e2e tests](#run-the-antrea-e2e-tests)
+- [FAQ](#faq)
+ - [Antrea Agents are not starting on macOS, what could it be?](#antrea-agents-are-not-starting-on-macos-what-could-it-be)
+ - [Antrea Agents are not starting on Windows, what could it be?](#antrea-agents-are-not-starting-on-windows-what-could-it-be)
+
+
+We support running Antrea inside of Kind clusters on both Linux and macOS
+hosts. On macOS, support for Kind requires the use of Docker Desktop, instead of
+the legacy [Docker
+Toolbox](https://docs.docker.com/docker-for-mac/docker-toolbox/).
+
+To deploy a released version of Antrea on an existing Kind cluster, you can
+simply use the same command as for other types of clusters:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+## Create a Kind cluster and deploy Antrea in a few seconds
+
+### Using the kind-setup.sh script
+
+To create a simple two worker Node cluster and deploy a released version of
+Antrea, use:
+
+```bash
+./ci/kind/kind-setup.sh create <CLUSTER_NAME>
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+Or, for the latest version of Antrea, use:
+
+```bash
+./ci/kind/kind-setup.sh create <CLUSTER_NAME>
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+The `kind-setup.sh` script may execute `kubectl` commands to set up the cluster,
+and requires that `kubectl` be present in your `PATH`.
+
+To specify a different number of worker Nodes, use `--num-workers <NUM_WORKERS>`. To
+specify the IP family of the kind cluster, use `--ip-family <ipv4|ipv6|dual>`.
+To specify the Kubernetes version of the kind cluster, use
+`--k8s-version <K8S_VERSION>`. To specify the Service Cluster IP range, use
+`--service-cidr <SERVICE_CIDR>`.
+
+If you want to pre-load the Antrea image in each Node (to avoid having each Node
+pull from the registry), you can use:
+
+```bash
+docker pull projects.registry.vmware.com/antrea/antrea-ubuntu:<TAG>
+./ci/kind/kind-setup.sh --images projects.registry.vmware.com/antrea/antrea-ubuntu:<TAG> create <CLUSTER_NAME>
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+`kind-setup.sh` is a convenience script typically used by developers for
+testing. For more information on how to create a Kind cluster manually and
+deploy Antrea, read the following sections.
+
+#### As an Antrea developer
+
+If you are an Antrea developer and you need to deploy Antrea with your local
+changes and locally built Antrea image, use:
+
+```bash
+./ci/kind/kind-setup.sh --antrea-cni create <CLUSTER_NAME>
+```
+
+`kind-setup.sh` allows developers to specify the number of worker Nodes, the
+docker bridge networks/subnets connected to the worker Nodes (to test Antrea in
+different encap modes), and a list of docker images to be pre-loaded in each
+Node. For more information on usage, run:
+
+```bash
+./ci/kind/kind-setup.sh help
+```
+
+As a developer, you will usually want to provide the `--antrea-cni` flag, so that
+`kind-setup.sh` can generate the appropriate Antrea YAML manifest for you on
+the fly, and apply it to the created cluster directly.
+
+### Create a Kind cluster manually
+
+The only requirement is to use a Kind configuration file which disables the
+Kubernetes default CNI (`kubenet`). For example, your configuration file may
+look like this:
+
+```yaml
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+networking:
+ disableDefaultCNI: true
+ podSubnet: 10.10.0.0/16
+nodes:
+- role: control-plane
+- role: worker
+- role: worker
+```
+
+Once you have created your configuration file (let's call it `kind-config.yml`),
+create your cluster with:
+
+```bash
+kind create cluster --config kind-config.yml
+```
+
+### Deploy Antrea to your Kind cluster
+
+```bash
+# pull the Antrea Docker image
+docker pull projects.registry.vmware.com/antrea/antrea-ubuntu:<TAG>
+# load the Antrea Docker image in the Nodes
+kind load docker-image projects.registry.vmware.com/antrea/antrea-ubuntu:<TAG>
+# deploy Antrea
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+### Deploy a local build of Antrea to your Kind cluster (for developers)
+
+These instructions assume that you have built the Antrea Docker image locally
+(e.g. by running `make` from the root of the repository).
+
+```bash
+# load the Antrea Docker image in the Nodes
+kind load docker-image antrea/antrea-ubuntu:latest
+# deploy Antrea
+kubectl apply -f build/yamls/antrea.yml
+```
+
+### Check that everything is working
+
+After a few seconds you should be able to observe the following when running
+`kubectl get -n kube-system pods -l app=antrea`:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+antrea-agent-dgsfs 2/2 Running 0 8m56s
+antrea-agent-nzsmx 2/2 Running 0 8m56s
+antrea-agent-zsztq 2/2 Running 0 8m56s
+antrea-controller-775f4d79f8-6tksp 1/1 Running 0 8m56s
+```
+
+## Run the Antrea e2e tests
+
+To run the Antrea e2e test suite on your Kind cluster, please refer to [this
+document](../test/e2e/README.md#running-the-e2e-tests-on-a-kind-cluster).
+
+## FAQ
+
+### Antrea Agents are not starting on macOS, what could it be?
+
+Some older versions of Docker Desktop did not include all the required Kernel
+modules to run the Antrea Agent, and in particular the `openvswitch` Kernel
+module. See [this issue](https://github.com/docker/for-mac/issues/4660) for more
+information. This issue does not exist with recent Docker Desktop versions (`>=
+2.5`).
+
+### Antrea Agents are not starting on Windows, what could it be?
+
+At this time, we do not officially support Antrea for Kind clusters running on
+Windows hosts. In recent Docker Desktop versions, the default way of running
+Linux containers on Windows is by using the [Docker Desktop WSL 2
+backend](https://docs.docker.com/desktop/windows/wsl/). However, the Linux
+Kernel used by default in WSL 2 does not include all the required Kernel modules
+to run the Antrea Agent, and in particular the `openvswitch` Kernel
+module. There are 2 different ways to work around this issue, which we will not
+detail in this document:
+
+* use the Hyper-V backend for Docker Desktop
+* build a custom Kernel for WSL, with the required Kernel configuration:
+
+ ```text
+ CONFIG_NETFILTER_XT_MATCH_RECENT=y
+ CONFIG_NETFILTER_XT_TARGET_CT=y
+ CONFIG_OPENVSWITCH=y
+ CONFIG_OPENVSWITCH_GRE=y
+ CONFIG_OPENVSWITCH_VXLAN=y
+ CONFIG_OPENVSWITCH_GENEVE=y
+ ```
diff --git a/content/docs/v1.15.0/docs/kubernetes-installers.md b/content/docs/v1.15.0/docs/kubernetes-installers.md
new file mode 100644
index 00000000..45ab830c
--- /dev/null
+++ b/content/docs/v1.15.0/docs/kubernetes-installers.md
@@ -0,0 +1,148 @@
+# K8s Installers and Distributions
+
+## Tested installers and distributions
+
+The table below is not comprehensive. Antrea should work with most K8s
+installers and distributions. The table refers to specific version combinations
+which are known to work and have been tested, but support is not limited to that
+list. Each Antrea version supports [multiple K8s minor versions](versioning.md#supported-k8s-versions),
+and installers / distributions based on any one of these K8s versions should
+work with that Antrea version.
+
+| Antrea Version | Installer / Distribution | Cloud Infra | Node Info | Node Size | Conformance Results | Comments |
+|-|-|-|-|-|-|-|
+| v1.0.0 | Kubeadm v1.21.0 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) amd64, docker://20.10.6 | t3.medium | | |
+| - | - | - | Windows Server 2019 Datacenter (10.0.17763.1817), docker://19.3.14 | t3.medium | | |
+| - | - | - | Ubuntu 20.04.2 LTS (5.4.0-1045-aws) arm64, docker://20.10.6 | t3.medium | | |
+| - | Cluster API Provider vSphere (CAPV), K8s 1.19.1 | VMC on AWS, vSphere 7.0.1 | Ubuntu 18.04, containerd | 2 vCPUs, 8GB RAM | | Antrea CI |
+| - | K3s v1.19.8+k3s1 | [OSUOSL] | Ubuntu 20.04.1 LTS (5.4.0-66-generic) arm64, containerd://1.4.3-k3s3 | 2 vCPUs, 4GB RAM | | Antrea CI, cluster installed with [k3sup] 0.9.13 |
+| - | Kops v1.20, K8s v1.20.5 | AWS EC2 | Ubuntu 20.04.2 LTS (5.4.0-1041-aws) amd64, containerd://1.4.4 | t3.medium | [results tarball](http://downloads.antrea.io/artifacts/sonobuoy-conformance/kops_202104212218_sonobuoy_bf0f8e77-c9df-472a-85e2-65e456cf4d83.tar.gz) | |
+| - | EKS, K8s v1.17.12 | AWS | AmazonLinux2, docker | t3.medium | | Antrea CI |
+| - | GKE, K8s v1.19.8-gke.1600 | GCP | Ubuntu 18.04, docker | e2-standard-4 | | Antrea CI |
+| - | AKS, K8s v1.18.14 | Azure | Ubuntu 18.04, moby | Standard_DS2_v2 | | Antrea CI |
+| - | AKS, K8s v1.19.9 | Azure | Ubuntu 18.04, containerd | Standard_DS2_v2 | | Antrea CI |
+| - | Kind v0.9.0, K8s v1.19.1 | N/A | Ubuntu 20.10, containerd://1.4.0 | N/A | | [Requirements for using Antrea on Kind](kind.md) |
+| - | Minikube v1.25.0 | N/A | Ubuntu 20.04.2 LTS (5.10.76-linuxkit) arm64, docker://20.10.12 | 8GB RAM | | |
+| v1.10.0 | Rancher v2.7.0, K8s v1.24.10 | vSphere | Ubuntu 22.04.1 LTS (5.15.0-57-generic) amd64, docker://20.10.21 | 4 vCPUs, 4GB RAM | | |
+| v1.11.0 | Kubeadm v1.20.2 | N/A | openEuler 22.03 LTS, docker://18.09.0 | 10GB RAM | | |
+| v1.11.0 | Kubeadm v1.25.5 | N/A | openEuler 22.03 LTS, containerd://1.6.18 | 10GB RAM | | |
+| v1.15.0 | Talos v1.5.5 | Docker provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea v1.15 or above |
+| - | - | QEMU provisioner | Talos | 2 vCPUs, 2.1 GB RAM | Pass | Requires Antrea v1.15 or above |
+
+## Installer-specific instructions
+
+### Kubeadm
+
+When running `kubeadm init` to create a cluster, you need to provide a range of
+IP addresses for the Pod network using `--pod-network-cidr`. By default, a /24
+subnet will be allocated out of the CIDR to every Node which joins the cluster,
+so make sure you use a large enough CIDR to accommodate the number of Nodes you
+want. Once the cluster has been created, this CIDR cannot be changed.
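+
+For example (a sketch; pick a CIDR that does not overlap with your Node or Service networks):
+
+```bash
+# A /16 leaves room for 256 Nodes with the default /24 per-Node subnets.
+kubeadm init --pod-network-cidr=10.244.0.0/16
+```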
+
+### Rancher
+
+Follow these steps to deploy Antrea (as a [custom CNI](https://rke.docs.rancher.com/config-options/add-ons/network-plugins/custom-network-plugin-example))
+on [Rancher](https://ranchermanager.docs.rancher.com/pages-for-subheaders/kubernetes-clusters-in-rancher-setup) cluster:
+
+* Edit the cluster YAML and set the `network-plugin` option to `none`.
+
+* Add an addon for Antrea, in the following manner:
+
+ ```yaml
+ addons_include:
+    - <Antrea deployment manifest URL>
+ ```
+
+### K3s
+
+When creating a cluster, run K3s with the following options:
+
+* `--flannel-backend=none`, which lets you run the [CNI of your
+ choice](https://rancher.com/docs/k3s/latest/en/installation/network-options/)
+* `--disable-network-policy`, to disable the K3s NetworkPolicy controller
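+
+For example, using the K3s install script, which passes extra arguments through to the K3s server (a sketch):
+
+```bash
+curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-network-policy
+```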
+
+### Kops
+
+When creating a cluster, run Kops with `--networking cni`, to enable CNI for the
+cluster without deploying a specific network plugin.
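+
+For example (a sketch; the cluster name and zones are illustrative):
+
+```bash
+kops create cluster --networking cni --zones us-west-2a --name my-cluster.example.com
+```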
+
+### Kind
+
+To deploy Antrea on Kind, please follow these [steps](kind.md).
+
+### Minikube
+
+To deploy Antrea on minikube, please follow these [steps](minikube.md).
+
+### Talos
+
+[Talos](https://www.talos.dev/) is a Linux distribution designed for running
+Kubernetes. Antrea can be used as the CNI on Talos clusters (tested with both
+the Docker provisioner and the QEMU provisioner). However, because of some
+built-in security settings in Talos, the default configuration values cannot be
+used when installing Antrea. You will need to install Antrea using Helm, with a
+few custom values. Antrea v1.15 or above is required.
+
+Follow these steps to deploy Antrea on a Talos cluster:
+
+* Make sure that your Talos cluster is created without a CNI. To ensure this,
+ you can use a config patch. For example, to create a Talos cluster without a
+ CNI, using the Docker provisioner:
+
+ ```bash
+ cat << EOF > ./patch.yaml
+ cluster:
+ network:
+ cni:
+ name: none
+ EOF
+
+ talosctl cluster create --config-patch=@patch.yaml --wait=false --workers 2
+ ```
+
+ Notice how we use `--wait=false`: the cluster will never be "ready" until a
+ CNI is installed.
+
+ Note that while we use the Docker provisioner here, you can use the Talos
+ platform of your choice.
+
+* Ensure that you retrieve the Kubeconfig for your new cluster once it is
+ available. You may need to use the `talosctl kubeconfig` command for this.
+
+* Install Antrea using Helm, with the appropriate values:
+
+ ```bash
+ cat << EOF > ./values.yaml
+ agent:
+ dontLoadKernelModules: true
+ installCNI:
+ securityContext:
+ capabilities: []
+ EOF
+
+  helm install -n kube-system antrea -f values.yaml antrea/antrea
+ ```
+
+ The above configuration will drop all capabilities from the `installCNI`
+ container, and instruct the Antrea Agent not to try loading any Kernel module
+ explicitly.
+
+## Updating the list
+
+You can [open a Pull Request](../CONTRIBUTING.md) to:
+
+* Add a new K8s installer or distribution to the table above.
+* Add a new combination of versions that you have tested successfully to the
+ table above.
+
+Please make sure that you run conformance tests with [sonobuoy] and consider
+uploading the test results to a publicly accessible location. You can run
+sonobuoy with:
+
+```bash
+sonobuoy run --mode certified-conformance
+```
+
+[k3sup]: https://github.com/alexellis/k3sup
+[OSUOSL]: https://osuosl.org/services/aarch64/
+[sonobuoy]: https://github.com/vmware-tanzu/sonobuoy
diff --git a/content/docs/v1.15.0/docs/maintainers/antrea-docker-image.md b/content/docs/v1.15.0/docs/maintainers/antrea-docker-image.md
new file mode 100644
index 00000000..0d7747cc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/maintainers/antrea-docker-image.md
@@ -0,0 +1,39 @@
+# Antrea Docker image
+
+The main Antrea Docker image, `antrea/antrea-ubuntu`, is a multi-arch image. The
+`antrea/antrea-ubuntu` manifest is a list of three manifests:
+`antrea/antrea-ubuntu-amd64`, `antrea/antrea-ubuntu-arm64` and
+`antrea/antrea-ubuntu-arm`. Of these three manifests, only the first one is
+built and uploaded to Dockerhub by Github workflows defined in the
+`antrea-io/antrea` repository. The other two are built and uploaded by Github
+workflows defined in a private repository (`vmware-tanzu/antrea-build-infra`),
+to which only the project maintainers have access. These workflows are triggered
+every time the `main` branch of `antrea-io/antrea` is updated, as well as every
+time a new Antrea Github release is created. They build the
+`antrea/antrea-ubuntu-arm64` and `antrea/antrea-ubuntu-arm` Docker images on
+native arm64 workers, then create the `antrea/antrea-ubuntu` multi-arch manifest
+and push it to Dockerhub. They are also in charge of testing the images in a
+[K3s](https://github.com/k3s-io/k3s) cluster.
+
+## Why do we use a private repository?
+
+The `vmware-tanzu/antrea-build-infra` repository uses self-hosted ARM64 workers
+provided by the [Open Source Lab](https://osuosl.org/services/aarch64/) at
+Oregon State University. These workers enable us to build, and more importantly
+*test*, the Antrea Docker images for the arm64 and arm/v7 architectures. Being
+able to build Docker images on native ARM platforms is convenient as it is much
+faster than emulation. But if we just wanted to build the images, emulation
+would probably be good enough. However, testing Kubernetes ARM support using
+emulation is no piece of cake, which is why we prefer to use native ARM64
+workers.
+
+Github strongly
+[recommends](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories)
+not to use self-hosted runners with public repositories, for security
+reasons. It would be too easy for a malicious person to run arbitrary code on
+the runners by opening a pull request. Were we to make this repository public,
+we would therefore at least need to disable pull requests, which is sub-optimal
+for a public repository. We believe Github will address the issue eventually and
+provide safeguards to enable using self-hosted runners with public
+repositories, at which point we will migrate workflows from this repository to
+the main Antrea repository.
diff --git a/content/docs/v1.15.0/docs/maintainers/build-kubemark.md b/content/docs/v1.15.0/docs/maintainers/build-kubemark.md
new file mode 100644
index 00000000..c31fccfb
--- /dev/null
+++ b/content/docs/v1.15.0/docs/maintainers/build-kubemark.md
@@ -0,0 +1,13 @@
+# Build the kubemark image
+
+This documentation simply describes how to build the kubemark image used in
+[Antrea scale testing](../antrea-agent-simulator.md).
+
+```bash
+cd $KUBERNETES_PATH
+git checkout v1.29.0
+make WHAT=cmd/kubemark KUBE_BUILD_PLATFORMS=linux/amd64
+cp ./_output/local/bin/linux/amd64/kubemark cluster/images/kubemark
+cd cluster/images/kubemark
+docker build -t antrea/kubemark:v1.29.0 .
+```
diff --git a/content/docs/v1.15.0/docs/maintainers/getting-started-gif.md b/content/docs/v1.15.0/docs/maintainers/getting-started-gif.md
new file mode 100644
index 00000000..4e5f78fc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/maintainers/getting-started-gif.md
@@ -0,0 +1,12 @@
+# Getting-started GIF
+
+To refresh the gif image included in
+[getting-started.md](../getting-started.md), follow these steps:
+
+* install [asciinema](https://asciinema.org/)
+* set `PS1="> "` in your bash profile file (e.g. `.bashrc`, `zshrc`, ...) to simplify the prompt
+* record the cast with the correct shell, e.g. `SHELL=zsh asciinema rec my.cast`
+* convert the cast file to a gif file: `docker run --rm -v $PWD:/data -w /data asciinema/asciicast2gif -s 3 -w 120 -h 20 my.cast my.gif`
+* upload the gif file to Github's CDN by following these
+ [instructions](https://gist.github.com/vinkla/dca76249ba6b73c5dd66a4e986df4c8d)
+* update the link in [getting-started.md](../getting-started.md) by opening a PR
diff --git a/content/docs/v1.15.0/docs/maintainers/release.md b/content/docs/v1.15.0/docs/maintainers/release.md
new file mode 100644
index 00000000..8e9e46ae
--- /dev/null
+++ b/content/docs/v1.15.0/docs/maintainers/release.md
@@ -0,0 +1,93 @@
+# Antrea Release Process
+
+This file documents the list of steps to perform to create a new Antrea
+release. We use `<TAG>` as a placeholder for the release tag (e.g. `v1.4.0`).
+
+1. *For a minor release* On the code freeze date (typically one week before the
+ actual scheduled release date), create a release branch for the new minor
+   release (e.g. `release-1.4`).
+ - after that time, only bug fixes should be merged into the release branch,
+ by [cherry-picking](../contributors/cherry-picks.md) the fix after it has
+ been merged into main. The maintainer in charge of that specific minor
+ release can either do the cherry-picking directly or ask the person who
+ contributed the fix to do it.
+
+2. Open a PR (labelled with `kind/release`) against the appropriate release
+ branch with the following commits:
+   - a commit to update the [CHANGELOG](https://github.com/antrea-io/antrea/blob/v1.15.0/CHANGELOG). *For a minor release*,
+     all significant changes and all bug fixes (labelled with
+     `action/release-note`) since the first version of the previous minor release
+     should be mentioned, even bug fixes which have already been included in
+     some patch release. *For a patch release*, you will mention all the bug
+     fixes since the previous release with the same minor version. The commit
+     message must be *exactly* `"Update CHANGELOG for <TAG> release"`, as a bot
+ will look for this commit and cherry-pick it to update the main branch
+ (starting with Antrea v1.0). The
+ [prepare-changelog.sh](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/release/prepare-changelog.sh) script may
+ be used to easily generate links to PRs and the Github profiles of PR
+ authors. Use `prepare-changelog.sh -h` to get the usage.
+   - a commit to update [VERSION](https://github.com/antrea-io/antrea/blob/v1.15.0/VERSION) as needed, using the following
+     commit message: `"Set VERSION to <TAG>"`. Before committing, ensure that
+ you run `make -C build/charts/ helm-docs` and include the changes.
+
+3. Run all the tests for the PR, investigating test failures and re-triggering
+ the tests as needed.
+   - Github workflows are run automatically whenever the head branch is updated.
+ - Jenkins tests need to be [triggered manually](../../CONTRIBUTING.md#getting-your-pr-verified-by-ci).
+ - Cloud tests need to be triggered manually through the
+ [Jenkins web UI](https://jenkins.antrea-ci.rocks/). Admin access is
+ required. For each job (AKS, EKS, GKE), click on `Build with Parameters`,
+ and enter the name of your fork as `ANTREA_REPO` and the name of your
+ branch as `ANTREA_GIT_REVISION`. Test starting times need to be staggered:
+ if multiple jobs run at the same time, the Jenkins worker may run
+ out-of-memory.
+
+4. Request a review from the other maintainers, and anyone else who may need to
+ review the release notes. In case of feedback, you may want to consider
+ waiting for all the tests to succeed before updating your PR. Once all the
+ tests have run successfully once, address review comments, get approval for
+ your PR, and merge.
+ - this is the only case for which the "Rebase and merge" option should be
+ used instead of the "Squash and merge" option. This is important, in order
+ to ensure that changes to the CHANGELOG are preserved as an individual
+ commit. You will need to enable the "Allow rebase merging" setting in the
+ repository settings temporarily, and remember to disable it again right
+ after you merge.
+
+5. Make the release on Github **with the release branch as the target** and copy
+ the relevant section of the CHANGELOG as the release description (make sure
+ all the markdown links work). The
+ [draft-release.sh](https://github.com/antrea-io/antrea/blob/v1.15.0/hack/release/draft-release.sh) script can
+ be used to create the release draft. Use `draft-release.sh -h` to get the
+ usage. You typically should **not** be checking the `Set as a pre-release`
+ box. This would only be necessary for a release candidate (e.g., `<TAG>` is
+ `1.4.0-rc.1`), which we do not have at the moment. There is no need to upload
+ any assets as this will be done automatically by a Github workflow, after you
+ create the release.
+ - the `Set as the latest release` box is checked by default. **If you are
+ creating a patch release for an older minor version of Antrea, you should
+ uncheck the box.**
+
+6. After a while (time for the relevant Github workflows to complete), check that:
+ - the Docker image has been pushed to
+ [dockerhub](https://hub.docker.com/u/antrea) with the correct tag. This is
+     handled by a Github workflow defined in a separate Github repository and it
+ can take some time for this workflow to complete. See this
+ [document](antrea-docker-image.md) for more information.
+ - the assets have been uploaded to the release (`antctl` binaries and yaml
+ manifests). This is handled by the `Upload assets to release` workflow. In
+ particular, the following link should work:
+     `https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml`.
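+
+   For example, you can quickly validate the manifest link with `curl`, using
+   `v1.15.0` as an illustrative tag:
+
+   ```bash
+   curl -fsSLI https://github.com/antrea-io/antrea/releases/download/v1.15.0/antrea.yml
+   ```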
+
+7. After the appropriate Github workflow completes, a bot will automatically
+ submit a PR to update the CHANGELOG in the main branch. You should verify the
+ contents of the PR and merge it (no need to run the tests, use admin
+ privileges).
+
+8. Finally, *for a minor release*, open a PR against the main branch with a
+ single commit, to update [VERSION](https://github.com/antrea-io/antrea/blob/v1.15.0/VERSION) to the next minor version
+ (with `-dev` suffix). For example, if the release was for `v1.4.0`, the
+ VERSION file should be updated to `v1.5.0-dev`. Before committing, ensure
+ that you run `make -C build/charts/ helm-docs` and include the changes. Note
+ that after a patch release, the VERSION file in the main branch is never
+ updated, so no additional commit is needed.
diff --git a/content/docs/v1.15.0/docs/maintainers/updating-ovs-windows.md b/content/docs/v1.15.0/docs/maintainers/updating-ovs-windows.md
new file mode 100644
index 00000000..f42612f5
--- /dev/null
+++ b/content/docs/v1.15.0/docs/maintainers/updating-ovs-windows.md
@@ -0,0 +1,37 @@
+# Updating the OVS Windows Binaries
+
+Antrea ships a zip archive with OVS binaries for Windows. The binaries are
+hosted on the antrea.io website and updated as needed. This file documents the
+procedure to upload a new version of the OVS binaries. The archive is served
+from AWS S3, and therefore access to the Antrea S3 account is required for this
+procedure.
+
+* We assume that you have already built the OVS binaries (if a custom build is
+ required), or retrieved them from the official OVS build pipelines. The
+ binaries must be built in **Release** mode for acceptable performance.
+
+* Name the zip archive appropriately:
+  `ovs-<VERSION>[-antrea.<BUILD>]-win64.zip`
+  - the format for `<VERSION>` is `<major>.<minor>.<patch>`, with no `v`
+    prefix.
+  - the `-antrea.<BUILD>` component is optional but must be provided if this
+    is not the official build for the referenced OVS version. `<BUILD>`
+    starts at 1 and is incremented for every new upload corresponding to that
+    OVS version.
+
+* Generate the SHA256 checksum for the archive.
+ - place yourself in the directory containing the archive.
+  - run `sha256sum -b <NAME>.zip > <NAME>.zip.sha256`, where `<NAME>` is
+ determined by the previous step.
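+
+  For example, for a hypothetical archive named `ovs-2.17.0-antrea.1-win64.zip`:
+
+  ```bash
+  sha256sum -b ovs-2.17.0-antrea.1-win64.zip > ovs-2.17.0-antrea.1-win64.zip.sha256
+  ```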
+
+* Upload the archive and SHA256 checksum file to the `ovs/` folder in the
+ `downloads.antrea.io` S3 bucket. As you upload the files, grant public read
+ access to them (you can also do it after the upload with the `Make public`
+ action).
+
+* Validate both public links:
+  - `https://downloads.antrea.io/ovs/<NAME>.zip`
+  - `https://downloads.antrea.io/ovs/<NAME>.zip.sha256`
+
+* Update the Antrea Windows documentation and helper scripts as needed,
+ e.g. `hack/windows/Install-OVS.ps1`.
diff --git a/content/docs/v1.15.0/docs/migrate-to-antrea.md b/content/docs/v1.15.0/docs/migrate-to-antrea.md
new file mode 100644
index 00000000..1eb4d2bc
--- /dev/null
+++ b/content/docs/v1.15.0/docs/migrate-to-antrea.md
@@ -0,0 +1,74 @@
+# Migrate from another CNI to Antrea
+
+This document provides guidance on migrating from other CNIs to Antrea
+starting from version v1.15.0.
+
+NOTE: The following is a reference list of CNIs and versions for which we have
+verified the migration process. CNIs and versions that are not listed here
+might also work. Please create an issue if you run into problems during the
+migration to Antrea. During the migration process, no Kubernetes resources
+should be created or deleted; otherwise, the migration process might fail or
+some unexpected problems might occur.
+
+| CNI | Version |
+|---------|---------|
+| Calico | v3.26 |
+| Flannel | v0.22.0 |
+
+The migration process is divided into three steps:
+
+1. Clean up the old CNI.
+2. Install Antrea in the cluster.
+3. Deploy Antrea migrator.
+
+## Clean up the old CNI
+
+The cleanup process varies across CNIs; typically, you should remove
+the DaemonSet, Deployment, and CRDs of the old CNI from the cluster.
+For example, if you used `kubectl apply -f <CNI_MANIFEST>` to install
+the old CNI, you could then use `kubectl delete -f <CNI_MANIFEST>` to
+uninstall it.
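+
+As an illustration, if the old CNI was Calico v3.26 installed from its official
+manifest, the cleanup might look like the following (the URL is illustrative;
+use the exact manifest you installed from):
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
+```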
+
+## Install Antrea
+
+The second step is to install Antrea in the cluster. You can follow the
+[installation guide](https://github.com/antrea-io/antrea/blob/main/docs/getting-started.md)
+to install Antrea. The following is an example of installing Antrea v1.14.1:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.14.1/antrea.yml
+```
+
+## Deploy Antrea migrator
+
+After Antrea is up and running, you can deploy the Antrea migrator with the
+following command. The migrator runs as a DaemonSet, `antrea-migrator`,
+in the cluster, which will restart all non-hostNetwork Pods in the cluster
+in-place and perform the necessary network resource cleanup.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml
+```
+
+The reason for restarting all Pods is that Antrea needs to take over the
+network management and IPAM from the old CNI. In order to avoid the Pods
+being rescheduled and minimize service downtime, the migrator restarts
+all non-hostNetwork Pods in-place by restarting their sandbox containers.
+Therefore, the `RESTARTS` count for these Pods is expected to increase by 1,
+as shown below:
+
+```bash
+$ kubectl get pod -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+migrate-example-6d6b97f96b-29qbq 1/1 Running 1 (24s ago) 2m5s 10.10.1.3 test-worker
+migrate-example-6d6b97f96b-dqx2g 1/1 Running 1 (23s ago) 2m5s 10.10.1.6 test-worker
+migrate-example-6d6b97f96b-jpflg 1/1 Running 1 (23s ago) 2m5s 10.10.1.5 test-worker
+```
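+
+You can also check that the migrator DaemonSet has rolled out on all Nodes. A
+minimal check, assuming the DaemonSet is deployed in the `kube-system`
+Namespace (confirm the actual Namespace in the manifest):
+
+```bash
+kubectl get daemonset antrea-migrator -n kube-system
+```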
+
+When the `antrea-migrator` Pods on all Nodes are in `Running` state,
+the migration process is completed. You can then remove the `antrea-migrator`
+DaemonSet safely with the following command:
+
+```bash
+kubectl delete -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml
+```
diff --git a/content/docs/v1.15.0/docs/minikube.md b/content/docs/v1.15.0/docs/minikube.md
new file mode 100644
index 00000000..ebd9f442
--- /dev/null
+++ b/content/docs/v1.15.0/docs/minikube.md
@@ -0,0 +1,47 @@
+# Deploying Antrea on Minikube
+
+
+- [Install Minikube](#install-minikube)
+- [Deploy Antrea](#deploy-antrea)
+ - [Deploy Antrea to Minikube cluster](#deploy-antrea-to-minikube-cluster)
+ - [Deploy a local build of Antrea to Minikube cluster (for developers)](#deploy-a-local-build-of-antrea-to-minikube-cluster-for-developers)
+- [Verification](#verification)
+
+
+## Install Minikube
+
+Follow these [steps](https://minikube.sigs.k8s.io/docs/start) to install minikube and set its development environment.
+
+## Deploy Antrea
+
+### Deploy Antrea to Minikube cluster
+
+```bash
+# curl is required because the --cni flag does not accept a URL as a parameter
+curl -Lo antrea.yml https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+minikube start --cni=antrea.yml --network-plugin=cni
+```
+
+### Deploy a local build of Antrea to Minikube cluster (for developers)
+
+These instructions assume that you have built the Antrea Docker image locally
+(e.g. by running `make` from the root of the repository, or in the case of the arm64 architecture by running
+`DOCKER_BUILDKIT=1 ./hack/build-antrea-ubuntu-all.sh --platform linux/arm64`).
+
+```bash
+# load the Antrea Docker image in the minikube nodes
+minikube image load antrea/antrea-ubuntu:latest
+# deploy Antrea
+kubectl apply -f antrea/build/yamls/antrea.yml
+```
+
+## Verification
+
+After a few seconds you should be able to observe the following when running
+`kubectl get pods -l app=antrea -n kube-system`:
+
+```txt
+NAME READY STATUS RESTARTS AGE
+antrea-agent-9ftn9 2/2 Running 0 66m
+antrea-controller-56f97bbcff-zbfmv 1/1 Running 0 66m
+```
diff --git a/content/docs/v1.15.0/docs/multicast-guide.md b/content/docs/v1.15.0/docs/multicast-guide.md
new file mode 100644
index 00000000..2e53ba6d
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicast-guide.md
@@ -0,0 +1,187 @@
+# Multicast User Guide
+
+Antrea supports multicast traffic in the following scenarios:
+
+1. Pod to Pod - a Pod that has joined a multicast group will receive the
+ multicast traffic to that group from the Pod senders.
+2. Pod to External - external hosts can receive the multicast traffic sent
+ from Pods, when the Node network supports multicast forwarding / routing to
+ the external hosts.
+3. External to Pod - Pods can receive the multicast traffic from external
+ hosts.
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Multicast NetworkPolicy](#multicast-networkpolicy)
+- [Debugging and collecting multicast statistics](#debugging-and-collecting-multicast-statistics)
+ - [Pod multicast group information](#pod-multicast-group-information)
+ - [Inbound and outbound multicast traffic statistics](#inbound-and-outbound-multicast-traffic-statistics)
+ - [Multicast NetworkPolicy statistics](#multicast-networkpolicy-statistics)
+- [Use case example](#use-case-example)
+- [Limitations](#limitations)
+ - [Encap mode](#encap-mode)
+ - [Maximum number of receiver groups on one Node](#maximum-number-of-receiver-groups-on-one-node)
+ - [Traffic in local network control block](#traffic-in-local-network-control-block)
+ - [Linux kernel](#linux-kernel)
+ - [Antrea FlexibleIPAM](#antrea-flexibleipam)
+
+
+## Prerequisites
+
+Multicast support was introduced in Antrea v1.5.0 as an alpha feature, and was
+graduated to beta in v1.12.0.
+
+* Prior to v1.12.0, the `Multicast` feature gate must be enabled in the
+  `antrea-controller` and `antrea-agent` configuration to use the feature.
+* Starting from v1.12.0, the feature gate is enabled by default, and you need
+  to set the `multicast.enable` flag to true in the `antrea-agent` configuration
+  to use the feature.
+
+There are three other configuration options for `antrea-agent`:
+`multicastInterfaces`, `igmpQueryVersions`, and `igmpQueryInterval`.
+
+```yaml
+ antrea-agent.conf: |
+ multicast:
+ enable: true
+ # The names of the interfaces on Nodes that are used to forward multicast traffic.
+ # Defaults to transport interface if not set.
+ multicastInterfaces:
+ # The versions of IGMP queries antrea-agent sends to Pods.
+ # Valid versions are 1, 2 and 3.
+ igmpQueryVersions:
+ - 1
+ - 2
+ - 3
+ # The interval at which the antrea-agent sends IGMP queries to Pods.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ igmpQueryInterval: "125s"
+```
+
+## Multicast NetworkPolicy
+
+Antrea NetworkPolicy and Antrea ClusterNetworkPolicy are supported for the
+following types of multicast traffic:
+
+1. IGMP egress rules: applied to IGMP membership report and IGMP leave group
+ messages.
+2. IGMP ingress rules: applied to IGMP query, which includes IGMPv1, IGMPv2, and
+ IGMPv3.
+3. Multicast egress rules: applied to non-IGMP multicast traffic from the
+ selected Pods to other Pods or external hosts.
+
+Note that multicast ingress rules are not supported at the moment.
+
+Examples: You can refer to the [ACNP for IGMP traffic](antrea-network-policy.md#acnp-for-igmp-traffic)
+and [ACNP for multicast egress traffic](antrea-network-policy.md#acnp-for-multicast-egress-traffic)
+examples in the Antrea NetworkPolicy document.
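+
+For instance, a minimal sketch of an ACNP that drops multicast egress traffic
+to one group could look like the following (the policy name, Pod selector, and
+group address are illustrative; see the linked examples for authoritative
+specs):
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+  name: acnp-drop-multicast-egress
+spec:
+  priority: 5
+  appliedTo:
+    - podSelector:
+        matchLabels:
+          app: mc-sender
+  egress:
+    - action: Drop
+      to:
+        - ipBlock:
+            cidr: 225.1.2.3/32
+```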
+
+## Debugging and collecting multicast statistics
+
+Antrea provides tooling to check multicast group information and multicast
+traffic statistics.
+
+### Pod multicast group information
+
+The `kubectl get multicastgroups` command prints multicast groups joined by Pods
+in the cluster. Example output of the command:
+
+```bash
+$ kubectl get multicastgroups
+GROUP PODS
+225.1.2.3 default/mcjoin, namespace/pod
+224.5.6.4 default/mcjoin
+```
+
+### Inbound and outbound multicast traffic statistics
+
+`antctl` supports printing multicast traffic statistics of Pods. Please refer to
+the corresponding [antctl user guide section](antctl.md#multicast-commands).
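+
+For example, you can run the command in the `antrea-agent` Pod on a given Node
+as follows (the Pod name is a placeholder):
+
+```bash
+kubectl exec -n kube-system <ANTREA_AGENT_POD> -c antrea-agent -- antctl get podmulticaststats
+```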
+
+### Multicast NetworkPolicy statistics
+
+The [Antrea NetworkPolicyStats feature](feature-gates.md#networkpolicystats)
+also supports multicast NetworkPolicies.
+
+## Use case example
+
+This section will take multicast video streaming as an example to demonstrate
+how multicast works with Antrea. In this example,
+[VLC](https://www.videolan.org/vlc/) multimedia tools are used to generate and
+consume multicast video streams.
+
+To start a video streaming server, we start a VLC Pod to stream a sample video
+to the multicast IP address `239.255.12.42` with TTL 6.
+
+```bash
+kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-sender -- --intf ncurses --vout dummy --aout dummy 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/26/Bees_on_flowers.webm/Bees_on_flowers.webm.120p.vp9.webm' --sout udp:239.255.12.42 --ttl 6 --repeat
+```
+
+You can verify that multicast traffic is sent out from this Pod by running
+`antctl get podmulticaststats` in the `antrea-agent` Pod on the local Node,
+which should show the VLC Pod sending out multicast video streams.
+
+You can also check the multicast routes on the Node by running the command
+`ip mroute`, which should print a route like the following for forwarding the multicast
+traffic from the Antrea gateway interface to the transport interface.
+
+```bash
+$ ip mroute
+(<POD_IP>, 239.255.12.42) Iif: antrea-gw0 Oifs: <TRANSPORT_INTERFACE> State: resolved
+```
+
+We also create a VLC Pod to be the receiver with the following command:
+
+```bash
+kubectl run -i --tty --image=quay.io/galexrt/vlc:latest vlc-receiver -- --intf ncurses --vout dummy --aout dummy udp://@239.255.12.42 --repeat
+```
+
+Running `antctl get podmulticaststats` in the local `antrea-agent` Pod should
+show inbound multicast traffic to this Pod, which indicates the VLC Pod is
+receiving the video stream.
+
+Also, the `kubectl get multicastgroups` command will show that `vlc-receiver`
+has joined multicast group `239.255.12.42`.
+
+## Limitations
+
+This feature is currently supported only for IPv4 Linux clusters. Support for
+Windows and IPv6 will be added in the future.
+
+### Encap mode
+
+The configuration option `multicastInterfaces` is not supported with encap mode.
+Multicast packets in encap mode are SNATed and forwarded to the transport
+interface only.
+
+### Maximum number of receiver groups on one Node
+
+A Linux host limits the maximum number of multicast groups it can subscribe to;
+the default number is 20. The limit can be changed by setting [/proc/sys/net/ipv4/igmp_max_memberships](https://sysctl-explorer.net/net/ipv4/igmp_max_memberships/).
+Users are responsible for changing the limit if Pods on the Node are expected to
+join more than 20 groups.
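+
+For example, to raise the limit on a Node (the value 100 is illustrative; add
+the setting to `/etc/sysctl.conf` or a drop-in file to make it persistent):
+
+```bash
+sudo sysctl -w net.ipv4.igmp_max_memberships=100
+```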
+
+### Traffic in local network control block
+
+Multicast IPs in [Local Network Control Block](https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml#multicast-addresses-1) (224.0.0.0/24)
+can only work in encap mode. Multicast traffic destined for those addresses
+is not expected to be forwarded, therefore, no multicast route will be
+configured for them. External hosts are not supposed to send and receive traffic
+with those addresses either.
+
+### Linux kernel
+
+If the following situations apply to your Nodes, you may observe that multicast
+traffic is not routed correctly:
+
+1. Node kernel version under 5.4
+2. Node network doesn't support IGMP snooping
+
+### Antrea FlexibleIPAM
+
+The configuration option `multicastInterfaces` is not supported with
+[Antrea FlexibleIPAM](antrea-ipam.md#antrea-flexible-ipam). When Antrea
+FlexibleIPAM is enabled, multicast packets are forwarded to the uplink interface
+only.
diff --git a/content/docs/v1.15.0/docs/multicluster/antctl.md b/content/docs/v1.15.0/docs/multicluster/antctl.md
new file mode 100644
index 00000000..01dab406
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/antctl.md
@@ -0,0 +1,150 @@
+# Antctl Multi-cluster commands
+
+Starting from version 1.6.0, Antrea supports the `antctl mc` commands, which can
+collect information from a leader cluster for troubleshooting Antrea
+Multi-cluster issues, deploy Antrea Multi-cluster and set up ClusterSets in both
+leader and member clusters. The `antctl mc get` command is supported since
+Antrea v1.6.0, while other commands are supported since v1.8.0. These commands
+cannot run inside the `antrea-controller`, `antrea-agent` or
+`antrea-mc-controller` Pods. antctl needs a kubeconfig file to access the target
+cluster's API server, and it will look for the kubeconfig file at
+`$HOME/.kube/config` by default. You can select a different file by setting the
+`KUBECONFIG` environment variable or with the `--kubeconfig` option of antctl.
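+
+For example, the following is one way to point `antctl` at the leader cluster
+(the kubeconfig path is illustrative):
+
+```bash
+export KUBECONFIG=$HOME/.kube/leader-kubeconfig
+antctl mc get clusterset -A
+# alternatively, pass the file explicitly
+antctl mc get clusterset -A --kubeconfig $HOME/.kube/leader-kubeconfig
+```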
+
+## antctl mc get
+
+- `antctl mc get clusterset` (or `get clustersets`) command prints all
+ClusterSets, a specified ClusterSet, or the ClusterSet in a specified Namespace.
+- `antctl mc get resourceimport` (or `get resourceimports`, `get ri`) command
+prints all ResourceImports, a specified ResourceImport, or ResourceImports in a
+specified Namespace.
+- `antctl mc get resourceexport` (or `get resourceexports`, `get re`) command
+prints all ResourceExports, a specified ResourceExport, or ResourceExports in a
+specified Namespace.
+- `antctl mc get joinconfig` command prints member cluster join parameters of
+the ClusterSet in a specified leader cluster Namespace.
+- `antctl mc get membertoken` (or `get membertokens`) command prints all member tokens,
+a specified token, or member tokens in a specified Namespace. The command is supported
+only on a leader cluster.
+
+Using the `json` or `yaml` antctl output format prints more information about
+ClusterSets, ResourceImports, and ResourceExports than the default table
+output format.
+
+```bash
+antctl mc get clusterset [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+antctl mc get resourceimport [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+antctl mc get resourceexport [NAME] [-n NAMESPACE] [--clusterid CLUSTERID] [-o json|yaml] [-A]
+antctl mc get joinconfig [--member-token TOKEN_NAME] [-n NAMESPACE]
+antctl mc get membertoken [NAME] [-n NAMESPACE] [-o json|yaml] [-A]
+```
+
+To see the usage examples of these commands, you may also run `antctl mc get [subcommand] --help`.
+
+## antctl mc create
+
+`antctl mc create` command creates a token for member clusters to join a ClusterSet. The command will
+also create a Secret to store the token, as well as a ServiceAccount and a RoleBinding. The `--output-file`
+option saves the member token Secret manifest to a file.
+
+```bash
+antctl mc create membertoken NAME -n NAMESPACE [-o OUTPUT_FILE]
+```
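+
+For example, the following creates a token named `cluster-east-token` in the
+`antrea-multicluster` Namespace and saves its Secret manifest to a file (the
+names are illustrative):
+
+```bash
+antctl mc create membertoken cluster-east-token -n antrea-multicluster -o cluster-east-token.yml
+```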
+
+To see the usage examples of these commands, you may also run `antctl mc create [subcommand] --help`.
+
+## antctl mc delete
+
+`antctl mc delete` command deletes a member token of a ClusterSet. The command will delete the
+corresponding Secret, ServiceAccount and RoleBinding if they exist.
+
+```bash
+antctl mc delete membertoken NAME -n NAMESPACE
+```
+
+To see the usage examples of these commands, you may also run `antctl mc delete [subcommand] --help`.
+
+## antctl mc deploy
+
+`antctl mc deploy` command deploys Antrea Multi-cluster Controller to a leader or member cluster.
+
++ `antctl mc deploy leadercluster` command deploys Antrea Multi-cluster Controller to a leader cluster and imports
+ all the Antrea Multi-cluster CRDs.
++ `antctl mc deploy membercluster` command deploys Antrea Multi-cluster Controller to a member cluster and imports
+ all the Antrea Multi-cluster CRDs.
+
+```bash
+antctl mc deploy leadercluster -n NAMESPACE [--antrea-version ANTREA_VERSION] [-f PATH_TO_MANIFEST]
+antctl mc deploy membercluster -n NAMESPACE [--antrea-version ANTREA_VERSION] [-f PATH_TO_MANIFEST]
+```
+
+To see the usage examples of these commands, you may also run `antctl mc deploy [subcommand] --help`.
+
+## antctl mc init
+
+`antctl mc init` command initializes an Antrea Multi-cluster ClusterSet in a leader cluster. It will create a
+ClusterSet for the leader cluster. If the `-j|--join-config-file` option is specified, the ClusterSet join
+parameters will be saved to the specified file, which can be used in the `antctl mc join` command
+for a member cluster to join the ClusterSet.
+
+```bash
+antctl mc init -n NAMESPACE --clusterset CLUSTERSET_ID --clusterid CLUSTERID [--create-token] [-j JOIN_CONFIG_FILE]
+```
+
+To see the usage examples of this command, you may also run `antctl mc init --help`.
+
+## antctl mc join
+
+`antctl mc join` command lets a member cluster join an existing Antrea Multi-cluster ClusterSet. It will create a
+ClusterSet for the member cluster. Users can use command line options or a config file (which can be the output
+file of the `antctl mc init` command) to specify the ClusterSet join arguments.
+
+When the config file is provided, the command line options may be overridden by the file. A token is needed for a
+member cluster to access the leader cluster API server. Users can either specify a pre-created token Secret with the
+`--token-secret-name` option, or pass a Secret manifest to create the Secret with either the `--token-secret-file`
+option or the config file.
+
+```bash
+antctl mc join --clusterset=CLUSTERSET_ID \
+ --clusterid=CLUSTER_ID \
+ --namespace=[MEMBER_NAMESPACE] \
+ --leader-clusterid=LEADER_CLUSTER_ID \
+ --leader-namespace=LEADER_NAMESPACE \
+ --leader-apiserver=LEADER_APISERVER \
+ --token-secret-name=[TOKEN_SECRET_NAME] \
+ --token-secret-file=[TOKEN_SECRET_FILE]
+
+antctl mc join --config-file JOIN_CONFIG_FILE [--clusterid=CLUSTER_ID] [--token-secret-name=TOKEN_SECRET_NAME] [--token-secret-file=TOKEN_SECRET_FILE]
+```
+
+Below is a config file example:
+
+```yaml
+apiVersion: multicluster.antrea.io/v1alpha1
+kind: ClusterSetJoinConfig
+clusterSetID: clusterset1
+clusterID: cluster-east
+namespace: kube-system
+leaderClusterID: cluster-north
+leaderNamespace: antrea-multicluster
+leaderAPIServer: https://172.18.0.3:6443
+tokenSecretName: cluster-east-token
+```
+
+## antctl mc leave
+
+`antctl mc leave` command lets a member cluster leave a ClusterSet. It will delete the ClusterSet
+and other resources created by antctl for the member cluster.
+
+```bash
+antctl mc leave --clusterset CLUSTERSET_ID --namespace [NAMESPACE]
+```
+
+## antctl mc destroy
+
+`antctl mc destroy` command can destroy an Antrea Multi-cluster ClusterSet in a leader cluster. It will delete the
+ClusterSet and other resources created by antctl for the leader cluster.
+
+```bash
+antctl mc destroy --clusterset=CLUSTERSET_ID --namespace NAMESPACE
+```
diff --git a/content/docs/v1.15.0/docs/multicluster/api.md b/content/docs/v1.15.0/docs/multicluster/api.md
new file mode 100644
index 00000000..d9159193
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/api.md
@@ -0,0 +1,36 @@
+# Antrea Multi-cluster API
+
+This document lists all the API resource versions currently supported by Antrea Multi-cluster.
+
+Antrea Multi-cluster is supported since v1.5.0. Most Custom Resource Definitions (CRDs)
+used by Antrea Multi-cluster are in the API group `multicluster.crd.antrea.io`, and
+two CRDs from [mcs-api](https://github.com/kubernetes-sigs/mcs-api) are in the group `multicluster.x-k8s.io`,
+which is defined by Kubernetes upstream [KEP-1645](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api).
+
+## Currently-supported
+
+### CRDs in `multicluster.crd.antrea.io`
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+| ------------------------ | ----------- | ------------- | ----------------------------------- | --------------- |
+| `ClusterSets` | v1alpha2 | v1.13.0 | N/A | N/A |
+| `MemberClusterAnnounces` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ResourceExports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ResourceImports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `Gateway` | v1alpha1 | v1.7.0 | N/A | N/A |
+| `ClusterInfoImport` | v1alpha1 | v1.7.0 | N/A | N/A |
+
+### CRDs in `multicluster.x-k8s.io`
+
+| CRD | CRD version | Introduced in | Deprecated in / Planned Deprecation | Planned Removal |
+| ---------------- | ----------- | ------------- | ----------------------------------- | --------------- |
+| `ServiceExports` | v1alpha1 | v1.5.0 | N/A | N/A |
+| `ServiceImports` | v1alpha1 | v1.5.0 | N/A | N/A |
+
+## Previously-supported
+
+| CRD | API group | CRD version | Introduced in | Deprecated in | Removed in |
+| ------------------------ | ---------------------------- | ----------- | ------------- | ------------- | ---------- |
+| `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.8.0 | v1.8.0 |
+| `ClusterClaims` | `multicluster.crd.antrea.io` | v1alpha2 | v1.8.0 | v1.13.0 | v1.13.0 |
+| `ClusterSets` | `multicluster.crd.antrea.io` | v1alpha1 | v1.5.0 | v1.13.0 | N/A |
diff --git a/content/docs/v1.15.0/docs/multicluster/architecture.md b/content/docs/v1.15.0/docs/multicluster/architecture.md
new file mode 100644
index 00000000..8465be7e
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/architecture.md
@@ -0,0 +1,211 @@
+# Antrea Multi-cluster Architecture
+
+Antrea Multi-cluster implements [Multi-cluster Service API](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api),
+which allows users to create multi-cluster Services that can be accessed across
+clusters in a ClusterSet. Antrea Multi-cluster also supports Antrea
+ClusterNetworkPolicy replication. Multi-cluster admins can define
+ClusterNetworkPolicies to be replicated across the entire ClusterSet, and
+enforced in all member clusters.
+
+An Antrea Multi-cluster ClusterSet includes a leader cluster and multiple member
+clusters. Antrea Multi-cluster Controller needs to be deployed in the leader and
+all member clusters. A cluster can serve as the leader, and at the same time be a
+member cluster of the ClusterSet.
+
+The diagram below depicts a basic Antrea Multi-cluster topology with one leader
+cluster and two member clusters.
+
+{{< img src="assets/basic-topology.svg" width="650" alt="Antrea Multi-cluster Topology" >}}
+
+## Terminology
+
+ClusterSet is a placeholder name for a group of clusters with a high degree of mutual
+trust and shared ownership that share Services amongst themselves. Within a ClusterSet,
+Namespace sameness applies, which means all Namespaces with a given name are considered to
+be the same Namespace. The ClusterSet Custom Resource Definition (CRD) defines a ClusterSet,
+including the leader and member cluster information.
+
+The MemberClusterAnnounce CRD declares a member cluster configuration to the leader cluster.
+
+The Common Area is an abstraction in the Antrea Multi-cluster implementation that provides a storage
+interface for resource export/import that can be read/written by all member and leader clusters
+in the ClusterSet. The Common Area is implemented with a Namespace in the leader cluster for a
+given ClusterSet.
+
+## Antrea Multi-cluster Controller
+
+Antrea Multi-cluster Controller implements ClusterSet management and resource
+export/import in the ClusterSet. In either a leader or a member cluster, Antrea
+Multi-cluster Controller is deployed with a Deployment of a single replica, but
+it takes different responsibilities in leader and member clusters.
+
+### ClusterSet Establishment
+
+In a member cluster, Multi-cluster Controller watches and validates the ClusterSet,
+and creates a MemberClusterAnnounce CR in the Common Area of the leader cluster to
+join the ClusterSet.
+
+In the leader cluster, Multi-cluster controller watches, validates and initializes
+the ClusterSet. It also validates the MemberClusterAnnounce CR created by a member
+cluster and updates the member cluster's connection status to `ClusterSet.Status`.
+
+### Resource Export and Import
+
+In a member cluster, Multi-cluster controller watches exported resources (e.g.
+ServiceExports, Services, Multi-cluster Gateways), encapsulates an exported
+resource into a ResourceExport and creates the ResourceExport CR in the Common
+Area of the leader cluster.
+
+In the leader cluster, Multi-cluster Controller watches ResourceExports created
+by member clusters (in the case of Service and ClusterInfo export), or by the
+ClusterSet admin (in the case of Multi-cluster NetworkPolicy), converts
+ResourceExports to ResourceImports, and creates the ResourceImport CRs in the
+Common Area for member clusters to import them. Multi-cluster Controller also
+merges ResourceExports from different member clusters to a single
+ResourceImport, when these exported resources share the same kind, name, and
+original Namespace (matching Namespace sameness).
+
+Multi-cluster Controller in a member cluster also watches ResourceImports in the
+Common Area of the leader cluster, decapsulates the resources from them, and
+creates the resources (e.g. Services, Endpoints, Antrea ClusterNetworkPolicies,
+ClusterInfoImports) in the member cluster.
+
+For more information about multi-cluster Service export/import, please also check
+the [Service Export and Import](#service-export-and-import) section.
+
+## Multi-cluster Service
+
+### Service Export and Import
+
+{{< img src="assets/resource-export-import-pipeline.svg" width="1500" alt="Antrea Multi-cluster Service Export/Import Pipeline" >}}
+
+Antrea Multi-cluster Controller implements Service export/import among member
+clusters. The above diagram depicts Antrea Multi-cluster resource export/import
+pipeline, using Service export/import as an example.
+
+Given two Services with the same name and Namespace in two member clusters -
+`foo.ns.cluster-a.local` and `foo.ns.cluster-b.local`, a multi-cluster Service can
+be created by the following resource export/import workflow.
+
+* User creates a ServiceExport `foo` in Namespace `ns` in each of the two
+clusters (see the example manifest after this list).
+* Multi-cluster Controllers in `cluster-a` and `cluster-b` see ServiceExport
+`foo`, and both create two ResourceExports for the Service and Endpoints
+respectively in the Common Area of the leader cluster.
+* Multi-cluster Controller in the leader cluster sees the ResourceExports in
+the Common Area, including the two for Service `foo`: `cluster-a-ns-foo-service`,
+`cluster-b-ns-foo-service`; and the two for the Endpoints:
+`cluster-a-ns-foo-endpoints`, `cluster-b-ns-foo-endpoints`. It then creates a
+ResourceImport `ns-foo-service` for the multi-cluster Service; and a
+ResourceImport `ns-foo-endpoints` for the Endpoints, which includes the
+exported endpoints of both `cluster-a-ns-foo-endpoints` and
+`cluster-b-ns-foo-endpoints`.
+* Multi-cluster Controller in each member cluster watches the ResourceImports
+from the Common Area, decapsulates them and gets Service `ns/antrea-mc-foo` and
+Endpoints `ns/antrea-mc-foo`, and creates the Service and Endpoints, as well as
+a ServiceImport `foo` in the local Namespace `ns`.
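+
+For reference, the ServiceExport created in the first step of this workflow is
+a minimal mcs-api resource, using the `foo`/`ns` names from the example:
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+  name: foo
+  namespace: ns
+```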
+
+### Service Access Across Clusters
+
+Since Antrea v1.7.0, the Service's ClusterIP is exported as the multi-cluster
+Service's Endpoints. Multi-cluster Gateways must be configured to support
+multi-cluster Service access across member clusters, and Service CIDRs cannot
+overlap between clusters. Please refer to [Multi-cluster Gateway](#multi-cluster-gateway)
+for more information. Before Antrea v1.7.0, Pod IPs are exported as the
+multi-cluster Service's Endpoints. Pod IPs must be directly reachable across
+clusters for multi-cluster Service access, and Pod CIDRs cannot overlap between
+clusters. Antrea Multi-cluster only supports creating multi-cluster Services
+for Services of type ClusterIP.
+
+## Multi-cluster Gateway
+
+Antrea has supported Multi-cluster Gateway since v1.7.0. Users can choose
+one K8s Node as the Multi-cluster Gateway in a member cluster. The Gateway Node
+is responsible for routing all cross-cluster traffic from the local cluster to
+other member clusters through tunnels. The diagram below depicts Antrea
+Multi-cluster connectivity with Multi-cluster Gateways.
+
+{{< img src="assets/mc-gateway.svg" width="800" alt="Antrea Multi-cluster Gateway" >}}
+
+Antrea Agent is responsible for setting up tunnels between Gateways of member
+clusters. The tunnels between Gateways use Antrea Agent's configured tunnel type.
+All member clusters in a ClusterSet need to deploy Antrea with the same tunnel
+type.
+
+The Multi-cluster Gateway implementation introduces two new CRDs `Gateway` and
+`ClusterInfoImport`. `Gateway` includes the local Multi-cluster Gateway
+information including: `internalIP` for tunnels to local Nodes, and `gatewayIP`
+for tunnels to remote cluster Gateways. `ClusterInfoImport` includes Gateway
+and network information of member clusters, including Gateway IPs and Service
+CIDRs. The existing resource export/import pipeline is leveraged to exchange
+the cluster network information among member clusters, generating
+ClusterInfoImports in each member cluster.
+
+### Multi-cluster Service Traffic Walk
+
+Let's use the ClusterSet in the above diagram as an example. As shown in the
+diagram:
+
+1. Cluster A has a client Pod named `pod-a` running on a regular Node, and a
+ multi-cluster Service named `antrea-mc-nginx` with ClusterIP `10.112.10.11`
+ in the `default` Namespace.
+2. Cluster B exported a Service named `nginx` with ClusterIP `10.96.2.22` in
+ the `default` Namespace. The Service has one Endpoint `172.170.11.22` which is
+ `pod-b`'s IP.
+3. Cluster C exported a Service named `nginx` with ClusterIP `10.11.12.33` also
+ in the `default` Namespace. The Service has one Endpoint `172.10.11.33` which
+ is `pod-c`'s IP.
+
+The multi-cluster Service `antrea-mc-nginx` in cluster A will have two
+Endpoints:
+
+* `nginx` Service's ClusterIP `10.96.2.22` from cluster B.
+* `nginx` Service's ClusterIP `10.11.12.33` from cluster C.
+
+When the client Pod `pod-a` on cluster A tries to access the multi-cluster
+Service `antrea-mc-nginx`, the request packet will first go through the Service
+load balancing pipeline on the source Node `node-a2`, with one endpoint of the
+multi-cluster Service being chosen as the destination. Let's say endpoint
+`10.11.12.33` from cluster C is chosen, then the request packet will be DNAT'd
+with IP `10.11.12.33` and tunnelled to the local Gateway Node `node-a1`.
+`node-a1` knows from the destination IP (`10.11.12.33`) the packet is
+multi-cluster Service traffic destined for cluster C, and it will tunnel the
+packet to cluster C's Gateway Node `node-c1`, after performing SNAT and setting
+the packet's source IP to its own Gateway IP. On `node-c1`, the packet will go
+through the Service load balancing pipeline again with an endpoint of Service
+`nginx` being chosen as the destination. As the Service has only one endpoint -
+`172.10.11.33` of `pod-c`, the request packet will be DNAT'd to `172.10.11.33`
+and tunnelled to `node-c2` where `pod-c` is running. Finally, on `node-c2` the
+packet will go through the normal Antrea forwarding pipeline and be forwarded
+to `pod-c`.
+
+## Antrea Multi-cluster NetworkPolicy
+
+At this moment, Antrea does not support Pod-level policy enforcement for
+cross-cluster traffic. Access towards multi-cluster Services can be regulated
+with Antrea ClusterNetworkPolicy `toService` rules. In each member cluster,
+users can create an Antrea ClusterNetworkPolicy selecting Pods in that cluster,
+with the imported Multi-cluster Service name and Namespace in an egress
+`toService` rule, and the Action to take for traffic matching this rule.
+For more information regarding Antrea ClusterNetworkPolicy (ACNP), refer
+to [this document](../antrea-network-policy.md).
+
+Multi-cluster admins can also specify certain ClusterNetworkPolicies to be
+replicated across the entire ClusterSet. The ACNP to be replicated should
+be created as a ResourceExport in the leader cluster, and the resource
+export/import pipeline will ensure member clusters receive this ACNP spec
+to be replicated. Each member cluster's Multi-cluster Controller will then
+create an ACNP in their respective clusters.
+
+## Antrea Traffic Modes
+
+Multi-cluster Gateway supports all of `encap`, `noEncap`, `hybrid`, and
+`networkPolicyOnly` modes. In all supported modes, the cross-cluster traffic
+is routed by Multi-cluster Gateways of member clusters, and the traffic goes
+through Antrea overlay tunnels between Gateways. In `noEncap`, `hybrid`, and
+`networkPolicyOnly` modes, even when in-cluster Pod traffic does not go through
+tunnels, antrea-agent still creates tunnels between the Gateway Node and other
+Nodes, and routes cross-cluster traffic to reach the Gateway through the tunnels.
+Specially for [`networkPolicyOnly` mode](../design/policy-only.md), Antrea only
+handles multi-cluster traffic routing, while the primary CNI takes care of in-cluster
+traffic routing.
diff --git a/content/docs/v1.15.0/docs/multicluster/assets/basic-topology.svg b/content/docs/v1.15.0/docs/multicluster/assets/basic-topology.svg
new file mode 100644
index 00000000..d6f5f08e
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/assets/basic-topology.svg
@@ -0,0 +1,548 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/multicluster/assets/mc-gateway.svg b/content/docs/v1.15.0/docs/multicluster/assets/mc-gateway.svg
new file mode 100644
index 00000000..20dd6985
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/assets/mc-gateway.svg
@@ -0,0 +1,1026 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/multicluster/assets/resource-export-import-pipeline.svg b/content/docs/v1.15.0/docs/multicluster/assets/resource-export-import-pipeline.svg
new file mode 100644
index 00000000..26c3450d
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/assets/resource-export-import-pipeline.svg
@@ -0,0 +1,665 @@
+
+
+
+
diff --git a/content/docs/v1.15.0/docs/multicluster/assets/sample-clusterset.svg b/content/docs/v1.15.0/docs/multicluster/assets/sample-clusterset.svg
new file mode 100644
index 00000000..5418bd09
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/assets/sample-clusterset.svg
@@ -0,0 +1,565 @@
+
+
diff --git a/content/docs/v1.15.0/docs/multicluster/policy-only-mode.md b/content/docs/v1.15.0/docs/multicluster/policy-only-mode.md
new file mode 100644
index 00000000..2bcb502e
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/policy-only-mode.md
@@ -0,0 +1,82 @@
+# Antrea Multi-cluster with NetworkPolicy Only Mode
+
+Multi-cluster Gateway works with Antrea `networkPolicyOnly` mode, in which
+cross-cluster traffic is routed by Multi-cluster Gateways of member clusters,
+and the traffic goes through Antrea overlay tunnels between Gateways and local
+cluster Pods. Pod traffic within a cluster is still handled by the primary CNI,
+not Antrea.
+
+## Deploying Antrea in `networkPolicyOnly` mode with Multi-cluster feature
+
+This section describes steps to deploy Antrea in `networkPolicyOnly` mode
+with the Multi-cluster feature enabled on an EKS cluster.
+
+You can follow [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html)
+to create an EKS cluster, and follow the [Antrea EKS installation guide](../eks-installation.md)
+to deploy Antrea to an EKS cluster. Please note there are a few changes required
+by Antrea Multi-cluster. You should set the following configuration parameters in
+`antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster`
+feature and Antrea Multi-cluster Gateway:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: "" # Change to the Namespace where antrea-mc-controller is deployed.
+```
+
+Repeat the same steps to deploy Antrea for all member clusters in a ClusterSet.
+Besides the Antrea deployment, you also need to deploy Antrea Multi-cluster Controller
+in each member cluster. Make sure the Service CIDRs (ClusterIP ranges) do not overlap
+among the member clusters. Please refer to [the quick start guide](./quick-start.md)
+or [the user guide](./user-guide.md) to learn more about how to configure
+a ClusterSet.
+
+## Connectivity between Clusters
+
+When EKS clusters of a ClusterSet are in different VPCs, you may need to enable connectivity
+between VPCs to support Multi-cluster traffic. You can check the following steps to set up VPC
+connectivity for a ClusterSet.
+
+In the following descriptions, we take a ClusterSet with two member clusters in two VPCs as
+an example to describe the VPC configuration.
+
+| Cluster ID | PodCIDR | Gateway IP |
+| ------------ | ------------- | ------------ |
+| west-cluster | 110.13.0.0/16 | 110.13.26.12 |
+| east-cluster | 110.14.0.0/16 | 110.14.18.50 |
+
+### VPC Peering Configuration
+
+When the Gateway Nodes do not have public IPs, you may create a VPC peering connection between
+the two VPCs for the Gateways to reach each other. You can follow the
+[AWS documentation](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) to
+configure VPC peering.
+
+You also need to add a route to the route tables of the Gateway Nodes' subnets, to enable
+routing across the peering connection. For `west-cluster`, the route should have `east-cluster`'s
+Pod CIDR: `110.14.0.0/16` to be the destination, and the peering connection to be the target;
+for `east-cluster`, the route should have `west-cluster`'s Pod CIDR: `110.13.0.0/16` to be the
+destination. To learn more about VPC peering routes, please refer to the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-routing.html).
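+
+For example, with the AWS CLI, the route for `west-cluster` might be added as
+follows (the route table and peering connection IDs are placeholders):
+
+```bash
+aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
+  --destination-cidr-block 110.14.0.0/16 \
+  --vpc-peering-connection-id pcx-0123456789abcdef0
+```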
+
+### Security Groups
+
+AWS security groups may need to be configured to allow tunnel traffic to Multi-cluster Gateways,
+especially when the member clusters are in different VPCs. EKS should have already created a
+security group for each cluster, which should have a description like "EKS created security group
+applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads.".
+You can add a new rule to the security group for Gateway traffic. For `west-cluster`, add an inbound
+rule with source to be `east-cluster`'s Gateway IP `110.14.18.50/32`; for `east-cluster`, the source
+should be `west-cluster`'s Gateway IP `110.13.26.12/32`.
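+
+For example, with the AWS CLI, the inbound rule for `west-cluster` might be
+added as follows (the security group ID is a placeholder; UDP port 6081 assumes
+the default Geneve tunnel type):
+
+```bash
+aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
+  --protocol udp --port 6081 --cidr 110.14.18.50/32
+```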
+
+By default, Multi-cluster Gateway IP should be the `InternalIP` of the Gateway Node, but you may
+configure Antrea Multi-cluster to use the Node `ExternalIP`. Please use the right Node IP address
+as the Gateway IP in the security group rule.
diff --git a/content/docs/v1.15.0/docs/multicluster/quick-start.md b/content/docs/v1.15.0/docs/multicluster/quick-start.md
new file mode 100644
index 00000000..035de57c
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/quick-start.md
@@ -0,0 +1,326 @@
+# Antrea Multi-cluster Quick Start
+
+In this quick start guide, we will set up an Antrea Multi-cluster ClusterSet
+with two clusters. One cluster will serve as the leader of the ClusterSet, and
+at the same time join as a member cluster; another cluster will be a member only.
+Antrea Multi-cluster supports two types of IP addresses as multi-cluster
+Service endpoints - exported Services' ClusterIPs or backend Pod IPs.
+We use the default `ClusterIP` endpoint type for multi-cluster Services
+in this guide.
+
+The diagram below shows the two clusters and the ClusterSet to be created (for
+simplicity, the diagram just shows two Nodes for each cluster).
+
+{{< img src="assets/sample-clusterset.svg" width="800" alt="Antrea Multi-cluster Example ClusterSet" >}}
+
+## Preparation
+
+We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea
+version is set to an environment variable `TAG`. For example, the following
+command sets the Antrea version to `v1.8.0`.
+
+```bash
+export TAG=v1.8.0
+```
+
+To use the latest version of Antrea Multi-cluster from the Antrea main branch,
+you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/`
+when applying or downloading an Antrea YAML manifest.
+
+Antrea must be deployed in both cluster A and cluster B, and the `Multicluster`
+feature of `antrea-agent` must be enabled to support multi-cluster Services. As we
+use `ClusterIP` endpoint type for multi-cluster Services, an Antrea Multi-cluster
+Gateway needs to be set up in each member cluster to route Service traffic across clusters,
+and two clusters **must have non-overlapping Service CIDRs**. Set the following
+configuration parameters in `antrea-agent.conf` of the Antrea deployment
+manifest to enable the `Multicluster` feature:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: ""
+```
+
+At the moment, Multi-cluster Gateway only works with the Antrea `encap` traffic
+mode, and all member clusters in a ClusterSet must use the same tunnel type.
+
+## Steps with antctl
+
+`antctl` provides a couple of commands to facilitate deployment, configuration,
+and troubleshooting of Antrea Multi-cluster. This section describes the steps
+to deploy Antrea Multi-cluster and set up the example ClusterSet using `antctl`.
+A [further section](#steps-with-yaml-manifests) will describe the steps to
+achieve the same using YAML manifests.
+
+To execute any command in this section, `antctl` needs access to the target
+cluster's API server, and it needs a kubeconfig file for that. Please refer to
+the [`antctl` Multi-cluster manual](antctl.md) to learn more about the
+kubeconfig file configuration, and the `antctl` Multi-cluster commands. For
+installation of `antctl`, please refer to the [installation guide](../antctl.md#installation).
+
+### Set up Leader and Member in Cluster A
+
+#### Step 1 - deploy Antrea Multi-cluster Controllers for leader and member
+
+Run the following commands to deploy Multi-cluster Controller for the leader
+into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be
+created by the commands), and Multi-cluster Controller for the member into
+Namespace `kube-system`.
+
+```bash
+kubectl create ns antrea-multicluster
+antctl mc deploy leadercluster -n antrea-multicluster --antrea-version $TAG
+antctl mc deploy membercluster -n kube-system --antrea-version $TAG
+```
+
+You can run the following command to verify that the leader and member
+`antrea-mc-controller` Pods are deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+antrea-multicluster pod/antrea-mc-controller-cd7bf8f68-kh4kz 1/1 Running 0 50s
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 48s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+antrea-multicluster deployment.apps/antrea-mc-controller 1/1 1 1 50s
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 48s
+```
+
+#### Step 2 - initialize ClusterSet
+
+Run the following commands to create a ClusterSet with cluster A to be the
+leader, and also join the ClusterSet as a member.
+
+```bash
+antctl mc init --clusterset test-clusterset --clusterid test-cluster-leader -n antrea-multicluster --create-token -j join-config.yml
+antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml
+```
+
+The above `antctl mc init` command creates a default token (with the
+`--create-token` flag) for member clusters to join the ClusterSet and
+authenticate to the leader cluster API server, and the command saves the token
+Secret manifest and other ClusterSet join arguments to file `join-config.yml`
+(specified with the `-j` option), which can be provided to the `antctl mc join`
+command (with the `--config-file` option) to join the ClusterSet with these
+arguments. If you want to use a separate token for each member cluster for
+security considerations, you can run the following commands to create a token
+and use the token (together with the previously generated configuration file
+`join-config.yml`) to join the ClusterSet:
+
+```bash
+antctl mc create membertoken test-cluster-leader-token -n antrea-multicluster -o test-cluster-leader-token.yml
+antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml --token-secret-file test-cluster-leader-token.yml
+```
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Last, you need to choose at least one Node in cluster A to serve as the
+Multi-cluster Gateway. The Node should have an IP that is reachable from the
+cluster B's Gateway Node, so a tunnel can be created between the two Gateways.
+For more information about Multi-cluster Gateway, please refer to the
+[Multi-cluster User Guide](user-guide.md#multi-cluster-gateway-configuration).
+
+Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run
+the following command to annotate the Node with:
+`multicluster.antrea.io/gateway=true` (so Antrea can know it is the Gateway
+Node from the annotation):
+
+```bash
+kubectl annotate node node-a1 multicluster.antrea.io/gateway=true
+```
+
+### Set up Cluster B
+
+Let us switch to cluster B. All the `kubectl` and `antctl` commands in the
+following steps should be run with the `kubeconfig` for cluster B.
+
+#### Step 1 - deploy Antrea Multi-cluster Controller for member
+
+Run the following command to deploy the member Multi-cluster Controller into
+Namespace `kube-system`.
+
+```bash
+antctl mc deploy membercluster -n kube-system --antrea-version $TAG
+```
+
+You can run the following command to verify the `antrea-mc-controller` Pod is
+deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s
+```
+
+#### Step 2 - join ClusterSet
+
+Run the following command to make cluster B join the ClusterSet:
+
+```bash
+antctl mc join --clusterid test-cluster-member -n kube-system --config-file join-config.yml
+```
+
+`join-config.yml` is generated when creating the ClusterSet in cluster A. Again,
+you can also run the `antctl mc create membertoken` in the leader cluster
+(cluster A) to create a separate token for cluster B, and join using that token,
+rather than the default token in `join-config.yml`.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster
+B, run the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-b1 multicluster.antrea.io/gateway=true
+```
+
+## What is Next
+
+So far, we set up an Antrea Multi-cluster ClusterSet with two clusters following
+the above sections of this guide. Next, you can start to consume the Antrea
+Multi-cluster features with the ClusterSet, including [Multi-cluster Services](user-guide.md#multi-cluster-service),
+[Multi-cluster NetworkPolicy](user-guide.md#multi-cluster-networkpolicy), and
+[ClusterNetworkPolicy replication](user-guide.md#clusternetworkpolicy-replication).
+Please check the relevant Antrea Multi-cluster User Guide sections to learn more.
+
+If you want to add a new member cluster to your ClusterSet, you can follow the
+steps for cluster B to do so. For example, you can run the following command to
+join the ClusterSet in a member cluster with ID `test-cluster-member2`:
+
+```bash
+antctl mc join --clusterid test-cluster-member2 -n kube-system --config-file join-config.yml
+```
+
+## Steps with YAML Manifests
+
+### Set up Leader and Member in Cluster A
+
+#### Step 1 - deploy Antrea Multi-cluster Controllers for leader and member
+
+Run the following commands to deploy Multi-cluster Controller for the leader
+into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be
+created by the commands), and Multi-cluster Controller for the member into
+Namespace `kube-system`.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-global.yml
+kubectl create ns antrea-multicluster
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+#### Step 2 - initialize ClusterSet
+
+Antrea provides several template YAML manifests to set up a ClusterSet more quickly.
+You can run the following commands that use the template manifests to create a
+ClusterSet named `test-clusterset` in the leader cluster and a default token
+for the member clusters (both cluster A and B in our case) to join the
+ClusterSet.
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-clusterset-template.yml
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-access-token-template.yml
+kubectl get secret default-member-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > default-member-token.yml
+```
+
+The last command saves the token Secret manifest to `default-member-token.yml`,
+which will be needed for member clusters to join the ClusterSet. Note, in this
+example, we use a shared token for all member clusters. If you want to use a
+separate token for each member cluster for security considerations, you can
+follow the instructions in the [Multi-cluster User Guide](user-guide.md#set-up-access-to-leader-cluster).
+
+Next, run the following commands to make cluster A join the ClusterSet also as a
+member:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/test-cluster-member/test-cluster-leader/g' -e 's/<LEADER_CLUSTER_IP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f -
+```
+
+Here, `172.10.0.11` is the `kube-apiserver` IP of cluster A. You should replace
+it with the `kube-apiserver` IP of your leader cluster.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run
+the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-a1 multicluster.antrea.io/gateway=true
+```
+
+### Set up Cluster B
+
+Let us switch to cluster B. All the `kubectl` commands in the following steps
+should be run with the `kubeconfig` for cluster B.
+
+#### Step 1 - deploy Antrea Multi-cluster Controller for member
+
+Run the following command to deploy the member Multi-cluster Controller into
+Namespace `kube-system`.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+You can run the following command to verify the `antrea-mc-controller` Pod is
+deployed and running:
+
+```bash
+$ kubectl get all -A -l="component=antrea-mc-controller"
+NAMESPACE NAME READY STATUS RESTARTS AGE
+kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s
+
+NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
+kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s
+```
+
+#### Step 2 - join ClusterSet
+
+Run the following commands to make cluster B join the ClusterSet:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/<LEADER_APISERVER_IP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f -
+```
+
+`default-member-token.yml` contains the default member token that was generated
+when initializing the ClusterSet in cluster A.
+
+#### Step 3 - specify Multi-cluster Gateway Node
+
+Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster
+B, run the following command to annotate the Node:
+
+```bash
+kubectl annotate node node-b1 multicluster.antrea.io/gateway=true
+```
+
+### Add new member clusters
+
+If you want to add a new member cluster to your ClusterSet, you can follow the
+steps for cluster B to do so. Remember to update the member cluster ID
+(`spec.clusterID`) in `member-clusterset-template.yml` to the new member
+cluster's ID in step 2 of joining the ClusterSet. For example, you can run the
+following commands to join the ClusterSet in a member cluster with ID `test-cluster-member2`:
+
+```bash
+kubectl apply -f default-member-token.yml
+curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml
+sed -e 's/<LEADER_APISERVER_IP>/172.10.0.11/g' -e 's/test-cluster-member/test-cluster-member2/g' member-clusterset.yml | kubectl apply -f -
+```
diff --git a/content/docs/v1.15.0/docs/multicluster/upgrade.md b/content/docs/v1.15.0/docs/multicluster/upgrade.md
new file mode 100644
index 00000000..f9229490
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/upgrade.md
@@ -0,0 +1,128 @@
+# Antrea Multi-cluster Upgrade Guide
+
+The Antrea Multi-cluster feature was introduced in v1.5.0. There have been no
+data-plane related changes since release v1.5.0, so the Antrea deployment and the
+Antrea Multi-cluster deployment are independent. However, we suggest keeping
+Antrea and Antrea Multi-cluster at the same version, considering that data-plane
+changes may be involved in the future.
+Please refer to [Antrea upgrade and supported version skew](../versioning.md#antrea-upgrade-and-supported-version-skew)
+to learn the requirements for Antrea upgrades. This doc focuses on the Multi-cluster
+deployment only.
+
+The goal is to support 'graceful' upgrade. A Multi-cluster upgrade will not disrupt
+the data-plane of member clusters, but there can be downtime in processing new
+configurations when individual components restart:
+
+- During Leader Controller restart, a new member cluster, ClusterSet or ResourceExport will
+ not be processed. This is because the Controller also runs the validation webhooks for
+ MemberClusterAnnounce, ClusterSet and ResourceExport.
+- During Member Controller restart, a new ClusterSet will not be processed, because
+  the Controller runs the validation webhook for ClusterSet.
+
+Our goal is to support version skew for different Antrea Multi-cluster components,
+but the Multi-cluster feature is still in Alpha, and the API is not stable yet. Our
+recommendation is to always upgrade Antrea Multi-cluster to the same version across
+a ClusterSet.
+
+- **Antrea Leader Controller**: must be upgraded first.
+- **Antrea Member Controller**: must be the same version as the **Antrea Leader Controller**.
+- **Antctl**: must not be newer than the **Antrea Leader/Member Controller**. Please
+  note that antctl support for Multi-cluster was added in v1.6.0.
+
+## Upgrade in one ClusterSet
+
+In one ClusterSet, we recommend that all member and leader clusters be deployed
+with the same version. During Leader Controller upgrade, resource export/import
+between member clusters is not supported. Before all member clusters are upgraded
+to the same version as the Leader Controller, features introduced in the old
+version should still work across clusters, but there is no guarantee for features
+in the new version.
+
+The upgrade should have no impact on imported resources such as Services,
+Endpoints, or AntreaClusterNetworkPolicies.
+
+## Upgrade from a version prior to v1.13
+
+Prior to Antrea v1.13, the `ClusterClaim` CRD is used to define both the local Cluster ID and
+the ClusterSet ID. Since Antrea v1.13, the `ClusterClaim` CRD is removed, and the `ClusterSet`
+CRD solely defines a ClusterSet. The name of a `ClusterSet` CR must match the ClusterSet ID,
+and a new `clusterID` field specifies the local Cluster ID.
+
+After upgrading Antrea Multi-cluster Controller from a version older than v1.13, the new version
+Multi-cluster Controller can still recognize and work with the old version `ClusterClaim` and
+`ClusterSet` CRs. However, we still suggest updating the `ClusterSet` CR to the new version after
+upgrading Multi-cluster Controller. You just need to update the existing `ClusterSet`
+CR and add the right `clusterID` to the spec, as in the following example:
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset # This value must match the ClusterSet ID.
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-north # The newly added field since v1.13.
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-north-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+You may also delete the `ClusterClaim` CRD after the upgrade, and then all existing `ClusterClaim`
+CRs will be removed automatically after the CRD is deleted.
+
+```bash
+kubectl delete crds clusterclaims.multicluster.crd.antrea.io
+```
+
+## APIs deprecation policy
+
+The Antrea Multi-cluster APIs are built using K8s CustomResourceDefinitions and we
+follow the same versioning scheme as the K8s APIs and the same [deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+
+Other than the most recent API versions in each track, older API versions must be
+supported after their announced deprecation for a duration of no less than:
+
+- GA: 12 months
+- Beta: 9 months
+- Alpha: N/A (can be removed immediately)
+
+K8s has a [moratorium](https://github.com/kubernetes/kubernetes/issues/52185) on the
+removal of API object versions that have been persisted to storage. We adopt the
+following rules for CustomResources that are persisted by the K8s apiserver.
+
+- Alpha API versions may be removed at any time.
+- The [`deprecated` field](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation) must be used for CRDs to indicate that a particular version of
+ the resource has been deprecated.
+- Beta and GA API versions must be supported after deprecation for the respective
+ durations stipulated above before they can be removed.
+- For deprecated Beta and GA API versions, a [conversion webhook](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) must be provided along with
+ each Antrea release, until the API version is removed altogether.
+
+## Supported K8s versions
+
+Please refer to [Supported K8s versions](../versioning.md#supported-k8s-versions)
+to learn the details.
+
+## Feature list
+
+The following is the Antrea Multi-cluster feature list. For the details of each
+feature, please refer to [Antrea Multi-cluster Architecture](./architecture.md).
+
+| Feature | Supported in |
+| -------------------------------- | ------------ |
+| Service Export/Import | v1.5.0 |
+| ClusterNetworkPolicy Replication | v1.6.0 |
+
+## Known Issues
+
+When you directly apply a newer Antrea Multi-cluster YAML manifest, as
+provided with [an Antrea release](https://github.com/antrea-io/antrea/releases), you will
+likely encounter an issue like the one below when upgrading Multi-cluster components
+from v1.5.0 to a newer version:
+
+```log
+label issue:The Deployment "antrea-mc-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"antrea", "component":"antrea-mc-controller"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
+```
+
+The issue is caused by the label change introduced by [PR3266](https://github.com/antrea-io/antrea/pull/3266):
+mutation of label selectors on Deployments is not allowed in `apps/v1beta2`
+and later. You need to delete the `antrea-mc-controller` Deployment first, then run
+`kubectl apply -f` with the manifest of the newer version.
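+
+A minimal sketch of the workaround, assuming the controller is deployed in the
+`antrea-multicluster` Namespace of the leader cluster (adjust the Namespace and
+manifest to your deployment):
+
+```bash
+# Delete the old Deployment first, since its label selector cannot be mutated.
+kubectl delete deployment antrea-mc-controller -n antrea-multicluster
+# Then apply the manifest of the newer version.
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+```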
diff --git a/content/docs/v1.15.0/docs/multicluster/user-guide.md b/content/docs/v1.15.0/docs/multicluster/user-guide.md
new file mode 100644
index 00000000..6673f6ec
--- /dev/null
+++ b/content/docs/v1.15.0/docs/multicluster/user-guide.md
@@ -0,0 +1,910 @@
+# Antrea Multi-cluster User Guide
+
+## Table of Contents
+
+
+- [Quick Start](#quick-start)
+- [Installation](#installation)
+ - [Preparation](#preparation)
+ - [Deploy Antrea Multi-cluster Controller](#deploy-antrea-multi-cluster-controller)
+ - [Deploy in a Dedicated Leader Cluster](#deploy-in-a-dedicated-leader-cluster)
+ - [Deploy in a Member Cluster](#deploy-in-a-member-cluster)
+ - [Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster)
+ - [Create ClusterSet](#create-clusterset)
+ - [Set up Access to Leader Cluster](#set-up-access-to-leader-cluster)
+ - [Initialize ClusterSet](#initialize-clusterset)
+ - [Initialize ClusterSet for a Dual-role Cluster](#initialize-clusterset-for-a-dual-role-cluster)
+- [Multi-cluster Gateway Configuration](#multi-cluster-gateway-configuration)
+ - [Multi-cluster WireGuard Encryption](#multi-cluster-wireguard-encryption)
+- [Multi-cluster Service](#multi-cluster-service)
+- [Multi-cluster Pod-to-Pod Connectivity](#multi-cluster-pod-to-pod-connectivity)
+- [Multi-cluster NetworkPolicy](#multi-cluster-networkpolicy)
+ - [Egress Rule to Multi-cluster Service](#egress-rule-to-multi-cluster-service)
+ - [Ingress Rule](#ingress-rule)
+- [ClusterNetworkPolicy Replication](#clusternetworkpolicy-replication)
+- [Build Antrea Multi-cluster Controller Image](#build-antrea-multi-cluster-controller-image)
+- [Uninstallation](#uninstallation)
+ - [Remove a Member Cluster](#remove-a-member-cluster)
+ - [Remove a Leader Cluster](#remove-a-leader-cluster)
+- [Known Issue](#known-issue)
+
+
+Antrea Multi-cluster implements [Multi-cluster Service API](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api),
+which allows users to create multi-cluster Services that can be accessed cross
+clusters in a ClusterSet. Antrea Multi-cluster also extends Antrea native
+NetworkPolicy to support Multi-cluster NetworkPolicy rules that apply to
+cross-cluster traffic, and ClusterNetworkPolicy replication that allows a
+ClusterSet admin to create ClusterNetworkPolicies which are replicated across
+the entire ClusterSet and enforced in all member clusters. Antrea Multi-cluster
+was first introduced in Antrea v1.5.0. In Antrea v1.7.0, the Multi-cluster
+Gateway feature was added that supports routing multi-cluster Service traffic
+through tunnels among clusters. The ClusterNetworkPolicy replication feature is
+supported since Antrea v1.6.0, and Multi-cluster NetworkPolicy rules are
+supported since Antrea v1.10.0.
+
+Antrea v1.13 promoted the ClusterSet CRD version from v1alpha1 to v1alpha2. If you
+plan to upgrade from a previous version to v1.13 or later, please check
+the [upgrade guide](./upgrade.md#upgrade-from-a-version-prior-to-v113).
+
+## Quick Start
+
+Please refer to the [Quick Start Guide](quick-start.md) to learn how to build a
+ClusterSet with two clusters quickly.
+
+## Installation
+
+In this guide, all Multi-cluster installation and ClusterSet configuration are
+done by applying Antrea Multi-cluster YAML manifests. All operations can also be
+done with `antctl` Multi-cluster commands, which may be more convenient in many
+cases. You can refer to the [Quick Start Guide](quick-start.md)
+and [antctl Guide](antctl.md) to learn how to use the Multi-cluster commands.
+
+### Preparation
+
+We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea
+version is set to an environment variable `TAG`. For example, the following
+command sets the Antrea version to `v1.8.0`.
+
+```bash
+export TAG=v1.8.0
+```
+
+To use the latest version of Antrea Multi-cluster from the Antrea main branch,
+you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/`
+when applying or downloading an Antrea YAML manifest.
+
+[Multi-cluster Services](#multi-cluster-service) and
+[multi-cluster Pod-to-Pod connectivity](#multi-cluster-pod-to-pod-connectivity),
+in most configurations (please check the corresponding sections for more
+information), require an Antrea Multi-cluster Gateway to be set up in each member
+cluster by default to route Service and Pod traffic across clusters. To support
+Multi-cluster Gateways, `antrea-agent` must be deployed with the `Multicluster`
+feature enabled in a member cluster. You can set the following configuration parameters
+in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster`
+feature:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ namespace: "" # Change to the Namespace where antrea-mc-controller is deployed.
+```
+
+In order for Multi-cluster features to work, it is necessary for `enableGateway` to be set to true by
+the user, except when Pod-to-Pod direct connectivity already exists (e.g., provided by the cloud provider)
+and `endpointIPType` is configured as `PodIP`. Details can be found in [Multi-cluster Services](#multi-cluster-service).
+Please note that [Multi-cluster NetworkPolicy](#multi-cluster-networkpolicy) always requires
+a Gateway.
+
+Prior to Antrea v1.11.0, Multi-cluster Gateway only works with Antrea `encap` traffic
+mode, and all member clusters in a ClusterSet must use the same tunnel type. Since
+Antrea v1.11.0, Multi-cluster Gateway also works with the Antrea `noEncap`, `hybrid`
+and `networkPolicyOnly` modes. For `noEncap` and `hybrid` modes, Antrea Multi-cluster
+deployment is the same as `encap` mode. For `networkPolicyOnly` mode, we need extra
+Antrea configuration changes to support Multi-cluster Gateway. Please check
+[the deployment guide](./policy-only-mode.md) for more information. When using
+Multi-cluster Gateway, it is not possible to enable WireGuard for inter-Node
+traffic within the same member cluster. It is however possible to [enable
+WireGuard for cross-cluster traffic](#multi-cluster-wireguard-encryption)
+between member clusters.
+
+### Deploy Antrea Multi-cluster Controller
+
+A Multi-cluster ClusterSet is comprised of a single leader cluster and at least
+two member clusters. Antrea Multi-cluster Controller needs to be deployed in the
+leader and all member clusters. A cluster can serve as the leader, and meanwhile
+also be a member cluster of the ClusterSet. To deploy Multi-cluster Controller
+in a dedicated leader cluster, please refer to [Deploy in a Dedicated Leader
+cluster](#deploy-in-a-dedicated-leader-cluster). To deploy Multi-cluster
+Controller in a member cluster, please refer to [Deploy in a Member Cluster](#deploy-in-a-member-cluster).
+To deploy Multi-cluster Controller in a dual-role cluster, please refer to
+[Deploy Leader and Member in One Cluster](#deploy-leader-and-member-in-one-cluster).
+
+#### Deploy in a Dedicated Leader Cluster
+
+Since Antrea v1.14.0, you can run the following command to install Multi-cluster Controller
+in the leader cluster. Multi-cluster Controller is deployed into a Namespace. You must
+create the Namespace first, and then apply the deployment manifest in the Namespace.
+
+For a version older than v1.14, please check the user guide document of the version:
+`https://github.com/antrea-io/antrea/blob/release-$version/docs/multicluster/user-guide.md`,
+where `$version` can be `1.12`, `1.13` etc.
+
+ ```bash
+ kubectl create ns antrea-multicluster
+ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+ ```
+
+The Multi-cluster Controller in the leader cluster will be deployed in Namespace `antrea-multicluster`
+by default. If you'd like to use another Namespace, you can change `antrea-multicluster` to the desired
+Namespace in `antrea-multicluster-leader-namespaced.yml`, for example:
+
+```bash
+kubectl create ns '<NAMESPACE>'
+curl -L https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml > antrea-multicluster-leader-namespaced.yml
+sed 's/antrea-multicluster/<NAMESPACE>/g' antrea-multicluster-leader-namespaced.yml | kubectl apply -f -
+```
+
+#### Deploy in a Member Cluster
+
+You can run the following command to install Multi-cluster Controller in a
+member cluster. The command will run the controller in the "member" mode in the
+`kube-system` Namespace. If you want to use a different Namespace other than
+`kube-system`, you can edit `antrea-multicluster-member.yml` and change
+`kube-system` to the desired Namespace.
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+#### Deploy Leader and Member in One Cluster
+
+We need to run two instances of Multi-cluster Controller in the dual-role
+cluster, one in leader mode and another in member mode.
+
+1. Follow the steps in section [Deploy in a Dedicated Leader Cluster](#deploy-in-a-dedicated-leader-cluster)
+ to deploy the leader controller and import the Multi-cluster CRDs.
+2. Follow the steps in section [Deploy in a Member Cluster](#deploy-in-a-member-cluster)
+ to deploy the member controller.
+
+### Create ClusterSet
+
+An Antrea Multi-cluster ClusterSet should include at least one leader cluster
+and two member clusters. As an example, in the following sections we will create
+a ClusterSet `test-clusterset` which has two member clusters with cluster ID
+`test-cluster-east` and `test-cluster-west` respectively, and one leader cluster
+with ID `test-cluster-north`. Please note that the name of a ClusterSet CR must
+match the ClusterSet ID. In all the member and leader clusters of a ClusterSet,
+the ClusterSet CR must use the ClusterSet ID as the name, e.g. `test-clusterset`
+in the example of this guide.
+
+#### Set up Access to Leader Cluster
+
+We first need to set up access to the leader cluster's API server for all member
+clusters. We recommend creating one ServiceAccount for each member for
+fine-grained access control.
+
+The Multi-cluster Controller deployment manifest for a leader cluster also creates
+a default member cluster token. If you prefer to use the default token, you can skip
+step 1 and replace the Secret name `member-east-token` with the default token Secret
+`antrea-mc-member-access-token` in step 2.
+
+1. Apply the following YAML manifest in the leader cluster to set up access for
+ `test-cluster-east`:
+
+ ```yml
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: member-east
+ namespace: antrea-multicluster
+ ---
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: member-east-token
+ namespace: antrea-multicluster
+ annotations:
+ kubernetes.io/service-account.name: member-east
+ type: kubernetes.io/service-account-token
+ ---
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: RoleBinding
+ metadata:
+ name: member-east
+ namespace: antrea-multicluster
+ roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: antrea-mc-member-cluster-role
+ subjects:
+ - kind: ServiceAccount
+ name: member-east
+ namespace: antrea-multicluster
+ ```
+
+2. Generate the token Secret manifest from the leader cluster, and create a
+ Secret with the manifest in member cluster `test-cluster-east`, e.g.:
+
+ ```bash
+ # Generate the file 'member-east-token.yml' from your leader cluster
+ kubectl get secret member-east-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > member-east-token.yml
+ # Apply 'member-east-token.yml' to the member cluster.
+ kubectl apply -f member-east-token.yml --kubeconfig=/path/to/kubeconfig-of-member-test-cluster-east
+ ```
+
+3. Replace all `east` with `west` and repeat steps 1 and 2 for the other member
+   cluster `test-cluster-west`.
+
+#### Initialize ClusterSet
+
+In all clusters, a `ClusterSet` CR must be created to define the ClusterSet and claim
+that the cluster is a member of the ClusterSet.
+
+- Create `ClusterSet` in the leader cluster `test-cluster-north` with the following YAML
+ manifest (you can also refer to [leader-clusterset-template.yml](../../multicluster/config/samples/clusterset_init/leader-clusterset-template.yml)):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: antrea-multicluster
+spec:
+ clusterID: test-cluster-north
+ leaders:
+ - clusterID: test-cluster-north
+```
+
+- Create `ClusterSet` in member cluster `test-cluster-east` with the following
+YAML manifest (you can also refer to [member-clusterset-template.yml](../../multicluster/config/samples/clusterset_init/member-clusterset-template.yml)):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-east
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-east-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+Note: update `server: "https://172.18.0.1:6443"` in the `ClusterSet` spec to the
+correct leader cluster API server address.
+
+- Create `ClusterSet` in member cluster `test-cluster-west`:
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-west
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-west-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+#### Initialize ClusterSet for a Dual-role Cluster
+
+If you want to make the leader cluster `test-cluster-north` also a member
+cluster of the ClusterSet, make sure you follow the steps in [Deploy Leader and
+Member in One Cluster](#deploy-leader-and-member-in-one-cluster) and repeat the
+steps in [Set up Access to Leader Cluster](#set-up-access-to-leader-cluster) as
+well (don't forget to replace all `east` with `north` when you repeat the steps).
+
+Then create the `ClusterSet` CR in cluster `test-cluster-north` in the
+`kube-system` Namespace (where the member Multi-cluster Controller runs):
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha2
+kind: ClusterSet
+metadata:
+ name: test-clusterset
+ namespace: kube-system
+spec:
+ clusterID: test-cluster-north
+ leaders:
+ - clusterID: test-cluster-north
+ secret: "member-north-token"
+ server: "https://172.18.0.1:6443"
+ namespace: antrea-multicluster
+```
+
+## Multi-cluster Gateway Configuration
+
+Multi-cluster Gateways are responsible for establishing tunnels between clusters.
+Each member cluster should have one Node serving as its Multi-cluster Gateway.
+Multi-cluster Service traffic is routed among clusters through the tunnels between
+Gateways.
+
+The table below summarizes communication support for different configurations.
+
+| Pod-to-Pod connectivity provided by underlay | Gateway Enabled | MC EndpointTypes | Cross-cluster Service/Pod communications |
+| -------------------------------------------- | --------------- | ----------------- | ---------------------------------------- |
+| No | No | N/A | No |
+| Yes | No | PodIP | Yes |
+| No | Yes | PodIP/ClusterIP | Yes |
+| Yes | Yes | PodIP/ClusterIP | Yes |
+
+After a member cluster joins a ClusterSet, and the `Multicluster` feature is
+enabled on `antrea-agent`, you can select a Node of the cluster to serve as
+the Multi-cluster Gateway by adding an annotation:
+`multicluster.antrea.io/gateway=true` to the K8s Node. For example, you can run
+the following command to annotate Node `node-1` as the Multi-cluster Gateway:
+
+```bash
+kubectl annotate node node-1 multicluster.antrea.io/gateway=true
+```
+
+You can annotate multiple Nodes in a member cluster as the candidates for
+Multi-cluster Gateway, but only one Node will be selected as the active Gateway.
+Before Antrea v1.9.0, the Gateway Node is randomly selected and will never
+change unless the Node or its `gateway` annotation is deleted. Starting with
+Antrea v1.9.0, Antrea Multi-cluster Controller will guarantee a "ready" Node
+is selected as the Gateway, and when the current Gateway Node's status changes
+to not "ready", Antrea will try selecting another "ready" Node from the
+candidate Nodes to be the Gateway.
+
+Once a Gateway Node is selected, Multi-cluster Controller in the member cluster
+will create a `Gateway` CR with the same name as the Node. You can check it with
+the following command:
+
+```bash
+$ kubectl get gateway -n kube-system
+NAME GATEWAY IP INTERNAL IP AGE
+node-1 10.17.27.55 10.17.27.55 10s
+```
+
+`internalIP` of the Gateway is used for the tunnels between the Gateway Node and
+other Nodes in the local cluster, while `gatewayIP` is used for the tunnels to
+remote Gateways of other member clusters. Multi-cluster Controller discovers the
+IP addresses from the K8s Node resource of the Gateway Node. It will always use
+`InternalIP` of the K8s Node as the Gateway's `internalIP`. For `gatewayIP`,
+there are several possibilities:
+
+* By default, the K8s Node's `InternalIP` is used as `gatewayIP` too.
+* You can choose to use the K8s Node's `ExternalIP` as `gatewayIP`, by setting
+the configuration option `gatewayIPPrecedence` to `external` when
+deploying the member Multi-cluster Controller. The configuration option is
+defined in ConfigMap `antrea-mc-controller-config` in `antrea-multicluster-member.yml`.
+* When the Gateway Node has a separate IP for external communication or is
+associated with a public IP (e.g. an Elastic IP on AWS), but the IP is not added
+to the K8s Node, you can still choose to use the IP as `gatewayIP`, by adding an
+annotation: `multicluster.antrea.io/gateway-ip=<ip-address>` to the K8s Node.
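+
+For example, a sketch of the third option, where `10.0.0.100` stands in for the
+Node's hypothetical public IP:
+
+```bash
+kubectl annotate node node-1 multicluster.antrea.io/gateway-ip=10.0.0.100
+```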
+
+When choosing a candidate Node for Multi-cluster Gateway, you need to make sure
+the resulting `gatewayIP` can be reached from the remote Gateways. You may need
+to [configure firewall or security groups](../network-requirements.md) properly
+to allow the tunnels between Gateway Nodes. As of now, only IPv4 Gateway IPs are
+supported.
+
+After the Gateway is created, Multi-cluster Controller will be responsible
+for exporting the cluster's network information to other member clusters
+through the leader cluster, including the cluster's Gateway IP and Service
+CIDR. Multi-cluster Controller will try to discover the cluster's Service CIDR
+automatically, but you can also manually specify the `serviceCIDR` option in
+ConfigMap `antrea-mc-controller-config`. In other member clusters, a
+ClusterInfoImport CR will be created for the cluster which includes the
+exported network information. For example, in cluster `test-cluster-west`,
+you can see a ClusterInfoImport CR with name `test-cluster-east-clusterinfo`
+created for cluster `test-cluster-east`:
+
+```bash
+$ kubectl get clusterinfoimport -n kube-system
+NAME CLUSTER ID SERVICE CIDR AGE
+test-cluster-east-clusterinfo test-cluster-east 110.96.0.0/20 10s
+```
+
+Make sure you repeat the same step to assign a Gateway Node in all member
+clusters. Once you confirm that all `Gateway` and `ClusterInfoImport` CRs are
+created correctly, you can follow the [Multi-cluster Service](#multi-cluster-service)
+section to create multi-cluster Services and verify cross-cluster Service
+access.
+
+### Multi-cluster WireGuard Encryption
+
+Since Antrea v1.12.0, Antrea Multi-cluster supports WireGuard tunnels between
+member clusters. If WireGuard is enabled, the WireGuard interface and routes
+will be created by Antrea Agent on the Gateway Node, and all cross-cluster
+traffic will be encrypted and forwarded to the WireGuard tunnel.
+
+Please note that WireGuard encryption requires the `wireguard` kernel module to be
+present on the Kubernetes Nodes. The `wireguard` module has been part of the mainline
+kernel since Linux 5.6. Alternatively, you can compile the module from source for
+kernel versions >= 3.10. [This WireGuard installation guide](https://www.wireguard.com/install)
+documents how to install WireGuard together with the kernel module on various
+operating systems.
+
+To enable WireGuard encryption, the `trafficEncryptionMode` field
+in the Multi-cluster configuration should be set to `wireGuard` and the `enableGateway`
+field should be set to `true`, as follows:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ trafficEncryptionMode: "wireGuard"
+ wireGuard:
+ port: 51821
+```
+
+When WireGuard encryption is enabled for cross-cluster traffic as part of the
+Multi-cluster feature, in-cluster encryption (for traffic within a given member
+cluster) is no longer supported, not even with IPsec.
+
+## Multi-cluster Service
+
+After you set up a ClusterSet properly, you can create a `ServiceExport` CR to
+export a Service from one cluster to other clusters in the ClusterSet, like the
+example below:
+
+```yaml
+apiVersion: multicluster.x-k8s.io/v1alpha1
+kind: ServiceExport
+metadata:
+ name: nginx
+ namespace: default
+```
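+
+Putting this together, a minimal sketch of the export workflow (the `nginx`
+Deployment and the `serviceexport.yml` file name are hypothetical):
+
+```bash
+# In the exporting member cluster, create the Service to be exported.
+kubectl create deployment nginx --image=nginx
+kubectl expose deployment nginx --port=80
+# Save the ServiceExport CR above as serviceexport.yml and apply it.
+kubectl apply -f serviceexport.yml
+```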
+
+For example, once you export the `default/nginx` Service in member cluster
+`test-cluster-west`, it will be automatically imported in member cluster
+`test-cluster-east`. A Service and an Endpoints with name
+`default/antrea-mc-nginx` will be created in `test-cluster-east`, as well as
+a ServiceImport CR with name `default/nginx`. Now, Pods in `test-cluster-east`
+can access the imported Service using its ClusterIP, and the requests will be
+routed to the backend `nginx` Pods in `test-cluster-west`. You can check the
+imported Service and ServiceImport with commands:
+
+```bash
+$ kubectl get service antrea-mc-nginx -n default
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+antrea-mc-nginx ClusterIP 10.107.57.62 443/TCP 10s
+
+$ kubectl get serviceimport nginx -n default
+NAME TYPE IP AGE
+nginx ClusterSetIP ["10.19.57.62"] 10s
+```
+
+As part of the Service export/import process, in the leader cluster, two
+ResourceExport CRs will be created in the Multi-cluster Controller Namespace,
+for the exported Service and Endpoints respectively, as well as two
+ResourceImport CRs. You can check them in the leader cluster with commands:
+
+```bash
+$ kubectl get resourceexport -n antrea-multicluster
+NAME CLUSTER ID KIND NAMESPACE NAME AGE
+test-cluster-west-default-nginx-endpoints test-cluster-west Endpoints default nginx 30s
+test-cluster-west-default-nginx-service test-cluster-west Service default nginx 30s
+
+$ kubectl get resourceimport -n antrea-multicluster
+NAME KIND NAMESPACE NAME AGE
+default-nginx-endpoints Endpoints default nginx 99s
+default-nginx-service ServiceImport default nginx 99s
+```
+
+When there is any change to the exported Service, the imported multi-cluster
+Service resources will be updated accordingly. Multiple member clusters can
+export the same Service (with the same name and Namespace). In this case, the
+imported Service in a member cluster will include endpoints from all the export
+clusters, and the Service requests will be load-balanced to all these clusters.
+Even when the client Pod's cluster also exported the Service, the Service
+requests may be routed to other clusters, and the endpoints from the local
+cluster do not take precedence. A Service cannot have conflicting definitions in
+different export clusters, otherwise only the first export will be replicated to
+other clusters; other exports as well as new updates to the Service will be
+ignored, until the user fixes the conflicts. For example, after a member cluster
+exported a Service: `default/nginx` with TCP Port `80`, other clusters can only
+export the same Service with the same Ports definition including Port names. At
+the moment, Antrea Multi-cluster supports only IPv4 multi-cluster Services.
+
+By default, a multi-cluster Service will use the exported Services' ClusterIPs (the
+original Service ClusterIPs in the export clusters) as Endpoints. Since Antrea
+v1.9.0, Antrea Multi-cluster also supports using the backend Pod IPs as the
+multi-cluster Service endpoints. You can change the value of configuration option
+`endpointIPType` in ConfigMap `antrea-mc-controller-config` from `ClusterIP`
+to `PodIP` to use Pod IPs as endpoints. All member clusters in a ClusterSet should
+use the same endpoint type. Existing ServiceExports should be re-exported after
+changing `endpointIPType`. `ClusterIP` type requires that Service CIDRs (ClusterIP
+ranges) must not overlap among member clusters, and always requires Multi-cluster
+Gateways to be configured. `PodIP` type requires Pod CIDRs not to overlap among
+clusters, and it also requires Multi-cluster Gateways when there is no direct Pod-to-Pod
+connectivity across clusters. Also refer to [Multi-cluster Pod-to-Pod Connectivity](#multi-cluster-pod-to-pod-connectivity)
+for more information.
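+
+A minimal sketch of the ConfigMap change, assuming `endpointIPType` sits at the
+top level of `MultiClusterConfig` like the other options in
+`antrea-mc-controller-config`:
+
+```yaml
+  controller_manager_config.yaml: |
+    apiVersion: multicluster.crd.antrea.io/v1alpha1
+    kind: MultiClusterConfig
+    endpointIPType: "PodIP"
+```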
+
+## Multi-cluster Pod-to-Pod Connectivity
+
+Since Antrea v1.9.0, Multi-cluster supports routing Pod traffic across clusters
+through Multi-cluster Gateways. Pod IPs can be reached in all member clusters
+within a ClusterSet. To enable this feature, the cluster's Pod CIDRs must be set
+in ConfigMap `antrea-mc-controller-config` of each member cluster and
+`multicluster.enablePodToPodConnectivity` must be set to `true` in the `antrea-agent`
+configuration.
+Note, **Pod CIDRs must not overlap among clusters to enable cross-cluster
+Pod-to-Pod connectivity**.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ app: antrea
+ name: antrea-mc-controller-config
+ namespace: kube-system
+data:
+ controller_manager_config.yaml: |
+ apiVersion: multicluster.crd.antrea.io/v1alpha1
+ kind: MultiClusterConfig
+ podCIDRs:
+ - "10.10.1.1/16"
+```
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enablePodToPodConnectivity: true
+```
+
+You can edit [antrea-multicluster-member.yml](../../multicluster/build/yamls/antrea-multicluster-member.yml),
+or use `kubectl edit` to change the ConfigMap:
+
+```bash
+kubectl edit configmap -n kube-system antrea-mc-controller-config
+```
+
+Normally, `podCIDRs` should be the value of `kube-controller-manager`'s
+`cluster-cidr` option. If it's left empty, the Pod-to-Pod connectivity feature
+will not be enabled. If you use `kubectl edit` to edit the ConfigMap, then you
+need to restart the `antrea-mc-controller` Pod to load the latest configuration.
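+
+A hedged example of such a restart, assuming the controller runs as the
+`antrea-mc-controller` Deployment in `kube-system`:
+
+```bash
+kubectl rollout restart deployment/antrea-mc-controller -n kube-system
+```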
+
+## Multi-cluster NetworkPolicy
+
+Antrea-native policies can be enforced on cross-cluster traffic in a ClusterSet.
+To enable Multi-cluster NetworkPolicy features, check the Antrea Controller and
+Agent ConfigMaps and make sure that `enableStretchedNetworkPolicy` is set to
+`true` in addition to enabling the `Multicluster` feature gate:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableStretchedNetworkPolicy: true # required by both egress and ingress rules
+ antrea-agent.conf: |
+ featureGates:
+ Multicluster: true
+ multicluster:
+ enableGateway: true
+ enableStretchedNetworkPolicy: true # required only by ingress rules
+ namespace: ""
+```
+
+### Egress Rule to Multi-cluster Service
+
+Restricting Pod egress traffic to backends of a Multi-cluster Service (which can be in the
+same cluster as the source Pod or in a different cluster) is supported by Antrea-native
+policy's `toServices` feature in egress rules. To define such a policy, simply put the exported
+Service name and Namespace in the `toServices` field of an Antrea-native policy, and set `scope`
+of the `toServices` peer to `ClusterSet`:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: acnp-drop-tenant-to-secured-mc-service
+spec:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ role: tenant
+ egress:
+ - action: Drop
+ toServices:
+ - name: secured-service # an exported Multi-cluster Service
+ namespace: svcNamespace
+ scope: ClusterSet
+```
+
+The `scope` field of `toServices` rules is supported since Antrea v1.10. For earlier versions
+of Antrea, an equivalent rule can be written by not specifying `scope` and providing the
+imported Service name instead (i.e. `antrea-mc-[svcName]`).
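+
+For reference, a sketch of the equivalent pre-v1.10 egress rule, reusing the
+names from the example above (`antrea-mc-secured-service` is the imported
+Service name):
+
+```yaml
+  egress:
+    - action: Drop
+      toServices:
+        - name: antrea-mc-secured-service # the imported Multi-cluster Service
+          namespace: svcNamespace
+```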
+
+Note that the scope of a policy's `appliedTo` field will still be restricted to the cluster
+where the policy is created. To enforce such a policy for all `role=tenant` Pods in the
+entire ClusterSet, use the [ClusterNetworkPolicy Replication](#clusternetworkpolicy-replication)
+feature described in the later section, and set the `clusterNetworkPolicy` field of
+the ResourceExport to the `acnp-drop-tenant-to-secured-mc-service` spec above. Such
+replication should only be performed by ClusterSet admins, who have clearance to create
+ClusterNetworkPolicies in all clusters of a ClusterSet.
+
+### Ingress Rule
+
+Since v1.10.0, Antrea-native policies support selecting ingress peers in the ClusterSet scope.
+Policy rules can be created to enforce security postures on ingress traffic from all member
+clusters in a ClusterSet:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ClusterNetworkPolicy
+metadata:
+ name: drop-tenant-access-to-admin-namespace
+spec:
+ appliedTo:
+ - namespaceSelector:
+ matchLabels:
+ role: admin
+ priority: 1
+ tier: securityops
+ ingress:
+ - action: Deny
+ from:
+ # Select all Pods in role=tenant Namespaces in the ClusterSet
+ - scope: ClusterSet
+ namespaceSelector:
+ matchLabels:
+ role: tenant
+```
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: NetworkPolicy
+metadata:
+ name: db-svc-allow-ingress-from-client-only
+ namespace: prod-us-west
+spec:
+ appliedTo:
+ - podSelector:
+ matchLabels:
+ app: db
+ priority: 1
+ tier: application
+ ingress:
+ - action: Allow
+ from:
+ # Select all Pods in Namespace "prod-us-west" from all clusters in the ClusterSet (if the
+ # Namespace exists in that cluster) whose labels match app=client
+ - scope: ClusterSet
+ podSelector:
+ matchLabels:
+ app: client
+ - action: Deny
+```
+
+As shown in the examples above, setting `scope` to `ClusterSet` expands the
+scope of the `podSelector` or `namespaceSelector` of an ingress peer to the
+entire ClusterSet in which the policy is created. Similar to egress rules, the
+scope of an ingress rule's `appliedTo` is still restricted to the local cluster.
+
+To use the ingress cross-cluster NetworkPolicy feature, the `enableStretchedNetworkPolicy`
+option needs to be set to `true` in `antrea-mc-controller-config`, for each `antrea-mc-controller`
+running in the ClusterSet. Refer to the [previous section](#multi-cluster-pod-to-pod-connectivity)
+on how to change the ConfigMap:
+
+```yaml
+ controller_manager_config.yaml: |
+ apiVersion: multicluster.crd.antrea.io/v1alpha1
+ kind: MultiClusterConfig
+ enableStretchedNetworkPolicy: true
+```
+
+Note that currently ingress stretched NetworkPolicy only works with the Antrea `encap`
+traffic mode.
+
+## ClusterNetworkPolicy Replication
+
+Since Antrea v1.6.0, Multi-cluster admins can specify certain
+ClusterNetworkPolicies to be replicated and enforced across the entire
+ClusterSet. This is especially useful for ClusterSet admins who want all
+clusters in the ClusterSet to be applied with a consistent security posture (for
+example, all Namespaces in all clusters can only communicate with Pods in their
+own Namespaces). For more information regarding Antrea ClusterNetworkPolicy
+(ACNP), please refer to [this document](../antrea-network-policy.md).
+
+To achieve such ACNP replication across clusters, admins can, in the leader
+cluster of a ClusterSet, create a `ResourceExport` CR of kind
+`AntreaClusterNetworkPolicy` which contains the ClusterNetworkPolicy spec
+they wish to be replicated. The `ResourceExport` should be created in the
+Namespace where the ClusterSet's leader Multi-cluster Controller runs.
+
+```yaml
+apiVersion: multicluster.crd.antrea.io/v1alpha1
+kind: ResourceExport
+metadata:
+ name: strict-namespace-isolation-for-test-clusterset
+ namespace: antrea-multicluster # Namespace where the leader Multi-cluster Controller is deployed
+spec:
+ kind: AntreaClusterNetworkPolicy
+ name: strict-namespace-isolation # In each importing cluster, an ACNP of name antrea-mc-strict-namespace-isolation will be created with the spec below
+ clusterNetworkPolicy:
+ priority: 1
+ tier: securityops
+ appliedTo:
+ - namespaceSelector: {} # Selects all Namespaces in the member cluster
+ ingress:
+ - action: Pass
+ from:
+ - namespaces:
+ match: Self # Skip drop rule for traffic from Pods in the same Namespace
+ - podSelector:
+ matchLabels:
+ k8s-app: kube-dns # Skip drop rule for traffic from the core-dns components
+ - action: Drop
+ from:
+ - namespaceSelector: {} # Drop from Pods from all other Namespaces
+```
+
+The above sample spec will create an ACNP in each member cluster which
+implements strict Namespace isolation for that cluster.
+
+Note that because the Tier that an ACNP refers to must exist before the ACNP is applied, an importing
+cluster may fail to create the ACNP to be replicated, if the Tier in the ResourceExport spec cannot be
+found in that particular cluster. If there are such failures, the ACNP creation status of failed member
+clusters will be reported back to the leader cluster as K8s Events, and can be checked by describing
+the `ResourceImport` of the original `ResourceExport`:
+
+```bash
+$ kubectl describe resourceimport -A
+Name: strict-namespace-isolation-antreaclusternetworkpolicy
+Namespace: antrea-multicluster
+API Version: multicluster.crd.antrea.io/v1alpha1
+Kind: ResourceImport
+Spec:
+ Clusternetworkpolicy:
+ Applied To:
+ Namespace Selector:
+ Ingress:
+ Action: Pass
+ Enable Logging: false
+ From:
+ Namespaces:
+ Match: Self
+ Pod Selector:
+ Match Labels:
+ k8s-app: kube-dns
+ Action: Drop
+ Enable Logging: false
+ From:
+ Namespace Selector:
+ Priority: 1
+ Tier: random
+ Kind: AntreaClusterNetworkPolicy
+ Name: strict-namespace-isolation
+ ...
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning ACNPImportFailed 2m11s resourceimport-controller ACNP Tier random does not exist in the importing cluster test-cluster-west
+```
+
+In future releases, some additional tooling may become available to automate the
+creation of ResourceExports for ACNPs, and provide a user-friendly way to define
+Multi-cluster NetworkPolicies to be enforced in the ClusterSet.
+
+## Build Antrea Multi-cluster Controller Image
+
+If you'd like to build the Multi-cluster Controller Docker image locally, you can
+follow the steps below (a consolidated sketch is shown after the list):
+
+1. Go to your local `antrea` source tree, run `make build-antrea-mc-controller`, and you
+will get a new image named `antrea/antrea-mc-controller:latest` locally.
+2. Run `docker save antrea/antrea-mc-controller:latest > antrea-mcs.tar` to save
+the image.
+3. Copy the image file `antrea-mcs.tar` to the Nodes of your local cluster.
+4. Run `docker load < antrea-mcs.tar` in each Node of your local cluster.
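+
+A consolidated sketch of these steps, where `<node>` is a placeholder for each
+Node of your local cluster:
+
+```bash
+make build-antrea-mc-controller
+docker save antrea/antrea-mc-controller:latest > antrea-mcs.tar
+scp antrea-mcs.tar <node>:
+ssh <node> 'docker load < antrea-mcs.tar'
+```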
+
+## Uninstallation
+
+### Remove a Member Cluster
+
+If you want to remove a member cluster from a ClusterSet and uninstall Antrea
+Multi-cluster, please follow the steps below.
+
+Note: please replace `kube-system` with the right Namespace in the example
+commands and manifest if Antrea Multi-cluster is not deployed in
+the default Namespace.
+
+1. Delete all ServiceExports and the Multi-cluster Gateway annotation on the
+Gateway Nodes.
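+
+   A hedged sketch of this step (`<gateway-node-name>` is a placeholder; the
+   trailing `-` in `kubectl annotate` removes the annotation):
+
+   ```bash
+   kubectl delete serviceexports --all --all-namespaces
+   kubectl annotate node <gateway-node-name> multicluster.antrea.io/gateway-
+   ```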
+
+2. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be
+responsible for cleaning up all resources created by itself automatically.
+
+3. Delete the Antrea Multi-cluster Deployment:
+
+```bash
+kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml
+```
+
+### Remove a Leader Cluster
+
+If you want to delete a ClusterSet and uninstall Antrea Multi-cluster in
+a leader cluster, please follow the steps below. You should first
+[remove all member clusters](#remove-a-member-cluster) before removing
+a leader cluster from a ClusterSet.
+
+Note: please replace `antrea-multicluster` with the right Namespace in the
+following example commands and manifest if Antrea Multi-cluster is not
+deployed in the default Namespace.
+
+1. Delete AntreaClusterNetworkPolicy ResourceExports in the leader cluster.
+
+2. Verify that there are no remaining MemberClusterAnnounces.
+
+ ```bash
+ kubectl get memberclusterannounce -n antrea-multicluster
+ ```
+
+3. Delete the ClusterSet CR. Antrea Multi-cluster Controller will be
+responsible for cleaning up all resources created by itself automatically.
+
+4. Check that there are no remaining ResourceExports and ResourceImports:
+
+ ```bash
+ kubectl get resourceexports -n antrea-multicluster
+ kubectl get resourceimports -n antrea-multicluster
+ ```
+
+ Note: you can follow the [Known Issue section](#known-issue) to delete the left-over ResourceExports.
+
+5. Delete the Antrea Multi-cluster Deployment:
+
+ ```bash
+ kubectl delete -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader.yml
+ ```
+
+## Known Issue
+
+We recommend redeploying or updating Antrea Multi-cluster Controller with
+`kubectl apply`. If you use `kubectl delete -f *` and `kubectl create -f *`
+to redeploy the Controller in the leader cluster, you might encounter [a known issue](https://github.com/kubernetes/kubernetes/issues/60538)
+in `ResourceExport` CRD cleanup. To avoid this issue, please delete any
+`ResourceExport` CRs in the leader cluster first, and make sure
+`kubectl get resourceexport -A` returns an empty result before redeploying
+Multi-cluster Controller.
+
+All `ResourceExports` can be deleted with the following command:
+
+```bash
+kubectl get resourceexport -A -o json | jq -r '.items[]|[.metadata.namespace,.metadata.name]|join(" ")' | xargs -n2 bash -c 'kubectl delete -n $0 resourceexport/$1'
+```
diff --git a/content/docs/v1.15.0/docs/network-flow-visibility.md b/content/docs/v1.15.0/docs/network-flow-visibility.md
new file mode 100644
index 00000000..ecc962b5
--- /dev/null
+++ b/content/docs/v1.15.0/docs/network-flow-visibility.md
@@ -0,0 +1,671 @@
+# Network Flow Visibility in Antrea
+
+## Table of Contents
+
+
+- [Overview](#overview)
+- [Flow Exporter](#flow-exporter)
+ - [Configuration](#configuration)
+ - [Configuration pre Antrea v1.13](#configuration-pre-antrea-v113)
+ - [IPFIX Information Elements (IEs) in a Flow Record](#ipfix-information-elements-ies-in-a-flow-record)
+ - [IEs from IANA-assigned IE Registry](#ies-from-iana-assigned-ie-registry)
+ - [IEs from Reverse IANA-assigned IE Registry](#ies-from-reverse-iana-assigned-ie-registry)
+ - [IEs from Antrea IE Registry](#ies-from-antrea-ie-registry)
+ - [Supported Capabilities](#supported-capabilities)
+ - [Types of Flows and Associated Information](#types-of-flows-and-associated-information)
+ - [Connection Metrics](#connection-metrics)
+- [Flow Aggregator](#flow-aggregator)
+ - [Deployment](#deployment)
+ - [Configuration](#configuration-1)
+ - [Configuring secure connections to the ClickHouse database](#configuring-secure-connections-to-the-clickhouse-database)
+ - [Example of flow-aggregator.conf](#example-of-flow-aggregatorconf)
+ - [IPFIX Information Elements (IEs) in an Aggregated Flow Record](#ipfix-information-elements-ies-in-an-aggregated-flow-record)
+ - [IEs from Antrea IE Registry](#ies-from-antrea-ie-registry-1)
+ - [Supported Capabilities](#supported-capabilities-1)
+ - [Storage of Flow Records](#storage-of-flow-records)
+ - [Correlation of Flow Records](#correlation-of-flow-records)
+ - [Aggregation of Flow Records](#aggregation-of-flow-records)
+ - [Antctl Support](#antctl-support)
+- [Quick Deployment](#quick-deployment)
+ - [Image-building Steps](#image-building-steps)
+ - [Deployment Steps](#deployment-steps)
+- [Flow Collectors](#flow-collectors)
+ - [Go-ipfix Collector](#go-ipfix-collector)
+ - [Deployment Steps](#deployment-steps-1)
+ - [Output Flow Records](#output-flow-records)
+ - [Grafana Flow Collector (migrated)](#grafana-flow-collector-migrated)
+ - [ELK Flow Collector (removed)](#elk-flow-collector-removed)
+- [Layer 7 Network Flow Exporter](#layer-7-network-flow-exporter)
+ - [Prerequisites](#prerequisites)
+ - [Usage](#usage)
+
+
+## Overview
+
+[Antrea](design/architecture.md) is a Kubernetes network plugin that provides network
+connectivity and security features for Pod workloads. Considering the scale and
+dynamism of Kubernetes workloads in a cluster, Network Flow Visibility helps in
+the management and configuration of Kubernetes resources such as Network Policy,
+Services, Pods etc., and thereby provides opportunities to enhance the performance
+and security aspects of Pod workloads.
+
+For visualizing the network flows, Antrea monitors the flows in the Linux conntrack
+module. These flows are converted to flow records, and the flow records are post-processed
+before they are sent to the configured external flow collector. The high-level design is given below:
+
+![Antrea Flow Visibility Design](assets/flow_visibility.svg)
+
+## Flow Exporter
+
+In Antrea, the basic building block for the Network Flow Visibility is the **Flow
+Exporter**. Flow Exporter operates within Antrea Agent; it builds and maintains
+a connection store by periodically polling and dumping flows from the conntrack module.
+Connections from the connection store are exported to the [Flow Aggregator
+Service](#flow-aggregator) using the IPFIX protocol, and for this purpose we use
+the IPFIX exporter process from the [go-ipfix](https://github.com/vmware/go-ipfix)
+library.
+
+### Configuration
+
+In addition to enabling the Flow Exporter feature gate (if needed), you need to
+ensure that the `flowExporter.enable` flag is set to true in the Antrea Agent
+configuration.
+
+Your `antrea-agent` ConfigMap should look like this:
+
+```yaml
+ antrea-agent.conf: |
+ # FeatureGates is a map of feature names to bools that enable or disable experimental features.
+ featureGates:
+ # Enable flowexporter which exports polled conntrack connections as IPFIX flow records from each agent to a configured collector.
+ FlowExporter: true
+
+ flowExporter:
+ # Enable FlowExporter, a feature used to export polled conntrack connections as
+ # IPFIX flow records from each agent to a configured collector. To enable this
+ # feature, you need to set "enable" to true, and ensure that the FlowExporter
+ # feature gate is also enabled.
+ enable: true
+ # Provide the IPFIX collector address as a string with format <HOST>:[<PORT>][:<PROTO>].
+ # HOST can either be the DNS name, IP, or Service name of the Flow Collector. If
+ # using an IP, it can be either IPv4 or IPv6. However, IPv6 address should be
+ # wrapped with []. When the collector is running in-cluster as a Service, set
+ # <HOST> to <Namespace>/<Service name>. For example,
+ # "flow-aggregator/flow-aggregator" can be provided to connect to the Antrea
+ # Flow Aggregator Service.
+ # If PORT is empty, we default to 4739, the standard IPFIX port.
+ # If no PROTO is given, we consider "tls" as default. We support "tls", "tcp" and
+ # "udp" protocols. "tls" is used for securing communication between flow exporter and
+ # flow aggregator.
+ flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"
+
+ # Provide flow poll interval as a duration string. This determines how often the
+ # flow exporter dumps connections from the conntrack module. Flow poll interval
+ # should be greater than or equal to 1s (one second).
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ flowPollInterval: "5s"
+
+ # Provide the active flow export timeout, which is the timeout after which a flow
+ # record is sent to the collector for active flows. Thus, for flows with a continuous
+ # stream of packets, a flow record will be exported to the collector once the elapsed
+ # time since the last export event is equal to the value of this timeout.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ activeFlowExportTimeout: "5s"
+
+ # Provide the idle flow export timeout, which is the timeout after which a flow
+ # record is sent to the collector for idle flows. A flow is considered idle if no
+ # packet matching this flow has been observed since the last export event.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ idleFlowExportTimeout: "15s"
+```
+
+Please note that the default value for `flowExporter.flowCollectorAddr` is
+`"flow-aggregator/flow-aggregator:4739:tls"`, which enables the Flow Exporter to connect
+to the Flow Aggregator Service, assuming it is running in the same K8s cluster with the Name
+and Namespace set to `flow-aggregator`. If you deploy the Flow Aggregator Service with
+a different Name and Namespace, then set `flowExporter.flowCollectorAddr` appropriately.
+
+Please note that the `flowExporter.flowPollInterval`,
+`flowExporter.activeFlowExportTimeout`, and `flowExporter.idleFlowExportTimeout`
+parameters default to 5s, 5s, and 15s, respectively.
+TLS communication between the Flow Exporter and the Flow Aggregator is enabled by default.
+Please modify them as per your requirements.
+
+#### Configuration pre Antrea v1.13
+
+Prior to the Antrea v1.13 release, the `flowExporter` option group in the
+Antrea Agent configuration did not exist. To enable the Flow Exporter feature,
+one simply needed to enable the feature gate, and the Flow Exporter related
+settings could be provided through the (now deprecated) `flowCollectorAddr`,
+`flowPollInterval`, `activeFlowExportTimeout`, and `idleFlowExportTimeout`
+parameters.
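+
+A minimal sketch of the pre-v1.13 configuration style, using those deprecated
+parameters with the same default values as shown above:
+
+```yaml
+  antrea-agent.conf: |
+    featureGates:
+      FlowExporter: true
+    # Deprecated flat parameters, replaced by the flowExporter option group in v1.13.
+    flowCollectorAddr: "flow-aggregator/flow-aggregator:4739:tls"
+    flowPollInterval: "5s"
+    activeFlowExportTimeout: "5s"
+    idleFlowExportTimeout: "15s"
+```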
+
+### IPFIX Information Elements (IEs) in a Flow Record
+
+There are 34 IPFIX IEs in each exported flow record, which are defined in the
+IANA-assigned IE registry, the Reverse IANA-assigned IE registry and the Antrea
+IE registry. The reverse IEs are used to provide bi-directional information about
+the flow. The Enterprise ID is 0 for the IANA-assigned IE registry, 29305 for the reverse
+IANA IE registry, and 56505 for the Antrea IE registry. All the IEs used by the Antrea
+Flow Exporter are listed below:
+
+#### IEs from IANA-assigned IE Registry
+
+| IPFIX Information Element| Field ID | Type |
+|--------------------------|----------|----------------|
+| flowStartSeconds | 150 | dateTimeSeconds|
+| flowEndSeconds | 151 | dateTimeSeconds|
+| flowEndReason | 136 | unsigned8 |
+| sourceIPv4Address | 8 | ipv4Address |
+| destinationIPv4Address | 12 | ipv4Address |
+| sourceIPv6Address | 27 | ipv6Address |
+| destinationIPv6Address | 28 | ipv6Address |
+| sourceTransportPort | 7 | unsigned16 |
+| destinationTransportPort | 11 | unsigned16 |
+| protocolIdentifier | 4 | unsigned8 |
+| packetTotalCount | 86 | unsigned64 |
+| octetTotalCount | 85 | unsigned64 |
+| packetDeltaCount | 2 | unsigned64 |
+| octetDeltaCount | 1 | unsigned64 |
+
+#### IEs from Reverse IANA-assigned IE Registry
+
+| IPFIX Information Element| Field ID | Type |
+|--------------------------|----------|----------------|
+| reversePacketTotalCount | 86 | unsigned64 |
+| reverseOctetTotalCount | 85 | unsigned64 |
+| reversePacketDeltaCount | 2 | unsigned64 |
+| reverseOctetDeltaCount | 1 | unsigned64 |
+
+#### IEs from Antrea IE Registry
+
+| IPFIX Information Element | Field ID | Type | Description |
+|----------------------------------|----------|-------------|-------------|
+| sourcePodNamespace | 100 | string | |
+| sourcePodName | 101 | string | |
+| destinationPodNamespace | 102 | string | |
+| destinationPodName | 103 | string | |
+| sourceNodeName | 104 | string | |
+| destinationNodeName | 105 | string | |
+| destinationClusterIPv4 | 106 | ipv4Address | |
+| destinationClusterIPv6 | 107 | ipv6Address | |
+| destinationServicePort | 108 | unsigned16 | |
+| destinationServicePortName | 109 | string | |
+| ingressNetworkPolicyName | 110 | string | Name of the ingress network policy applied to the destination Pod for this flow. |
+| ingressNetworkPolicyNamespace | 111 | string | Namespace of the ingress network policy applied to the destination Pod for this flow. |
+| ingressNetworkPolicyType | 115 | unsigned8 | 1 stands for Kubernetes Network Policy. 2 stands for Antrea Network Policy. 3 stands for Antrea Cluster Network Policy. |
+| ingressNetworkPolicyRuleName | 141 | string | Name of the ingress network policy Rule applied to the destination Pod for this flow. |
+| egressNetworkPolicyName | 112 | string | Name of the egress network policy applied to the source Pod for this flow. |
+| egressNetworkPolicyNamespace | 113 | string | Namespace of the egress network policy applied to the source Pod for this flow. |
+| egressNetworkPolicyType | 118 | unsigned8 | |
+| egressNetworkPolicyRuleName | 142 | string | Name of the egress network policy rule applied to the source Pod for this flow. |
+| ingressNetworkPolicyRuleAction | 139 | unsigned8 | 1 stands for Allow. 2 stands for Drop. 3 stands for Reject. |
+| egressNetworkPolicyRuleAction | 140 | unsigned8 | |
+| tcpState | 136 | string | The state of the TCP connection. The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED. |
+| flowType | 137 | unsigned8 | 1 stands for Intra-Node. 2 stands for Inter-Node. 3 stands for To External. 4 stands for From External. |
+
+### Supported Capabilities
+
+#### Types of Flows and Associated Information
+
+Currently, the Flow Exporter feature provides visibility for Pod-to-Pod, Pod-to-Service
+and Pod-to-External network flows along with the associated statistics such as data
+throughput (bits per second), packet throughput (packets per second), cumulative byte
+count and cumulative packet count. Pod-To-Service flow visibility is supported
+only [when AntreaProxy is enabled](feature-gates.md), which is the case by default
+starting with Antrea v0.11. In the future, we will add support for External-To-Service
+flows.
+
+Kubernetes information such as Node name, Pod name, Pod Namespace, Service name,
+NetworkPolicy name and NetworkPolicy Namespace, is added to the flow records.
+Network Policy Rule Action (Allow, Reject, Drop) is also supported for both
+Antrea-native NetworkPolicies and K8s NetworkPolicies. For K8s NetworkPolicies,
+connections dropped due to [isolated Pod behavior](https://kubernetes.io/docs/concepts/services-networking/network-policies/#isolated-and-non-isolated-pods)
+will be assigned the Drop action.
+For flow records that are exported from any given Antrea Agent, the Flow Exporter
+only provides the information of Kubernetes entities that are local to the Antrea
+Agent. In other words, flow records are only complete for intra-Node flows, but
+incomplete for inter-Node flows. It is the responsibility of the [Flow Aggregator](#flow-aggregator)
+to correlate flows from the source and destination Nodes and produce complete flow
+records.
+
+Both Flow Exporter and Flow Aggregator are supported in IPv4 clusters, IPv6 clusters and dual-stack clusters.
+
+#### Connection Metrics
+
+We support the following connection metrics as Prometheus metrics that are
+exposed through the [Antrea Agent apiserver endpoint](prometheus-integration.md):
+`antrea_agent_conntrack_total_connection_count`,
+`antrea_agent_conntrack_antrea_connection_count`,
+`antrea_agent_denied_connection_count`,
+`antrea_agent_conntrack_max_connection_count`, and
+`antrea_agent_flow_collector_reconnection_count`.
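+
+For example, once Prometheus is scraping the Agents (see the
+[Prometheus integration document](prometheus-integration.md)), a query such as
+the following sketch shows denied connections broken down by Agent instance:
+
+```text
+sum by (instance) (antrea_agent_denied_connection_count)
+```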
+
+## Flow Aggregator
+
+Flow Aggregator is deployed as a Kubernetes Service. The main functionality of Flow
+Aggregator is to store, correlate and aggregate the flow records received from the
+Flow Exporter of Antrea Agents. More details on the functionality are provided in
+the [Supported Capabilities](#supported-capabilities-1) section.
+
+Flow Aggregator is implemented as an IPFIX Mediator, which consists of an IPFIX
+Collector Process, an IPFIX Intermediate Process and an IPFIX Exporter Process.
+We use the [go-ipfix](https://github.com/vmware/go-ipfix) library to implement
+the Flow Aggregator.
+
+### Deployment
+
+To deploy a released version of Flow Aggregator Service, pick a deployment manifest from the
+[list of releases](https://github.com/antrea-io/antrea/releases). For any
+given release `<TAG>` (e.g. `v0.12.0`), you can deploy Flow Aggregator as follows:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/flow-aggregator.yml
+```
+
+To deploy the latest version of Flow Aggregator Service (built from the main branch), use the
+checked-in [deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/flow-aggregator.yml):
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/flow-aggregator.yml
+```
+
+### Configuration
+
+The following configuration parameters have to be provided through the Flow
+Aggregator ConfigMap. The Flow Aggregator needs to be configured with at least
+one of the supported [Flow Collectors](#flow-collectors):
+`flowCollector` is mandatory for the [go-ipfix collector](#deployment-steps), and
+`clickHouse` is mandatory for the [Grafana Flow Collector](#grafana-flow-collector-migrated).
+We provide example values for these parameters in the snippets below.
+
+* If you have deployed the [go-ipfix collector](#deployment-steps),
+then please set `flowCollector.enable` to `true` and set `flowCollector.address`
+to the collector address, in the format `<Ipfix-Collector Cluster IP>:<port>[:<proto>]`.
+* If you have deployed the [Grafana Flow Collector](#grafana-flow-collector-migrated),
+then please enable the collector by setting `clickHouse.enable` to `true`. If
+it is deployed following the [deployment steps](#deployment-steps-1), the
+ClickHouse server is already exposed via a K8s Service, and no further
+configuration is required. If a different FQDN or IP is desired, please set
+`clickHouse.databaseURL` to a URL in the format `<protocol>://<FQDN or IP>:<port>`.
+A combined sketch is shown after this list.
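+
+For instance, to export flow records to both collectors at once (a sketch; the
+go-ipfix collector address below is an assumption based on its default
+deployment manifest):
+
+```yaml
+flowCollector:
+  enable: true
+  address: "ipfix-collector.ipfix.svc:4739:tcp"
+clickHouse:
+  enable: true
+```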
+
+#### Configuring secure connections to the ClickHouse database
+
+Starting with Antrea v1.13, you can enable TLS when connecting to the ClickHouse
+Server by setting `clickHouse.databaseURL` with protocol `tls` or `https`.
+You can also change the value of `clickHouse.tls.insecureSkipVerify` to
+determine whether to skip the verification of the server's certificate.
+If you want to provide a custom CA certificate, you can set
+`clickHouse.tls.caCert` to `true` and the Flow Aggregator will read the CA
+certificate from the `clickhouse-ca` Secret.
+
+Make sure to follow the following form when creating the `clickhouse-ca` Secret
+with the custom CA certificate:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: clickhouse-ca
+ namespace: flow-aggregator
+data:
+  ca.crt: <base64-encoded CA certificate>
+```
+
+You can use `kubectl apply -f <secret yaml file>` to create the above Secret,
+or use `kubectl create secret`:
+
+```bash
+kubectl create secret generic clickhouse-ca -n flow-aggregator --from-file=ca.crt=<path to CA certificate>
+```
+
+Prior to Antrea v1.13, secure connections to ClickHouse were not supported,
+and TCP was the only supported protocol when connecting to the ClickHouse
+server from the Flow Aggregator.
+
+#### Example of flow-aggregator.conf
+
+```yaml
+flow-aggregator.conf: |
+ # Provide the active flow record timeout as a duration string. This determines
+ # how often the flow aggregator exports the active flow records to the flow
+ # collector. Thus, for flows with a continuous stream of packets, a flow record
+ # will be exported to the collector once the elapsed time since the last export
+ # event in the flow aggregator is equal to the value of this timeout.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ activeFlowRecordTimeout: 60s
+
+ # Provide the inactive flow record timeout as a duration string. This determines
+ # how often the flow aggregator exports the inactive flow records to the flow
+ # collector. A flow record is considered to be inactive if no matching record
+ # has been received by the flow aggregator in the specified interval.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ inactiveFlowRecordTimeout: 90s
+
+ # Provide the transport protocol for the flow aggregator collecting process, which is tls, tcp or udp.
+ aggregatorTransportProtocol: "tls"
+
+ # Provide an extra DNS name or IP address of flow aggregator for generating TLS certificate.
+ flowAggregatorAddress: ""
+
+ # recordContents enables configuring some fields in the flow records. Fields can
+ # be excluded to reduce record size, but some features or external tooling may
+ # depend on these fields.
+ recordContents:
+ # Determine whether source and destination Pod labels will be included in the flow records.
+ podLabels: false
+
+ # apiServer contains APIServer related configuration options.
+ apiServer:
+ # The port for the flow-aggregator APIServer to serve on.
+ apiPort: 10348
+
+ # Comma-separated list of Cipher Suites. If omitted, the default Go Cipher Suites will be used.
+ # https://golang.org/pkg/crypto/tls/#pkg-constants
+ # Note that TLS1.3 Cipher Suites cannot be added to the list. But the apiserver will always
+ # prefer TLS1.3 Cipher Suites whenever possible.
+ tlsCipherSuites: ""
+
+ # TLS min version from: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13.
+ tlsMinVersion: ""
+
+ # flowCollector contains external IPFIX or JSON collector related configuration options.
+ flowCollector:
+ # Enable is the switch to enable exporting flow records to external flow collector.
+ enable: false
+
+    # Provide the flow collector address as a string in the format
+    # <IP>:<port>[:<proto>], where proto is tcp or udp.
+    # If no L4 transport proto is given, we consider tcp as default.
+ address: ""
+
+ # Provide the 32-bit Observation Domain ID which will uniquely identify this instance of the flow
+ # aggregator to an external flow collector. If omitted, an Observation Domain ID will be generated
+ # from the persistent cluster UUID generated by Antrea. Failing that (e.g. because the cluster UUID
+ # is not available), a value will be randomly generated, which may vary across restarts of the flow
+ # aggregator.
+ #observationDomainID:
+
+ # Provide format for records sent to the configured flow collector.
+ # Supported formats are IPFIX and JSON.
+ recordFormat: "IPFIX"
+
+ # clickHouse contains ClickHouse related configuration options.
+ clickHouse:
+ # Enable is the switch to enable exporting flow records to ClickHouse.
+ enable: false
+
+ # Database is the name of database where Antrea "flows" table is created.
+ database: "default"
+
+    # DatabaseURL is the URL to the database. Provide the database URL as a string in the format
+    # <protocol>://<FQDN or IP>:<port>. The protocol has to be
+    # one of the following: "tcp", "tls", "http", "https". When "tls" or "https" is used, tls
+    # will be enabled.
+ databaseURL: "tcp://clickhouse-clickhouse.flow-visibility.svc:9000"
+
+ # TLS configuration options, when using TLS to connect to the ClickHouse service.
+ tls:
+ # InsecureSkipVerify determines whether to skip the verification of the server's certificate chain and host name.
+ # Default is false.
+ insecureSkipVerify: false
+
+ # CACert indicates whether to use custom CA certificate. Default root CAs will be used if this field is false.
+ # If true, a Secret named "clickhouse-ca" must be provided with the following keys:
+      # ca.crt: <CA certificate>
+ caCert: false
+
+ # Debug enables debug logs from ClickHouse sql driver.
+ debug: false
+
+ # Compress enables lz4 compression when committing flow records.
+ compress: true
+
+ # CommitInterval is the periodical interval between batch commit of flow records to DB.
+ # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+ # The minimum interval is 1s based on ClickHouse documentation for best performance.
+ commitInterval: "8s"
+```
+
+Please note that the default values for the `activeFlowRecordTimeout`,
+`inactiveFlowRecordTimeout`, and `aggregatorTransportProtocol` parameters are
+`60s`, `90s` and `tls`, respectively. Please make sure that
+`aggregatorTransportProtocol` and the protocol of `flowExporter.flowCollectorAddr` in
+`antrea-agent.conf` are set to `tls` to guarantee that secure communication works
+properly. The protocol of `flowCollectorAddr` and `aggregatorTransportProtocol` must
+always match, so TLS must either be enabled for both sides or disabled for both
+sides. Please modify the parameters as per your requirements.
+
+Please note that the default value for `recordContents.podLabels` is `false`,
+which indicates source and destination Pod labels will not be included in the
+flow records exported to `flowCollector` and `clickHouse`. If you would like
+to include them, you can modify the value to `true`.
+
+Please note that the default value for `apiServer.apiPort` is `10348`, which
+is the port used to expose the Flow Aggregator's APIServer. Please modify the
+parameters as per your requirements.
+
+Please note that the default value for `clickHouse.commitInterval` is `8s`,
+which is based on experiment results to achieve the best ClickHouse write
+performance and data retention. Based on the ClickHouse recommendation for best
+performance, this interval is required to be no shorter than `1s`. Also note
+that the Flow Aggregator has a cache limit of ~500k records for the
+ClickHouse-Grafana collector. If `clickHouse.commitInterval` is set to a value
+too large, there is a risk of losing records.
+
+### IPFIX Information Elements (IEs) in an Aggregated Flow Record
+
+In addition to IPFIX information elements provided in the [above section](#ipfix-information-elements-ies-in-a-flow-record),
+the Flow Aggregator adds the following fields to the flow records.
+
+#### IEs from Antrea IE Registry
+
+| IPFIX Information Element | Field ID | Type | Description |
+|-------------------------------------------|----------|-------------|-------------|
+| packetTotalCountFromSourceNode | 120 | unsigned64 | The cumulative number of packets for this flow as reported by the source Node, since the flow started. |
+| octetTotalCountFromSourceNode | 121 | unsigned64 | The cumulative number of octets for this flow as reported by the source Node, since the flow started. |
+| packetDeltaCountFromSourceNode | 122 | unsigned64 | The number of packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| octetDeltaCountFromSourceNode | 123 | unsigned64 | The number of octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| reversePacketTotalCountFromSourceNode | 124 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the source Node, since the flow started. |
+| reverseOctetTotalCountFromSourceNode | 125 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the source Node, since the flow started. |
+| reversePacketDeltaCountFromSourceNode | 126 | unsigned64 | The number of reverse packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| reverseOctetDeltaCountFromSourceNode | 127 | unsigned64 | The number of reverse octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. |
+| packetTotalCountFromDestinationNode | 128 | unsigned64 | The cumulative number of packets for this flow as reported by the destination Node, since the flow started. |
+| octetTotalCountFromDestinationNode | 129 | unsigned64 | The cumulative number of octets for this flow as reported by the destination Node, since the flow started. |
+| packetDeltaCountFromDestinationNode | 130 | unsigned64 | The number of packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| octetDeltaCountFromDestinationNode | 131 | unsigned64 | The number of octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| reversePacketTotalCountFromDestinationNode| 132 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the destination Node, since the flow started. |
+| reverseOctetTotalCountFromDestinationNode | 133 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the destination Node, since the flow started. |
+| reversePacketDeltaCountFromDestinationNode| 134 | unsigned64 | The number of reverse packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| reverseOctetDeltaCountFromDestinationNode | 135 | unsigned64 | The number of reverse octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. |
+| sourcePodLabels | 143 | string | |
+| destinationPodLabels | 144 | string | |
+| throughput | 145 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point. The unit is bits per second. |
+| reverseThroughput | 146 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point. The unit is bits per second. |
+| throughputFromSourceNode | 147 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
+| throughputFromDestinationNode | 148 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
+| reverseThroughputFromSourceNode | 149 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. |
+| reverseThroughputFromDestinationNode | 150 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. |
+| flowEndSecondsFromSourceNode | 151 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the source Node. The unit is seconds. |
+| flowEndSecondsFromDestinationNode | 152 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the destination Node. The unit is seconds. |
+
+### Supported Capabilities
+
+#### Storage of Flow Records
+
+Flow Aggregator stores the flow records received from Antrea Agents in a hash map,
+where the flow key is the 5-tuple of a network connection. The 5-tuple consists of Source IP,
+Destination IP, Source Port, Destination Port and Transport protocol. Therefore,
+Flow Aggregator maintains one flow record for any given connection, and this flow
+record gets updated until the connection in the Kubernetes cluster becomes invalid.
+
+#### Correlation of Flow Records
+
+In the case of inter-Node flows, there are two flow records: one
+from the source Node, where the flow originates, and another one from the destination
+Node, where the destination Pod resides. Both flow records contain incomplete
+information as mentioned [here](#types-of-flows-and-associated-information). Flow
+Aggregator provides support for the correlation of the flow records from the
+source Node and the destination Node, and it exports a single flow record with complete
+information for both inter-Node and intra-Node flows.
+
+#### Aggregation of Flow Records
+
+Flow Aggregator aggregates the flow records that belong to a single connection.
+As part of aggregation, fields such as flow timestamps, flow statistics etc. are
+updated. For the purpose of updating flow statistics fields, Flow Aggregator introduces
+the [new fields](#ies-from-antrea-ie-registry) in Antrea Enterprise IPFIX registry
+corresponding to the Source Node and Destination Node, so that flow statistics from
+different Nodes can be preserved.
+
+### Antctl Support
+
+antctl can access the Flow Aggregator API to dump flow records and print metrics
+about flow record processing. Refer to the
+[antctl documentation](antctl.md#flow-aggregator-commands) for more information.
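+
+For example (a sketch; refer to the antctl documentation for the full set of
+commands and flags):
+
+```bash
+# Dump the flow records stored by the Flow Aggregator
+antctl get flowrecords
+# Print metrics about flow record processing
+antctl get recordmetrics
+```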
+
+## Quick Deployment
+
+If you would like to quickly try the Network Flow Visibility feature, you can deploy
+Antrea, the Flow Aggregator Service, and the Grafana Flow Collector on the
+[Vagrant setup](../test/e2e/README.md).
+
+### Image-building Steps
+
+Build the required images under the antrea repository using the following make commands:
+
+```shell
+make
+make flow-aggregator-image
+```
+
+### Deployment Steps
+
+Given any external IPFIX flow collector, you can deploy Antrea and the Flow
+Aggregator Service on a default Vagrant setup by running the following commands,
+passing the address of the collector to the `--flow-collector` option:
+
+```shell
+./infra/vagrant/provision.sh
+./infra/vagrant/push_antrea.sh --flow-collector <externalFlowCollectorAddress>
+```
+
+If you would like to deploy the Grafana Flow Collector, you can run the following command:
+
+```shell
+./infra/vagrant/provision.sh
+./infra/vagrant/push_antrea.sh --flow-collector Grafana
+```
+
+## Flow Collectors
+
+Here we list two choices for the external flow collector: the go-ipfix collector
+and the Grafana Flow Collector. For each collector, we describe how to deploy it
+and how to output or visualize the collected flow records.
+
+### Go-ipfix Collector
+
+#### Deployment Steps
+
+The go-ipfix collector can be built from [go-ipfix library](https://github.com/vmware/go-ipfix).
+It is used to collect, decode and log the IPFIX records.
+
+* To deploy a released version of the go-ipfix collector, please choose one
+deployment manifest from the [list of releases](https://github.com/vmware/go-ipfix/releases)
+(supported after v0.5.2). For any given release `<TAG>` (e.g. `v0.5.2`), you can
+deploy the collector as follows:
+
+```shell
+kubectl apply -f https://github.com/vmware/go-ipfix/releases/download/<TAG>/ipfix-collector.yaml
+```
+
+* To deploy the latest version of the go-ipfix collector (built from the main branch),
+use the checked-in [deployment manifest](https://github.com/vmware/go-ipfix/blob/main/build/yamls/ipfix-collector.yaml):
+
+```shell
+kubectl apply -f https://raw.githubusercontent.com/vmware/go-ipfix/main/build/yamls/ipfix-collector.yaml
+```
+
+The go-ipfix collector also supports customization of its parameters: port and protocol.
+Please follow the [go-ipfix documentation](https://github.com/vmware/go-ipfix#readme)
+to configure those parameters if needed.
+
+#### Output Flow Records
+
+To output the flow records collected by the go-ipfix collector, use the command below:
+
+```shell
+kubectl logs <ipfix-collector-pod-name> -n ipfix
+```
+
+### Grafana Flow Collector (migrated)
+
+**Starting with Antrea v1.8, support for the Grafana Flow Collector has been migrated to Theia.**
+
+The Grafana Flow Collector was added in Antrea v1.6.0. In Antrea v1.7.0, we
+started to move the network observability and analytics functionalities of Antrea
+to [Project Theia](https://github.com/antrea-io/theia), including the Grafana
+Flow Collector. Going forward, further development of the Grafana Flow Collector
+will be in the Theia repo. For the up-to-date version of Grafana Flow Collector
+and other Theia features, please refer to the
+[Theia document](https://github.com/antrea-io/theia/blob/main/docs/network-flow-visibility.md).
+
+### ELK Flow Collector (removed)
+
+**Starting with Antrea v1.7, support for the ELK Flow Collector has been removed.**
+Please consider using the [Grafana Flow Collector](#grafana-flow-collector-migrated)
+instead, which is actively maintained.
+
+## Layer 7 Network Flow Exporter
+
+In addition to layer 4 network visibility, Antrea adds support for layer 7
+network flow export.
+
+### Prerequisites
+
+To achieve L7 (Layer 7) network flow export, the `L7FlowExporter` feature gate
+must be enabled.
+
+### Usage
+
+To export layer 7 flows of a Pod or a Namespace, users can annotate Pods or
+Namespaces with the annotation key `visibility.antrea.io/l7-export` and set the
+value to indicate the traffic flow direction, which can be `ingress`, `egress`
+or `both`.
+
+For example, to enable L7 flow export in the ingress direction on
+Pod test-pod in the default Namespace, you can use:
+
+```bash
+kubectl annotate pod test-pod visibility.antrea.io/l7-export=ingress
+```
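+
+Similarly, to enable L7 flow export in both directions for every Pod in a
+Namespace (here, the `default` Namespace), you can use:
+
+```bash
+kubectl annotate namespace default visibility.antrea.io/l7-export=both
+```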
+
+Based on the annotation, the Flow Exporter will export the L7 flow data to the
+Flow Aggregator or the configured IPFIX collector using the fields `appProtocolName`
+and `httpVals`.
+
+* The `appProtocolName` field is used to indicate the application layer protocol
+name (e.g. http) and it will be empty if application layer data is not exported.
+* The `httpVals` field stores a serialized JSON dictionary with every HTTP request
+for a connection mapped to a unique transaction ID. This format lets us group all
+the HTTP transactions pertaining to the same connection into the same exported
+record.
+
+An example of `httpVals` is:
+
+`"{\"0\":{\"hostname\":\"10.10.0.1\",\"url\":\"/public/\",\"http_user_agent\":\"curl/7.74.0\",\"http_content_type\":\"text/html\",\"http_method\":\"GET\",\"protocol\":\"HTTP/1.1\",\"status\":200,\"length\":153}}"`
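+
+Pretty-printed, this value corresponds to:
+
+```json
+{
+  "0": {
+    "hostname": "10.10.0.1",
+    "url": "/public/",
+    "http_user_agent": "curl/7.74.0",
+    "http_content_type": "text/html",
+    "http_method": "GET",
+    "protocol": "HTTP/1.1",
+    "status": 200,
+    "length": 153
+  }
+}
+```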
+
+HTTP fields in the `httpVals` are:
+
+| Http field | Description |
+|-------------------|--------------------------------------------------------|
+| hostname | IP address of the sender |
+| URL               | URL requested on the server                             |
+| http_user_agent | application used for HTTP |
+| http_content_type | type of content being returned by the server |
+| http_method | HTTP method used for the request |
+| protocol | HTTP protocol version used for the request or response |
+| status | HTTP status code |
+| length | size of the response body |
+
+As of now, the only supported layer 7 protocol is `HTTP1.1`. Support for more
+protocols may be added in the future. Antrea supports the L7FlowExporter feature
+only on Linux Nodes.
diff --git a/content/docs/v1.15.0/docs/network-requirements.md b/content/docs/v1.15.0/docs/network-requirements.md
new file mode 100644
index 00000000..ed804568
--- /dev/null
+++ b/content/docs/v1.15.0/docs/network-requirements.md
@@ -0,0 +1,19 @@
+# Network Requirements
+
+Antrea has a few network requirements to get started. Ensure that your hosts and
+firewalls allow the necessary traffic based on your configuration.
+
+| Configuration                                  | Host(s)                    | Ports/Protocols                            | Other                        |
+|------------------------------------------------|----------------------------|--------------------------------------------|------------------------------|
+| Antrea with VXLAN enabled | All | UDP 4789 | |
+| Antrea with Geneve enabled | All | UDP 6081 | |
+| Antrea with STT enabled | All | TCP 7471 | |
+| Antrea with GRE enabled | All | IP Protocol ID 47 | No support for IPv6 clusters |
+| Antrea with IPsec ESP enabled | All | IP protocol ID 50 and 51, UDP 500 and 4500 | |
+| Antrea with WireGuard enabled | All | UDP 51820 | |
+| Antrea Multi-cluster with WireGuard encryption | Multi-cluster Gateway Node | UDP 51821 | |
+| All | kube-apiserver host | TCP 443 or 6443\* | |
+| All | All | TCP 10349, 10350, 10351, UDP 10351 | |
+
+\* _The value passed to kube-apiserver using the --secure-port flag. If you cannot
+locate this, check the targetPort value returned by kubectl get svc kubernetes -o yaml._
diff --git a/content/docs/v1.15.0/docs/node-port-local.md b/content/docs/v1.15.0/docs/node-port-local.md
new file mode 100644
index 00000000..89cf54c7
--- /dev/null
+++ b/content/docs/v1.15.0/docs/node-port-local.md
@@ -0,0 +1,227 @@
+# NodePortLocal (NPL)
+
+## Table of Contents
+
+- [What is NodePortLocal?](#what-is-nodeportlocal)
+- [Prerequisites](#prerequisites)
+- [Usage](#usage)
+ - [Usage pre Antrea v1.7](#usage-pre-antrea-v17)
+ - [Usage pre Antrea v1.4](#usage-pre-antrea-v14)
+ - [Usage pre Antrea v1.2](#usage-pre-antrea-v12)
+- [Limitations](#limitations)
+- [Integrations with External Load Balancers](#integrations-with-external-load-balancers)
+ - [AVI](#avi)
+
+## What is NodePortLocal?
+
+`NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent,
+through which each port of a Service backend Pod can be reached from the
+external network using a port of the Node on which the Pod is running. NPL
+enables better integration with external Load Balancers which can take advantage
+of the feature: instead of relying on NodePort Services implemented by
+kube-proxy, external Load-Balancers can consume NPL port mappings published by
+the Antrea Agent (as K8s Pod annotations) and load-balance Service traffic
+directly to backend Pods.
+
+## Prerequisites
+
+NodePortLocal was introduced in v0.13 as an alpha feature, and was graduated to
+beta in v1.4, at which time it was enabled by default. Prior to v1.4, the
+`NodePortLocal` feature gate had to be enabled on the antrea-agent for the
+feature to work. Starting from Antrea v1.7, NPL is supported on the Windows
+antrea-agent. NPL has been GA since Antrea v1.14.
+
+## Usage
+
+In addition to enabling the NodePortLocal feature gate (if needed), you need to
+ensure that the `nodePortLocal.enable` flag is set to true in the Antrea Agent
+configuration. The `nodePortLocal.portRange` parameter can also be set to change
+the range from which Node ports will be allocated; otherwise, the default range
+of `61000-62000` will be used. When using the NodePortLocal feature,
+your `antrea-agent` ConfigMap should look like this:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ # True by default starting with Antrea v1.4
+ # NodePortLocal: true
+ nodePortLocal:
+ enable: true
+ # Uncomment if you need to change the port range.
+ # portRange: 61000-62000
+```
+
+Pods can be selected for `NodePortLocal` by tagging a Service with the annotation
+`nodeportlocal.antrea.io/enabled: "true"`. Consequently, `NodePortLocal` is
+enabled for all the Pods which are selected by the Service through a selector,
+and the ports of these Pods will be reachable through Node ports allocated from
+the port range. The selected Pods will be annotated with the details of the
+allocated Node port(s) for the Pod.
+
+For example, given the following Service and Deployment definitions:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx
+ annotations:
+ nodeportlocal.antrea.io/enabled: "true"
+spec:
+ ports:
+ - name: web
+ port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: nginx
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+```
+
+If the NodePortLocal feature gate is enabled, then all the Pods in the
+Deployment will be annotated with the `nodeportlocal.antrea.io` annotation. The
+value of this annotation is a serialized JSON array. In our example, a given Pod
+in the `nginx` Deployment may look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx-6799fc88d8-9rx8z
+ labels:
+ app: nginx
+ annotations:
+ nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61002,"protocol":"tcp","protocols":["tcp"]}]'
+```
+
+This annotation indicates that port 8080 of the Pod can be reached through port
+61002 of the Node with IP Address 10.10.10.10 for TCP traffic.
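+
+External tooling can read this annotation directly; for example, the following
+sketch retrieves the NPL port mappings for the Pod above (the Pod name is of
+course specific to your cluster):
+
+```bash
+kubectl get pod nginx-6799fc88d8-9rx8z \
+  -o jsonpath='{.metadata.annotations.nodeportlocal\.antrea\.io}'
+```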
+
+The `nodeportlocal.antrea.io` annotation is generated and managed by Antrea. It
+is not meant to be created or modified by users directly. A user-provided
+annotation is likely to be overwritten by Antrea, or may lead to unexpected
+behavior.
+
+NodePortLocal can only be used with Services of type `ClusterIP` or
+`LoadBalancer`. The `nodeportlocal.antrea.io` annotation has no effect for
+Services of type `NodePort` or `ExternalName`. The annotation also has no effect
+for Services with an empty or missing Selector.
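+
+As an illustration, you can enable NPL for an existing eligible Service by
+adding the annotation with kubectl (using the `nginx` Service from the earlier
+example):
+
+```bash
+kubectl annotate service nginx nodeportlocal.antrea.io/enabled=true
+```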
+
+Starting from the Antrea v1.7 minor release, the `protocols` field in the
+annotation is deprecated. The array contains a single member, equal to the
+`protocol` field.
+The `protocols` field will be removed from Antrea for minor releases post March 2023,
+as per our deprecation policy.
+
+### Usage pre Antrea v1.7
+
+Prior to the Antrea v1.7 minor release, the `nodeportlocal.antrea.io` annotation
+could contain multiple members in `protocols`.
+An example may look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx-6799fc88d8-9rx8z
+ labels:
+ app: nginx
+ annotations:
+    nodeportlocal.antrea.io: '[{"podPort":8080,"nodeIP":"10.10.10.10","nodePort":61002,"protocols":["tcp","udp"]}]'
+```
+
+This annotation indicates that port 8080 of the Pod can be reached through port
+61002 of the Node with IP Address 10.10.10.10 for both TCP and UDP traffic.
+
+Prior to v1.7, the implementation would always allocate the same nodePort value
+for all the protocols exposed for a given podPort.
+Starting with v1.7, there will be multiple annotations for the different protocols
+for a given podPort, and the allocated nodePort may be different for each one.
+
+### Usage pre Antrea v1.4
+
+Prior to the Antrea v1.4 minor release, the `nodePortLocal` option group in the
+Antrea Agent configuration did not exist. To enable the NodePortLocal feature,
+one simply needed to enable the feature gate, and the port range could be
+configured using the (now deprecated) `nplPortRange` parameter.
+
+### Usage pre Antrea v1.2
+
+Prior to the Antrea v1.2 minor release, the NodePortLocal feature suffered from
+a known [issue](https://github.com/antrea-io/antrea/issues/1912). In order to
+use the feature, the correct list of ports exposed by each container had to be
+provided in the Pod specification (`.spec.containers[*].Ports`). The
+NodePortLocal implementation would then use this information to decide which
+ports to map for each Pod. In the above example, the Deployment definition would
+need to be changed to:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx
+spec:
+ selector:
+ matchLabels:
+ app: nginx
+ replicas: 3
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx
+ ports:
+ - containerPort: 80
+```
+
+This was error-prone because providing this list of ports is typically optional
+in K8s and omitting it does not prevent ports from being exposed, which means
+that many users may omit this information and expect NPL to work. Starting with
+Antrea v1.2, we instead rely on the `service.spec.ports[*].targetPort`
+information, for each NPL-enabled Service, to determine which ports need to be
+mapped.
+
+## Limitations
+
+This feature is currently only supported for Nodes running Linux or Windows
+with IPv4 addresses. Only TCP & UDP Service ports are supported (not SCTP).
+
+## Integrations with External Load Balancers
+
+### AVI
+
+When using AVI and the AVI Kubernetes Operator (AKO), the AKO `serviceType`
+configuration parameter can be set to `NodePortLocal`. After that, annotating
+Services manually with `nodeportlocal.antrea.io` is no longer required. AKO will
+automatically annotate Services of type `LoadBalancer`, along with backend
+ClusterIP Services used by Ingress resources (for which AVI is the Ingress
+class). For more information refer to the [AKO
+documentation](https://avinetworks.com/docs/ako/1.5/handling-objects/).
diff --git a/content/docs/v1.15.0/docs/noencap-hybrid-modes.md b/content/docs/v1.15.0/docs/noencap-hybrid-modes.md
new file mode 100644
index 00000000..a7ee5963
--- /dev/null
+++ b/content/docs/v1.15.0/docs/noencap-hybrid-modes.md
@@ -0,0 +1,176 @@
+# NoEncap and Hybrid Traffic Modes of Antrea
+
+Besides the default `Encap` mode, in which Pod traffic across Nodes will be
+encapsulated and sent over tunnels, Antrea also supports `NoEncap` and `Hybrid`
+traffic modes. In `NoEncap` mode, Antrea does not encapsulate Pod traffic, but
+relies on the Node network to route the traffic across Nodes. In `Hybrid` mode,
+Antrea encapsulates Pod traffic when the source Node and the destination Node
+are in different subnets, but does not encapsulate when the source and the
+destination Nodes are in the same subnet. This document describes how to
+configure Antrea with the `NoEncap` and `Hybrid` modes.
+
+The NoEncap and Hybrid traffic modes require AntreaProxy to support correct
+NetworkPolicy enforcement, which is why trying to disable AntreaProxy in these
+modes will normally cause the Antrea Agent to fail. It is possible to override
+this behavior and force AntreaProxy to be disabled by setting the
+`ALLOW_NO_ENCAP_WITHOUT_ANTREA_PROXY` environment variable to `true` for the Antrea
+Agent in the [Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea.yml).
+For example:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: antrea-agent
+ labels:
+ component: antrea-agent
+spec:
+ template:
+ spec:
+ containers:
+ - name: antrea-agent
+ env:
+ - name: ALLOW_NO_ENCAP_WITHOUT_ANTREA_PROXY
+ value: "true"
+```
+
+## Hybrid Mode
+
+Let us start with `Hybrid` mode, which is simpler to configure. `Hybrid` mode
+does not encapsulate Pod traffic when the source and the destination Nodes are
+in the same subnet. Thus it requires the Node network to allow Pod IP addresses
+to be sent out from the Nodes' NICs. Not all networks and clouds support this,
+and in some cases specific configuration of the Node network might be required.
+For example:
+
+* On AWS, the source/destination checks must be disabled on the EC2 instances of
+the Kubernetes Nodes, as described in the
+[AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck).
+
+* On Google Compute Engine, IP forwarding must be enabled on the VM instances as
+described in the [Google Cloud documentation](https://cloud.google.com/vpc/docs/using-routes#canipforward).
+
+* On Azure, there is no way to let VNet forward unknown IPs, hence Antrea
+`Hybrid` mode cannot work on Azure.
+
+If the Node network does allow Pod IPs to be sent out from the Nodes, you can
+configure Antrea to run in `Hybrid` mode by setting the `trafficEncapMode`
+config parameter of `antrea-agent` to `hybrid`. The `trafficEncapMode` config
+parameter is defined in `antrea-agent.conf` of the `antrea` ConfigMap in the
+[Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea.yml).
+
+```yaml
+ antrea-agent.conf: |
+ trafficEncapMode: hybrid
+```
+
+After changing the config parameter, you can deploy Antrea in `Hybrid` mode with
+the usual command:
+
+```bash
+kubectl apply -f antrea.yml
+```
+
+## NoEncap Mode
+
+In `NoEncap` mode, Antrea never encapsulates Pod traffic. Just like in `Hybrid`
+mode, the Node network needs to allow Pod IP addresses to be sent out from Nodes.
+When the Nodes are not in the same subnet, `NoEncap` mode additionally requires
+the Node network to be able to route the Pod traffic from the source Node to the
+destination Node. There are two ways to enable this routing in the Node
+network:
+
+* Leverage Route Controller of [Kubernetes Cloud Controller Manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller).
+The Kubernetes Cloud Providers that implement Route Controller can add routes
+to the cloud network routers for the Pod CIDRs of Nodes, and then the cloud
+network is able to route Pod traffic between Nodes. This Route Controller
+functionality is supported by the Cloud Provider implementations of the major
+clouds, including: [AWS](https://github.com/kubernetes/cloud-provider-aws),
+[Azure](https://github.com/kubernetes-sigs/cloud-provider-azure),
+[GCE](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/legacy-cloud-providers/gce),
+and [vSphere (with NSX-T)](https://github.com/kubernetes/cloud-provider-vsphere).
+
+* Run a routing protocol or even manually configure routers to add routes to
+the Node network routers. For example, Antrea can work with [kube-router](https://www.kube-router.io)
+and leverage kube-router to advertise Pod CIDRs to routers using BGP. Section
+[Using kube-router for BGP](#using-kube-router-for-bgp) describes how to
+configure Antrea and kube-router to work together.
+
+When the Node network can support forwarding and routing of Pod traffic, Antrea
+can be configured to run in `NoEncap` mode, by setting the `trafficEncapMode`
+config parameter of `antrea-agent` to `noEncap`. By default, Antrea performs SNAT
+(source network address translation) for the outbound connections from a Pod to
+outside of the Pod network, using the Node's IP address as the SNAT IP. In the
+`NoEncap` mode, as the Node network knows about Pod IP addresses, the SNAT by
+Antrea might be unnecessary. In this case, you can disable it by setting the
+`noSNAT` config parameter to `true`. The `trafficEncapMode` and `noSNAT` config
+parameters are defined in `antrea-agent.conf` of the `antrea` ConfigMap in the
+[Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea.yml).
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ trafficEncapMode: noEncap
+ noSNAT: false # Set to true to disable Antrea SNAT for external traffic
+```
+
+After changing the parameters, you can deploy Antrea in `noEncap` mode by applying
+the deployment yaml.
+
+### Using kube-router for BGP
+
+We can run kube-router in advertisement-only mode to advertise Pod CIDRs to the
+peered routers, so the routers can know how to route Pod traffic to the Nodes.
+To deploy kube-router in advertisement-only mode, first download the
+[kube-router DaemonSet template](https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml):
+
+```bash
+curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v0.4.0/daemonset/generic-kuberouter-only-advertise-routes.yaml
+```
+
+Then edit the yaml file and set the following kube-router arguments:
+
+```yaml
+- "--run-router=true"
+- "--run-firewall=false"
+- "--run-service-proxy=false"
+- "--enable-cni=false"
+- "--enable-ibgp=false"
+- "--enable-overlay=false"
+- "--enable-pod-egress=false"
+- "--peer-router-ips=<CHANGE ME>"
+- "--peer-router-asns=<CHANGE ME>"
+```
+
+The BGP peers should be configured by specifying the `--peer-router-asns` and
+`--peer-router-ips` parameters. Note that the ASNs and IPs must match the
+configuration on the peered routers. For example:
+
+```yaml
+- "--peer-router-ips=192.168.1.99,192.168.1.100"
+- "--peer-router-asns=65000,65000"
+```
+
+Then you can deploy the kube-router DaemonSet with:
+
+```bash
+kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
+```
+
+You can verify that the kube-router Pods are running on the Nodes of your
+Kubernetes cluster (the cluster in the following example has only two Nodes):
+
+```bash
+$ kubectl -n kube-system get pods -l k8s-app=kube-router
+NAME READY STATUS RESTARTS AGE
+kube-router-rn4xc 1/1 Running 0 1m
+kube-router-vhrf5 1/1 Running 0 1m
+```
+
+Antrea can be deployed in `NoEncap` mode either before or after kube-router.
diff --git a/content/docs/v1.15.0/docs/octant-plugin-installation.md b/content/docs/v1.15.0/docs/octant-plugin-installation.md
new file mode 100644
index 00000000..bce821ca
--- /dev/null
+++ b/content/docs/v1.15.0/docs/octant-plugin-installation.md
@@ -0,0 +1,6 @@
+# Octant and antrea-octant-plugin installation
+
+***Octant is no longer maintained and the antrea-octant-plugin has been removed
+ as of Antrea v1.13. Please refer to [#4640](https://github.com/antrea-io/antrea/issues/4640)
+ for more information, and check out the [Antrea web UI](https://github.com/antrea-io/antrea-ui)
+ for an alternative.***
diff --git a/content/docs/v1.15.0/docs/os-issues.md b/content/docs/v1.15.0/docs/os-issues.md
new file mode 100644
index 00000000..78cab439
--- /dev/null
+++ b/content/docs/v1.15.0/docs/os-issues.md
@@ -0,0 +1,119 @@
+# OS-specific known issues
+
+The following issues were encountered when testing Antrea on different OSes, or
+reported by Antrea users. When possible we try to provide a workaround.
+
+## CoreOS
+
+| Issues |
+| ------ |
+| [#626](https://github.com/antrea-io/antrea/issues/626) |
+
+**CoreOS Container Linux has reached its
+ [end-of-life](https://www.openshift.com/learn/topics/coreos) on May 26, 2020
+ and no longer receives updates. It is recommended to migrate to another
+ Operating System as soon as possible.**
+
+CoreOS uses networkd for network configuration. By default, all interfaces are
+managed by networkd because of the [configuration
+files](https://github.com/coreos/init/tree/master/systemd/network) that ship
+with CoreOS. Unfortunately, that includes the gateway interface created by
+Antrea (`antrea-gw0` by default). Most of the time, this is not an issue, but if
+networkd is restarted for any reason, it will cause the interface to lose its IP
+configuration, and all the routes associated with the interface will be
+deleted. To avoid this issue, we recommend that you create the following
+configuration files:
+
+```text
+# /etc/systemd/network/90-antrea-ovs.network
+[Match]
+# use the correct name for the gateway if you changed the Antrea configuration
+Name=antrea-gw0 ovs-system
+Driver=openvswitch
+
+[Link]
+Unmanaged=yes
+```
+
+```text
+# /etc/systemd/network/90-antrea-veth.network
+# may be redundant with 50-docker-veth.network (name may differ based on CoreOS version), which should not be an issue
+[Match]
+Driver=veth
+
+[Link]
+Unmanaged=yes
+```
+
+```text
+# /etc/systemd/network/90-antrea-tun.network
+[Match]
+Name=genev_sys_* vxlan_sys_* gre_sys stt_sys_*
+
+[Link]
+Unmanaged=yes
+```
+
+Note that this fix requires a version of CoreOS `>= 1262.0.0` (Dec 2016), as the
+networkd `Unmanaged` option was not supported before that.
+
+## Photon OS 3.0
+
+| Issues |
+| ------ |
+| [#591](https://github.com/antrea-io/antrea/issues/591) |
+| [#1516](https://github.com/antrea-io/antrea/issues/1516) |
+
+If your K8s Nodes are running Photon OS 3.0, you may see error messages in the
+antrea-agent logs like this one: `"Received bundle error msg: [...]"`. These
+messages indicate that some flow entries could not be added to the OVS
+bridge. This usually indicates that the Kernel was not compiled with the
+`CONFIG_NF_CONNTRACK_ZONES` option, as this option was only enabled recently in
+Photon OS. This option is required by the Antrea OVS datapath. To confirm that
+this is indeed the issue, you can run the following command on one of your
+Nodes:
+
+```bash
+grep CONFIG_NF_CONNTRACK_ZONES= /boot/config-`uname -r`
+```
+
+If you do *not* see the following output, then it confirms that your Kernel is
+indeed missing this option:
+
+```text
+CONFIG_NF_CONNTRACK_ZONES=y
+```
+
+To fix this issue and be able to run Antrea on your Photon OS Nodes, you will
+need to upgrade to a more recent version: `>= 4.19.87-4` (Jan 2020). You can
+achieve this by running `tdnf upgrade linux-esx` on all your Nodes.
+
+After this fix, all the Antrea Agents should be running correctly. If you still
+experience connectivity issues, it may be because of Photon's default firewall
+rules, which are quite strict by
+[default](https://vmware.github.io/photon/assets/files/html/3.0/photon_admin/default-firewall-settings.html). The
+easiest workaround is to accept all traffic on the gateway interface created by
+Antrea (`antrea-gw0` by default), which enables traffic to flow between the Node
+and the Pod network:
+
+```bash
+iptables -A INPUT -i antrea-gw0 -j ACCEPT
+```
+
+### Pod Traffic Shaping
+
+Antrea provides support for Pod [Traffic Shaping](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping)
+by leveraging the open-source [bandwidth plugin](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth)
+maintained by the CNI project. This plugin requires the following Kernel
+modules: `ifb`, `sch_tbf` and `sch_ingress`. It seems that at the moment Photon
+OS 3.0 is built without the `ifb` Kernel module, which you can confirm by
+running `modprobe --dry-run ifb`: an error would indicate that the module is
+indeed missing. Without this module, Pods with the
+`kubernetes.io/egress-bandwidth` annotation cannot be created successfully. Pods
+with no traffic shaping annotation, or which only use the
+`kubernetes.io/ingress-bandwidth` annotation, can still be created successfully
+as they do not require the creation of an `ifb` device.
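+
+For reference, a Pod using the egress traffic shaping annotation (which would
+fail to be created on Photon OS 3.0 without the `ifb` module) looks like this
+sketch:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: shaped-pod
+  annotations:
+    kubernetes.io/egress-bandwidth: 10M
+spec:
+  containers:
+  - name: app
+    image: nginx
+```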
+
+If Photon OS is patched to enable `ifb`, we will update this documentation to
+reflect this change, and include information about which Photon OS version can
+support egress traffic shaping.
diff --git a/content/docs/v1.15.0/docs/ovs-offload.md b/content/docs/v1.15.0/docs/ovs-offload.md
new file mode 100644
index 00000000..51eb6336
--- /dev/null
+++ b/content/docs/v1.15.0/docs/ovs-offload.md
@@ -0,0 +1,211 @@
+# OVS Hardware Offload
+
+The OVS software-based solution is CPU intensive, affecting system performance
+and preventing full utilization of the available bandwidth. OVS 2.8 and above
+support a feature called OVS Hardware Offload which improves performance
+significantly. This feature allows offloading the OVS data-plane to the NIC
+while keeping the OVS control-plane unmodified. It uses SR-IOV technology with
+a VF representor host net-device. The VF representor plays the same role as TAP
+devices in a Para-Virtual (PV) setup. A packet sent through the VF representor
+on the host arrives at the VF, and a packet sent through the VF is received by
+its representor.
+
+## Supported Ethernet controllers
+
+The following manufacturers are known to work:
+
+- Mellanox ConnectX-5 and above
+
+## Prerequisites
+
+- Antrea v0.9.0 or greater
+- Linux Kernel 5.7 or greater
+- iproute 4.12 or greater
+
+## Instructions for Mellanox ConnectX-5 and Above
+
+In order to enable Open vSwitch hardware offload, the following steps
+are required. Please make sure you have root privileges to run the commands
+below.
+
+Check the number of VFs supported on the NIC
+
+```bash
+cat /sys/class/net/enp3s0f0/device/sriov_totalvfs
+8
+```
+
+Create the VFs
+
+```bash
+echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs
+```
+
+Verify that the VFs are created
+
+```bash
+ip link show enp3s0f0
+8: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
+ link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
+ vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+ vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
+```
+
+Bring the PF up
+
+```bash
+ip link set enp3s0f0 up
+```
+
+Unbind the VFs from the driver
+
+```bash
+echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind
+echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind
+```
+
+Configure SR-IOV VFs to switchdev mode
+
+```bash
+devlink dev eswitch set pci/0000:03:00.0 mode switchdev
+ethtool -K enp3s0f0 hw-tc-offload on
+```
+
+Bind the VFs to the driver
+
+```bash
+echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/bind
+echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/bind
+```
+
+## SR-IOV network device plugin configuration
+
+Create a ConfigMap that defines SR-IOV resource pool configuration
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: sriovdp-config
+ namespace: kube-system
+data:
+ config.json: |
+ {
+ "resourceList": [{
+ "resourcePrefix": "mellanox.com",
+ "resourceName": "cx5_sriov_switchdev",
+ "isRdma": true,
+ "selectors": {
+ "vendors": ["15b3"],
+ "devices": ["1018"],
+ "drivers": ["mlx5_core"]
+ }
+ }
+ ]
+ }
+```
+
+Deploy the SR-IOV network device plugin as a DaemonSet. See <https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin>.
+
+Deploy the Multus CNI as a DaemonSet. See <https://github.com/k8snetworkplumbingwg/multus-cni>.
+
+Create a NetworkAttachmentDefinition CRD with the Antrea CNI config.
+
+```yaml
+apiVersion: "k8s.cni.cncf.io/v1"
+kind: NetworkAttachmentDefinition
+metadata:
+ name: default
+ namespace: kube-system
+ annotations:
+ k8s.v1.cni.cncf.io/resourceName: mellanox.com/cx5_sriov_switchdev
+spec:
+ config: '{
+ "cniVersion": "0.3.1",
+ "name": "antrea",
+ "plugins": [ { "type": "antrea", "ipam": { "type": "host-local" } }, { "type": "portmap", "capabilities": {"portMappings": true} }, { "type": "bandwidth", "capabilities": {"bandwidth": true} }]
+}'
+```
+
+## Deploy Antrea Image with hw-offload enabled
+
+Modify `build/yamls/antrea.yml` to add the offload flag to the `start_ovs` command
+
+```yaml
+ - command:
+ - start_ovs
+ - --hw-offload
+```
+
+## Deploy POD with OVS hardware-offload
+
+Create POD spec and request a VF
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: ovs-offload-pod1
+ annotations:
+ v1.multus-cni.io/default-network: default
+spec:
+ containers:
+ - name: ovs-offload-app
+ image: networkstatic/iperf3
+ command:
+ - sh
+ - -c
+ - |
+ sleep 1000000
+ resources:
+ requests:
+ mellanox.com/cx5_sriov_switchdev: '1'
+ limits:
+ mellanox.com/cx5_sriov_switchdev: '1'
+```
+
+## Verify Hardware Offload is Working
+
+Run iperf3 server on POD 1
+
+```bash
+kubectl exec -it ovs-offload-pod1 -- iperf3 -s
+```
+
+Run iperf3 client on POD 2
+
+```bash
+kubectl exec -it ovs-offload-pod2 -- iperf3 -c 192.168.1.17 -t 100
+```
+
+Check the traffic on the VF representor port, and verify that only the TCP
+connection establishment appears (once the connection is offloaded, data
+packets no longer traverse the representor)
+
+```text
+tcpdump -i mofed-te-b5583b tcp
+listening on mofed-te-b5583b, link-type EN10MB (Ethernet), capture size 262144 bytes
+22:24:44.969516 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [S], seq 89800743, win 64860, options [mss 1410,sackOK,TS val 491087056 ecr 0,nop,wscale 7], length 0
+22:24:44.969773 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43558: Flags [S.], seq 1312764151, ack 89800744, win 64308, options [mss 1410,sackOK,TS val 4095895608 ecr 491087056,nop,wscale 7], length 0
+22:24:45.085558 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 0
+22:24:45.085592 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 37
+22:24:45.086311 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [S], seq 3802331506, win 64860, options [mss 1410,sackOK,TS val 491087279 ecr 0,nop,wscale 7], length 0
+22:24:45.086462 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [S.], seq 441940709, ack 3802331507, win 64308, options [mss 1410,sackOK,TS val 4095895725 ecr 491087279,nop,wscale 7], length 0
+22:24:45.086624 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 0
+22:24:45.086654 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 37
+22:24:45.086715 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [.], ack 38, win 503, options [nop,nop,TS val 4095895725 ecr 491087279], length 0
+```
+
+Check that the datapath rules are offloaded
+
+```text
+ovs-appctl dpctl/dump-flows --names type=offloaded
+recirc_id(0),in_port(eth0),eth(src=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(src=192.168.1.17,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:ct(zone=65520),recirc(0x18)
+ct_state(+est+trk),ct_mark(0),recirc_id(0x18),in_port(eth0),eth(dst=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:eth1
+recirc_id(0),in_port(eth1),eth(src=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(src=192.168.1.16,frag=no), packets:133410141, bytes:195255745684, used:0.550s, actions:ct(zone=65520),recirc(0x16)
+ct_state(+est+trk),ct_mark(0),recirc_id(0x16),in_port(eth1),eth(dst=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:133410138, bytes:195255745483, used:0.550s, actions:eth0
+```
diff --git a/content/docs/v1.15.0/docs/prometheus-integration.md b/content/docs/v1.15.0/docs/prometheus-integration.md
new file mode 100644
index 00000000..d7479add
--- /dev/null
+++ b/content/docs/v1.15.0/docs/prometheus-integration.md
@@ -0,0 +1,613 @@
+# Prometheus Integration
+
+## Purpose
+
+A Prometheus server can monitor various metrics and provide observability for
+the Antrea Controller and Agent components. This document provides general
+guidelines for configuring a Prometheus server to operate with the Antrea
+components.
+
+## About Prometheus
+
+[Prometheus](https://prometheus.io/) is an open source monitoring and alerting
+server. Prometheus is capable of collecting metrics from various Kubernetes
+components, storing them, and providing alerts.
+Prometheus can provide visibility by integrating with other products such as
+[Grafana](https://grafana.com/).
+
+One of Prometheus capabilities is self-discovery of Kubernetes services which
+expose their metrics. So Prometheus can scrape the metrics of any additional
+components which are added to the cluster without further configuration changes.
+
+## Antrea Configuration
+
+Enable the Prometheus metrics listener by setting the `enablePrometheusMetrics`
+parameter to `true` in the Controller and Agent configurations.
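+
+For example, with the default `antrea-config` ConfigMap in the `kube-system`
+Namespace, the setting might look like the following (a minimal sketch; only
+the relevant parameter is shown):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: antrea-config
+  namespace: kube-system
+data:
+  antrea-agent.conf: |
+    enablePrometheusMetrics: true
+  antrea-controller.conf: |
+    enablePrometheusMetrics: true
+```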
+
+## Prometheus Configuration
+
+### Prometheus version
+
+Prometheus integration with Antrea is validated as part of CI using Prometheus v2.46.0.
+
+### Prometheus RBAC
+
+Prometheus requires access to Kubernetes API resources for the service discovery
+capability. Reading metrics also requires access to the "/metrics" API
+endpoints.
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: prometheus
+rules:
+- apiGroups: [""]
+ resources:
+ - nodes
+ - nodes/proxy
+ - services
+ - endpoints
+ - pods
+ verbs: ["get", "list", "watch"]
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs: ["get", "list", "watch"]
+- nonResourceURLs: ["/metrics"]
+ verbs: ["get"]
+```
+
+### Antrea Metrics Listener Access
+
+To scrape the metrics from the Antrea Controller and Agent, Prometheus needs the
+following permissions:
+
+```yaml
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: prometheus-antrea
+rules:
+- nonResourceURLs:
+ - /metrics
+ verbs:
+ - get
+```
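+
+To quickly verify that a metrics endpoint is reachable with such permissions,
+you can query an Agent's API port directly. The sketch below assumes a
+`prometheus` ServiceAccount bound to the above ClusterRoles in a `monitoring`
+Namespace, and a Node with IP 10.0.0.10; adjust these names to your own setup:
+
+```bash
+# Obtain a short-lived token for the prometheus ServiceAccount (K8s v1.24+).
+TOKEN=$(kubectl create token prometheus -n monitoring)
+# Query the Antrea Agent metrics endpoint on its default API port.
+curl -sk -H "Authorization: Bearer $TOKEN" https://10.0.0.10:10350/metrics | head
+```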
+
+### Antrea Components Scraping configuration
+
+Add the following jobs to the Prometheus scraping configuration to enable
+metrics collection from the Antrea components. The Antrea Agent metrics
+endpoint is exposed by the Antrea apiserver on the port given by the `apiPort`
+config parameter in `antrea-agent.conf` (default value 10350). The Antrea
+Controller metrics endpoint is exposed by the Antrea apiserver on the port
+given by the `apiPort` config parameter in `antrea-controller.conf` (default
+value 10349).
+
+#### Controller Scraping
+
+```yaml
+- job_name: 'antrea-controllers'
+  kubernetes_sd_configs:
+  - role: endpoints
+  scheme: https
+  tls_config:
+    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    insecure_skip_verify: true
+  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_container_name]
+    action: keep
+    regex: kube-system;antrea-controller
+  - source_labels: [__meta_kubernetes_pod_node_name, __meta_kubernetes_pod_name]
+    target_label: instance
+```
+
+#### Agent Scraping
+
+```yaml
+- job_name: 'antrea-agents'
+  kubernetes_sd_configs:
+  - role: pod
+  scheme: https
+  tls_config:
+    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+    insecure_skip_verify: true
+  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_container_name]
+    action: keep
+    regex: kube-system;antrea-agent
+  - source_labels: [__meta_kubernetes_pod_node_name, __meta_kubernetes_pod_name]
+    target_label: instance
+```
+
+For further reference, see the complete
+[configuration file](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-prometheus.yml).
+
+This configuration file can be used to deploy a Prometheus server with
+scraping configuration for the Antrea services.
+To deploy it, run:
+`kubectl apply -f build/yamls/antrea-prometheus.yml`
+
+## Antrea Prometheus Metrics
+
+The Antrea Controller and Agent expose various metrics, some of which are
+provided by the Antrea components themselves, and others by third-party
+components that they use.
+
+Both categories of metrics are listed below.
+
+### Antrea Metrics
+
+#### Antrea Agent Metrics
+
+- **antrea_agent_conntrack_antrea_connection_count:** Number of connections
+in the Antrea ZoneID of the conntrack table. This metric gets updated at
+an interval specified by flowPollInterval, a configuration parameter for
+the Agent.
+- **antrea_agent_conntrack_max_connection_count:** Size of the conntrack
+table. This metric gets updated at an interval specified by flowPollInterval,
+a configuration parameter for the Agent.
+- **antrea_agent_conntrack_total_connection_count:** Number of connections
+in the conntrack table. This metric gets updated at an interval specified
+by flowPollInterval, a configuration parameter for the Agent.
+- **antrea_agent_denied_connection_count:** Number of denied connections
+detected by Flow Exporter deny connections tracking. This metric gets updated
+when a flow is rejected/dropped by network policy.
+- **antrea_agent_egress_networkpolicy_rule_count:** Number of egress
+NetworkPolicy rules on local Node which are managed by the Antrea Agent.
+- **antrea_agent_flow_collector_reconnection_count:** Number of re-connections
+between Flow Exporter and flow collector. This metric gets updated whenever
+the connection is re-established between the Flow Exporter and the flow
+collector (e.g. the Flow Aggregator).
+- **antrea_agent_ingress_networkpolicy_rule_count:** Number of ingress
+NetworkPolicy rules on local Node which are managed by the Antrea Agent.
+- **antrea_agent_local_pod_count:** Number of Pods on local Node which are
+managed by the Antrea Agent.
+- **antrea_agent_networkpolicy_count:** Number of NetworkPolicies on local
+Node which are managed by the Antrea Agent.
+- **antrea_agent_ovs_flow_count:** Flow count for each OVS flow table. The
+TableID and TableName are used as labels.
+- **antrea_agent_ovs_flow_ops_count:** Number of OVS flow operations,
+partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_flow_ops_error_count:** Number of OVS flow operation
+errors, partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_flow_ops_latency_milliseconds:** The latency of OVS
+flow operations, partitioned by operation type (add, modify and delete).
+- **antrea_agent_ovs_meter_packet_dropped_count:** Number of packets dropped by
+OVS meter. The value is greater than 0 when the packets exceed the rate-limit.
+- **antrea_agent_ovs_total_flow_count:** Total flow count of all OVS flow
+tables.
+
+#### Antrea Controller Metrics
+
+- **antrea_controller_acnp_status_updates:** The total number of actual
+status updates performed for Antrea ClusterNetworkPolicy Custom Resources
+- **antrea_controller_address_group_processed:** The total number of
+address-group processed
+- **antrea_controller_address_group_sync_duration_milliseconds:** The duration
+of syncing address-group
+- **antrea_controller_annp_status_updates:** The total number of actual
+status updates performed for Antrea NetworkPolicy Custom Resources
+- **antrea_controller_applied_to_group_processed:** The total number of
+applied-to-group processed
+- **antrea_controller_applied_to_group_sync_duration_milliseconds:** The
+duration of syncing applied-to-group
+- **antrea_controller_length_address_group_queue:** The length of
+AddressGroupQueue
+- **antrea_controller_length_applied_to_group_queue:** The length of
+AppliedToGroupQueue
+- **antrea_controller_length_network_policy_queue:** The length of
+InternalNetworkPolicyQueue
+- **antrea_controller_network_policy_processed:** The total number of
+internal-networkpolicy processed
+- **antrea_controller_network_policy_sync_duration_milliseconds:** The
+duration of syncing internal-networkpolicy
+
+#### Antrea Proxy Metrics
+
+- **antrea_proxy_sync_proxy_rules_duration_seconds:** SyncProxyRules duration
+of AntreaProxy in seconds
+- **antrea_proxy_total_endpoints_installed:** The number of Endpoints
+installed by AntreaProxy
+- **antrea_proxy_total_endpoints_updates:** The cumulative number of Endpoint
+updates received by AntreaProxy
+- **antrea_proxy_total_services_installed:** The number of Services installed
+by AntreaProxy
+- **antrea_proxy_total_services_updates:** The cumulative number of Service
+updates received by AntreaProxy
+
+### Common Metrics Provided by Infrastructure
+
+#### Apiserver Metrics
+
+- **apiserver_audit_event_total:** Counter of audit events generated and
+sent to the audit backend.
+- **apiserver_audit_requests_rejected_total:** Counter of apiserver requests
+rejected due to an error in audit logging backend.
+- **apiserver_client_certificate_expiration_seconds:** Distribution of the
+remaining lifetime on the certificate used to authenticate a request.
+- **apiserver_current_inflight_requests:** Maximal number of currently used
+inflight request limit of this apiserver per request kind in last second.
+- **apiserver_delegated_authn_request_duration_seconds:** Request latency
+in seconds. Broken down by status code.
+- **apiserver_delegated_authn_request_total:** Number of HTTP requests
+partitioned by status code.
+- **apiserver_delegated_authz_request_duration_seconds:** Request latency
+in seconds. Broken down by status code.
+- **apiserver_delegated_authz_request_total:** Number of HTTP requests
+partitioned by status code.
+- **apiserver_envelope_encryption_dek_cache_fill_percent:** Percent of the
+cache slots currently occupied by cached DEKs.
+- **apiserver_flowcontrol_read_vs_write_current_requests:** EXPERIMENTAL:
+Observations, at the end of every nanosecond, of the number of requests
+(as a fraction of the relevant limit) waiting or in regular stage of execution
+- **apiserver_flowcontrol_seat_fair_frac:** Fair fraction of server's
+concurrency to allocate to each priority level that can use it
+- **apiserver_longrunning_requests:** Gauge of all active long-running
+apiserver requests broken out by verb, group, version, resource, scope and
+component. Not all requests are tracked this way.
+- **apiserver_request_duration_seconds:** Response latency distribution in
+seconds for each verb, dry run value, group, version, resource, subresource,
+scope and component.
+- **apiserver_request_filter_duration_seconds:** Request filter latency
+distribution in seconds, for each filter type
+- **apiserver_request_sli_duration_seconds:** Response latency distribution
+(not counting webhook duration) in seconds for each verb, group, version,
+resource, subresource, scope and component.
+- **apiserver_request_slo_duration_seconds:** Response latency distribution
+(not counting webhook duration) in seconds for each verb, group, version,
+resource, subresource, scope and component.
+- **apiserver_request_total:** Counter of apiserver requests broken out
+for each verb, dry run value, group, version, resource, scope, component,
+and HTTP response code.
+- **apiserver_response_sizes:** Response size distribution in bytes for each
+group, version, verb, resource, subresource, scope and component.
+- **apiserver_storage_data_key_generation_duration_seconds:** Latencies in
+seconds of data encryption key(DEK) generation operations.
+- **apiserver_storage_data_key_generation_failures_total:** Total number of
+failed data encryption key(DEK) generation operations.
+- **apiserver_storage_envelope_transformation_cache_misses_total:** Total
+number of cache misses while accessing key decryption key(KEK).
+- **apiserver_tls_handshake_errors_total:** Number of requests dropped with
+'TLS handshake error from' error
+- **apiserver_watch_events_sizes:** Watch event size distribution in bytes
+- **apiserver_watch_events_total:** Number of events sent in watch clients
+- **apiserver_webhooks_x509_insecure_sha1_total:** Counts the number of
+requests to servers with insecure SHA1 signatures in their serving certificate
+OR the number of connection failures due to the insecure SHA1 signatures
+(either/or, based on the runtime environment)
+- **apiserver_webhooks_x509_missing_san_total:** Counts the number of requests
+to servers missing SAN extension in their serving certificate OR the number
+of connection failures due to the lack of x509 certificate SAN extension
+missing (either/or, based on the runtime environment)
+
+#### Authenticated Metrics
+
+- **authenticated_user_requests:** Counter of authenticated requests broken
+out by username.
+
+#### Authentication Metrics
+
+- **authentication_attempts:** Counter of authenticated attempts.
+- **authentication_duration_seconds:** Authentication duration in seconds
+broken out by result.
+- **authentication_token_cache_active_fetch_count:**
+- **authentication_token_cache_fetch_total:**
+- **authentication_token_cache_request_duration_seconds:**
+- **authentication_token_cache_request_total:**
+
+#### Disabled Metrics
+
+- **disabled_metric_total:** The count of disabled metrics.
+
+#### Field Metrics
+
+- **field_validation_request_duration_seconds:** Response latency distribution
+in seconds for each field validation value and whether field validation is
+enabled or not
+
+#### Go Metrics
+
+- **go_cgo_go_to_c_calls_calls_total:** Count of calls made from Go to C by
+the current process.
+- **go_cpu_classes_gc_mark_assist_cpu_seconds_total:** Estimated total CPU
+time goroutines spent performing GC tasks to assist the GC and prevent it
+from falling behind the application. This metric is an overestimate, and
+not directly comparable to system CPU time measurements. Compare only with
+other /cpu/classes metrics.
+- **go_cpu_classes_gc_mark_dedicated_cpu_seconds_total:** Estimated total
+CPU time spent performing GC tasks on processors (as defined by GOMAXPROCS)
+dedicated to those tasks. This metric is an overestimate, and not directly
+comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics.
+- **go_cpu_classes_gc_mark_idle_cpu_seconds_total:** Estimated total CPU
+time spent performing GC tasks on spare CPU resources that the Go scheduler
+could not otherwise find a use for. This should be subtracted from the
+total GC CPU time to obtain a measure of compulsory GC CPU time. This
+metric is an overestimate, and not directly comparable to system CPU time
+measurements. Compare only with other /cpu/classes metrics.
+- **go_cpu_classes_gc_pause_cpu_seconds_total:** Estimated total CPU time spent
+with the application paused by the GC. Even if only one thread is running
+during the pause, this is computed as GOMAXPROCS times the pause latency
+because nothing else can be executing. This is the exact sum of samples in
+/gc/pause:seconds if each sample is multiplied by GOMAXPROCS at the time
+it is taken. This metric is an overestimate, and not directly comparable to
+system CPU time measurements. Compare only with other /cpu/classes metrics.
+- **go_cpu_classes_gc_total_cpu_seconds_total:** Estimated total CPU
+time spent performing GC tasks. This metric is an overestimate, and not
+directly comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics. Sum of all metrics in /cpu/classes/gc.
+- **go_cpu_classes_idle_cpu_seconds_total:** Estimated total available CPU
+time not spent executing any Go or Go runtime code. In other words, the part of
+/cpu/classes/total:cpu-seconds that was unused. This metric is an overestimate,
+and not directly comparable to system CPU time measurements. Compare only
+with other /cpu/classes metrics.
+- **go_cpu_classes_scavenge_assist_cpu_seconds_total:** Estimated total CPU
+time spent returning unused memory to the underlying platform eagerly, in
+response to memory pressure. This metric is an overestimate,
+and not directly comparable to system CPU time measurements. Compare only
+with other /cpu/classes metrics.
+- **go_cpu_classes_scavenge_background_cpu_seconds_total:** Estimated total
+CPU time spent performing background tasks to return unused memory to the
+underlying platform. This metric is an overestimate, and not directly
+comparable to system CPU time measurements. Compare only with other
+/cpu/classes metrics.
+- **go_cpu_classes_scavenge_total_cpu_seconds_total:** Estimated total
+CPU time spent performing tasks that return unused memory to the underlying
+platform. This metric is an overestimate, and not directly comparable to system
+CPU time measurements. Compare only with other /cpu/classes metrics. Sum of
+all metrics in /cpu/classes/scavenge.
+- **go_cpu_classes_total_cpu_seconds_total:** Estimated total available CPU
+time for user Go code or the Go runtime, as defined by GOMAXPROCS. In other
+words, GOMAXPROCS integrated over the wall-clock duration this process has been
+executing for. This metric is an overestimate, and not directly comparable to
+system CPU time measurements. Compare only with other /cpu/classes metrics. Sum
+of all metrics in /cpu/classes.
+- **go_cpu_classes_user_cpu_seconds_total:** Estimated total CPU time spent
+running user Go code. This may also include some small amount of time spent
+in the Go runtime. This metric is an overestimate, and not directly comparable
+to system CPU time measurements. Compare only with other /cpu/classes metrics.
+- **go_gc_cycles_automatic_gc_cycles_total:** Count of completed GC cycles
+generated by the Go runtime.
+- **go_gc_cycles_forced_gc_cycles_total:** Count of completed GC cycles
+forced by the application.
+- **go_gc_cycles_total_gc_cycles_total:** Count of all completed GC cycles.
+- **go_gc_duration_seconds:** A summary of the pause duration of garbage
+collection cycles.
+- **go_gc_gogc_percent:** Heap size target percentage configured by the
+user, otherwise 100. This value is set by the GOGC environment variable,
+and the runtime/debug.SetGCPercent function.
+- **go_gc_gomemlimit_bytes:** Go runtime memory limit configured by the user,
+otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment
+variable, and the runtime/debug.SetMemoryLimit function.
+- **go_gc_heap_allocs_by_size_bytes:** Distribution of heap allocations
+by approximate size. Bucket counts increase monotonically. Note that this
+does not include tiny objects as defined by /gc/heap/tiny/allocs:objects,
+only tiny blocks.
+- **go_gc_heap_allocs_bytes_total:** Cumulative sum of memory allocated to
+the heap by the application.
+- **go_gc_heap_allocs_objects_total:** Cumulative count of heap allocations
+triggered by the application. Note that this does not include tiny objects
+as defined by /gc/heap/tiny/allocs:objects, only tiny blocks.
+- **go_gc_heap_frees_by_size_bytes:** Distribution of freed heap allocations
+by approximate size. Bucket counts increase monotonically. Note that this
+does not include tiny objects as defined by /gc/heap/tiny/allocs:objects,
+only tiny blocks.
+- **go_gc_heap_frees_bytes_total:** Cumulative sum of heap memory freed by
+the garbage collector.
+- **go_gc_heap_frees_objects_total:** Cumulative count of heap allocations
+whose storage was freed by the garbage collector. Note that this does
+not include tiny objects as defined by /gc/heap/tiny/allocs:objects, only
+tiny blocks.
+- **go_gc_heap_goal_bytes:** Heap size target for the end of the GC cycle.
+- **go_gc_heap_live_bytes:** Heap memory occupied by live objects that were
+marked by the previous GC.
+- **go_gc_heap_objects_objects:** Number of objects, live or unswept,
+occupying heap memory.
+- **go_gc_heap_tiny_allocs_objects_total:** Count of small allocations that
+are packed together into blocks. These allocations are counted separately
+from other allocations because each individual allocation is not tracked
+by the runtime, only their block. Each block is already accounted for in
+allocs-by-size and frees-by-size.
+- **go_gc_limiter_last_enabled_gc_cycle:** GC cycle the last time the GC CPU
+limiter was enabled. This metric is useful for diagnosing the root cause
+of an out-of-memory error, because the limiter trades memory for CPU time
+when the GC's CPU time gets too high. This is most likely to occur with use
+of SetMemoryLimit. The first GC cycle is cycle 1, so a value of 0 indicates
+that it was never enabled.
+- **go_gc_pauses_seconds:** Distribution of individual GC-related
+stop-the-world pause latencies. Bucket counts increase monotonically.
+- **go_gc_scan_globals_bytes:** The total amount of global variable space
+that is scannable.
+- **go_gc_scan_heap_bytes:** The total amount of heap space that is scannable.
+- **go_gc_scan_stack_bytes:** The number of bytes of stack that were scanned
+last GC cycle.
+- **go_gc_scan_total_bytes:** The total amount of space that is scannable. Sum
+of all metrics in /gc/scan.
+- **go_gc_stack_starting_size_bytes:** The stack size of new goroutines.
+- **go_godebug_non_default_behavior_execerrdot_events_total:** The number of
+non-default behaviors executed by the os/exec package due to a non-default
+GODEBUG=execerrdot=... setting.
+- **go_godebug_non_default_behavior_gocachehash_events_total:** The number
+of non-default behaviors executed by the cmd/go package due to a non-default
+GODEBUG=gocachehash=... setting.
+- **go_godebug_non_default_behavior_gocachetest_events_total:** The number
+of non-default behaviors executed by the cmd/go package due to a non-default
+GODEBUG=gocachetest=... setting.
+- **go_godebug_non_default_behavior_gocacheverify_events_total:** The number
+of non-default behaviors executed by the cmd/go package due to a non-default
+GODEBUG=gocacheverify=... setting.
+- **go_godebug_non_default_behavior_http2client_events_total:** The number of
+non-default behaviors executed by the net/http package due to a non-default
+GODEBUG=http2client=... setting.
+- **go_godebug_non_default_behavior_http2server_events_total:** The number of
+non-default behaviors executed by the net/http package due to a non-default
+GODEBUG=http2server=... setting.
+- **go_godebug_non_default_behavior_installgoroot_events_total:** The number
+of non-default behaviors executed by the go/build package due to a non-default
+GODEBUG=installgoroot=... setting.
+- **go_godebug_non_default_behavior_jstmpllitinterp_events_total:** The
+number of non-default behaviors executed by the html/template package due
+to a non-default GODEBUG=jstmpllitinterp=... setting.
+- **go_godebug_non_default_behavior_multipartmaxheaders_events_total:**
+The number of non-default behaviors executed by the mime/multipart package
+due to a non-default GODEBUG=multipartmaxheaders=... setting.
+- **go_godebug_non_default_behavior_multipartmaxparts_events_total:** The
+number of non-default behaviors executed by the mime/multipart package due
+to a non-default GODEBUG=multipartmaxparts=... setting.
+- **go_godebug_non_default_behavior_multipathtcp_events_total:** The number
+of non-default behaviors executed by the net package due to a non-default
+GODEBUG=multipathtcp=... setting.
+- **go_godebug_non_default_behavior_panicnil_events_total:** The number of
+non-default behaviors executed by the runtime package due to a non-default
+GODEBUG=panicnil=... setting.
+- **go_godebug_non_default_behavior_randautoseed_events_total:** The number of
+non-default behaviors executed by the math/rand package due to a non-default
+GODEBUG=randautoseed=... setting.
+- **go_godebug_non_default_behavior_tarinsecurepath_events_total:** The
+number of non-default behaviors executed by the archive/tar package due to
+a non-default GODEBUG=tarinsecurepath=... setting.
+- **go_godebug_non_default_behavior_tlsmaxrsasize_events_total:** The
+number of non-default behaviors executed by the crypto/tls package due to
+a non-default GODEBUG=tlsmaxrsasize=... setting.
+- **go_godebug_non_default_behavior_x509sha1_events_total:** The number of
+non-default behaviors executed by the crypto/x509 package due to a non-default
+GODEBUG=x509sha1=... setting.
+- **go_godebug_non_default_behavior_x509usefallbackroots_events_total:**
+The number of non-default behaviors executed by the crypto/x509 package due
+to a non-default GODEBUG=x509usefallbackroots=... setting.
+- **go_godebug_non_default_behavior_zipinsecurepath_events_total:** The
+number of non-default behaviors executed by the archive/zip package due to
+a non-default GODEBUG=zipinsecurepath=... setting.
+- **go_goroutines:** Number of goroutines that currently exist.
+- **go_info:** Information about the Go environment.
+- **go_memory_classes_heap_free_bytes:** Memory that is completely free and
+eligible to be returned to the underlying system, but has not been. This
+metric is the runtime's estimate of free address space that is backed by
+physical memory.
+- **go_memory_classes_heap_objects_bytes:** Memory occupied by live objects
+and dead objects that have not yet been marked free by the garbage collector.
+- **go_memory_classes_heap_released_bytes:** Memory that is completely free
+and has been returned to the underlying system. This metric is the runtime's
+estimate of free address space that is still mapped into the process, but
+is not backed by physical memory.
+- **go_memory_classes_heap_stacks_bytes:** Memory allocated from the
+heap that is reserved for stack space, whether or not it is currently
+in-use. Currently, this represents all stack memory for goroutines. It also
+includes all OS thread stacks in non-cgo programs. Note that stacks may be
+allocated differently in the future, and this may change.
+- **go_memory_classes_heap_unused_bytes:** Memory that is reserved for heap
+objects but is not currently used to hold heap objects.
+- **go_memory_classes_metadata_mcache_free_bytes:** Memory that is reserved
+for runtime mcache structures, but not in-use.
+- **go_memory_classes_metadata_mcache_inuse_bytes:** Memory that is occupied
+by runtime mcache structures that are currently being used.
+- **go_memory_classes_metadata_mspan_free_bytes:** Memory that is reserved
+for runtime mspan structures, but not in-use.
+- **go_memory_classes_metadata_mspan_inuse_bytes:** Memory that is occupied
+by runtime mspan structures that are currently being used.
+- **go_memory_classes_metadata_other_bytes:** Memory that is reserved for
+or used to hold runtime metadata.
+- **go_memory_classes_os_stacks_bytes:** Stack memory allocated by the
+underlying operating system. In non-cgo programs this metric is currently
+zero. This may change in the future. In cgo programs this metric includes OS
+thread stacks allocated directly from the OS. Currently, this only accounts
+for one stack in c-shared and c-archive build modes, and other sources of
+stacks from the OS are not measured. This too may change in the future.
+- **go_memory_classes_other_bytes:** Memory used by execution trace buffers,
+structures for debugging the runtime, finalizer and profiler specials,
+and more.
+- **go_memory_classes_profiling_buckets_bytes:** Memory that is used by the
+stack trace hash map used for profiling.
+- **go_memory_classes_total_bytes:** All memory mapped by the Go runtime
+into the current process as read-write. Note that this does not include
+memory mapped by code called via cgo or via the syscall package. Sum of all
+metrics in /memory/classes.
+- **go_memstats_alloc_bytes:** Number of bytes allocated and still in use.
+- **go_memstats_alloc_bytes_total:** Total number of bytes allocated, even
+if freed.
+- **go_memstats_buck_hash_sys_bytes:** Number of bytes used by the profiling
+bucket hash table.
+- **go_memstats_frees_total:** Total number of frees.
+- **go_memstats_gc_sys_bytes:** Number of bytes used for garbage collection
+system metadata.
+- **go_memstats_heap_alloc_bytes:** Number of heap bytes allocated and still
+in use.
+- **go_memstats_heap_idle_bytes:** Number of heap bytes waiting to be used.
+- **go_memstats_heap_inuse_bytes:** Number of heap bytes that are in use.
+- **go_memstats_heap_objects:** Number of allocated objects.
+- **go_memstats_heap_released_bytes:** Number of heap bytes released to OS.
+- **go_memstats_heap_sys_bytes:** Number of heap bytes obtained from system.
+- **go_memstats_last_gc_time_seconds:** Number of seconds since 1970 of last
+garbage collection.
+- **go_memstats_lookups_total:** Total number of pointer lookups.
+- **go_memstats_mallocs_total:** Total number of mallocs.
+- **go_memstats_mcache_inuse_bytes:** Number of bytes in use by mcache
+structures.
+- **go_memstats_mcache_sys_bytes:** Number of bytes used for mcache structures
+obtained from system.
+- **go_memstats_mspan_inuse_bytes:** Number of bytes in use by mspan
+structures.
+- **go_memstats_mspan_sys_bytes:** Number of bytes used for mspan structures
+obtained from system.
+- **go_memstats_next_gc_bytes:** Number of heap bytes when next garbage
+collection will take place.
+- **go_memstats_other_sys_bytes:** Number of bytes used for other system
+allocations.
+- **go_memstats_stack_inuse_bytes:** Number of bytes in use by the stack
+allocator.
+- **go_memstats_stack_sys_bytes:** Number of bytes obtained from system for
+stack allocator.
+- **go_memstats_sys_bytes:** Number of bytes obtained from system.
+- **go_sched_gomaxprocs_threads:** The current runtime.GOMAXPROCS setting,
+or the number of operating system threads that can execute user-level Go
+code simultaneously.
+- **go_sched_goroutines_goroutines:** Count of live goroutines.
+- **go_sched_latencies_seconds:** Distribution of the time goroutines have
+spent in the scheduler in a runnable state before actually running. Bucket
+counts increase monotonically.
+- **go_sync_mutex_wait_total_seconds_total:** Approximate cumulative time
+goroutines have spent blocked on a sync.Mutex or sync.RWMutex. This metric
+is useful for identifying global changes in lock contention. Collect a
+mutex or block profile using the runtime/pprof package for more detailed
+contention data.
+- **go_threads:** Number of OS threads created.
+
+#### Hidden Metrics
+
+- **hidden_metric_total:** The count of hidden metrics.
+
+#### Process Metrics
+
+- **process_cpu_seconds_total:** Total user and system CPU time spent
+in seconds.
+- **process_max_fds:** Maximum number of open file descriptors.
+- **process_open_fds:** Number of open file descriptors.
+- **process_resident_memory_bytes:** Resident memory size in bytes.
+- **process_start_time_seconds:** Start time of the process since unix epoch
+in seconds.
+- **process_virtual_memory_bytes:** Virtual memory size in bytes.
+- **process_virtual_memory_max_bytes:** Maximum amount of virtual memory
+available in bytes.
+
+#### Registered Metrics
+
+- **registered_metric_total:** The count of registered metrics, broken down by
+stability level and deprecation version.
+
+#### Workqueue Metrics
+
+- **workqueue_adds_total:** Total number of adds handled by workqueue
+- **workqueue_depth:** Current depth of workqueue
+- **workqueue_longest_running_processor_seconds:** How many seconds has the
+longest running processor for workqueue been running.
+- **workqueue_queue_duration_seconds:** How long in seconds an item stays
+in workqueue before being requested.
+- **workqueue_retries_total:** Total number of retries handled by workqueue
+- **workqueue_unfinished_work_seconds:** How many seconds of work has been
+done that is in progress and hasn't been observed by work_duration. Large
+values indicate stuck threads. One can deduce the number of stuck threads
+by observing the rate at which this increases.
+- **workqueue_work_duration_seconds:** How long in seconds processing an
+item from workqueue takes.
diff --git a/content/docs/v1.15.0/docs/securing-control-plane.md b/content/docs/v1.15.0/docs/securing-control-plane.md
new file mode 100644
index 00000000..0269a1f8
--- /dev/null
+++ b/content/docs/v1.15.0/docs/securing-control-plane.md
@@ -0,0 +1,169 @@
+# Securing Control Plane
+
+All API communication between Antrea control plane components is encrypted with
+TLS. The TLS certificates that Antrea requires can be automatically generated.
+You can also provide your own certificates. This page explains the certificates
+that Antrea requires and how to configure and rotate them for Antrea.
+
+## Table of Contents
+
+
+- [What certificates are required by Antrea](#what-certificates-are-required-by-antrea)
+- [How certificates are used by Antrea](#how-certificates-are-used-by-antrea)
+- [Providing your own certificates](#providing-your-own-certificates)
+ - [Using kubectl](#using-kubectl)
+ - [Using cert-manager](#using-cert-manager)
+- [Certificate rotation](#certificate-rotation)
+
+
+## What certificates are required by Antrea
+
+Currently Antrea only requires a single server certificate for the
+antrea-controller API server endpoint, which is for the following communication:
+
+- The antrea-agents talk to the antrea-controller to fetch the computed
+ NetworkPolicies
+- The kube-aggregator (i.e. kube-apiserver) talks to the antrea-controller to
+ proxy antctl's requests (when antctl is run in "controller" mode)
+
+Antrea doesn't require client certificates for its own components as it
+delegates authentication and authorization to the Kubernetes API, using
+Kubernetes [ServiceAccount tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens)
+for client authentication.
+
+## How certificates are used by Antrea
+
+By default, the antrea-controller generates a self-signed certificate. You can
+override this behavior by [providing your own certificates](#providing-your-own-certificates).
+Either way, the antrea-controller will distribute the CA certificate as a
+ConfigMap named `antrea-ca` in the Antrea deployment Namespace and inject it
+into the APIServices resources created by Antrea in order to allow its clients
+(i.e. antrea-agent, kube-apiserver) to perform authentication.
+
+Typically, clients that wish to access the antrea-controller API can
+authenticate the server by validating against the CA certificate published in
+the `antrea-ca` ConfigMap.
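+
+For example, a client can retrieve the CA certificate with the following
+command (assuming Antrea is deployed in the `kube-system` Namespace):
+
+```bash
+kubectl get configmap antrea-ca -n kube-system -o jsonpath='{.data.ca\.crt}'
+```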
+
+## Providing your own certificates
+
+Since Antrea v0.7.0, you can provide your own certificates to Antrea. To do so,
+you must set the `selfSignedCert` field of `antrea-controller.conf` to `false`,
+so that the antrea-controller will read the certificate key pair from the
+`antrea-controller-tls` Secret. The example manifests and descriptions below
+assume Antrea is deployed in the `kube-system` Namespace. If you deploy Antrea
+in a different Namespace, please update the Namespace name in the manifests
+accordingly.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ app: antrea
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-controller.conf: |
+ selfSignedCert: false
+```
+
+You can generate the required certificate manually, or through
+[cert-manager](https://cert-manager.io/docs/). Either way, the certificate must
+be issued with the following key usages and DNS names:
+
+X509 key usages:
+
+- digital signature
+- key encipherment
+- server auth
+
+DNS names:
+
+- antrea.kube-system.svc
+- antrea.kube-system.svc.cluster.local
+
+**Note: this assumes that `cluster.local` is the cluster domain; you should
+replace it with the actual cluster domain of your Kubernetes cluster.**
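+
+As an illustration, the certificate can be generated manually with OpenSSL
+(version 1.1.1 or later for `-addext`). This is a minimal sketch assuming a
+self-managed CA; the file names and validity periods are placeholders:
+
+```bash
+# Generate a CA key and self-signed CA certificate (skip if you already have a CA).
+openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
+  -keyout ca.key -out ca.crt -subj "/CN=antrea-ca"
+
+# Generate the server key and a CSR carrying the required DNS names.
+openssl req -newkey rsa:2048 -nodes -keyout tls.key -out tls.csr \
+  -subj "/CN=antrea" \
+  -addext "subjectAltName=DNS:antrea.kube-system.svc,DNS:antrea.kube-system.svc.cluster.local"
+
+# Sign the CSR with the CA, setting the required key usages and DNS names.
+openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
+  -days 365 -out tls.crt \
+  -extfile <(printf "subjectAltName=DNS:antrea.kube-system.svc,DNS:antrea.kube-system.svc.cluster.local\nkeyUsage=digitalSignature,keyEncipherment\nextendedKeyUsage=serverAuth")
+```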
+
+You can then create the `antrea-controller-tls` Secret with the certificate key
+pair and the CA certificate in the following form:
+
+```yaml
+apiVersion: v1
+kind: Secret
+# The type can also be Opaque.
+type: kubernetes.io/tls
+metadata:
+ name: antrea-controller-tls
+ namespace: kube-system
+data:
+ ca.crt: <BASE64 ENCODED CA CERTIFICATE>
+ tls.crt: <BASE64 ENCODED TLS CERTIFICATE>
+ tls.key: <BASE64 ENCODED TLS KEY>
+```
+
+### Using kubectl
+
+You can use `kubectl apply -f <PATH TO THE ABOVE MANIFEST>` to create the above
+secret, or use `kubectl create secret`:
+
+```bash
+kubectl create secret generic antrea-controller-tls -n kube-system \
+  --from-file=ca.crt=<PATH TO CA CERTIFICATE> --from-file=tls.crt=<PATH TO TLS CERTIFICATE> --from-file=tls.key=<PATH TO TLS KEY>
+```
+
+### Using cert-manager
+
+If you set up [cert-manager](https://cert-manager.io/docs/) to manage your
+certificates, it can be used to issue and renew the certificate required by
+Antrea.
+
+To get started, follow the [cert-manager installation documentation](
+https://cert-manager.io/docs/installation/kubernetes/) to deploy cert-manager
+and configure `Issuer` or `ClusterIssuer` resources.
+
+The `Certificate` should be created in the `kube-system` Namespace. For
+example, a `Certificate` may look like:
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: Certificate
+metadata:
+ name: antrea-controller-tls
+ namespace: kube-system
+spec:
+ secretName: antrea-controller-tls
+ commonName: antrea
+ dnsNames:
+ - antrea.kube-system.svc
+ - antrea.kube-system.svc.cluster.local
+ usages:
+ - digital signature
+ - key encipherment
+ - server auth
+ issuerRef:
+ # Replace the name with the real Issuer you configured.
+ name: ca-issuer
+ # We can reference ClusterIssuers by changing the kind here.
+ # The default value is Issuer (i.e. a locally namespaced Issuer)
+ kind: Issuer
+```
+
+Once the `Certificate` is created, you should see the `antrea-controller-tls`
+Secret created in the `kube-system` Namespace.
+
+**Note it may take up to 1 minute for Kubernetes to propagate the Secret update
+to the antrea-controller Pod if the Pod starts before the Secret is created.**
+
+## Certificate rotation
+
+Antrea v0.7.0 and higher supports certificate rotation. It can be achieved by
+simply updating the `antrea-controller-tls` Secret. The
+antrea-controller will react to the change, updating its serving certificate and
+re-distributing the latest CA certificate (if applicable).
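+
+For example, if you provided your own certificates, you can rotate them by
+regenerating the Secret from the new files (a sketch; the file names are
+placeholders for your renewed certificate files):
+
+```bash
+kubectl create secret generic antrea-controller-tls -n kube-system \
+  --from-file=ca.crt=ca.crt --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key \
+  --dry-run=client -o yaml | kubectl apply -f -
+```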
+
+If you are using cert-manager to issue the certificate, it will renew the
+certificate before expiry and update the Secret automatically.
+
+If you are using certificates signed by Antrea, Antrea will rotate the
+certificate automatically before expiration.
diff --git a/content/docs/v1.15.0/docs/security.md b/content/docs/v1.15.0/docs/security.md
new file mode 100644
index 00000000..81e2fe68
--- /dev/null
+++ b/content/docs/v1.15.0/docs/security.md
@@ -0,0 +1,185 @@
+# Security Recommendations
+
+This document describes some security recommendations when deploying Antrea in a
+cluster, and in particular a [multi-tenancy
+cluster](https://cloud.google.com/kubernetes-engine/docs/concepts/multitenancy-overview#what_is_multi-tenancy).
+
+To report a vulnerability in Antrea, please refer to
+[SECURITY.md](../SECURITY.md).
+
+For information about securing Antrea control-plane communications, refer to
+this [document](securing-control-plane.md).
+
+## Protecting Your Cluster Against Privilege Escalations
+
+### Antrea Agent
+
+Like all other K8s Network Plugins, Antrea runs an agent (the Antrea Agent) on
+every Node on the cluster, using a K8s DaemonSet. And just like for other K8s
+Network Plugins, this agent requires a specific set of permissions which grant
+it access to the K8s API using
+[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/). These
+permissions are required to implement the different features offered by
+Antrea. If any Node in the cluster happens to become compromised (e.g., by an
+escaped container) and the token for the `antrea-agent` ServiceAccount is
+harvested by the attacker, some of these permissions can be leveraged to
+negatively affect other workloads running on the cluster. In particular, the
+Antrea Agent is granted the following permissions:
+
+* `patch` the `pods/status` resources: a successful attacker could abuse this
+ permission to re-label Pods to facilitate [confused deputy
+ attacks](https://en.wikipedia.org/wiki/Confused_deputy_problem) against
+ built-in controllers. For example, making a Pod match a Service selector in
+ order to man-in-the-middle (MITM) the Service traffic, or making a Pod match a
+ ReplicaSet selector so that the ReplicaSet controller deletes legitimate
+ replicas.
+* `patch` the `nodes/status` resources: a successful attacker could abuse this
+ permission to affect scheduling by modifying Node fields like labels,
+ capacity, and conditions.
+
+In both cases, the Antrea Agent only requires the ability to mutate the
+annotations field for all Pods and Nodes, but with K8s RBAC, the lowest
+permission level that we can grant the Antrea Agent to satisfy this requirement
+is the `patch` verb for the `status` subresource for Pods and Nodes (which also
+provides the ability to mutate labels).
+
+To mitigate the risk presented by these permissions in case of a compromised
+token, we suggest that you use
+[Gatekeeper](https://github.com/open-policy-agent/gatekeeper), with the
+appropriate policy. We provide the following Gatekeeper policy, consisting of a
+`ConstraintTemplate` and the corresponding `Constraint`. When using this policy,
+it will no longer be possible for the `antrea-agent` ServiceAccount to mutate
+anything besides annotations for the Pods and Nodes resources.
+
+```yaml
+# ConstraintTemplate
+apiVersion: templates.gatekeeper.sh/v1
+kind: ConstraintTemplate
+metadata:
+ name: antreaagentstatusupdates
+ annotations:
+ description: >-
+ Disallows unauthorized updates to status subresource by Antrea Agent
+ Only annotations can be mutated
+spec:
+ crd:
+ spec:
+ names:
+ kind: AntreaAgentStatusUpdates
+ targets:
+ - target: admission.k8s.gatekeeper.sh
+ rego: |
+ package antreaagentstatusupdates
+ username := object.get(input.review.userInfo, "username", "")
+ targetUsername := "system:serviceaccount:kube-system:antrea-agent"
+
+ allowed_mutation(object, oldObject) {
+ object.status == oldObject.status
+ object.metadata.labels == oldObject.metadata.labels
+ }
+
+ violation[{"msg": msg}] {
+ username == targetUsername
+ input.review.operation == "UPDATE"
+ input.review.requestSubResource == "status"
+ not allowed_mutation(input.review.object, input.review.oldObject)
+ msg := "Antrea Agent is not allowed to mutate this field"
+ }
+```
+
+```yaml
+# Constraint
+apiVersion: constraints.gatekeeper.sh/v1beta1
+kind: AntreaAgentStatusUpdates
+metadata:
+ name: antrea-agent-status-updates
+spec:
+ match:
+ kinds:
+ - apiGroups: [""]
+ kinds: ["Pod", "Node"]
+```
+
+***Please ensure that the `ValidatingWebhookConfiguration` for your Gatekeeper
+ installation enables policies to be applied on the `pods/status` and
+ `nodes/status` subresources, which may not be the case by default.***
+
+As a reference, the following `ValidatingWebhookConfiguration` rule will cause
+policies to be applied to all resources and their subresources:
+
+```yaml
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*/*'
+ scope: '*'
+```
+
+while the following rule will cause policies to be applied to all resources, but
+not their subresources:
+
+```yaml
+ - apiGroups:
+ - '*'
+ apiVersions:
+ - '*'
+ operations:
+ - CREATE
+ - UPDATE
+ resources:
+ - '*'
+ scope: '*'
+```
+
+### Antrea Controller
+
+The Antrea Controller, which runs as a single-replica Deployment, has
+higher-level permissions than the Antrea Agent. For production clusters
+running Antrea, we recommend scheduling the `antrea-controller` Pod on a
+"secure" Node, which could for example be the Node (or one of the Nodes)
+running the K8s control-plane.
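+
+One way to do this, sketched below, is to add a nodeSelector for the standard
+control-plane Node role label to the Deployment; depending on your cluster, a
+matching toleration may also be required if the Node is tainted:
+
+```bash
+kubectl patch deployment antrea-controller -n kube-system --patch '
+spec:
+  template:
+    spec:
+      nodeSelector:
+        node-role.kubernetes.io/control-plane: ""
+'
+```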
+
+## Protecting Access to Antrea Configuration Files
+
+Antrea relies on persisting files on each K8s Node's filesystem, in order to
+minimize disruptions to network functions across Antrea Agent restarts, in
+particular during an upgrade. All these files are located under
+`/var/run/antrea/`. The most notable of these files is
+`/var/run/antrea/openvswitch/conf.db`, which stores the Open vSwitch
+database. Prior to Antrea v0.10, any user had read access to the file on the
+host (permissions were set to `0644`). Starting with v0.10, this is no longer
+the case (permissions are now set to `0640`). Starting with v0.13, we further
+remove access to the `/var/run/antrea/` directory for non-root users
+(permissions are set to `0750`).
+
+If a malicious Pod can gain read access to this file, or, prior to Antrea v0.10,
+if an attacker can gain access to the host, they can potentially access
+sensitive information stored in the database, most notably the Pre-Shared Key
+(PSK) used to configure [IPsec tunnels](traffic-encryption.md), which is stored
+in plaintext in the database. If a PSK is leaked, an attacker can mount a
+man-in-the-middle attack and intercept tunnel traffic.
+
+If a malicious Pod can gain write access to this file, it can modify the
+contents of the database, and therefore impact network functions.
+
+Administrators of multi-tenancy clusters running Antrea should take steps to
+restrict the access of Pods to `/var/run/antrea/`. One way to achieve this is to
+use a
+[PodSecurityPolicy](https://kubernetes.io/docs/concepts/policy/pod-security-policy)
+and restrict the set of allowed
+[volumes](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems)
+to exclude `hostPath`. **This guidance applies to all multi-tenancy clusters and
+is not specific to Antrea.** To quote the K8s documentation:
+
+> There are many ways a container with unrestricted access to the host
+ filesystem can escalate privileges, including reading data from other
+ containers, and abusing the credentials of system services, such as Kubelet.
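+
+As an illustration, a PodSecurityPolicy which does not include `hostPath` in
+its allowed volume types could look like the sketch below (note that
+PodSecurityPolicy is deprecated since K8s v1.21 and removed in v1.25):
+
+```yaml
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+  name: no-host-path
+spec:
+  privileged: false
+  seLinux:
+    rule: RunAsAny
+  runAsUser:
+    rule: RunAsAny
+  supplementalGroups:
+    rule: RunAsAny
+  fsGroup:
+    rule: RunAsAny
+  # hostPath is intentionally absent from the allowed volume types.
+  volumes:
+  - configMap
+  - emptyDir
+  - projected
+  - secret
+  - downwardAPI
+  - persistentVolumeClaim
+```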
+
+An alternative solution to K8s PodSecurityPolicies is to use
+[Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to constrain usage
+of the host filesystem by Pods.
diff --git a/content/docs/v1.15.0/docs/service-loadbalancer.md b/content/docs/v1.15.0/docs/service-loadbalancer.md
new file mode 100644
index 00000000..cb931789
--- /dev/null
+++ b/content/docs/v1.15.0/docs/service-loadbalancer.md
@@ -0,0 +1,365 @@
+# Service of type LoadBalancer
+
+## Table of Contents
+
+
+- [Service external IP management by Antrea](#service-external-ip-management-by-antrea)
+ - [Preparation](#preparation)
+ - [Configuration](#configuration)
+ - [Enable Service external IP management feature](#enable-service-external-ip-management-feature)
+ - [Create an ExternalIPPool custom resource](#create-an-externalippool-custom-resource)
+ - [Create a Service of type LoadBalancer](#create-a-service-of-type-loadbalancer)
+ - [Validate Service external IP](#validate-service-external-ip)
+ - [Limitations](#limitations)
+- [Using MetalLB with Antrea](#using-metallb-with-antrea)
+ - [Install MetalLB](#install-metallb)
+ - [Configure MetalLB with layer 2 mode](#configure-metallb-with-layer-2-mode)
+ - [Configure MetalLB with BGP mode](#configure-metallb-with-bgp-mode)
+- [Interoperability with kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode)
+ - [Issue with Antrea Egress](#issue-with-antrea-egress)
+
+
+In Kubernetes, implementing Services of type LoadBalancer usually requires
+an external load balancer. On cloud platforms (including public clouds
+and platforms like NSX-T) that support load balancers, Services of type
+LoadBalancer can be implemented by the Kubernetes Cloud Provider, which
+configures the cloud load balancers for the Services. However, load balancer
+support is not available on all platforms, and in some cases deploying
+external load balancers adds complexity or extra cost. This document
+describes two options for supporting Services of type LoadBalancer with Antrea,
+without an external load balancer:
+
+1. Using Antrea's built-in external IP management for Services of type
+LoadBalancer
+2. Leveraging [MetalLB](https://metallb.universe.tf)
+
+## Service external IP management by Antrea
+
+Antrea supports external IP management for Services of type LoadBalancer
+since version 1.5, which can work together with `AntreaProxy` or
+`kube-proxy` to implement Services of type LoadBalancer, without requiring an
+external load balancer. With the external IP management feature, Antrea can
+allocate an external IP for a Service of type LoadBalancer from an
+[ExternalIPPool](egress.md#the-externalippool-resource), and select a Node
+based on the ExternalIPPool's NodeSelector to host the external IP. Antrea
+configures the Service's external IP on the selected Node, and thus Service
+requests to the external IP will get to the Node, and they are then handled by
+`AntreaProxy` or `kube-proxy` on the Node and distributed to the Service's
+Endpoints. Antrea also implements a Node failover mechanism for Service
+external IPs. When Antrea detects a Node hosting an external IP is down, it
+will move the external IP to another available Node of the ExternalIPPool.
+
+### Preparation
+
+If you are using `kube-proxy` in IPVS mode, you need to make sure `strictARP` is
+enabled in the `kube-proxy` configuration. For more information about how to
+configure `kube-proxy`, please refer to the [Interoperability with kube-proxy
+IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section.
+
+If you are using `kube-proxy` iptables mode or [`AntreaProxy` with `proxyAll`](antrea-proxy.md#antreaproxy-with-proxyall),
+no extra configuration change is needed.
+
+### Configuration
+
+#### Enable Service external IP management feature
+
+At this moment, external IP management for Services is an alpha feature of
+Antrea. The `ServiceExternalIP` feature gate of `antrea-agent` and
+`antrea-controller` must be enabled for the feature to work. You can enable
+the `ServiceExternalIP` feature gate in the `antrea-config` ConfigMap in
+the Antrea deployment YAML:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ ServiceExternalIP: true
+ antrea-controller.conf: |
+ featureGates:
+ ServiceExternalIP: true
+```
+
+The feature works with both `AntreaProxy` and `kube-proxy`, including the
+following configurations:
+
+- `AntreaProxy` without `proxyAll` enabled - this is `antrea-agent`'s default
+configuration, in which `kube-proxy` serves the request traffic for Services
+of type LoadBalancer (while `AntreaProxy` handles Service requests from Pods).
+- `AntreaProxy` with `proxyAll` enabled - in this case, `AntreaProxy` handles
+all Service traffic, including Services of type LoadBalancer.
+- `AntreaProxy` disabled - `kube-proxy` handles all Service traffic, including
+Services of type LoadBalancer.
+
+#### Create an ExternalIPPool custom resource
+
+Service external IPs are allocated from an ExternalIPPool, which defines a pool
+of external IPs and the set of Nodes to which the external IPs can be assigned.
+To learn more information about ExternalIPPool, please refer to [the Egress
+documentation](egress.md#the-externalippool-resource). The example below
+defines an ExternalIPPool with IP range "10.10.0.2 - 10.10.0.10", and it
+selects the Nodes with label "network-role: ingress-node" to host the external
+IPs:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: ExternalIPPool
+metadata:
+ name: service-external-ip-pool
+spec:
+ ipRanges:
+ - start: 10.10.0.2
+ end: 10.10.0.10
+ nodeSelector:
+ matchLabels:
+ network-role: ingress-node
+```
+
+#### Create a Service of type LoadBalancer
+
+For Antrea to manage the externalIP for a Service of type LoadBalancer, the
+Service should be annotated with `service.antrea.io/external-ip-pool`. For
+example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ type: LoadBalancer
+```
+
+You can also request a particular IP from an ExternalIPPool by setting
+the `loadBalancerIP` field in the Service spec to a specific IP available
+in the ExternalIPPool; Antrea will then allocate that IP from the
+ExternalIPPool for the Service. For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ loadBalancerIP: "10.10.0.2"
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ type: LoadBalancer
+```
+
+#### Validate Service external IP
+
+Once Antrea allocates an external IP for a Service of type LoadBalancer, it
+will set the IP to the `loadBalancer.ingress` field in the Service resource
+`status`. For example:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: my-service
+ annotations:
+ service.antrea.io/external-ip-pool: "service-external-ip-pool"
+spec:
+ selector:
+ app: MyApp
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 9376
+ clusterIP: 10.96.0.11
+ type: LoadBalancer
+status:
+ loadBalancer:
+ ingress:
+ - ip: 10.10.0.2
+ hostname: node-1
+```
+
+You can validate that the Service can be accessed from a client using
+`<external IP>:<port>` (`10.10.0.2:80/TCP` in the above example).
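+
+For example, assuming an HTTP application behind the Service, you could verify
+access from a client machine on the Node network with:
+
+```bash
+curl http://10.10.0.2:80/
+```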
+
+### Limitations
+
+As described above, Service externalIP management by Antrea configures a
+Service's external IP on a Node, so that the Node can receive Service requests.
+However, this requires that the externalIP on the Node be reachable through the
+Node network. When the Nodes are connected to a layer 2 subnet, the simplest
+way to achieve this is to reserve a range of IPs from the Node network subnet,
+and define Service ExternalIPPools with the reserved IPs. Alternatively, you
+can manually configure Node network routing (e.g. by adding a static route
+entry to the underlay router) to route the Service traffic to the Node that
+hosts the Service's externalIP.
+
+As of now, Antrea supports Service externalIP management only on Linux Nodes.
+Windows Nodes are not supported yet.
+
+## Using MetalLB with Antrea
+
+MetalLB also implements external IP management for Services of type
+LoadBalancer, and it can be deployed to a Kubernetes cluster with Antrea.
+MetalLB supports two modes - layer 2 mode and BGP mode - to advertise a
+Service external IP to the Node network. The layer 2 mode is similar to what
+Antrea external IP management implements and has the same limitation that the
+external IPs must be allocated from the Node network subnet. The BGP mode
+leverages BGP to advertise external IPs to the Node network router. It does
+not have the layer 2 subnet limitation, but requires the Node network to
+support BGP.
+
+MetalLB will automatically allocate external IPs for every Service of type
+LoadBalancer, and it sets the allocated IP to the `loadBalancer.ingress` field
+in the Service resource `status`. MetalLB also supports a user-specified `loadBalancerIP`
+in the Service spec. For more information, please refer to the [MetalLB usage](https://metallb.universe.tf/usage) documentation.
+
+To learn more about MetalLB concepts and functionalities, you can read the
+[MetalLB concepts](https://metallb.universe.tf/concepts).
+
+### Install MetalLB
+
+You can run the following command to install MetalLB using the YAML manifest:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml
+```
+
+The command will deploy MetalLB version 0.13.11 into the Namespace
+`metallb-system`. You can also refer to this [MetalLB installation
+guide](https://metallb.universe.tf/installation) for other ways of installing
+MetalLB.
+
+As MetalLB will allocate external IPs for all Services of type LoadBalancer,
+once it is running, the Service external IP management feature of Antrea should
+not be enabled to avoid conflicts with MetalLB. You can deploy Antrea with the
+default configuration (in which the `ServiceExternalIP` feature gate of
+`antrea-agent` is set to `false`). MetalLB can work with both `AntreaProxy` and
+`kube-proxy` configurations of `antrea-agent`.
+
+### Configure MetalLB with layer 2 mode
+
+Similar to the case of Antrea Service external IP management, MetalLB layer 2
+mode also requires `kube-proxy`'s `strictARP` configuration to be enabled, when
+you are using `kube-proxy` IPVS. Please refer to the [Interoperability with
+kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section for
+more information.
+
+MetalLB is configured through Custom Resources (since v0.13). To configure
+MetalLB to work in the layer 2 mode, you need to create an `L2Advertisement`
+resource, as well as an `IPAddressPool` resource, which provides the IP ranges
+to allocate external IPs from. The IP ranges should be from the Node network
+subnet.
+
+For example:
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: first-pool
+ namespace: metallb-system
+spec:
+ addresses:
+ - 10.10.0.2-10.10.0.10
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: example
+ namespace: metallb-system
+```
+
+### Configure MetalLB with BGP mode
+
+The BGP mode of MetalLB requires more configuration parameters to establish BGP
+peering to the router. The example resources below configure MetalLB using AS
+number 64500 to connect to peer router 10.0.0.1 with AS number 64501:
+
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+ name: sample
+ namespace: metallb-system
+spec:
+ myASN: 64500
+ peerASN: 64501
+ peerAddress: 10.0.0.1
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: first-pool
+ namespace: metallb-system
+spec:
+ addresses:
+ - 10.10.0.2-10.10.0.10
+---
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+ name: example
+ namespace: metallb-system
+```
+
+In addition to the basic layer 2 and BGP mode configurations described in this
+document, MetalLB supports a few more advanced BGP configurations and supports
+configuring multiple IP pools which can use different modes. For more
+information, please refer to the [MetalLB configuration guide](https://metallb.universe.tf/configuration).
+
+## Interoperability with kube-proxy IPVS mode
+
+Both Antrea Service external IP management and MetalLB layer 2 mode require
+`kube-proxy`'s `strictARP` configuration to be enabled, to work with
+`kube-proxy` in IPVS mode. You can check the `strictARP` configuration in the
+`kube-proxy` ConfigMap:
+
+```bash
+$ kubectl describe configmap -n kube-system kube-proxy | grep strictARP
+ strictARP: false
+```
+
+You can set `strictARP` to `true` by editing the `kube-proxy` ConfigMap:
+
+```bash
+kubectl edit configmap -n kube-system kube-proxy
+```
+
+Or, simply run the following command to set it:
+
+```bash
+$ kubectl get configmap kube-proxy -n kube-system -o yaml | \
+ sed -e "s/strictARP: false/strictARP: true/" | \
+ kubectl apply -f - -n kube-system
+```
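+
+Note that `kube-proxy` only reads its configuration at startup, so you will
+likely need to restart the `kube-proxy` Pods for the change to take effect,
+for example with:
+
+```bash
+kubectl rollout restart daemonset kube-proxy -n kube-system
+```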
+
+Finally, verify that the change has been made:
+
+```bash
+$ kubectl describe configmap -n kube-system kube-proxy | grep strictARP
+ strictARP: true
+```
+
+### Issue with Antrea Egress
+
+If you are using Antrea v1.7.0 or later, you can ignore this issue. The Antrea
+Egress implementation prior to v1.7.0 does not work with the `strictARP`
+configuration of `kube-proxy`. This means Antrea Egress cannot work together with
+Service external IP management or MetalLB layer 2 mode, when `kube-proxy` IPVS
+is used. This issue was fixed in Antrea v1.7.0.
diff --git a/content/docs/v1.15.0/docs/support-bundle-guide.md b/content/docs/v1.15.0/docs/support-bundle-guide.md
new file mode 100644
index 00000000..43ff1d89
--- /dev/null
+++ b/content/docs/v1.15.0/docs/support-bundle-guide.md
@@ -0,0 +1,263 @@
+# Support Bundle User Guide
+
+## What is Support Bundle
+
+Antrea supports collecting support bundle tarballs, which include information
+from the Antrea Controller and Antrea Agents. The collected information can
+help debug issues in the Kubernetes cluster.
+
+**Be aware that the generated support bundle includes a lot of information,
+including logs, so please review the contents before sharing it on GitHub
+and ensure that you do not share any sensitive information.**
+
+There are two ways of generating support bundles. First, you can run `antctl supportbundle`
+directly in the Antrea Agent Pod, Antrea Controller Pod, or on a host with a
+`kubeconfig` file for the target cluster. Second, you can apply
+`SupportBundleCollection` CRs to create support bundles for K8s Nodes
+or external Nodes. This feature is named `SupportBundleCollection` in Antrea.
+The details are provided in section [Usage examples](#usage-examples).
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [The SupportBundleCollection CRD](#the-supportbundlecollection-crd)
+- [Usage examples](#usage-examples)
+ - [Running antctl commands](#running-antctl-commands)
+ - [Applying SupportBundleCollection CR](#applying-supportbundlecollection-cr)
+- [List of collected items](#list-of-collected-items)
+- [Limitations](#limitations)
+
+
+## Prerequisites
+
+The `antctl supportbundle` command is supported in Antrea since version 0.7.0.
+
+The `SupportBundleCollection` CRD was introduced in Antrea v1.10.0 as an alpha
+feature. The feature gate must be enabled in both antrea-controller and
+antrea-agent configurations. If you plan to collect support bundles on an external
+Node, you should enable it in the configuration on the external Node as well.
+
+```yaml
+ antrea-agent.conf: |
+ featureGates:
+ # Enable collecting support bundle files with SupportBundleCollection CRD.
+ SupportBundleCollection: true
+```
+
+```yaml
+ antrea-controller.conf: |
+ featureGates:
+ # Enable collecting support bundle files with SupportBundleCollection CRD.
+ SupportBundleCollection: true
+```
+
+A single Namespace (e.g., `default`) is created for saving the Secrets that are
+used to access the support bundle file server, and antrea-controller is given the
+permission to read Secrets in this Namespace by modifying and applying the
+[RBAC file](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/externalnode/support-bundle-collection-rbac.yml).
+
+```yaml
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: antrea-read-secrets
+ namespace: default # Change the Namespace to where the Secret for file server's authentication credential is created.
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: antrea-secret-reader
+subjects:
+ - kind: ServiceAccount
+ name: antrea-controller
+ namespace: kube-system
+```
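+
+To verify that the permission is in place, you can, for example, use
+`kubectl auth can-i` while impersonating the antrea-controller ServiceAccount:
+
+```bash
+# Should print "yes" once the RoleBinding above has been applied.
+kubectl auth can-i get secrets -n default \
+  --as=system:serviceaccount:kube-system:antrea-controller
+```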
+
+## The SupportBundleCollection CRD
+
+SupportBundleCollection CRD is introduced to supplement the `antctl` command
+with three additional features:
+
+1. Allow users to collect support bundle files on external Nodes.
+2. Upload all the support bundle files into a user-provided SFTP Server.
+3. Support tracking status of a SupportBundleCollection CR.
+
+## Usage examples
+
+### Running antctl commands
+
+Please refer to the [antctl user guide section](antctl.md#collecting-support-information).
+Note: `antctl supportbundle` only supports collecting support bundles from the
+Antrea Controller and from Antrea Agents running on K8s Nodes; it does
+not work for Agents on external Nodes.
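+
+For example, running the command out-of-cluster collects support bundles from
+the Controller and from the Agents on all K8s Nodes. This sketch assumes a
+valid kubeconfig at the default location; see the antctl guide for the
+supported flags:
+
+```bash
+antctl supportbundle
+```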
+
+### Applying SupportBundleCollection CR
+
+In this section, we will create two SupportBundleCollection CRs for K8s Nodes
+and external Nodes. Note that Nodes/ExternalNodes can be specified by their
+names or by matching their labels in a SupportBundleCollection CR.
+
+Assume we have a cluster with Nodes named "worker1" and "worker2". In addition,
+we have set up two external Nodes named "vm1" and "vm2" in the "vm-ns" Namespace
+by following the instructions in the [VM installation guide](external-node.md#install-antrea-agent-on-vm).
+In addition, an SFTP server needs to be provided in advance to collect the bundles.
+You can host the SFTP server by applying the YAML `hack/externalnode/sftp-deployment.yml`,
+or deploy one yourself.
+
+A Secret needs to be created in advance with the username and password of the SFTP
+Server. The Secret will be referred to as `authSecret` in the following YAML examples.
+
+```bash
+# Set username and password with `--from-literal=username='foo' --from-literal=password='pass'`
+# if the sftp server is deployed with sftp-deployment.yml
+kubectl create secret generic support-bundle-secret --from-literal=username='your-sftp-username' --from-literal=password='your-sftp-password'
+```
+
+Then we can apply the following YAML files. The first one is to collect support
+bundle on K8s Nodes "worker1" and "worker2": "worker1" is specified by the name,
+and "worker2" is specified by matching label "role: workers". The second one is to
+collect support bundle on external Nodes "vm1" and "vm2" in Namespace "vm-ns":
+"vm1" is specified by the name, and "vm2" is specified by matching label "role: vms".
+
+```bash
+cat << EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha1
+kind: SupportBundleCollection
+metadata:
+ name: support-bundle-for-nodes
+spec:
+ nodes: # All Nodes will be selected if both nodeNames and matchLabels are empty
+ nodeNames:
+ - worker1
+ matchLabels:
+ role: workers
+ expirationMinutes: 10 # expirationMinutes is the requested duration of validity of the SupportBundleCollection. A SupportBundleCollection will be marked as Failed if it does not finish before expiration.
+ sinceTime: 2h # Collect the logs in the latest 2 hours. Collect all available logs if the time is not set.
+ fileServer:
+ url: sftp://yourtestdomain.com:22/root/test
+ authentication:
+ authType: "BasicAuthentication"
+ authSecret:
+ name: support-bundle-secret
+ namespace: default # antrea-controller must be given the permission to read Secrets in "default" Namespace.
+EOF
+```
+
+```bash
+cat << EOF | kubectl apply -f -
+apiVersion: crd.antrea.io/v1alpha1
+kind: SupportBundleCollection
+metadata:
+ name: support-bundle-for-vms
+spec:
+ externalNodes: # All ExternalNodes in the Namespace will be selected if both nodeNames and matchLabels are empty
+ nodeNames:
+ - vm1
+ nodeSelector:
+ matchLabels:
+ role: vms
+ namespace: vm-ns # namespace is mandatory when collecting support bundle from external Nodes.
+ fileServer:
+ url: yourtestdomain.com:22/root/test # Scheme sftp can be omitted. The url of "$controlplane_node_ip:30010/upload" is used if deployed with sftp-deployment.yml.
+ authentication:
+ authType: "BasicAuthentication"
+ authSecret:
+ name: support-bundle-secret
+ namespace: default # antrea-controller must be given the permission to read Secrets in "default" Namespace.
+EOF
+```
+
+For more information about the supported fields in a "SupportBundleCollection"
+CR, please refer to the [CRD definition](https://github.com/antrea-io/antrea/blob/v1.15.0/build/charts/antrea/crds/supportbundlecollection.yaml)
+
+You can check the status of `SupportBundleCollection` by running command
+`kubectl get supportbundlecollections [NAME] -ojson`.
+The following example shows a successful realization of `SupportBundleCollection`.
+`desiredNodes` shows the expected number of Nodes/ExternalNodes to collect with
+this request, while `collectedNodes` shows the number of Nodes/ExternalNodes
+which have already uploaded bundle files to the target file server. If the
+collection completes successfully, `collectedNodes` and `desiredNodes` should
+have an equal value, which should match the number of Nodes/ExternalNodes from
+which you want to collect support bundles.
+
+If the following two conditions are present, the bundle collection has
+succeeded:
+
+1. "Completed" is true.
+2. "CollectionFailure" is false.
+
+If any expected Node/ExternalNode failed to upload the bundle files in the
+required time, the "CollectionFailure" condition will be set to true.
+
+```bash
+$ kubectl get supportbundlecollections support-bundle-for-nodes -ojson
+
+...
+ "status": {
+ "collectedNodes": 1,
+ "conditions": [
+ {
+ "lastTransitionTime": "2022-12-08T06:49:35Z",
+ "status": "True",
+ "type": "Started"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:41Z",
+ "status": "True",
+ "type": "BundleCollected"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:35Z",
+ "status": "False",
+ "type": "CollectionFailure"
+ },
+ {
+ "lastTransitionTime": "2022-12-08T06:49:41Z",
+ "status": "True",
+ "type": "Completed"
+ }
+ ],
+ "desiredNodes": 1
+ }
+```
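+
+To check a single condition programmatically, you can use a JSONPath
+expression, for example:
+
+```bash
+# Print the status of the "Completed" condition (expected: True).
+kubectl get supportbundlecollections support-bundle-for-nodes \
+  -o jsonpath='{.status.conditions[?(@.type=="Completed")].status}'
+```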
+
+The collected bundles should include four tarballs in total. To access these files, you
+can download the files from the SFTP server `yourtestdomain.com`. There will be
+two tarballs for `support-bundle-for-nodes`: "support-bundle-for-nodes_worker1.tar.gz"
+and "support-bundle-for-nodes_worker2.tar.gz", and two for `support-bundle-for-vms`:
+"support-bundle-for-vms_vm1.tar.gz" and "support-bundle-for-vms_vm2.tar.gz", in
+the `/root/test` folder. Run the `tar xvf $TARBALL_NAME` command to extract the
+files from the tarballs.
+
+## List of collected items
+
+Depending on the methods you use to collect the support bundle, the contents in
+the bundle may differ. The following table shows the differences.
+
+We use `agent`, `controller`, and `outside` to represent running the
+`antctl supportbundle` command in the Antrea Agent, in the Antrea Controller, and
+out-of-cluster, respectively. Also, we use `Node` and `ExternalNode` to represent
+"create SupportBundleCollection CR for Nodes" and "create SupportBundleCollection
+CR for external Nodes".
+
+| Collected Item | Supported Collecting Method | Explanation |
+|-----------------------------|----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Antrea Agent Log | `agent`, `outside`, `Node`, `ExternalNode` | Antrea Agent log files |
+| Antrea Controller Log | `controller`, `outside` | Antrea Controller log files |
+| iptables (Linux Only) | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip6tables-save` and `iptables-save` with counters |
+| OVS Ports | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ovs-ofctl dump-ports-desc` |
+| NetworkPolicy Resources | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get appliedtogroups` and `antctl get addressgroups` commands |
+| Heap Pprof | `agent`, `controller`, `outside`, `Node`, `ExternalNode` | Output of [`pprof.WriteHeapProfile`](https://pkg.go.dev/runtime/pprof#WriteHeapProfile) |
+| HNSResources (Windows Only) | `agent`, `outside`, `Node`, `ExternalNode` | Output of `Get-HNSNetwork` and `Get-HNSEndpoint` commands |
+| Antrea Agent Info | `agent`, `outside`, `Node`, `ExternalNode` | YAML output of `antctl get agentinfo` |
+| Antrea Controller Info | `controller`, `outside` | YAML output of `antctl get controllerinfo` |
+| IP Address Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip address` command on Linux or `ipconfig /all` command on Windows |
+| IP Route Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip route` on Linux or `route print` on Windows |
+| IP Link Info | `agent`, `outside`, `Node`, `ExternalNode` | Output of `ip link` on Linux or `Get-NetAdapter` on Windows |
+| Cluster Information | `outside` | Dump of resources in the cluster, including: 1. all Pods, Deployments, Replicasets and Daemonsets in all Namespaces with any resourceVersion. 2. all Nodes with any resourceVersion. 3. all ConfigMaps in all Namespaces with any resourceVersion and label `app=antrea`. |
+| Memberlist State | `agent`, `outside` | YAML output of `antctl get memberlist` |
+
+## Limitations
+
+Only SFTP basic authentication is supported for SupportBundleCollection.
+Other authentication methods will be added in the future.
diff --git a/content/docs/v1.15.0/docs/traceflow-guide.md b/content/docs/v1.15.0/docs/traceflow-guide.md
new file mode 100644
index 00000000..5cf8b5d4
--- /dev/null
+++ b/content/docs/v1.15.0/docs/traceflow-guide.md
@@ -0,0 +1,197 @@
+# Traceflow User Guide
+
+Antrea supports using Traceflow for network diagnosis. It can inject a packet
+into OVS on a Node and trace the forwarding path of the packet across Nodes, and
+it can also trace a matched packet of real traffic from or to a Pod. In either
+case, a Traceflow operation is triggered by a Traceflow CRD which specifies the
+type of Traceflow, the source and destination of the packet to trace, and the
+headers of the packet. The Traceflow results will be populated to the
+`status` field of the Traceflow CRD, which includes the observations of the trace
+packet at various observation points in the forwarding path. Besides creating
+the Traceflow CRD using kubectl, users can also start a Traceflow using
+`antctl`, or from the [Antrea web UI](https://github.com/antrea-io/antrea-ui).
+When using the Antrea web UI, the Traceflow results can be visualized using a
+graph.
+
+## Table of Contents
+
+
+- [Prerequisites](#prerequisites)
+- [Start a New Traceflow](#start-a-new-traceflow)
+ - [Using kubectl and YAML file (IPv4)](#using-kubectl-and-yaml-file-ipv4)
+ - [Using kubectl and YAML file (IPv6)](#using-kubectl-and-yaml-file-ipv6)
+ - [Live-traffic Traceflow](#live-traffic-traceflow)
+ - [Using antctl](#using-antctl)
+ - [Using the Antrea web UI](#using-the-antrea-web-ui)
+- [View Traceflow Result and Graph](#view-traceflow-result-and-graph)
+- [RBAC](#rbac)
+
+
+## Prerequisites
+
+The Traceflow feature is enabled by default since Antrea version 0.11.0. If you
+are using an Antrea version before 0.11.0, you need to enable Traceflow from the
+featureGates map defined in antrea.yml for both Controller and Agent. In order
+to use a Service as the destination in traces, you also need to ensure [AntreaProxy](feature-gates.md)
+is enabled in the Agent configuration:
+
+```yaml
+ antrea-controller.conf: |
+ featureGates:
+ # Enable traceflow which provides packet tracing feature to diagnose network issue.
+ Traceflow: true
+ antrea-agent.conf: |
+ featureGates:
+ # Enable traceflow which provides packet tracing feature to diagnose network issue.
+ Traceflow: true
+ # Enable AntreaProxy which provides ServiceLB for in-cluster Services in antrea-agent.
+ # It should be enabled on Windows, otherwise NetworkPolicy will not take effect on
+ # Service traffic.
+ AntreaProxy: true
+```
+
+## Start a New Traceflow
+
+You can choose to use `kubectl` together with a YAML file, the `antctl traceflow`
+command, or the Antrea UI to start a new trace.
+
+When starting a new trace, you can provide the following information which will be used to build the trace packet:
+
+* source Pod
+* destination Pod, Service or destination IP address
+* transport protocol (TCP/UDP/ICMP)
+* transport ports
+
+### Using kubectl and YAML file (IPv4)
+
+You can start a new trace by creating a Traceflow CRD via kubectl and a YAML file which contains the essential
+configuration of the Traceflow CRD. An example YAML file of the Traceflow CRD might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test
+spec:
+ source:
+ namespace: default
+ pod: tcp-sts-0
+ destination:
+ namespace: default
+ pod: tcp-sts-2
+ # destination can also be an IP address ('ip' field) or a Service name ('service' field); the 3 choices are mutually exclusive.
+ packet:
+ ipHeader: # If ipHeader/ipv6Header is not set, the default value is IPv4+ICMP.
+ protocol: 6 # Protocol here can be 6 (TCP), 17 (UDP) or 1 (ICMP), default value is 1 (ICMP)
+ transportHeader:
+ tcp:
+ srcPort: 10000 # Source port needs to be set when Protocol is TCP/UDP.
+ dstPort: 80 # Destination port needs to be set when Protocol is TCP/UDP.
+ flags: 2 # Construct a SYN packet: 2 is also the default value when the flags field is omitted.
+```
+
+The CRD above starts a new trace from port 10000 of source Pod named `tcp-sts-0` to port 80
+of destination Pod named `tcp-sts-2` using TCP protocol.
+
+### Using kubectl and YAML file (IPv6)
+
+Antrea Traceflow supports IPv6 traffic. An example YAML file of Traceflow CRD might look like this:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test-ipv6
+spec:
+ source:
+ namespace: default
+ pod: tcp-sts-0
+ destination:
+ namespace: default
+ pod: tcp-sts-2
+ # destination can also be an IPv6 address ('ip' field) or a Service name ('service' field); the 3 choices are mutually exclusive.
+ packet:
+ ipv6Header: # ipv6Header MUST be set to run Traceflow in IPv6, and ipHeader will be ignored when ipv6Header is set.
+ nextHeader: 58 # Protocol here can be 6 (TCP), 17 (UDP) or 58 (ICMPv6), default value is 58 (ICMPv6)
+```
+
+The CRD above starts a new trace from source Pod named `tcp-sts-0` to destination Pod named `tcp-sts-2` using ICMPv6
+protocol.
+
+### Live-traffic Traceflow
+
+Starting from Antrea version 1.0.0, you can trace a packet of the real traffic
+from or to a Pod, instead of the injected packet. To start such a Traceflow, add
+`liveTraffic: true` to the Traceflow `spec`. Then, the first packet of the first
+connection that matches the Traceflow spec will be traced (connections opened
+before the Traceflow was initiated will be ignored), and the headers of the
+packet will be captured and reported in the `status` field of the Traceflow CRD,
+in addition to the observations. A live-traffic Traceflow requires only one of
+`source` and `destination` to be specified. When `source` or `destination` is
+not specified, it means that a packet can be captured regardless of its source
+or destination. One of `source` and `destination` must be a Pod. When `source`
+is not specified, or is an IP address, only the receiver Node will capture the
+packet and trace it after the L2 forwarding observation point. This means that
+even if the source of the packet is on the same Node as the destination, no
+observations on the sending path will be reported for the Traceflow. By default,
+a live-traffic Traceflow (the same as a normal Traceflow) will time out after 20
+seconds, and if no matching packet is captured before the timeout, the Traceflow
+will fail. But you can specify a different timeout value, by adding
+`timeout: <value>` to the Traceflow `spec`.
+
+In some cases, it might be useful to capture the packets dropped by
+NetworkPolicies (inc. K8s NetworkPolicies or Antrea native policies). You can
+add `droppedOnly: true` to the live-traffic Traceflow `spec`, then the first
+packet that matches the Traceflow spec and is dropped by a NetworkPolicy will
+be captured and traced.
+
+The following example is a live-traffic Traceflow that captures a dropped UDP
+packet to UDP port 1234 of Pod udp-server, within 1 minute:
+
+```yaml
+apiVersion: crd.antrea.io/v1beta1
+kind: Traceflow
+metadata:
+ name: tf-test
+spec:
+ liveTraffic: true
+ droppedOnly: true
+ destination:
+ namespace: default
+ pod: udp-server
+ packet:
+ transportHeader:
+ udp:
+ dstPort: 1234
+ timeout: 60
+```
+
+### Using antctl
+
+Please refer to the corresponding [antctl page](antctl.md#traceflow).
+
+### Using the Antrea web UI
+
+Please refer to the [Antrea UI documentation](https://github.com/antrea-io/antrea-ui)
+for installation instructions. Once you can access the UI in your browser,
+navigate to the `Traceflow` page.
+
+## View Traceflow Result and Graph
+
+You can always view the Traceflow result directly via the Traceflow CRD status and see whether the packet was
+successfully delivered or dropped at a certain packet-processing stage. Antrea also provides a more user-friendly
+way by showing the Traceflow result via a trace graph when using the Antrea UI.
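+
+For example, assuming the Traceflow created earlier is named `tf-test`, you can
+inspect the reported observations with:
+
+```bash
+# The per-Node observations are reported under .status.results.
+kubectl get traceflow tf-test -o yaml
+```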
+
+## RBAC
+
+Traceflow CRDs are meant for admins to troubleshoot and diagnose the network
+by injecting a packet from a source workload to a destination workload. Thus,
+access to manage these CRDs must be granted to subjects which
+have the authority to perform these diagnostic actions. On cluster
+initialization, Antrea grants the permission to edit these CRDs to the `admin`
+and `edit` ClusterRoles. In addition to this, Antrea also grants the
+permission to view these CRDs to the `view` ClusterRole. Cluster admins can
+therefore grant these ClusterRoles to any subject who may be responsible for
+troubleshooting the network. The admins may also decide to grant the `view`
+ClusterRole to a wider range of subjects to allow them to read the Traceflows
+that are active in the cluster.
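+
+For example, a cluster admin could grant read-only access to Traceflows to a
+hypothetical user `alice` by binding the built-in `view` ClusterRole:
+
+```bash
+kubectl create clusterrolebinding traceflow-viewer --clusterrole=view --user=alice
+```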
diff --git a/content/docs/v1.15.0/docs/traffic-control.md b/content/docs/v1.15.0/docs/traffic-control.md
new file mode 100644
index 00000000..da36e7a8
--- /dev/null
+++ b/content/docs/v1.15.0/docs/traffic-control.md
@@ -0,0 +1,278 @@
+# Traffic Control With Antrea
+
+## Table of Contents
+
+
+- [What is TrafficControl?](#what-is-trafficcontrol)
+- [Prerequisites](#prerequisites)
+- [The TrafficControl resource](#the-trafficcontrol-resource)
+ - [AppliedTo](#appliedto)
+ - [Direction](#direction)
+ - [Action](#action)
+ - [TargetPort](#targetport)
+ - [ReturnPort](#returnport)
+- [Examples](#examples)
+ - [Mirroring all traffic to remote analyzer](#mirroring-all-traffic-to-remote-analyzer)
+ - [Redirecting specific traffic to local receiver](#redirecting-specific-traffic-to-local-receiver)
+- [What's next](#whats-next)
+
+
+## What is TrafficControl?
+
+`TrafficControl` is a CRD API that manages and manipulates the transmission of
+Pod traffic. It allows users to mirror or redirect specific traffic originating
+from specific Pods or destined for specific Pods to a local network device or a
+remote destination via a tunnel of various types. It provides full visibility
+into network traffic, including both north-south and east-west traffic.
+
+You may be interested in using this capability if any of the following apply:
+
+- You want to monitor network traffic passing in or out of a set of Pods for
+ purposes such as troubleshooting, intrusion detection, and so on.
+
+- You want to redirect network traffic passing in or out of a set of Pods to
+ applications that enforce policies, and reject traffic to prevent intrusion.
+
+This guide demonstrates how to configure `TrafficControl` to achieve the above
+goals.
+
+## Prerequisites
+
+TrafficControl was introduced in v1.7 as an alpha feature. The `TrafficControl`
+feature gate must be enabled in the antrea-agent configuration in the
+`antrea-config` ConfigMap for the feature to work, as shown below:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ featureGates:
+ TrafficControl: true
+```
+
+## The TrafficControl resource
+
+A TrafficControl in Kubernetes is a REST object. Like all the REST objects, you
+can POST a TrafficControl definition to the API server to create a new instance.
+For example, supposing you have a set of Pods which contain a label `app=web`,
+the following specification creates a new TrafficControl object named
+"mirror-web-app", which mirrors all traffic from or to any Pod with the
+`app=web` label and sends it to a receiver running on "10.0.10.2", encapsulated
+within a VXLAN tunnel:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: mirror-web-app
+spec:
+ appliedTo:
+ podSelector:
+ matchLabels:
+ app: web
+ direction: Both
+ action: Mirror
+ targetPort:
+ vxlan:
+ remoteIP: 10.0.10.2
+```
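+
+As a sketch, you can create this resource with `kubectl apply` and then list
+the cluster-scoped TrafficControl objects, assuming the YAML above is saved as
+`tc.yaml`:
+
+```bash
+kubectl apply -f tc.yaml
+kubectl get trafficcontrols
+```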
+
+### AppliedTo
+
+The `appliedTo` field specifies the grouping criteria of Pods to which the
+TrafficControl applies. Pods can be selected cluster-wide using
+`podSelector`. If set with a `namespaceSelector`, all Pods from Namespaces
+selected by the `namespaceSelector` will be selected. Specific Pods from
+specific Namespaces can be selected by providing both a `podSelector` and a
+`namespaceSelector`. Empty `appliedTo` selects nothing. The field is mandatory.
+
+### Direction
+
+The `direction` field specifies the direction of traffic that should be matched.
+It can be `Ingress`, `Egress`, or `Both`.
+
+### Action
+
+The `action` field specifies which action should be taken for the traffic. It
+can be `Mirror` or `Redirect`. For the `Mirror` action, `targetPort` must be
+set to the port to which the traffic will be mirrored. For the `Redirect`
+action, both `targetPort` and `returnPort` need to be specified, the latter of
+which represents the port from which the traffic could be sent back to OVS and
+be forwarded to its original destination. Once redirected, a packet should be
+either dropped or sent back to OVS without modification, otherwise it would lead
+to undefined behavior.
+
+### TargetPort
+
+The `targetPort` field specifies the port to which the traffic should be
+redirected or mirrored. There are five kinds of ports that can be used to
+receive mirrored traffic:
+
+**ovsInternal**: This specifies an OVS internal port on all Nodes. A Pod's
+traffic will be redirected or mirrored to the OVS internal port on the same Node
+that hosts the Pod. The port doesn't need to exist in advance; Antrea will
+create it if needed. To use an OVS internal port, the `name` of
+the port must be provided:
+
+```yaml
+ovsInternal:
+ name: tap0
+```
+
+**device**: This specifies a network device on all Nodes. A Pod's traffic will
+be redirected or mirrored to the network device on the same Node that hosts the
+Pod. The network device must exist on all Nodes and Antrea will attach it to the
+OVS bridge if not already attached. To use a network device, the `name` of the
+device must be provided:
+
+```yaml
+device:
+ name: eno2
+```
+
+**geneve**: This specifies a remote destination for a GENEVE tunnel. All
+selected Pods' traffic will be redirected or mirrored to the destination via
+a GENEVE tunnel. The `remoteIP` field must be provided to specify the IP address
+of the destination. Optionally, the `destinationPort` field can be used to
+specify the UDP destination port of the tunnel; otherwise, 6081 is used by default.
+If a Virtual Network Identifier (VNI) is desired, the `vni` field can be set
+to an integer in the range 0-16,777,215:
+
+```yaml
+geneve:
+ remoteIP: 10.0.10.2
+ destinationPort: 6081
+ vni: 1
+```
+
+**vxlan**: This specifies a remote destination for a VXLAN tunnel. All
+selected Pods' traffic will be redirected or mirrored to the destination via
+a VXLAN tunnel. The `remoteIP` field must be provided to specify the IP address
+of the destination. Optionally, the `destinationPort` field can be used to
+specify the UDP destination port of the tunnel; otherwise, 4789 is used by default.
+If a Virtual Network Identifier (VNI) is desired, the `vni` field can be set
+to an integer in the range 0-16,777,215:
+
+```yaml
+vxlan:
+ remoteIP: 10.0.10.2
+ destinationPort: 4789
+ vni: 1
+```
+
+**gre**: This specifies a remote destination for a GRE tunnel. All selected
+Pods' traffic will be redirected or mirrored to the destination via a GRE
+tunnel. The `remoteIP` field must be provided to specify the IP address of the
+destination. If a GRE key is desired, the `key` field can be set to an
+integer in the range 0-4,294,967,295:
+
+```yaml
+gre:
+ remoteIP: 10.0.10.2
+ key: 1
+```
+
+**erspan**: This specifies a remote destination for an ERSPAN tunnel. All
+selected Pods' traffic will be mirrored to the destination via an ERSPAN tunnel.
+The `remoteIP` field must be provided to specify the IP address of the
+destination. If an ERSPAN session ID is desired, the `sessionID` field can be
+set to an integer in the range 0-1,023. The `version` field must be
+provided to specify the ERSPAN version: 1 for version 1 (type II), or 2 for
+version 2 (type III).
+
+For version 1, the `index` field can be specified to associate with the ERSPAN
+traffic's source port and direction. An example of version 1 might look like
+this:
+
+```yaml
+erspan:
+ remoteIP: 10.0.10.2
+ sessionID: 1
+ version: 1
+ index: 1
+```
+
+For version 2, the `dir` field can be specified to indicate the mirrored
+traffic's direction: 0 for ingress traffic, 1 for egress traffic. The
+`hardwareID` field can be specified as a unique identifier of an ERSPAN v2
+engine. An example of version 2 might look like this:
+
+```yaml
+erspan:
+ remoteIP: 10.0.10.2
+ sessionID: 1
+ version: 2
+ dir: 0
+ hardwareID: 4
+```
+
+### ReturnPort
+
+The `returnPort` field should only be set when the `action` is `Redirect`. It is
+similar to the `targetPort` field, but meant for specifying the port from which
+the traffic will be sent back to OVS and be forwarded to its original
+destination.
+
+## Examples
+
+### Mirroring all traffic to remote analyzer
+
+In this example, we will mirror all Pods' traffic and send it to a remote
+destination via a GENEVE tunnel:
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: mirror-all-to-remote
+spec:
+ appliedTo:
+ podSelector: {}
+ direction: Both
+ action: Mirror
+ targetPort:
+ geneve:
+ remoteIP: 10.0.10.2
+```
+
+### Redirecting specific traffic to local receiver
+
+In this example, we will redirect traffic of all Pods in the Namespace `prod` to
+OVS internal ports named `tap0` configured on Nodes that these Pods run on.
+The `returnPort` configuration means that if the traffic is sent back to OVS
+from the OVS internal port named `tap1`, it will be forwarded to its original
+destination. Therefore, if an intrusion prevention system or a network firewall
+is configured to capture and forward traffic between `tap0` and `tap1`, it can
+actively scan forwarded network traffic for malicious activities and known
+attack patterns, and drop the traffic determined to be malicious.
+
+```yaml
+apiVersion: crd.antrea.io/v1alpha2
+kind: TrafficControl
+metadata:
+ name: redirect-prod-to-local
+spec:
+ appliedTo:
+ namespaceSelector:
+ matchLabels:
+ kubernetes.io/metadata.name: prod
+ direction: Both
+ action: Redirect
+ targetPort:
+ ovsInternal:
+ name: tap0
+ returnPort:
+ ovsInternal:
+ name: tap1
+```
+
+## What's next
+
+With the `TrafficControl` capability, Antrea can be used with threat detection
+engines to provide network-based IDS/IPS to Pods. We provide a reference
+cookbook on how to implement IDS using Suricata. For more information, refer to
+the [cookbook](cookbooks/ids).
diff --git a/content/docs/v1.15.0/docs/traffic-encryption.md b/content/docs/v1.15.0/docs/traffic-encryption.md
new file mode 100644
index 00000000..ea47ab67
--- /dev/null
+++ b/content/docs/v1.15.0/docs/traffic-encryption.md
@@ -0,0 +1,129 @@
+# Traffic Encryption with Antrea
+
+Antrea supports encrypting traffic across Linux Nodes with IPsec ESP or
+WireGuard. Traffic encryption is not supported on Windows Nodes yet.
+
+## IPsec
+
+IPsec encryption works for all tunnel types supported by OVS, including Geneve,
+GRE, VXLAN, and STT tunnels.
+
+Note that GRE is not supported for IPv6 clusters (IPv6-only or dual-stack
+clusters). For such clusters, please choose a different tunnel type such as
+Geneve or VXLAN.
+
+### Prerequisites
+
+IPsec requires a set of Linux kernel modules. Check the required kernel modules
+listed in the [strongSwan documentation](https://wiki.strongswan.org/projects/strongswan/wiki/KernelModules).
+Make sure the required kernel modules are loaded on the Kubernetes Nodes before
+deploying Antrea with IPsec encryption enabled.
+
+If you want to enable IPsec with Geneve, please make sure [this commit](https://github.com/torvalds/linux/commit/34beb21594519ce64a55a498c2fe7d567bc1ca20)
+is included in the kernel. For Ubuntu 18.04, kernel version should be at least
+`4.15.0-128`. For Ubuntu 20.04, kernel version should be at least `5.4.70`.
+
+### Antrea installation
+
+You can simply apply the [Antrea IPsec deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea-ipsec.yml)
+to deploy Antrea with IPsec encryption enabled. To deploy a released version of
+Antrea, pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases).
+Note that IPsec support was added in release 0.3.0, which means you cannot
+pick a release older than 0.3.0. For any given release `<TAG>` (e.g. `v0.3.0`),
+get the Antrea IPsec deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-ipsec.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+IPsec deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-ipsec.yml
+```
+
+Antrea leverages strongSwan as the IKE daemon, and supports using pre-shared key
+(PSK) for IKE authentication. The deployment yaml creates a Kubernetes Secret
+`antrea-ipsec` to store the PSK string. For security reasons, we recommend
+changing the default PSK string in the yaml file. You can edit the yaml file,
+and update the `psk` field in the `antrea-ipsec` Secret spec to any string you
+want to use. Check the `antrea-ipsec` Secret spec below:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: antrea-ipsec
+ namespace: kube-system
+stringData:
+ psk: changeme
+type: Opaque
+```
+
+After updating the PSK value, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea-ipsec.yml
+```
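+
+Alternatively, instead of editing the yaml file, you can overwrite the Secret
+after deployment. A sketch (note that the antrea-agent Pods may need to be
+restarted to pick up the new PSK):
+
+```bash
+# Regenerate the antrea-ipsec Secret with your own PSK and apply it.
+kubectl create secret generic antrea-ipsec -n kube-system \
+  --from-literal=psk='your-psk-string' --dry-run=client -o yaml | kubectl apply -f -
+```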
+
+By default, the deployment yaml uses GRE as the tunnel type, which you can
+change by editing the file. You will need to change the tunnel type to another
+one if your cluster supports IPv6.
+
+## WireGuard
+
+Antrea can leverage [WireGuard](https://www.wireguard.com) to encrypt Pod traffic
+between Nodes. WireGuard encryption works like another tunnel type, and when it
+is enabled the `tunnelType` parameter in the `antrea-agent` configuration file
+will be ignored.
+
+### Prerequisites
+
+WireGuard encryption requires the `wireguard` kernel module to be present on the
+Kubernetes Nodes. The `wireguard` module has been part of the mainline kernel
+since Linux 5.6. Alternatively, you can compile the module from source code with
+a kernel version >= 3.10.
+[This WireGuard installation guide](https://www.wireguard.com/install) documents how to
+install WireGuard together with the kernel module on various operating systems.
+
+### Antrea installation
+
+First, download the [Antrea deployment yaml](https://github.com/antrea-io/antrea/blob/v1.15.0/build/yamls/antrea.yml). To deploy
+a released version of Antrea, pick a version from the [list of releases](https://github.com/antrea-io/antrea/releases).
+Note that WireGuard support was added in release 1.3.0, which means you cannot
+pick a release older than 1.3.0. For any given release `<TAG>` (e.g. `v1.3.0`),
+get the Antrea deployment yaml at:
+
+```text
+https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+To deploy the latest version of Antrea (built from the main branch), get the
+deployment yaml at:
+
+```text
+https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml
+```
+
+To enable WireGuard encryption, set the `trafficEncryptionMode` config parameter
+of `antrea-agent` to `wireGuard`. The `trafficEncryptionMode` config parameter is
+defined in `antrea-agent.conf` of `antrea` ConfigMap in the Antrea deployment
+yaml:
+
+```yaml
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: antrea-config
+ namespace: kube-system
+data:
+ antrea-agent.conf: |
+ trafficEncryptionMode: wireGuard
+```
+
+After saving the yaml file change, deploy Antrea with:
+
+```bash
+kubectl apply -f antrea.yml
+```
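+
+To check that WireGuard is in use after the Agents restart, you can, for
+example, inspect the WireGuard device on a Node. This assumes the
+`wireguard-tools` package is installed on the Node and that the default Antrea
+interface name `antrea-wg0` is used:
+
+```bash
+# List the WireGuard peers (one per remote Node) on the Antrea interface.
+sudo wg show antrea-wg0
+```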
diff --git a/content/docs/v1.15.0/docs/troubleshooting.md b/content/docs/v1.15.0/docs/troubleshooting.md
new file mode 100644
index 00000000..d3a7fe4e
--- /dev/null
+++ b/content/docs/v1.15.0/docs/troubleshooting.md
@@ -0,0 +1,302 @@
+# Troubleshooting
+
+## Table of Contents
+
+
+- [Looking at the Antrea logs](#looking-at-the-antrea-logs)
+- [Accessing the antrea-controller API](#accessing-the-antrea-controller-api)
+ - [Using antctl](#using-antctl)
+ - [Using kubectl proxy](#using-kubectl-proxy)
+ - [Using antctl proxy](#using-antctl-proxy)
+ - [Directly accessing the antrea-controller API](#directly-accessing-the-antrea-controller-api)
+- [Accessing the antrea-agent API](#accessing-the-antrea-agent-api)
+ - [Using antctl](#using-antctl-1)
+ - [Using antctl proxy](#using-antctl-proxy-1)
+ - [Directly accessing the antrea-agent API](#directly-accessing-the-antrea-agent-api)
+- [Accessing the flow-aggregator API](#accessing-the-flow-aggregator-api)
+ - [Using antctl](#using-antctl-2)
+ - [Directly accessing the flow-aggregator API](#directly-accessing-the-flow-aggregator-api)
+- [Troubleshooting Open vSwitch](#troubleshooting-open-vswitch)
+- [Troubleshooting with antctl](#troubleshooting-with-antctl)
+- [Profiling Antrea components](#profiling-antrea-components)
+- [Ask your questions to the Antrea community](#ask-your-questions-to-the-antrea-community)
+
+
+## Looking at the Antrea logs
+
+You can inspect the `antrea-controller` logs in the `antrea-controller` Pod by
+running this `kubectl` command:
+
+```bash
+kubectl logs <antrea-controller-pod-name> -n kube-system
+```
+
+To check the logs of the `antrea-agent`, `antrea-ovs`, and `antrea-ipsec`
+containers in an `antrea-agent` Pod, run command:
+
+```bash
+kubectl logs <antrea-agent-pod-name> -n kube-system -c [antrea-agent|antrea-ovs|antrea-ipsec]
+```
+
+To check the OVS daemon logs (e.g. if the `antrea-ovs` container logs indicate
+that one of the OVS daemons generated an error), you can use `kubectl exec`:
+
+```bash
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- tail /var/log/openvswitch/<DAEMON>.log
+```
+
+The `antrea-controller` Pod and the list of `antrea-agent` Pods, along with the
+Nodes on which the Pods are scheduled, can be returned by command:
+
+```bash
+kubectl get pods -n kube-system -l app=antrea -o wide
+```
+
+Logs of `antrea-controller`, `antrea-agent`, OVS and strongSwan daemons are also
+stored in the filesystem of the Node (i.e. the Node on which the
+`antrea-controller` or `antrea-agent` Pod is scheduled).
+
+- `antrea-controller` logs are stored in directory: `/var/log/antrea` (on the
+Node where the `antrea-controller` Pod is scheduled).
+- `antrea-agent` logs are stored in directory: `/var/log/antrea` (on the Node
+where the `antrea-agent` Pod is scheduled).
+- Logs of the OVS daemons - `ovs-vswitchd`, `ovsdb-server`, `ovs-monitor-ipsec` -
+are stored in directory: `/var/log/antrea/openvswitch` (on the Node where the
+`antrea-agent` Pod is scheduled).
+- strongSwan daemon logs are stored in directory: `/var/log/antrea/strongswan`
+(on the Node where the `antrea-agent` Pod is scheduled).
+
+To increase the log level for the `antrea-agent` and the `antrea-controller`, you
+can edit the `--v=0` arg in the Antrea manifest to a desired level.
+Alternatively, you can generate an Antrea manifest with an increased log level
+of 4 (maximum debug level) using `generate-manifest.sh`:
+
+```bash
+hack/generate-manifest.sh --mode dev --verbose-log
+```
+
+## Accessing the antrea-controller API
+
+antrea-controller runs as a Deployment, exposes its API via a Service and
+registers an APIService to aggregate into the Kubernetes API. To access the
+antrea-controller API, you need to know its address and have the credentials
+to access it. There are multiple ways in which you can access the API:
+
+### Using antctl
+
+Typically, `antctl` handles locating the Kubernetes API server and
+authentication when it runs in an environment with kubeconfig set up. Like
+`kubectl`, `antctl` looks for a file named `config` in the `$HOME/.kube` directory.
+You can specify other kubeconfig files by setting the `--kubeconfig` flag.
+
+For example, you can view internal NetworkPolicy objects with this command:
+
+```bash
+antctl get networkpolicy
+```
+
+### Using kubectl proxy
+
+As the antrea-controller API is aggregated into the Kubernetes API, you can
+access it through the Kubernetes API using the appropriate URL paths. The
+following command runs `kubectl` in a mode where it acts as a reverse proxy for
+the Kubernetes API and handles authentication.
+
+```bash
+# Start the proxy in the background
+kubectl proxy &
+# Access the antrea-controller API path
+curl 127.0.0.1:8001/apis/controlplane.antrea.io
+```
+
+### Using antctl proxy
+
+Antctl supports running a reverse proxy (similar to the kubectl one) which
+enables access to the entire Antrea Controller API (not just aggregated API
+Services), but does not secure the TLS connection between the proxy and the
+Controller. Refer to the [antctl documentation](antctl.md#antctl-proxy) for more
+information.
+
+### Directly accessing the antrea-controller API
+
+If you want to directly access the antrea-controller API, you need to get its
+address and pass an authentication token when accessing it, like this:
+
+```bash
+# Get the antrea Service address
+ANTREA_SVC=$(kubectl get service antrea -n kube-system -o jsonpath='{.spec.clusterIP}')
+# Get the token value of antctl account, you can use any ServiceAccount that has permissions to antrea API.
+TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath="{.data.token}"|base64 --decode)
+# Access antrea API with TOKEN
+curl --insecure --header "Authorization: Bearer $TOKEN" https://$ANTREA_SVC/apis
+```
+
+## Accessing the antrea-agent API
+
+antrea-agent runs as a DaemonSet Pod on each Node and exposes its API via a
+local endpoint. There are two ways you can access it:
+
+### Using antctl
+
+To use `antctl` to access the antrea-agent API, you need to exec into the
+antrea-agent container first. `antctl` is embedded in the image so it can be
+used directly.
+
+For example, you can view the internal NetworkPolicy objects for a specific
+agent with this command:
+
+```bash
+# Get into the antrea-agent container
+kubectl exec -it <antrea-agent-pod-name> -n kube-system -c antrea-agent -- bash
+# View the agent's NetworkPolicy
+antctl get networkpolicy
+```
+
+### Using antctl proxy
+
+Antctl supports running a reverse proxy (similar to the kubectl one) which
+enables access to the entire Antrea Agent API, but does not secure the TLS
+connection between the proxy and the Agent. Refer to the [antctl
+documentation](antctl.md#antctl-proxy) for more information.
+
+### Directly accessing the antrea-agent API
+
+If you want to directly access the antrea-agent API, you need to log into the
+Node that the antrea-agent runs on or exec into the antrea-agent container. Then
+access the local endpoint directly using the Bearer Token stored in the file
+system:
+
+```bash
+TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token)
+curl --insecure --header "Authorization: Bearer $TOKEN" https://127.0.0.1:10350/
+```
+
+Note that you can also access the antrea-agent API from outside the Node by
+using the authentication token of the `antctl` ServiceAccount:
+
+```bash
+# Get the token value of antctl account.
+TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath="{.data.token}"|base64 --decode)
+# Access antrea API with TOKEN
+curl --insecure --header "Authorization: Bearer $TOKEN" https://<node-ip>:10350/podinterfaces
+```
+
+However, in this case you will be limited to the endpoints that `antctl` is
+allowed to access, as defined
+[here](https://github.com/antrea-io/antrea/blob/v1.15.0/build/charts/antrea/templates/antctl/clusterrole.yaml).
+
+## Accessing the flow-aggregator API
+
+flow-aggregator runs as a Deployment and exposes its API via a local endpoint.
+There are two ways you can access it:
+
+### Using antctl
+
+To use `antctl` to access the flow-aggregator API, you need to exec into the
+flow-aggregator container first. `antctl` is embedded in the image so it can be
+used directly.
+
+For example, you can dump the flow records with this command:
+
+```bash
+# Get into the flow-aggregator container
+kubectl exec -it <flow-aggregator-pod-name> -n flow-aggregator -- bash
+# View the flow records
+antctl get flowrecords
+```
+
+### Directly accessing the flow-aggregator API
+
+If you want to directly access the flow-aggregator API, you need to exec into
+the flow-aggregator container. Then access the local endpoint directly using the
+Bearer Token stored in the file system:
+
+```bash
+TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token)
+curl --insecure --header "Authorization: Bearer $TOKEN" https://127.0.0.1:10348/
+```
+
+## Troubleshooting Open vSwitch
+
+OVS daemons (`ovsdb-server` and `ovs-vswitchd`) run inside the `antrea-ovs`
+container of the `antrea-agent` Pod. You can use `kubectl exec` to execute OVS
+command line tools (e.g. `ovs-vsctl`, `ovs-ofctl`, `ovs-appctl`) in the
+container, for example:
+
+```bash
+kubectl exec -n kube-system <antrea-agent-pod-name> -c antrea-ovs -- ovs-vsctl show
+```
+
+By default the host directory `/var/run/antrea/openvswitch/` is mounted to
+`/var/run/openvswitch/` of the `antrea-ovs` container and is used as the parent
+directory of the OVS UNIX domain sockets and configuration database file.
+Therefore, you may execute some OVS command line tools (inc. `ovs-vsctl` and
+`ovs-ofctl`) from a Kubernetes Node - assuming they are installed on the Node -
+by specifying the socket file path explicitly, for example:
+
+```bash
+ovs-vsctl --db unix:/var/run/antrea/openvswitch/db.sock show
+ovs-ofctl show unix:/var/run/antrea/openvswitch/br-int.mgmt
+```
+
+Commands to check basic OVS and OpenFlow information include:
+
+- `ovs-vsctl show`: dump OVS bridge and port configuration. Outputs of the
+command are like:
+
+```bash
+f06768ee-17ec-4abb-a971-b3b76abc8cda
+ Bridge br-int
+ datapath_type: system
+ Port coredns--e526c8
+ Interface coredns--e526c8
+ Port antrea-tun0
+ Interface antrea-tun0
+ type: geneve
+ options: {key=flow, remote_ip=flow}
+ Port antrea-gw0
+ Interface antrea-gw0
+ type: internal
+ ovs_version: "2.17.7"
+```
+
+- `ovs-ofctl show br-int`: show OpenFlow information of the OVS bridge.
+- `ovs-ofctl dump-flows br-int`: dump OpenFlow entries of the OVS bridge.
+- `ovs-ofctl dump-ports br-int`: dump traffic statistics of the OVS ports.
+
+For more information on the usage of the OVS CLI tools, check the
+[Open vSwitch Manpages](https://www.openvswitch.org/support/dist-docs).
+
+## Troubleshooting with antctl
+
+`antctl` provides some useful commands to troubleshoot Antrea Controller and
+Agent, which can print the runtime information of `antrea-controller` and
+`antrea-agent`, dump NetworkPolicy objects, dump Pod network interface
+information on a Node, dump Antrea OVS flows, and perform OVS packet tracing.
+Refer to the [`antctl` guide](antctl.md#usage) to learn how to use these
+commands.
+
+## Profiling Antrea components
+
+The easiest way to profile the Antrea components is to use the Go
+[pprof](https://golang.org/pkg/net/http/pprof/) tool. Both the Antrea Agent and
+the Antrea Controller use the K8s apiserver library to serve their API, and this
+library enables the pprof HTTP server by default. In order to access it without
+having to worry about authentication, you can use the antctl proxy function.
+
+For example, this is what you would do to look at a 30-second CPU profile for
+the Antrea Controller:
+
+```bash
+# Start the proxy in the background
+antctl proxy --controller &
+# Look at a 30-second CPU profile
+go tool pprof http://127.0.0.1:8001/debug/pprof/profile?seconds=30
+```
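+
+The same endpoint serves the other standard pprof profiles. For example, to
+look at a heap profile instead:
+
+```bash
+go tool pprof http://127.0.0.1:8001/debug/pprof/heap
+```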
+
+## Ask your questions to the Antrea community
+
+If you are running into issues when running Antrea and you need help, ask your
+questions on [GitHub](https://github.com/antrea-io/antrea/issues/new/choose)
+or [reach out to us on Slack or during the Antrea office
+hours](../README.md#community).
diff --git a/content/docs/v1.15.0/docs/versioning.md b/content/docs/v1.15.0/docs/versioning.md
new file mode 100644
index 00000000..c8d13b78
--- /dev/null
+++ b/content/docs/v1.15.0/docs/versioning.md
@@ -0,0 +1,263 @@
+# Antrea Versioning
+
+## Table of Contents
+
+
+- [Versioning scheme](#versioning-scheme)
+ - [Minor releases and patch releases](#minor-releases-and-patch-releases)
+ - [Feature stability](#feature-stability)
+- [Release cycle](#release-cycle)
+- [Antrea upgrade and supported version skew](#antrea-upgrade-and-supported-version-skew)
+- [Supported K8s versions](#supported-k8s-versions)
+- [Deprecation policies](#deprecation-policies)
+ - [Prometheus metrics deprecation policy](#prometheus-metrics-deprecation-policy)
+ - [APIs deprecation policy](#apis-deprecation-policy)
+- [Introducing new API resources](#introducing-new-api-resources)
+ - [Introducing new CRDs](#introducing-new-crds)
+
+
+## Versioning scheme
+
+Antrea versions are expressed as `x.y.z`, where `x` is the major version, `y` is
+the minor version, and `z` is the patch version, following [Semantic Versioning]
+terminology.
+
+### Minor releases and patch releases
+
+Unlike minor releases, patch releases should not contain miscellaneous feature
+additions or improvements. No incompatibilities should ever be introduced
+between patch versions of the same minor version. API groups / versions must not
+be introduced or removed as part of patch releases.
+
+Patch releases are intended for important bug fixes to recent minor versions,
+such as addressing security vulnerabilities, fixes to problems preventing Antrea
+from being deployed & used successfully by a significant number of users, severe
+problems with no workaround, and blockers for products (including commercial
+products) which rely on Antrea.
+
+When it comes to dependencies, the following rules are observed between patch
+versions of the same Antrea minor versions:
+
+* the same minor OVS version should be used
+* the same minor version should be used for all Go dependencies, unless
+ updating to a new minor / major version is required for an important bug fix
+* for Antrea Docker images shipped as part of a patch release, the same version
+ must be used for the base Operating System (Linux distribution / Windows
+ server), unless an update is required to fix a critical bug. If important
+ updates are available for a given Operating System version (e.g. which address
+ security vulnerabilities), they should be included in Antrea patch releases.
+
+### Feature stability
+
+For every Antrea minor release, the stability level of supported features may be
+updated (from `Alpha` to `Beta` or from `Beta` to `GA`). Refer to the
+[CHANGELOG] for information about feature stability level for each release. For
+features controlled by a feature gate, this information is also present in a
+more structured way in [feature-gates.md](feature-gates.md).
+
+## Release cycle
+
+New Antrea minor releases are currently shipped every 6 to 8 weeks. This fast
+release cadence enables us to ship new features quickly and frequently. It may
+change in the future. Compared to deploying the top-of-tree of the Antrea main
+branch, using a released version should provide more stability
+guarantees:
+
+* despite our CI pipelines, some bugs can sneak into the branch and be fixed
+ shortly after
+* merge conflicts can break the top-of-tree temporarily
+* some CI jobs are run periodically and not for every pull request before merge;
+ as much as possible we run the entire test suite for each release candidate
+
+Antrea maintains release branches for the two most recent minor releases
+(e.g. the `release-0.10` and `release-0.11` branches are maintained until Antrea
+0.12 is released). As part of this maintenance process, patch versions are
+released as frequently as needed, following these
+[guidelines](#minor-releases-and-patch-releases). With the current release
+cadence, this means that each minor release receives approximately 3 months of
+patch support. This may seem short, but was done on purpose to encourage users
+to upgrade Antrea often and avoid potential incompatibility issues. In the
+future, we may reduce our release cadence for minor releases and simultaneously
+increase the support window for each release.
+
+## Antrea upgrade and supported version skew
+
+Our goal is to support "graceful" upgrades for Antrea. By "graceful", we notably
+mean that there should be no significant disruption to data-plane connectivity
+nor to policy enforcement, beyond the necessary disruption incurred by the
+restart of individual components:
+
+* during the Antrea Controller restart, new policies will not be
+ processed. Because the Controller also runs the validation webhook for
+ [Antrea-native policies](antrea-network-policy.md), an attempt to create an
+ Antrea-native policy resource before the restart is complete may return an
+ error.
+* during an Antrea Agent restart, the Node's data-plane will be impacted: new
+ connections to & from the Node will not be possible, and existing connections
+ may break.
+
+In particular, it should be possible to upgrade Antrea without compromising
+enforcement of existing network policies for both new and existing Pods.
+
+In order to achieve this, the different Antrea components need to support
+version skew.
+
+* **Antrea Controller**: must be upgraded first
+* **Antrea Agent**: must not be newer than the **Antrea Controller**, and may be
+ up to 4 minor versions older
+* **Antctl**: must not be newer than the **Antrea Controller**, and may be up to
+ 4 minor versions older
+
+The supported version skew means that we only recommend Antrea upgrades to a new
+release up to 4 minor versions newer. For example, a cluster using 0.10 can be
+upgraded to one of 0.11, 0.12, 0.13 or 0.14, but we discourage direct upgrades
+to 0.15 and beyond. With the current release cadence, this provides a 6-month
+window of compatibility. If we reduce our release cadence in the future, we may
+revisit this policy as well.
+
+When directly applying a newer Antrea YAML manifest, as provided for each
+[release](https://github.com/antrea-io/antrea/releases), there is no
+guarantee that the Antrea Controller will be upgraded first. In practice, the
+Controller would be upgraded simultaneously with the first Agent(s) to be
+upgraded by the rolling update of the Agent DaemonSet. This may create some
+transient issues and compromise the "graceful" upgrade. For upgrade scenarios,
+we therefore recommend that you "split-up" the manifest to ensure that the
+Controller is upgraded first.
+
+## Supported K8s versions
+
+Each Antrea minor release should support [maintained K8s
+releases](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions)
+at the time of release (3 up to K8s 1.19, 4 after that). For example, at the
+time that Antrea 0.10 was released, the latest K8s version was 1.19; as a result
+we guarantee that 0.10 supports at least 1.19, 1.18 and 1.17 (in practice it
+also supports K8s 1.16).
+
+In addition, we strive to support the K8s versions used by default in
+cloud-managed K8s services ([EKS], [AKS] and [GKE] regular channel).
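+
+For example, you can check your cluster's server version before choosing an
+Antrea release (assuming `jq` is installed):
+
+```bash
+# Print the K8s server version of the current cluster.
+kubectl version -o json | jq -r '.serverVersion.gitVersion'
+```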
+
+## Deprecation policies
+
+### Prometheus metrics deprecation policy
+
+Antrea follows a similar policy as
+[Kubernetes](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#metric-lifecycle)
+for metrics deprecation.
+
+Alpha metrics have no stability guarantees; as such they can be modified or
+deleted at any time.
+
+Stable metrics are guaranteed to not change; specifically, stability means:
+
+* the metric itself will not be renamed
+* the type of metric will not be modified
+
+Eventually, even a stable metric can be deleted. In this case, the metric must
+be marked as deprecated first and the metric must stay deprecated for at least
+one minor release. The [CHANGELOG] must announce both metric deprecations and
+metric deletions.
+
+Before deprecation:
+
+```bash
+# HELP some_counter this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+After deprecation:
+
+```bash
+# HELP some_counter (Deprecated since 0.10.0) this counts things
+# TYPE some_counter counter
+some_counter 0
+```
+
+In the future, we may introduce the same concept of [hidden
+metric](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#show-hidden-metrics)
+as K8s, as an additional part of the metric lifecycle.
+
+### APIs deprecation policy
+
+The Antrea APIs are built using K8s (they are a combination of
+CustomResourceDefinitions and aggregation layer APIServices) and we follow the
+same versioning scheme as the K8s APIs and the same [deprecation
+policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/).
+
+Other than the most recent API versions in each track, older API versions must
+be supported after their announced deprecation for a duration of no less than:
+
+* GA: 12 months
+* Beta: 9 months
+* Alpha: N/A (can be removed immediately)
+
+This also applies to the `controlplane` API. In particular, introduction and
+removal of new versions for this API must respect the ["graceful" upgrade
+guarantee](#antrea-upgrade-and-supported-version-skew). The `controlplane` API
+(which is exposed using the aggregation layer) is often referred to as an
+"internal" API as it is used by the Antrea components to communicate with each
+other, and is usually not consumed by end users, e.g. cluster admins. However,
+this API may also be used for integration with other software, which is why we
+abide by the same deprecation policy as for other more "user-facing" APIs
+(e.g. Antrea-native policy CRDs).
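+
+For reference, the resources served by the aggregated `controlplane` API can
+be listed directly:
+
+```bash
+# List resources in the controlplane.antrea.io API group.
+kubectl api-resources --api-group=controlplane.antrea.io
+```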
+
+K8s has a [moratorium](https://github.com/kubernetes/kubernetes/issues/52185) on
+the removal of API object versions that have been persisted to storage. At the
+moment, none of the Antrea APIServices (which use the aggregation layer) persist
+objects to storage. So the only objects we need to worry about are
+CustomResources, which are persisted by the K8s apiserver. For them, we adopt
+the following rules:
+
+* Alpha API versions may be removed at any time.
+* The [`deprecated` field] must be used for CRDs to indicate that a particular
+ version of the resource has been deprecated.
+* Beta and GA API versions must be supported after deprecation for the
+ respective durations stipulated above before they can be removed.
+* For deprecated Beta and GA API versions, a [conversion webhook] must be
+ provided along with each Antrea release, until the API version is removed
+ altogether.
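+
+As a quick check, you can inspect a CRD's conversion strategy to confirm that
+a conversion webhook is configured (the CRD name below is one example):
+
+```bash
+# Prints "Webhook" if a conversion webhook is configured, "None" otherwise.
+kubectl get crd clustergroups.crd.antrea.io -o jsonpath='{.spec.conversion.strategy}{"\n"}'
+```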
+
+## Introducing new API resources
+
+### Introducing new CRDs
+
+Starting with Antrea v1.0, all Custom Resource Definitions (CRDs) for Antrea are
+defined in the same API group, `crd.antrea.io`, and all CRDs in this group are
+versioned individually. For example, at the time of writing this (v1.3 release
+timeframe), the Antrea CRDs include:
+
+* `ClusterGroup` in `crd.antrea.io/v1alpha2`
+* `ClusterGroup` in `crd.antrea.io/v1alpha3`
+* `Egress` in `crd.antrea.io/v1alpha2`
+* etc.
+
+Notice how 2 versions of `ClusterGroup` are supported: the one in
+`crd.antrea.io/v1alpha2` was introduced in v1.0, and is being deprecated as it
+was replaced by the one in `crd.antrea.io/v1alpha3`, introduced in v1.1.
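+
+You can see which versions of a given CRD are served, and which one is used
+for storage, directly from the cluster (shown here for `ClusterGroup`):
+
+```bash
+# List the served/storage attributes of each ClusterGroup version.
+kubectl get crd clustergroups.crd.antrea.io \
+  -o jsonpath='{range .spec.versions[*]}{.name}{"\t"}served={.served}{"\t"}storage={.storage}{"\n"}{end}'
+```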
+
+When introducing a new version of a CRD, [the API deprecation policy should be
+followed](#apis-deprecation-policy).
+
+When introducing a CRD, the following rule should be followed in order to avoid
+potential dependency cycles (and thus import cycles in Go): if the CRD depends on
+other object types spread across potentially different versions of
+`crd.antrea.io`, the CRD should be defined in a group version greater or equal
+to all of these versions. For example, if we want to introduce a new CRD which
+depends on types `v1alpha1.X` and `v1alpha2.Y`, it needs to go into `v1alpha2`
+or a more recent version of `crd.antrea.io`. As a rule it should probably go
+into `v1alpha2` unless it is closely related to other CRDs in a later version,
+in which case it can be defined alongside these CRDs, in order to avoid user
+confusion.
+
+If a new CRD does not have dependencies and is not closely related to an
+existing CRD, it will typically be defined in `v1alpha1`. In some rare cases, a
+CRD can be defined in `v1beta1` directly if there is enough confidence in the
+stability of the API.
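+
+To get an overview of all the CRDs in the `crd.antrea.io` group and the
+versions each one serves, a small `jq` pipeline can help (assuming `jq` is
+installed):
+
+```bash
+# Print each crd.antrea.io CRD with its served versions.
+kubectl get crds -o json | jq -r '
+  .items[]
+  | select(.spec.group == "crd.antrea.io")
+  | .metadata.name + ": " + ([.spec.versions[].name] | join(", "))'
+```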
+
+[Semantic Versioning]: https://semver.org/
+[CHANGELOG]: ../CHANGELOG.md
+[EKS]: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html
+[AKS]: https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions
+[GKE]: https://cloud.google.com/kubernetes-engine/docs/release-notes
+[`deprecated` field]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#version-deprecation
+[conversion webhook]: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion
diff --git a/content/docs/v1.15.0/docs/windows.md b/content/docs/v1.15.0/docs/windows.md
new file mode 100644
index 00000000..1aff4ef1
--- /dev/null
+++ b/content/docs/v1.15.0/docs/windows.md
@@ -0,0 +1,671 @@
+# Deploying Antrea on Windows
+
+## Table of Contents
+
+
+- [Overview](#overview)
+ - [Components that run on Windows](#components-that-run-on-windows)
+ - [Antrea Windows demo](#antrea-windows-demo)
+- [Deploying Antrea on Windows worker Nodes](#deploying-antrea-on-windows-worker-nodes)
+ - [Prerequisites](#prerequisites)
+ - [Installation as a Pod](#installation-as-a-pod)
+ - [Download & Configure Antrea for Linux](#download--configure-antrea-for-linux)
+ - [Add Windows antrea-agent DaemonSet](#add-windows-antrea-agent-daemonset)
+ - [Join Windows worker Nodes](#join-windows-worker-nodes)
+ - [1. (Optional) Install OVS (provided by Antrea or your own)](#1-optional-install-ovs-provided-by-antrea-or-your-own)
+ - [2. Disable Windows Firewall](#2-disable-windows-firewall)
+ - [3. Install kubelet, kubeadm and configure kubelet startup params](#3-install-kubelet-kubeadm-and-configure-kubelet-startup-params)
+ - [4. Prepare Node environment needed by antrea-agent](#4-prepare-node-environment-needed-by-antrea-agent)
+ - [5. Run kubeadm to join the Node](#5-run-kubeadm-to-join-the-node)
+ - [Verify your installation](#verify-your-installation)
+ - [Installation as a Service](#installation-as-a-service)
+ - [Installation as a Pod using wins for Docker (DEPRECATED)](#installation-as-a-pod-using-wins-for-docker-deprecated)
+ - [Add Windows antrea-agent DaemonSet](#add-windows-antrea-agent-daemonset-1)
+ - [Join Windows worker Nodes](#join-windows-worker-nodes-1)
+ - [Add Windows kube-proxy DaemonSet (only for Kubernetes versions prior to 1.26)](#add-windows-kube-proxy-daemonset-only-for-kubernetes-versions-prior-to-126)
+ - [Common steps](#common-steps)
+ - [For containerd](#for-containerd)
+ - [For Docker](#for-docker)
+ - [Manually run kube-proxy and antrea-agent on Windows worker Nodes](#manually-run-kube-proxy-and-antrea-agent-on-windows-worker-nodes)
+- [Known issues](#known-issues)
+
+
+## Overview
+
+Antrea supports Windows worker Nodes. On Windows Nodes, Antrea sets up an overlay
+network to forward packets between Nodes and implements NetworkPolicies.
+Currently, Geneve, VXLAN, and STT tunnels are supported.
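+
+The tunnel type is selected in the antrea-agent configuration. As a sketch,
+you can inspect the current setting as follows (the ConfigMap name may carry a
+hash suffix in some releases):
+
+```bash
+# Show the configured tunnel type (geneve by default).
+kubectl -n kube-system get configmap antrea-config -o yaml | grep tunnelType
+```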
+
+This page shows how to install antrea-agent on Windows Nodes and register the
+Nodes with an existing Kubernetes cluster.
+
+For the detailed design of how antrea-agent works on Windows, please refer to
+the [design doc](design/windows-design.md).
+
+### Components that run on Windows
+
+The following components should be configured and run on the Windows Node.
+
+* [Kubernetes components](https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-nodes/)
+* OVS daemons
+* antrea-agent
+* kube-proxy
+
+antrea-agent and kube-proxy run as processes on the host and are managed by
+management Pods. It is recommended to run the OVS daemons as Windows services.
+We also support running the OVS processes inside a container. If you don't
+want to run antrea-agent and kube-proxy from the management Pods, Antrea also
+provides scripts to install and run these two components directly, without
+Pods. Please see the [Manually run kube-proxy and antrea-agent on Windows worker Nodes](#manually-run-kube-proxy-and-antrea-agent-on-windows-worker-nodes)
+section for details.
+
+### Antrea Windows demo
+
+Watch this [demo video](https://www.youtube.com/watch?v=NjeVPGgaNFU) of running
+Antrea in a Kubernetes cluster with both Linux and Windows Nodes. The demo also
+shows the Antrea OVS bridge configuration on a Windows Node, and NetworkPolicy
+enforcement for Windows Pods. Note that the OVS driver and daemons are
+pre-installed on the Windows Nodes in the demo.
+
+## Deploying Antrea on Windows worker Nodes
+
+Running Antrea on Windows Nodes requires the containerd container runtime. The
+recommended installation method is [Installation as a
+Pod](#installation-as-a-pod), and it requires containerd 1.6 or higher. If you
+prefer running the Antrea Agent as a Windows service, or if you are using
+containerd 1.5, you can use the [Installation as a
+Service](#installation-as-a-service) method.
+
+Note that [Docker support](#installation-as-a-pod-using-wins-for-docker-deprecated)
+is deprecated. We no longer test Antrea support with Docker on Windows, and the
+installation method will be removed from the documentation in a later release.
+
+### Prerequisites
+
+* Create a Kubernetes cluster.
+* Obtain a Windows Server 2019 license (or higher) in order to configure the
+ Windows Nodes that will host Windows containers, and install the latest
+ Windows updates.
+* On each Windows Node, install the following:
+ - [Hyper-V](https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server)
+ with management tools. If your Nodes do not have the virtualization
+ capabilities required by Hyper-V, use the workaround described in the
+ [Known issues](#known-issues) section.
+ - [containerd](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=containerd#windows-server-1).
+
+### Installation as a Pod
+
+This installation method requires Antrea 1.10 or higher, and containerd 1.6 or
+higher (containerd 1.7 or higher is recommended). It relies on support for
+[Windows HostProcess Pods](https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/),
+which is generally available starting with K8s 1.26.
+
+Starting with Antrea v1.13, Antrea takes over all the responsibilities of
+kube-proxy for Windows Nodes by default. Since Kubernetes 1.26, kube-proxy
+should not be deployed on Windows Nodes with Antrea, as kube-proxy userspace
+mode is deprecated. For Kubernetes versions prior to 1.26, Antrea can work
+with userspace kube-proxy on Windows Nodes. For more information, refer to
+[Add Windows kube-proxy DaemonSet (only for Kubernetes versions prior to 1.26)](#add-windows-kube-proxy-daemonset-only-for-kubernetes-versions-prior-to-126).
+
+#### Download & Configure Antrea for Linux
+
+Deploy Antrea for Linux on the control-plane Node following the
+[Getting started](getting-started.md) document. The following command deploys
+Antrea with the version specified by `<TAG>`:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+#### Add Windows antrea-agent DaemonSet
+
+Starting from Antrea 1.13, you need to manually set the `kubeAPIServerOverride`
+field in the YAML configuration file, as the Antrea Proxy `proxyAll` mode is
+enabled by default.
+
+```yaml
+ # Provide the address of Kubernetes apiserver, to override any value provided in kubeconfig or InClusterConfig.
+ # Defaults to "". It must be a host string, a host:port pair, or a URL to the base of the apiserver.
+ kubeAPIServerOverride: "10.10.1.1:6443"
+
+ # Option antreaProxy contains AntreaProxy related configuration options.
+ antreaProxy:
+ # ProxyAll tells antrea-agent to proxy ClusterIP Service traffic, regardless of where they come from.
+ # Therefore, running kube-proxy is no longer required. This requires the AntreaProxy feature to be enabled.
+ # Note that this option is experimental. If kube-proxy is removed, option kubeAPIServerOverride must be used to access
+ # apiserver directly.
+ proxyAll: true
+```
+
+For earlier versions of Antrea, you will need to enable `proxyAll` manually.
+
+Starting with Antrea 1.13, you can run both the Antrea Agent and the OVS daemons
+on Windows Nodes using a single DaemonSet, by applying the file
+`antrea-windows-containerd-with-ovs.yml`. This is the recommended installation
+method. The following commands download the manifest, set
+`kubeAPIServerOverride`, and create the DaemonSet:
+
+```bash
+KUBE_APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') && \
+curl -sL https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows-containerd-with-ovs.yml | \
+sed "s|.*kubeAPIServerOverride: \"\"| kubeAPIServerOverride: \"${KUBE_APISERVER}\"|g" | \
+kubectl apply -f -
+```
+
+Alternatively, to deploy the antrea-agent Windows DaemonSet without the OVS
+daemons, apply the file `antrea-windows-containerd.yml` with the following
+commands:
+
+```bash
+KUBE_APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') && \
+curl -sL https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows-containerd.yml | \
+sed "s|.*kubeAPIServerOverride: \"\"| kubeAPIServerOverride: \"${KUBE_APISERVER}\"|g" | \
+kubectl apply -f -
+```
+
+When using `antrea-windows-containerd.yml`, you will need to install OVS
+userspace daemons as services when you prepare your Windows worker Nodes, in the
+next section.
+
+#### Join Windows worker Nodes
+
+##### 1. (Optional) Install OVS (provided by Antrea or your own)
+
+Depending on which method you are using to install Antrea on Windows, and
+depending on whether you are using your own [signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/driver-signing)
+OVS kernel driver or you want to use the test-signed driver provided by Antrea,
+you will need to invoke the `Install-OVS.ps1` script differently (or not at all).
+
+| Containerized OVS daemons? | Test-signed OVS driver? | Run this command |
+| -------------------------- | ----------------------- | ---------------- |
+| Yes | Yes | `.\Install-OVS.ps1 -InstallUserspace $false` |
+| Yes | No | N/A |
+| No | Yes | `.\Install-OVS.ps1` |
+| No | No | `.\Install-OVS.ps1 -ImportCertificate $false -Local -LocalFile <OVSPackagePath>` |
+
+If you used `antrea-windows-containerd-with-ovs.yml` to create the antrea-agent
+Windows DaemonSet, then you are using "Containerized OVS daemons". For all other
+methods, you are *not* using "Containerized OVS daemons".
+
+Antrea provides a pre-built OVS package which contains a test-signed OVS kernel
+driver. If you don't have your own signed OVS package and just want to try
+Antrea on Windows, this package can be used for testing.
+
+**[Test-only]** If you are using a test-signed driver (such as the one provided with Antrea),
+please make sure to [enable test-signed](https://docs.microsoft.com/en-us/windows-hardware/drivers/install/the-testsigning-boot-configuration-option):
+
+```powershell
+Bcdedit.exe -set TESTSIGNING ON
+Restart-Computer
+```
+
+As an example, if you are using containerized OVS
+(`antrea-windows-containerd-with-ovs.yml`), and you want to use the test-signed
+OVS kernel driver provided by Antrea (not recommended for production), you would
+run the following commands:
+
+```powershell
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Install-OVS.ps1
+.\Install-OVS.ps1 -InstallUserspace $false
+```
+
+And, if you want to run OVS as Windows native services, and you are bringing
+your own OVS package with a signed OVS kernel driver, you would run:
+
+```powershell
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Install-OVS.ps1
+.\Install-OVS.ps1 -ImportCertificate $false -Local -LocalFile <OVSPackagePath>
+
+# verify that the OVS services are installed
+get-service ovsdb-server
+get-service ovs-vswitchd
+```
+
+##### 2. Disable Windows Firewall
+
+```powershell
+Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False
+```
+
+##### 3. Install kubelet, kubeadm and configure kubelet startup params
+
+First, install kubelet and kubeadm using the provided `Prepare-Node.ps1`
+script. Specify the Node IP, Kubernetes version, and container runtime when
+running the script. The following commands download and execute
+`Prepare-Node.ps1`:
+
+```powershell
+# Example:
+curl.exe -LO "https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Prepare-Node.ps1"
+.\Prepare-Node.ps1 -KubernetesVersion v1.29.0 -NodeIP 192.168.1.10
+```
+
+##### 4. Prepare Node environment needed by antrea-agent
+
+Run the following commands to prepare the Node environment needed by antrea-agent:
+
+```powershell
+mkdir c:\k\antrea
+cd c:\k\antrea
+$TAG="v1.14.0"
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Clean-AntreaNetwork.ps1
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Prepare-ServiceInterface.ps1
+curl.exe -LO https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Prepare-AntreaAgent.ps1
+# use -RunOVSServices $false for containerized OVS!
+.\Prepare-AntreaAgent.ps1 -InstallKubeProxy $false [-RunOVSServices $false]
+```
+
+The script `Prepare-AntreaAgent.ps1` performs the following tasks:
+
+* Remove stale network resources created by antrea-agent.
+
+ After the Windows Node reboots, there will be stale network resources which
+ need to be cleaned before starting antrea-agent.
+
+* Ensure OVS services are running.
+
+ This script starts OVS services on the Node if they are not running. This
+ step needs to be skipped in case of OVS containerization. In that case, you
+ need to specify the parameter `RunOVSServices` as false.
+
+ ```powershell
+ .\Prepare-AntreaAgent.ps1 -InstallKubeProxy $false -RunOVSServices $false
+ ```
+
+The script must be executed every time you restart the Node to prepare the
+environment for antrea-agent.
+
+You can ensure that the script is executed automatically after each Windows
+startup by using different methods. Here are two examples for your reference:
+
+* Example 1: Update kubelet service.
+
+Insert the following line in the kubelet service script `c:\k\StartKubelet.ps1`
+to invoke `Prepare-AntreaAgent.ps1` when starting the kubelet service:
+
+```powershell
+& C:\k\antrea\Prepare-AntreaAgent.ps1 -InstallKubeProxy $false -RunOVSServices $false
+```
+
+* Example 2: Create a ScheduledJob that runs at startup.
+
+```powershell
+$trigger = New-JobTrigger -AtStartup -RandomDelay 00:00:30
+$options = New-ScheduledJobOption -RunElevated
+Register-ScheduledJob -Name PrepareAntreaAgent -Trigger $trigger -ScriptBlock { Invoke-Expression C:\k\antrea\Prepare-AntreaAgent.ps1 -InstallKubeProxy $false -RunOVSServices $false } -ScheduledJobOption $options
+```
+
+##### 5. Run kubeadm to join the Node
+
+On Windows Nodes, run the `kubeadm join` command to join the cluster. The token
+is provided by the control-plane Node. If you lost the token, or the token has
+expired, you can run `kubeadm token create --print-join-command` (on the
+control-plane Node) to generate a new token and join command. An example
+`kubeadm join` command is shown below:
+
+```powershell
+kubeadm join 192.168.101.5:6443 --token tdp0jt.rshv3uobkuoobb4v --discovery-token-ca-cert-hash sha256:84a163e57bf470f18565e44eaa2a657bed4da9748b441e9643ac856a274a30b9
+```
+
+##### Verify your installation
+
+There will be a temporary network interruption on the Windows worker Node
+during the first startup of antrea-agent, because antrea-agent configures OVS
+to take over the host network. After that, you should be able to view the
+Windows Nodes and Pods in your cluster by running:
+
+```bash
+# Show Nodes
+kubectl get nodes -o wide
+NAME              STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
+control-plane     Ready    control-plane   1h    v1.29.0   10.176.27.168   <none>        Ubuntu 22.04.3 LTS               6.2.0-1017-generic   containerd://1.6.26
+win-5akrf2tpq91   Ready    <none>          1h    v1.29.0   10.176.27.150   <none>        Windows Server 2019 Datacenter   10.0.17763.5206      containerd://1.6.6
+win-5akrf2tpq92   Ready    <none>          1h    v1.29.0   10.176.27.197   <none>        Windows Server 2019 Datacenter   10.0.17763.5206      containerd://1.6.6
+
+# Show antrea-agent and kube-proxy Pods
+kubectl get pods -o wide -n kube-system | grep windows
+antrea-agent-windows-6hvkw   1/1   Running   0   100s
+kube-proxy-windows-2d45w     1/1   Running   0   102s
+```
+
+### Installation as a Service
+
+Install Antrea (v0.13.0+ is required for containerd) as usual. The following
+command deploys Antrea with the version specified by `<TAG>`:
+
+```bash
+kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml
+```
+
+When running the Antrea Agent as a Windows service, no DaemonSet is created for
+Windows worker Nodes. You will need to ensure that [nssm](https://nssm.cc/) is
+installed on all your Windows Nodes. `nssm` is a handy tool to manage services
+on Windows.
+
+To prepare your Windows worker Nodes, follow the steps in [Join Windows worker Nodes](#join-windows-worker-nodes).
+With this installation method, OVS daemons are always run as services (not
+containerized), and you will need to run `Install-OVS.ps1` to install them.
+
+When your Nodes are ready, run the following scripts to install the
+antrea-agent service. NOTE: `<KubernetesVersion>`, `<KubeconfigPath>` and
+`<KubeletKubeconfigPath>` should be set by you. `<KubeProxyKubeconfigPath>` is
+an optional parameter that is specific to kube-proxy mode. For example:
+
+```powershell
+# kube-proxy mode is no longer supported starting with K8s version 1.26
+$InstallKubeProxy=$false
+$KubernetesVersion="v1.23.5"
+$KubeConfig="C:/Users/Administrator/.kube/config" # admin kubeconfig
+$KubeletKubeconfigPath="C:/etc/kubernetes/kubelet.conf"
+if ($InstallKubeProxy) { $KubeProxyKubeconfigPath="C:/Users/Administrator/kubeproxy.conf" }
+```
+
+```powershell
+$TAG="v1.14.0"
+$KubernetesVersion=""
+$KubeConfig=""
+$KubeletKubeconfigPath=""
+if ($InstallKubeProxy) { $KubeProxyKubeconfigPath="" }
+$KubernetesHome="c:/k"
+$AntreaHome="c:/k/antrea"
+$KubeProxyLogPath="c:/var/log/kube-proxy"
+
+curl.exe -LO "https://raw.githubusercontent.com/antrea-io/antrea/${TAG}/hack/windows/Helper.psm1"
+Import-Module ./Helper.psm1
+
+Install-AntreaAgent -KubernetesVersion "$KubernetesVersion" -KubernetesHome "$KubernetesHome" -KubeConfig "$KubeConfig" -AntreaVersion "$TAG" -AntreaHome "$AntreaHome"
+New-KubeProxyServiceInterface
+
+New-DirectoryIfNotExist "${AntreaHome}/logs"
+New-DirectoryIfNotExist "${KubeProxyLogPath}"
+# Install kube-proxy service
+if ($InstallKubeProxy) { nssm install kube-proxy "${KubernetesHome}/kube-proxy.exe" "--proxy-mode=userspace --kubeconfig=${KubeProxyKubeconfigPath} --log-dir=${KubeProxyLogPath} --logtostderr=false --alsologtostderr" }
+nssm install antrea-agent "${AntreaHome}/bin/antrea-agent.exe" "--config=${AntreaHome}/etc/antrea-agent.conf --logtostderr=false --log_dir=${AntreaHome}/logs --alsologtostderr --log_file_max_size=100 --log_file_max_num=4"
+
+nssm set antrea-agent DependOnService ovs-vswitchd
+if ($InstallKubeProxy) { nssm set antrea-agent DependOnService kube-proxy ovs-vswitchd }
+nssm set antrea-agent Start SERVICE_DELAYED_AUTO_START
+
+if ($InstallKubeProxy) { Start-Service kube-proxy }
+Start-Service antrea-agent
+```
+
+### Installation as a Pod using wins for Docker (DEPRECATED)
+
+*Dockershim was deprecated in K8s 1.20, and removed in K8s 1.24. These steps
+ may work with [cri-dockerd](https://github.com/Mirantis/cri-dockerd), but
+ this is not something we have validated. Antrea is no longer tested with
+ Docker on Windows, and we intend to remove these steps from the documentation
+ in Antrea version 2.0.*
+
+Running Antrea with Docker on Windows uses
+[wins](https://github.com/rancher/wins), which lets you run services on the
+Windows hosts, while managing them as if they were Pods.
+
+#### Add Windows antrea-agent DaemonSet
+
+For example, these commands will download the antrea-agent manifest, set
+`kubeAPIServerOverride`, and deploy the antrea-agent DaemonSet when using the
+Docker container runtime:
+
+```bash
+KUBE_APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}') && \
+curl -sL https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-windows.yml | \
+sed "s|.*kubeAPIServerOverride: \"\"| kubeAPIServerOverride: \"${KUBE_APISERVER}\"|g" | \
+kubectl apply -f -
+```
+
+#### Join Windows worker Nodes
+
+The steps to join Windows worker Nodes are similar to the ones for the
+containerd runtime, with the following differences:
+
+1. OVS containerization is not supported, so OVS userspace processes need to be
+ run as Windows native services.
+2. When running the `Prepare-Node.ps1` script, you will need to explicitly
+ specify that you are using the Docker container runtime. The script will then
+ take care of installing wins. For example:
+
+ ```powershell
+ .\Prepare-Node.ps1 -KubernetesVersion v1.23.5 -NodeIP 192.168.1.10 -ContainerRuntime docker
+ ```
+
+If you want to install and use userspace kube-proxy on the Node (no longer
+supported since K8s version 1.26), follow instructions in [Add Windows
+kube-proxy DaemonSet (only for Kubernetes versions prior to 1.26)](#add-windows-kube-proxy-daemonset-only-for-kubernetes-versions-prior-to-126).
+
+### Add Windows kube-proxy DaemonSet (only for Kubernetes versions prior to 1.26)
+
+Starting from Kubernetes 1.26, Antrea no longer supports Windows kube-proxy
+because the kube-proxy userspace mode has been removed, and the kernel
+implementation does not work with Antrea. Clusters using recent K8s versions
+will need to follow the normal [installation guide](#deploying-antrea-on-windows-worker-nodes)
+and use AntreaProxy with `proxyAll` enabled.
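+
+As a quick sanity check on such clusters, you can confirm that `proxyAll` is
+enabled in the antrea-agent configuration (the ConfigMap name may vary across
+releases):
+
+```bash
+# Show the proxyAll setting in the antrea-agent configuration.
+kubectl -n kube-system get configmap antrea-config -o yaml | grep proxyAll
+```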
+
+For older K8s versions, you can use kube-proxy userspace mode by following the
+instructions below.
+
+#### Common steps
+
+When running `Prepare-Node.ps1`, make sure that you set `InstallKubeProxy` to
+true. For example:
+
+```powershell
+.\Prepare-Node.ps1 -KubernetesVersion v1.25.0 -InstallKubeProxy:$true -NodeIP 192.168.1.10
+```
+
+When running `Prepare-AntreaAgent.ps1`, make sure that you set
+`InstallKubeProxy` to true. For example:
+
+```powershell
+.\Prepare-AntreaAgent.ps1 -InstallKubeProxy $true
+```
+
+This will take care of preparing the network adapter for kube-proxy. kube-proxy
+needs a network adapter to configure Kubernetes Services IPs and uses the
+adapter for proxying connections to Services. The adapter will be deleted
+automatically by Windows after the Windows Node reboots
+(`Prepare-AntreaAgent.ps1` needs to run at every startup).
+
+After that, you will need to deploy a Windows-compatible version of
+kube-proxy. You can download `kube-proxy.yml` from the kubernetes-sigs GitHub
+repository; the kube-proxy version in the YAML file must be set to a
+Windows-compatible version. The following command downloads `kube-proxy.yml`:
+
+```bash
+curl -L "https://github.com/kubernetes-sigs/sig-windows-tools/releases/download/v0.1.5/kube-proxy.yml" | sed 's/VERSION-nanoserver/v1.20.0/g' > kube-proxy.yml
+```
+
+Before applying the downloaded manifest, you will need to make some changes
+(which depend on your container runtime).
+
+#### For containerd
+
+Replace the content of `run-script.ps1` in the `kube-proxy-windows` ConfigMap
+with the following:
+
+```yaml
+apiVersion: v1
+data:
+ run-script.ps1: |-
+ $mountPath = $env:CONTAINER_SANDBOX_MOUNT_POINT
+ $mountPath = ($mountPath.Replace('\', '/')).TrimEnd('/')
+ New-Item -Path "c:/var/lib" -Name "kube-proxy" -ItemType "directory" -Force
+ ((Get-Content -path $mountPath/var/lib/kube-proxy/kubeconfig.conf -Raw) -replace '/var',"$($mountPath)/var") | Set-Content -Path /var/lib/kube-proxy/kubeconfig.conf
+ ((Get-Content -path /var/lib/kube-proxy/kubeconfig.conf -Raw) -replace '\/',"/") | Set-Content -Path /var/lib/kube-proxy/kubeconfig.conf
+ sed -i 's/mode: iptables/mode: \"\"/g' $mountPath/var/lib/kube-proxy/config.conf
+ & "$mountPath/k/kube-proxy/kube-proxy.exe" --config=$mountPath/var/lib/kube-proxy/config.conf --v=10 --proxy-mode=userspace --hostname-override=$env:NODE_NAME
+kind: ConfigMap
+metadata:
+ labels:
+ app: kube-proxy
+ name: kube-proxy-windows
+ namespace: kube-system
+```
+
+Set the `hostNetwork` option to `true` and add the following to the
+kube-proxy-windows DaemonSet spec:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ labels:
+ k8s-app: kube-proxy
+ name: kube-proxy-windows
+ namespace: kube-system
+spec:
+ selector:
+ matchLabels:
+ k8s-app: kube-proxy-windows
+ template:
+ metadata:
+ labels:
+ k8s-app: kube-proxy-windows
+ spec:
+ securityContext:
+ windowsOptions:
+ hostProcess: true
+ runAsUserName: "NT AUTHORITY\\SYSTEM"
+ hostNetwork: true
+ serviceAccountName: kube-proxy
+ containers:
+ - command:
+ - pwsh
+ args:
+ - -file
+ - $env:CONTAINER_SANDBOX_MOUNT_POINT/var/lib/kube-proxy-windows/run-script.ps1
+```
+
+#### For Docker
+
+Replace the content of `run-script.ps1` in the `kube-proxy-windows` ConfigMap
+with the following:
+
+```yaml
+apiVersion: v1
+data:
+ run-script.ps1: |-
+ $ErrorActionPreference = "Stop";
+ mkdir -force /host/var/lib/kube-proxy/var/run/secrets/kubernetes.io/serviceaccount
+ mkdir -force /host/k/kube-proxy
+
+ cp -force /k/kube-proxy/* /host/k/kube-proxy
+ cp -force /var/lib/kube-proxy/* /host/var/lib/kube-proxy
+ cp -force /var/run/secrets/kubernetes.io/serviceaccount/* /host/var/lib/kube-proxy/var/run/secrets/kubernetes.io/serviceaccount
+
+ wins cli process run --path /k/kube-proxy/kube-proxy.exe --args "--v=3 --config=/var/lib/kube-proxy/config.conf --proxy-mode=userspace --hostname-override=$env:NODE_NAME"
+
+kind: ConfigMap
+metadata:
+ labels:
+ app: kube-proxy
+ name: kube-proxy-windows
+ namespace: kube-system
+```
+
+Set the `hostNetwork` option to `true` in the kube-proxy-windows DaemonSet
+spec:
+
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ labels:
+ k8s-app: kube-proxy
+ name: kube-proxy-windows
+ namespace: kube-system
+spec:
+ selector:
+ matchLabels:
+ k8s-app: kube-proxy-windows
+ template:
+ metadata:
+ labels:
+ k8s-app: kube-proxy-windows
+ spec:
+ hostNetwork: true
+```
+
+### Manually run kube-proxy and antrea-agent on Windows worker Nodes
+
+Aside from starting kube-proxy and antrea-agent from the management Pods,
+Antrea also provides PowerShell scripts which help install and run these two
+components directly without Pods. Please complete the steps in the
+[Installation as a Pod](#installation-as-a-pod) section, and skip the
+[Add Windows antrea-agent DaemonSet](#add-windows-antrea-agent-daemonset) step.
+Then run the following commands in PowerShell:
+
+```powershell
+mkdir c:\k\antrea
+cd c:\k\antrea
+curl.exe -LO https://github.com/antrea-io/antrea/releases/download/<TAG>/Start-AntreaAgent.ps1
+# Run antrea-agent without kube-proxy
+# $KubeConfigPath is the path of kubeconfig file
+./Start-AntreaAgent.ps1 -kubeconfig $KubeConfigPath -StartKubeProxy $false
+# Run Antrea-Agent with kube-proxy (deprecated since Kubernetes 1.26)
+# ./Start-AntreaAgent.ps1 -kubeconfig $KubeConfigPath -StartKubeProxy $true
+```
+
+> Note: Some features such as supportbundle collection are not supported in this
+> way. It is recommended to start kube-proxy and antrea-agent through the
+> management Pods.
+
+## Known issues
+
+1. The HNS Network is not persistent on Windows, so after the Windows Node
+reboots, the HNS Network created by antrea-agent is removed, and the Open
+vSwitch Extension is disabled by default. In this case, the stale OVS bridge
+and ports should be removed. A helper script [Clean-AntreaNetwork.ps1](https://raw.githubusercontent.com/antrea-io/antrea/main/hack/windows/Clean-AntreaNetwork.ps1)
+can be used to clean up the OVS bridge.
+
+ ```powershell
+ # If OVS userspace processes were running as a Service on Windows host
+ ./Clean-AntreaNetwork.ps1 -OVSRunMode "service"
+ # If OVS userspace processes were running inside container in antrea-agent Pod
+ ./Clean-AntreaNetwork.ps1 -OVSRunMode "container"
+ ```
+
+2. The Hyper-V feature cannot be installed on a Windows Node if the processor
+does not have the required virtualization capabilities.
+
+ If the processor of the Windows Node does not have the required
+ virtualization capabilities, the installation of the Hyper-V feature will
+ fail with the following error:
+
+ ```powershell
+ PS C:\Users\Administrator> Install-WindowsFeature Hyper-V
+
+ Success Restart Needed Exit Code Feature Result
+ ------- -------------- --------- --------------
+ False Maybe Failed {}
+ Install-WindowsFeature : A prerequisite check for the Hyper-V feature failed.
+ 1. Hyper-V cannot be installed: The processor does not have required virtualization capabilities.
+ At line:1 char:1
+ + Install-WindowsFeature hyper-v
+ + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : InvalidOperation: (Hyper-V:ServerComponentWrapper) [Install-WindowsFeature], Exception
+ + FullyQualifiedErrorId : Alteration_PrerequisiteCheck_Failed,Microsoft.Windows.ServerManager.Commands.AddWindowsF
+ eatureCommand
+ ```
+
+ The capabilities are required by the Hyper-V `hypervisor` components to
+ support [Hyper-V isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container#hyper-v-isolation).
+ If you only need [Process Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container#process-isolation)
+ on the Nodes, you can apply the following workaround to skip the CPU check
+ for the Hyper-V feature installation.
+
+ ```powershell
+ # 1. Install containers feature
+ Install-WindowsFeature containers
+
+ # 2. Install Hyper-V management powershell module
+ Install-WindowsFeature Hyper-V-Powershell
+
+ # 3. Install Hyper-V feature without CPU check and disable the "hypervisor"
+ dism /online /enable-feature /featurename:Microsoft-Hyper-V /all /NoRestart
+ dism /online /disable-feature /featurename:Microsoft-Hyper-V-Online /NoRestart
+
+ # 4. Restart-Computer to take effect
+ Restart-Computer
+ ```
diff --git a/data/docs/toc-mapping.yml b/data/docs/toc-mapping.yml
index ac7273d0..6dafa9fb 100644
--- a/data/docs/toc-mapping.yml
+++ b/data/docs/toc-mapping.yml
@@ -52,3 +52,4 @@ v1.14.1: v1.14.1-toc
v1.12.3: v1.12.3-toc
v1.13.3: v1.13.3-toc
v1.14.2: v1.14.2-toc
+v1.15.0: v1.15.0-toc
diff --git a/data/docs/v1.15.0-toc.yml b/data/docs/v1.15.0-toc.yml
new file mode 100644
index 00000000..8791fdfc
--- /dev/null
+++ b/data/docs/v1.15.0-toc.yml
@@ -0,0 +1,123 @@
+toc:
+ - title: Introduction
+ subfolderitems:
+ - page: Overview
+ url: /
+ - page: Getting Started
+ url: /docs/getting-started
+ - page: Support for K8s Installers
+ url: /docs/kubernetes-installers
+ - page: Deploying on Kind
+ url: /docs/kind
+ - page: Deploying on Minikube
+ url: /docs/minikube
+ - page: Configuration
+ url: /docs/configuration
+ - page: Installing with Helm
+ url: /docs/helm
+ - title: Cloud Deployment
+ subfolderitems:
+ - page: EKS Installation
+ url: /docs/eks-installation
+ - page: AKS Installation
+ url: /docs/aks-installation
+ - page: GKE Installation (Alpha)
+ url: /docs/gke-installation
+ - page: Running Antrea In Policy Only Mode
+ url: /docs/design/policy-only
+ - title: Reference
+ subfolderitems:
+ - page: Antrea Network Policy
+ url: /docs/antrea-network-policy
+ - page: Antctl
+ url: /docs/antctl
+ - page: Architecture
+ url: /docs/design/architecture
+ - page: Traffic Encryption (IPsec / WireGuard)
+ url: /docs/traffic-encryption
+ - page: Securing Control Plane
+ url: /docs/securing-control-plane
+ - page: Security considerations
+ url: /docs/security
+ - page: Troubleshooting
+ url: /docs/troubleshooting
+ - page: OS-specific Known Issues
+ url: /docs/os-issues
+ - page: OVS Pipeline
+ url: /docs/design/ovs-pipeline
+ - page: Feature Gates
+ url: /docs/feature-gates
+ - page: Antrea Proxy
+ url: /docs/antrea-proxy
+ - page: Network Flow Visibility
+ url: /docs/network-flow-visibility
+ - page: Traceflow Guide
+ url: /docs/traceflow-guide
+ - page: NoEncap and Hybrid Traffic Modes
+ url: /docs/noencap-hybrid-modes
+ - page: Egress Guide
+ url: /docs/egress
+ - page: NodePortLocal Guide
+ url: /docs/node-port-local
+ - page: Antrea IPAM Guide
+ url: /docs/antrea-ipam
+ - page: Exposing Services of type LoadBalancer
+ url: /docs/service-loadbalancer
+ - page: Traffic Control
+ url: /docs/traffic-control
+ - page: Versioning
+ url: /docs/versioning
+ - page: Antrea API Groups
+ url: /docs/api
+ - page: Antrea API Reference
+ url: /docs/api-reference
+ - title: Windows
+ subfolderitems:
+ - page: Windows Deployment
+ url: /docs/windows
+ - page: Windows Design
+ url: /docs/design/windows-design
+ - title: Integrations
+ subfolderitems:
+ - page: Octant Plugin Installation
+ url: /docs/octant-plugin-installation
+ - page: Prometheus Integration
+ url: /docs/prometheus-integration
+ - title: Cookbooks
+ subfolderitems:
+ - page: Using Antrea with Multus
+ url: /docs/cookbooks/multus
+ - page: Using Fluentd to collect Network policy logs
+ url: /docs/cookbooks/fluentd
+ - title: Multicluster
+ subfolderitems:
+ - page: Quick Start
+ url: /docs/multicluster/quick-start
+ - page: User guide
+ url: /docs/multicluster/user-guide
+ - page: Antctl
+ url: /docs/multicluster/antctl
+ - page: Architecture
+ url: /docs/multicluster/architecture
+ - title: Developer Guide
+ subfolderitems:
+ - page: Code Generation
+ url: /docs/contributors/code-generation
+ - page: Release Instructions
+ url: /docs/maintainers/release
+ - page: Issue Management
+ url: /docs/contributors/issue-management
+ - page: GitHub Labels
+ url: /docs/contributors/github-labels
+ - title: Project Information
+ subfolderitems:
+ - page: Contributing to Antrea
+ url: /contributing
+ - page: Roadmap
+ url: /roadmap
+ - page: Change Log
+ url: /changelog
+ - page: Code of Conduct
+ url: /code_of_conduct
+ - page: Antrea Adopters
+ url: /adopters