enhancement(tests): Kubernetes E2E test framework #2702

Merged 69 commits on Jul 29, 2020
Commits (69)
d2ea174
Correct test-integration-kubernetes at Makefile
MOZGIII May 28, 2020
8eff64c
Fix the tag overwrite logic at scripts/deploy-kubernetes-test.sh
MOZGIII May 28, 2020
656df58
Make scripts/test-integration-kubernetes.sh more tweakable
MOZGIII May 28, 2020
e2f03f2
Reorder namespace and global config deletion command
MOZGIII Jun 15, 2020
fd97475
Add kubernetes-test-framework
MOZGIII May 28, 2020
eff8b30
Implement a first PoC kubernetes test
MOZGIII May 28, 2020
3b4fd8e
K8s integration test is really an e2e test, rename accordingly
MOZGIII Jun 1, 2020
ff65e04
Do not even publish container image at CI since we use "none" minikub…
MOZGIII Jun 1, 2020
17d6af9
Isolate kubernetes e2e tests via required-features
MOZGIII Jun 1, 2020
81fc716
Add lock to the test framework
MOZGIII Jun 10, 2020
ef47617
Add some test cases to k8s e2e tests
MOZGIII Jun 10, 2020
c1c36ac
Add the ability to use quick debug builds in e2e tests
MOZGIII Jun 15, 2020
ab5703d
Use a single thread for test
MOZGIII Jun 15, 2020
0783690
Made test framework async
MOZGIII Jun 15, 2020
0d5a541
Allow specifying scope
MOZGIII Jun 15, 2020
291e513
Correct arguments preparation for cargo test at scripts/test-e2e-kube…
MOZGIII Jun 16, 2020
b067258
Get rid of $(RUN) at test-e2e-kubernetes target at Makefile
MOZGIII Jun 16, 2020
e241e62
Set LOG at distribution/kubernetes/vector-namespaced.yaml
MOZGIII Jun 16, 2020
5459179
Add a test to validate the pods are properly excluded
MOZGIII Jun 18, 2020
2289834
Fix a typo
MOZGIII Jun 18, 2020
c96c14c
Add test to assert we properly collect logs from multiple namespaces
MOZGIII Jun 18, 2020
b22a26d
Polish the test framework API
MOZGIII Jun 18, 2020
ea926d7
Add E2E tests section to the contribution guide
MOZGIII Jun 18, 2020
3a23d85
Kubernetes E2E tests are no longer experimental, should work consiste…
MOZGIII Jun 18, 2020
d82e44b
Add kubernetes version to the test name
MOZGIII Jun 18, 2020
1716843
Bump minikube
MOZGIII Jun 18, 2020
1b3631c
Bump kubernetes releases
MOZGIII Jun 18, 2020
c3656b9
Use minikube cache instead of manually moving image around
MOZGIII Jun 23, 2020
ae62815
Test against multiple container runtimes
MOZGIII Jun 23, 2020
e27ade7
Remove unused repeating_echo_cmd
MOZGIII Jun 23, 2020
b34f78f
Display timeout
MOZGIII Jun 23, 2020
a619453
Shorter title
MOZGIII Jun 23, 2020
4b56580
Switch to docker driver at minikube
MOZGIII Jun 23, 2020
6aaa21d
Remove the no_newline_at_eol test
MOZGIII Jun 25, 2020
d0d02cb
Increase timeout to rollout vector to 30s
MOZGIII Jun 25, 2020
945ac52
Temporarily disable crio
MOZGIII Jun 25, 2020
977d5fc
Apply workaround for CRIO
MOZGIII Jun 25, 2020
3f2e1d0
Fix clippy
MOZGIII Jun 25, 2020
c793ab3
Unset log level in skaffold dev config to fallback to the one set in …
MOZGIII Jul 1, 2020
9bcd7a9
Add exec_tail to the test framework
MOZGIII Jul 2, 2020
e9548cc
Fix a typo at the comment
MOZGIII Jul 7, 2020
2691e10
Fix the typos and styling at the crate doccomment
MOZGIII Jul 7, 2020
71690f3
Bump k8s versions for E2E tests at CI
MOZGIII Jul 7, 2020
7cb5dc7
Rename template params to pascal case
MOZGIII Jul 19, 2020
db531ca
Remove Drop from ResourceFile
MOZGIII Jul 19, 2020
de903fd
Proper authors
MOZGIII Jul 21, 2020
97af531
Rename crate to k8s-test-framework
MOZGIII Jul 21, 2020
0ca655d
Correct kubectl comment at the interface
MOZGIII Jul 21, 2020
3ae989f
Bumped k8s and minikube versions at CI
MOZGIII Jul 22, 2020
3110bb8
Add a comment explaining the timeout at pod filtering test
MOZGIII Jul 22, 2020
5b7f17b
Rollback minikube to 0.11.0
MOZGIII Jul 22, 2020
bf37696
Update CONTRIBUTING.md
MOZGIII Jul 23, 2020
dba975f
Update distribution/kubernetes/vector-namespaced.yaml
MOZGIII Jul 23, 2020
a173f5d
Fix an error at CONTRIBUTING.md
MOZGIII Jul 23, 2020
ff84036
Remove a trivial line from the doc
MOZGIII Jul 23, 2020
f29e56f
Do second attempt to start up minikube if the first one failed
MOZGIII Jul 24, 2020
3f4fdd7
Print minikube logs if it fails to start
MOZGIII Jul 24, 2020
f1bd433
Provide a default for CONTAINER_IMAGE_REPO if USE_MINIKUBE_CACHE is set
MOZGIII Jul 24, 2020
b418c47
Update the CONTRIBUTING.md for CONTAINER_IMAGE_REPO default if USE_MI…
MOZGIII Jul 24, 2020
949cae4
Increase all rollout/wait timeouts to one minute
MOZGIII Jul 24, 2020
b995cbc
Fix syntax error around minikube start command
MOZGIII Jul 24, 2020
c73c968
Rollback k8s v1.16.13 to v1.16.12 at CI
MOZGIII Jul 25, 2020
a9b63c8
Add minikube cache autodetection
MOZGIII Jul 28, 2020
ec3f9c5
Document USE_MINIKUBE_CACHE=auto mode
MOZGIII Jul 28, 2020
c418e3a
Add a note on minikube bug to CONTRIBUTING.md
MOZGIII Jul 28, 2020
7f16dce
Add a note on minikube on ZFS to CONTRIBUTING.md
MOZGIII Jul 28, 2020
1e702ba
Fix the doc comment at scripts/deploy-kubernetes-test.sh
MOZGIII Jul 28, 2020
ae93ca3
Apply a workaround for kubectl from snap
MOZGIII Jul 28, 2020
ce7284c
Extract and reuse scripts/skaffold-dockerignore.sh
MOZGIII Jul 28, 2020
54 changes: 41 additions & 13 deletions .github/workflows/tests.yml
@@ -265,21 +265,49 @@ jobs:
- run: make slim-builds
- run: make test-integration-splunk

test-integration-kubernetes:
name: Integration - Linux, Kubernetes, flaky
# This is an experimental test. Allow it to fail without failing the whole
# workflow, but keep it executing on every build to gather stats.
continue-on-error: true
test-e2e-kubernetes:
name: E2E - K8s ${{ matrix.kubernetes_version }} / ${{ matrix.container_runtime }}
runs-on: ubuntu-latest
strategy:
matrix:
kubernetes:
- v1.18.2
- v1.17.5
- v1.16.9
- v1.15.11
- v1.14.10
minikube_version:
- 'v1.11.0' # https://github.com/kubernetes/minikube/issues/8799
kubernetes_version:
Contributor:

So this is... 5 * 3 test runners just for k8s?

Contributor (author):

Well, it was worth it during development - it helped find a couple of interesting bugs in unexpected places that were only present in non-trivial combinations.
I don't mind reducing it, but it's not clear which subset of those is reasonable.

On a side note, there are a lot of ways we could optimize the E2E tests in general, to make them run quicker so that they don't create as much load on the system. Technically, the "payload" of these tests is very lightweight, and the most time, yet again, is spent during env prep and build. If we could share build artifacts - it would be great.

I feel like if we optimize the process in general we won't have to worry about having 15 tests...

We could move them into a separate workflow though, such that they'll live under their own section.

Contributor:

Yeah, this would actually consume 15% of our total possible GitHub Actions runners, so we should consider the value here very, very carefully.

MOZGIII (author), Jul 23, 2020:

Right. Well, we can exclude some of the matrix incarnations if we start facing issues... Just need to figure out which ones are the safest to omit. Alternatively, this can be transformed into a single sequential invocation - but I'd hold on with that until we hit an actual problem.

- 'v1.18.6'
- 'v1.17.9'
- 'v1.16.12' # v1.16.13 is broken, see https://github.com/kubernetes/minikube/issues/8840
- 'v1.15.12'
Contributor:

Why do we test these versions?

Contributor (author):

According to the RFC, we test all versions from the MSKV (minimum supported Kubernetes version) to the latest released version. The MSKV is 1.15.

Contributor (author):

We don't really have to test 1.14, but we do it because people asked if we'll support it, and our code works for 1.14 too. We might want to reduce the MSKV to 1.14.

Contributor:

So over time as k8s releases grow, so will this list? Meaning more and more of our CI jobs will be just k8s tests?

Contributor (author):

I'm afraid so, at least that's the plan. We'll probably bump the MSKV along. K8s unfortunately has plenty of edge cases, and we'd better test as many variants as we can.
The good thing, though, is that it's quite easy to maintain. I'll improve this a little later so we get alerted to add new k8s versions.

- 'v1.14.10'
container_runtime:
- docker
- containerd
- crio
fail-fast: false
steps:
- name: Temporarily off
run: "true"
- name: Setup Minikube
run: |
set -xeuo pipefail

curl -Lo kubectl \
'https://storage.googleapis.com/kubernetes-release/release/${{ matrix.kubernetes_version }}/bin/linux/amd64/kubectl'
sudo install kubectl /usr/local/bin/

curl -Lo minikube \
'https://storage.googleapis.com/minikube/releases/${{ matrix.minikube_version }}/minikube-linux-amd64'
sudo install minikube /usr/local/bin/

minikube config set profile minikube
minikube config set vm-driver docker
minikube config set kubernetes-version '${{ matrix.kubernetes_version }}'
minikube config set container-runtime '${{ matrix.container_runtime }}'
# Start minikube; retry once if it fails, and print logs if the second
# attempt fails too.
minikube start || minikube delete && minikube start || minikube logs
kubectl cluster-info
- name: Checkout
uses: actions/checkout@v1
- run: USE_CONTAINER=none make slim-builds
- run: make test-e2e-kubernetes
env:
USE_MINIKUBE_CACHE: "true"
PACKAGE_DEB_USE_CONTAINER: docker
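The one-line retry in the `Setup Minikube` step relies on shell operator precedence: `||` and `&&` bind equally tightly and associate left to right, so the expression groups as `((start || delete) && start) || logs`. A small sketch with stub functions (hypothetical stand-ins for the real minikube calls) shows the resulting control flow:

```shell
#!/usr/bin/env bash
# Stub commands standing in for the minikube calls (hypothetical);
# `log` records what actually ran.
log=""
first_start()  { log+="start "; return 1; }   # first start fails
mk_delete()    { log+="delete "; return 0; }
second_start() { log+="start "; return 0; }   # retry succeeds
mk_logs()      { log+="logs "; return 0; }

# Same grouping as: minikube start || minikube delete && minikube start || minikube logs
first_start || mk_delete && second_start || mk_logs
echo "$log"
```

With the first start failing and the retry succeeding, the trace is `start delete start`; `mk_logs` only runs if the retry also fails.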
96 changes: 92 additions & 4 deletions CONTRIBUTING.md
@@ -37,6 +37,8 @@ expanding into more specifics.
1. [Benchmarking](#benchmarking)
1. [Profiling](#profiling)
1. [Kubernetes](#kubernetes)
1. [Dev flow](#kubernetes-dev-flow)
1. [E2E tests](#kubernetes-e2e-tests)
1. [Humans](#humans)
1. [Documentation](#documentation)
1. [Changelog](#changelog)
@@ -550,13 +552,15 @@ navigated in your favorite web browser.

### Kubernetes

#### Kubernetes Dev Flow

There is a special flow for when you develop portions of Vector that are
designed to work with Kubernetes, like `kubernetes_logs` source or the
`deployment/kubernetes/*.yaml` configs.

This flow facilitates building Vector and deploying it into a cluster.

#### Requirements
##### Requirements

There are some extra requirements besides what you'd normally need to work on
Vector:
@@ -570,7 +574,7 @@ Vector:
* [`minikube`](https://minikube.sigs.k8s.io/)-powered or other k8s cluster
* [`cargo watch`](https://github.com/passcod/cargo-watch)

#### The dev flow
##### The dev flow

Once you have the requirements, use the `scripts/skaffold.sh dev` command.

@@ -596,7 +600,7 @@ the cluster state and exit.
`scripts/skaffold.sh` wraps `skaffold`; you can use other `skaffold` subcommands
if they fit you better.

#### Troubleshooting
##### Troubleshooting

You might need to tweak `skaffold`, here are some hints:

@@ -614,7 +618,7 @@ You might need to tweak `skaffold`, here are some hints:
* For the rest of the `skaffold` tweaks you might want to apply check out
[this page](https://skaffold.dev/docs/environment/).

#### Going through the dev flow manually
##### Going through the dev flow manually

In some cases `skaffold` may not work. It's possible to go through the dev flow
manually, without `skaffold`.
@@ -627,6 +631,90 @@ required.
Essentially, the steps you have to take to deploy manually are the same ones
that `skaffold` performs, and they're outlined in the previous section.

#### Kubernetes E2E tests

Kubernetes integration has a lot of parts that can go wrong.

To cope with the complexity and ensure we maintain high quality, we use
E2E (end-to-end) tests.

> E2E tests normally run in CI, so there's typically no need to run them
> manually.

##### Requirements

* `kubernetes` cluster (`minikube` has special support, but any cluster should
work)
* `docker`
* `kubectl`
* `bash`
Contributor:

So these tests can't work from a Windows machine?

Contributor (author):

Technically, you should be able to run tests on Windows via git bash. I didn't test it though...
Also, I want to note that running tests on Windows doesn't correlate to testing Windows clusters, and that we don't address that yet.

Contributor:

Yeah I have bash through busybox so I think this might work. I think this is fine for now. We'll need to support windows fully at some point though. We do support users developing on windows.

MOZGIII (author), Jul 23, 2020:

We do support users developing on windows.

Yeah, that's a bit tricky to do for k8s, but we'll get there.

The dev flow (the one that uses skaffold) is currently limited to Linux (unlike the e2e tests, which in theory can be launched from Linux/macOS/Windows). We can add support for using Windows/macOS there if we cross-compile vector for linux - but that would significantly increase the build time, and it wouldn't be fit for quick iterations anymore.
I want to address it at some point. But not now - and most likely after we ship the "MVP for the integration".

Contributor (author):

Tried it on Windows - got stuck at building Vector into release artifacts. This should be easy to fix.
We just need to give Windows a little more attention to solve all the necessary dependencies before thinking about k8s-specific stuff.


Vector release artifacts are prepared for E2E tests, so the ability to build
them is required too; see the Vector [docs](https://vector.dev) for more details.

> Note: `minikube` has a bug in the latest versions that affects our test
> process - see https://github.com/kubernetes/minikube/issues/8799.
> Use version `1.11.0` for now.

> Note: `minikube` has trouble running on ZFS systems. If you're using ZFS, we
> suggest using a cloud cluster or [`microk8s`](https://microk8s.io/) with a
> local registry.

##### Running the E2E tests

To run the E2E tests, use the following command:

```shell
CONTAINER_IMAGE_REPO=<your name>/vector-test make test-e2e-kubernetes
Contributor:

Why not use timberio/vector:dev or something for a default so users don't need this?

Contributor (author):

Users don't have push access to timberio/vector:dev, so they'll have to specify something either way.
I was thinking about providing a default value if the minikube cache is used; a sane value in that case would be localhost/<something> due to some CRI internals.

Contributor:

That sounds reasonable. :)

Contributor (author):

Added the default when USE_MINIKUBE_CACHE=true is set.

Contributor (author):

You still have to pass USE_MINIKUBE_CACHE=true.
We could actually autodetect whether minikube is used or not based on the kubectl context.
However, this would complicate the test setup a bit too much, and I chose not to do it. Besides technical reasons, the user experience won't change much, as k8s e2e tests still require a lot of involvement from the user. There was an attempt at eliminating this involvement before, with our previous k8s impl, but it wasn't very successful due to the amount of edge cases. So, this time we took a different route and made things more robust, but less automatic.

CONTRIBUTING.md clearly documents the proper way of invoking the command, so this shouldn't be a problem for our users.

```

Where `CONTAINER_IMAGE_REPO` is the docker image repo name to use, without the
part after the `:`. Replace `<your name>` with your Docker Hub username.
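For illustration, the repo/tag split can be seen with plain shell parameter expansion (the image name here is made up):

```shell
# CONTAINER_IMAGE_REPO is the image reference without the trailing ':tag'.
image="alice/vector-test:20200729"   # hypothetical full image reference
repo="${image%:*}"                   # drop the last ':' segment (the tag)
echo "$repo"                         # prints: alice/vector-test
```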

You can also pass additional parameters to adjust the behavior of the test:
Contributor:

I suggest you namespace the k8s-specific variables to something like KUBERNETES_USE_CACHE and KUBERNETES_CONTAINER_IMAGE_REPO so we don't pollute the global environment namespace with variables whose names are relatively unclear.

Contributor (author):

Prefixing all of them with KUBERNETES_ is too many characters to type, but K8S_ would be fine I guess.
That said, I don't think global env pollution is an issue here. Unless the variables are set globally at the system level (i.e. like in ~/.profile) - but then it'd require more than just K8S_ for them to make sense universally. I wouldn't worry about that case.

Given that those are used in a very specific command, I don't think it's a big issue. It's easy to type and remember though - and this is a big plus IMO.
If using them in .envrc or similar - it's often possible to use comments.


* `QUICK_BUILD=true` - use development build and a skaffold image from the dev
Contributor:

How about DEVELOPMENT=true? I'd like to use that same env in other tasks so it'd be nice to keep the number of vars down.

Contributor (author):

I'm pretty sure DEVELOPMENT didn't exist when I implemented this 😄 I'll see if it fits - we might want to deliberately keep those different. I'll check and respond here.

Contributor:

It doesn't yet! :( You've probably noticed we use separate make jobs for this distinction. Maybe that's appropriate here too?

Contributor (author):

You're thinking about make test-e2e-kubernetes and a make test-e2e-kubernetes-quick, something like that? I thought about doing it like that, but I discarded it because we'd still need to pass env vars in some cases, and adding entry points complicates things more than just having one configurable one imo. It's a good idea though, thx for bringing this up.

I understand what you mean with DEVELOPMENT now. Well, I wouldn't worry about it now. Let's streamline it when it becomes a problem. For now, QUICK_BUILD captures the distinction from the regular flow better (for the "how do I make it quick!!1" situations 😄 ), and with the lack of a widely used DEVELOPMENT var - I think it'd be easier to find.

Contributor:

I do not think QUICK_BUILD is a good name for this value.

flow instead of a production docker image. Significantly speeds up the
preparation process, but doesn't guarantee correctness of the release
build. Useful when developing the tests or Vector code, to speed up the
iteration cycles.

* `USE_MINIKUBE_CACHE=true` - instead of pushing the built docker image to the
Contributor:

So by default when someone runs make test-e2e-kubernetes CONTAINER_IMAGE_REPO=foo/bar that pushes to the Docker hub? And this is the default?

Contributor (author):

Yep. This shouldn't be surprising. Delivering images via a registry is the standard way to make an image available to Kubernetes. There's simply no other generic way of doing it. minikube has special support for delivering images into a cluster it manages without a registry, but we don't want to reimplement all that logic for a generic k8s cluster.
Users can spin up a local registry if they don't want to use a cloud one.

registry under the specified name, directly load the image into
a `minikube`-controlled cluster node.
Requires you to test against a `minikube` cluster. Eliminates the need to have
a registry to run tests.
When `USE_MINIKUBE_CACHE=true` is set, we provide a default value for the
`CONTAINER_IMAGE_REPO` so it can be omitted.
Can be set to `auto` (default) to automatically detect whether to use
`minikube cache` or not, based on the current `kubectl` context. To opt-out,
set `USE_MINIKUBE_CACHE=false`.

* `CONTAINER_IMAGE=<your name>/vector-test:tag` - completely skip the step
of building the Vector docker image, and use the specified image instead.
Useful to speed up iteration when you already have a Vector docker
image you want to test against.

* `SKIP_CONTAINER_IMAGE_PUBLISHING=true` - completely skip the image publishing
step. Useful when you want to speed up iteration and you know
the Vector image you want to test is already available to the cluster you're
testing against.

* `SCOPE` - pass a filter to the `cargo test` command to filter out the tests,
effectively equivalent to `cargo test -- $SCOPE`.
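The `USE_MINIKUBE_CACHE=auto` mode described above boils down to checking the current `kubectl` context. A minimal sketch of that decision (the function name is hypothetical; the real logic lives in `scripts/test-e2e-kubernetes.sh`):

```shell
#!/usr/bin/env bash
# Decide whether to use `minikube cache`, either from an explicit
# true/false setting or, in `auto` mode, from the kubectl context name.
use_minikube_cache() {
  local mode="$1" context="$2"
  case "$mode" in
    true|false) echo "$mode" ;;
    auto) [ "$context" = "minikube" ] && echo true || echo false ;;
    *) echo "invalid USE_MINIKUBE_CACHE: $mode" >&2; return 1 ;;
  esac
}

use_minikube_cache auto minikube         # prints: true
use_minikube_cache auto gke_my-cluster   # prints: false
use_minikube_cache false minikube        # prints: false
```

In the real script the context would come from `kubectl config current-context`; here it's passed in so the logic stands alone.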

Passing additional parameters is done like so:

```shell
QUICK_BUILD=true USE_MINIKUBE_CACHE=true make test-e2e-kubernetes
```

or

```shell
QUICK_BUILD=true CONTAINER_IMAGE_REPO=<your name>/vector-test make test-e2e-kubernetes
```

## Humans

After making your change, you'll want to prepare it for Vector's users
12 changes: 12 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

10 changes: 9 additions & 1 deletion Cargo.toml
@@ -31,6 +31,7 @@ members = [
"lib/file-source",
"lib/tracing-limit",
"lib/vector-wasm",
"lib/k8s-test-framework",
]

[dependencies]
@@ -195,6 +196,7 @@ tokio-test = "0.2"
tokio = { version = "0.2", features = ["test-util"] }
assert_cmd = "1.0"
reqwest = { version = "0.10.6", features = ["json"] }
k8s-test-framework = { version = "0.1", path = "lib/k8s-test-framework" }

[features]
# Default features for *-unknown-linux-gnu and *-apple-darwin
@@ -431,11 +433,13 @@ kafka-integration-tests = ["sources-kafka", "sinks-kafka"]
loki-integration-tests = ["sinks-loki"]
pulsar-integration-tests = ["sinks-pulsar"]
splunk-integration-tests = ["sinks-splunk_hec", "warp"]
kubernetes-integration-tests = ["sources-kubernetes-logs"]

shutdown-tests = ["sources","sinks-console","sinks-prometheus","sinks-blackhole","unix","rdkafka","transforms-log_to_metric","transforms-lua"]
disable-resolv-conf = []

# E2E tests
kubernetes-e2e-tests = ["k8s-openapi"]

[[bench]]
name = "bench"
harness = false
@@ -453,5 +457,9 @@ name = "wasm"
harness = false
required-features = ["transforms-wasm", "transforms-lua"]

[[test]]
name = "kubernetes-e2e"
required-features = ["kubernetes-e2e-tests"]

[patch.'https://github.com/tower-rs/tower']
tower-layer = "0.3"
6 changes: 3 additions & 3 deletions Makefile
@@ -281,9 +281,9 @@ ifeq ($(AUTODESPAWN), true)
${MAYBE_ENVIRONMENT_EXEC} $(CONTAINER_TOOL)-compose stop
endif

PACKAGE_DEB_USE_CONTAINER ?= "$(USE_CONTAINER)"
test-integration-kubernetes: ## Runs Kubernetes integration tests (Sorry, no `ENVIRONMENT=true` support)
PACKAGE_DEB_USE_CONTAINER="$(PACKAGE_DEB_USE_CONTAINER)" USE_CONTAINER=none $(RUN) test-integration-kubernetes
PACKAGE_DEB_USE_CONTAINER ?= $(USE_CONTAINER)
test-e2e-kubernetes: ## Runs Kubernetes E2E tests (Sorry, no `ENVIRONMENT=true` support)
PACKAGE_DEB_USE_CONTAINER="$(PACKAGE_DEB_USE_CONTAINER)" scripts/test-e2e-kubernetes.sh

test-shutdown: ## Runs shutdown tests
ifeq ($(AUTOSPAWN), true)
5 changes: 5 additions & 0 deletions distribution/kubernetes/vector-namespaced.yaml
@@ -49,6 +49,11 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Set a reasonable log level to avoid issues with internal logs
# overwriting console output in E2E tests. Feel free to change it in
# a real deployment.
- name: LOG
value: info
volumeMounts:
- name: var-log
mountPath: /var/log/
3 changes: 3 additions & 0 deletions kustomization.yaml
@@ -8,3 +8,6 @@ resources:
- skaffold/manifests/namespace.yaml
- skaffold/manifests/config.yaml
- distribution/kubernetes/vector-namespaced.yaml

patchesStrategicMerge:
- skaffold/manifests/patches/env.yaml
16 changes: 16 additions & 0 deletions lib/k8s-test-framework/Cargo.toml
@@ -0,0 +1,16 @@
[package]
name = "k8s-test-framework"
version = "0.1.0"
authors = ["Vector Contributors <[email protected]>"]
edition = "2018"
description = "Kubernetes Test Framework used to test Vector in Kubernetes"

[dependencies]
k8s-openapi = { version = "0.9", default-features = false, features = ["v1_15"] }
serde_json = "1"
tempfile = "3"
once_cell = "1"
tokio = { version = "0.2", features = ["process", "io-util"] }

[dev-dependencies]
tokio = { version = "0.2", features = ["macros", "rt-threaded"] }
32 changes: 32 additions & 0 deletions lib/k8s-test-framework/src/exec_tail.rs
@@ -0,0 +1,32 @@
//! Perform a log lookup.

use super::{Reader, Result};
use std::process::Stdio;
use tokio::process::Command;

/// Exec a `tail` command reading the specified `file` within a `Container`
/// in a `Pod` of the specified `resource` in the specified `namespace` via the
/// specified `kubectl_command`.
/// Returns a [`Reader`] that manages the reading process.
pub fn exec_tail(
kubectl_command: &str,
namespace: &str,
resource: &str,
file: &str,
) -> Result<Reader> {
let mut command = Command::new(kubectl_command);

command.stdin(Stdio::null()).stderr(Stdio::inherit());

command.arg("exec");
command.arg("-n").arg(namespace);
command.arg(resource);
command.arg("--");
command.arg("tail");
command.arg("--follow=name");
command.arg("--retry");
command.arg(file);

let reader = Reader::spawn(command)?;
Ok(reader)
}
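The command `exec_tail` assembles is, in shell terms, equivalent to the following (a sketch that prints the would-be invocation instead of running it, since the real thing needs a live cluster):

```shell
#!/usr/bin/env bash
# Mirror of the kubectl invocation built by exec_tail above; echoes the
# command line rather than executing it.
exec_tail_cmd() {
  local kubectl="$1" namespace="$2" resource="$3" file="$4"
  echo "$kubectl" exec -n "$namespace" "$resource" -- \
    tail --follow=name --retry "$file"
}

exec_tail_cmd kubectl test-ns daemonset/vector /var/log/test.log
# prints: kubectl exec -n test-ns daemonset/vector -- tail --follow=name --retry /var/log/test.log
```

`--follow=name --retry` keeps the tail attached across log-file rotation, which is why the framework uses it rather than a plain `tail -f`.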