
fix(kubernetes source)!: Use Pod's UID to exclude Vector's logs #2188

Closed
wants to merge 2 commits

Conversation

ktff
Contributor

@ktff ktff commented Mar 31, 2020

Closes #2171.

With this we will always exclude the current Vector Pod's own logs.

Enables testing of v1.13.12 in CI.


Deployment update

To update your previous Vector deployment .yaml, add this section to the container definition of Vector:

        env:
        - name: VECTOR_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

This will expose the Pod's UID to Vector so it knows which logs not to collect. A full example can be found here.
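For reference, this is roughly where that env entry sits inside a DaemonSet's container spec. A minimal sketch only: the container name and image below are illustrative placeholders, not taken from the actual vector-daemonset.yaml.

    # Illustrative placement only; name and image are placeholders.
    spec:
      containers:
        - name: vector
          image: timberio/vector:latest   # placeholder tag
          env:
            - name: VECTOR_POD_UID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid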

Signed-off-by: Kruno Tomola Fabro <[email protected]>
@ktff ktff requested a review from LucioFranco April 1, 2020 08:15
@ktff ktff self-assigned this Apr 1, 2020
@ktff
Contributor Author

ktff commented Apr 1, 2020

I'll mark this as a breaking change, because it requires changing how Vector with the kubernetes source is deployed. Simple steps on how to update previous deployments are in the opening comment.

@ktff ktff changed the title fix(kubernetes source): Use Pod's UID to exclude Vector's logs fix(kubernetes source)!: Use Pod's UID to exclude Vector's logs Apr 1, 2020
@ktff ktff added the meta: breaking change Anything that breaks backward compatibility. label Apr 1, 2020
@binarylogic
Contributor

Thanks @ktff, could you make sure that #1450 is accurate with these changes as well?

@@ -65,8 +75,16 @@ impl SourceConfig for KubernetesConfig {

let now = TimeFilter::new();

let (file_recv, file_source) =
file_source_builder::FileSourceBuilder::new(self).build(name, globals, shutdown)?;
let vector_pod_uid = env::var(VECTOR_POD_UID_ENV).map_err(|error| BuildError::PodUid {
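For context, a minimal, self-contained sketch of the pattern the excerpt above uses: read the env var and turn a missing value into a build error. The constant, error type, and field names here are stand-ins inferred from the diff, not the PR's exact code.

    use std::env;

    // Stand-in names inferred from the diff; not the PR's exact definitions.
    const VECTOR_POD_UID_ENV: &str = "VECTOR_POD_UID";

    #[derive(Debug)]
    enum BuildError {
        PodUid { source: env::VarError },
    }

    // Fail source construction if the Pod UID was not injected via the
    // downward API `env` entry shown in the PR description.
    fn pod_uid() -> Result<String, BuildError> {
        env::var(VECTOR_POD_UID_ENV).map_err(|source| BuildError::PodUid { source })
    }

    fn main() {
        match pod_uid() {
            Ok(uid) => println!("excluding logs for pod uid {}", uid),
            Err(error) => eprintln!("failed to read pod uid: {:?}", error),
        }
    }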
Contributor

I know I asked this before, but is this env var always available? If it is, let's make sure we document this very clearly; if not, we need to find a way to work around this, and that may be just ignoring it and letting Vector logs come through.

Contributor

Also curious.

Contributor Author

@ktff ktff Apr 3, 2020


It is present if the .yaml is configured with

        env:
        - name: VECTOR_POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid

If it isn't configured like that, I wouldn't consider it a proper configuration, in the same way that it isn't a proper configuration to run Vector without mapping the folders necessary for its operation.

Passing the logs through would make it easy for us to brick (EDIT: too strong a word, the GitHub runners became unresponsive so they were stopped automatically after some time) the node, as happened with GitHub's nodes, if we accidentally add, or forget to remove, an info log somewhere that depends on the number of logs passing through. It's even worse if the size of that info depends on the size of the log.

As for other ways: we used to filter out logs for containers whose names start with vector, which could be surprising for users that name their containers that way. We could name the Vector container in some unique way, but then we are back at the beginning where we need the .yaml to be configured properly.

There is also another way that just popped into my head: we could intentionally log a message with a unique string of characters, and once we detect that string as a message in the log we would know which UID is ours and could filter those logs out, although the file source would still unnecessarily be picking them up from the log file.

What do you think?
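A rough sketch of that marker idea, purely illustrative and not part of this PR; the marker format below is made up.

    use std::process;
    use std::time::{SystemTime, UNIX_EPOCH};

    // Build a marker string that is very unlikely to appear in other
    // workloads' logs (made-up format, for illustration only).
    fn self_marker() -> String {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_nanos())
            .unwrap_or(0);
        format!("vector-self-id:{}:{}", process::id(), nanos)
    }

    fn main() {
        let marker = self_marker();
        // 1. Emit the marker once through Vector's own logging so it ends
        //    up in this Pod's log file.
        println!("{}", marker);

        // 2. While tailing Pod log files, any file whose lines contain the
        //    marker belongs to this Vector instance and can be excluded.
        let sample_line = format!("2020-04-03T10:00:00Z INFO {}", marker);
        println!("exclude this file: {}", sample_line.contains(&marker));
    }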

Contributor

I am not really happy with this required env var, personally. I think it's a pretty bad UX hazard.

Do we know what other solutions do here? What does fluentd do?

Contributor

Yes! I am very much in favor of anything that improves reliability and confidence for v1 of our Kubernetes integration. I do not want to compromise that by supporting old versions right now. If requiring >= 1.14 achieves that then let's do it.

AWS EKS still supports v1.13.

That's ok. EKS also offers 1.14 and 1.15.

Member

I agree, simplifying the source would be great. It seems like 1.14 is over a year old and technically not supported anymore (see here and here).

Contributor

I'll add my 2 cents: we've been using fluentd in production for quite some time, there has been an attempt to use fluent-bit, and now we're replacing fluent-bit with Vector. In all our configurations we've been deploying the entire log-related stack in its own namespace, mostly because with an appropriate Role and PSP it can be easily restricted.
Even with the current (0.8.2) kubernetes source it works perfectly fine (because you can just whitelist any namespace except Vector's).
Just saying that this can be a suggested deployment pattern, in which case the solution can be less invasive. Otherwise, filtering by the Pod's UID could be opt-in.

Contributor

@Alexx-G we really appreciate your feedback! So, if I understand correctly, it would make sense to have some way in the Vector config to set the namespace you are deploying your log stack in, and from that we can ignore all logs from that namespace? This seems like the ideal solution to me.

Contributor

@MOZGIII MOZGIII Apr 4, 2020


Using a separate namespace is a particular deployment pattern, and not everyone can use that. So we can't solely rely on that - it would be a very significant limiting factor.

As an alternative to env vars, we could use downward API mounts: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/

We can also communicate with the cluster API to determine the current container info: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-within-a-pod

Also, the hostname in the container environment is set to the pod name by default.
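For reference, a sketch of the downward API volume variant mentioned above; the mount path and file name are illustrative, and this is not part of this PR.

    # Illustrative only; mount path and file name are placeholders.
    spec:
      containers:
        - name: vector
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo
              readOnly: true
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "uid"
                fieldRef:
                  fieldPath: metadata.uid

With this, Vector could read the UID from /etc/podinfo/uid instead of an env var.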

That said, I like the explicit configuration this PR proposes. Being explicit, it is much easier to understand and maintain for infra teams, i.e. when they operate on a configuration for Vector it becomes very simple to reason about which logs are ignored and which are picked up. And this is, arguably, a better tradeoff when we're talking k8s.
Also, since the industry standard in k8s-compatible apps is to ship the deployment configuration as part of the release, I don't see this configuration detail as particularly problematic. We should provide a helm chart for our stuff anyway, as well as applicable daemonset/sidecar deployment files. This would be a proper implementation of our "good defaults" paradigm in the k8s sense.

@binarylogic
Contributor

Just noting, I'd like to get this in 0.9.0, and will wait to release until we get this merged. I'd like to get 0.9.0 out this week.

Contributor

@Hoverbear Hoverbear left a comment


Other than @LucioFranco's comment, looks good.

@ktff ktff mentioned this pull request Apr 3, 2020
@ktff
Contributor Author

ktff commented Apr 3, 2020

Thanks @ktff, could you make sure that #1450 is accurate with these changes as well?

No need, they are the same. This only changes vector-daemonset.yaml, and those that are using it, or have created their own custom .yaml based on it, should also update it once they update Vector. But we could avoid all of that if we implement the last option in this comment.

Contributor

@LucioFranco LucioFranco left a comment


I'd like us to not merge this PR until we have found a proper solution for this, unless we are happy to merge a solution that isn't proper.

@Hoverbear Hoverbear self-requested a review April 5, 2020 23:06
@binarylogic
Contributor

I'd like for us to agree on our approach in #2222 before merging this. It's likely that our approach will allow us to exclude Vector's logs through deployment conventions.

@binarylogic
Contributor

@MOZGIII feel free to close if you'd like to take a different approach.

@binarylogic binarylogic assigned MOZGIII and unassigned ktff Apr 8, 2020
@Hoverbear Hoverbear removed their request for review April 9, 2020 18:08
@Hoverbear Hoverbear self-requested a review April 16, 2020 16:28
@Hoverbear
Contributor

Is this issue still relevant? (cc @MOZGIII)

@MOZGIII
Contributor

MOZGIII commented May 25, 2020

No, closing in favor of #2222.

@MOZGIII MOZGIII closed this May 25, 2020
@binarylogic binarylogic deleted the exclude_self branch July 23, 2020 17:31
Labels
meta: breaking change Anything that breaks backward compatibility.
Development

Successfully merging this pull request may close these issues.

Excluding vector logs on Kubernetes v1.13 and lower when include_namespace is not empty
7 participants