Logs flooded with TLS handshake error: EOF #521

Open
meljishi opened this issue Nov 28, 2024 · 2 comments
meljishi commented Nov 28, 2024

Describe the bug:

The intents operator is flooding our logs with TLS handshake error: EOF messages. We are starting to implement network policies on the cluster, but they don't seem to be working. Enforcement is not enabled yet; we are only analyzing traffic.

[Screenshot: logs from the intents operator]
[Screenshot: number of logs on our system]

Expected behaviour:

The intents operator should simply pick up the created ClientIntents and report them for the network mapper to show on the app dashboard.

Steps to reproduce the bug:

We're following the simplest path we can: no mTLS involved (supposedly), no enforcement or anything else, so there should be no connection errors.

Anything else we need to know?:

Not that I can think of

Environment details:

  • Kubernetes version: 1.30
  • Cloud-provider/provisioner: GKE
  • intents-operator image version: otterize/intents-operator:v2.0.11
  • Install method: Helm
@orishoshan orishoshan added the bug Something isn't working label Dec 11, 2024
@l-rossetti

Same issue here.

  • kubernetes version: 1.30
  • cloud provider: EKS with VPC CNI
  • install method: helm
  • intents-operator version: otterize/intents-operator:v3.0.0

@amitlicht
Contributor

Thank you for reporting this issue @meljishi @l-rossetti.

We have investigated this issue and concluded that other than being noisy, these EOF errors do not indicate any real problem. In our test environments, most of these reports resulted from connections from the Kubernetes control plane, which terminated abruptly without any specific reason.

That said, we agree that the noisy logging is worth fixing. We've spent some time, and plan to spend more, figuring out how to make the third-party dependency that uses net/http mute these errors. We will keep you posted in this thread when we have a fix.
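
For anyone who needs a workaround in their own Go services in the meantime: net/http writes the "http: TLS handshake error ..." lines through the server's ErrorLog, so one common way to silence them is to give the http.Server an ErrorLog backed by a writer that drops those lines. The sketch below is only illustrative and is not the intents-operator's actual code (its webhook server is constructed by the third-party dependency mentioned above); the port and certificate paths are placeholders.

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"os"
)

// filteredWriter forwards log output to next, but drops the noisy
// "TLS handshake error ... EOF" lines that net/http emits when a peer
// closes the connection before the handshake completes.
type filteredWriter struct {
	next io.Writer
}

func (w *filteredWriter) Write(p []byte) (int, error) {
	if bytes.Contains(p, []byte("TLS handshake error")) && bytes.Contains(p, []byte("EOF")) {
		return len(p), nil // swallow the line, report it as written
	}
	return w.next.Write(p)
}

func main() {
	srv := &http.Server{
		Addr: ":9443", // placeholder webhook port
		// net/http reports "http: TLS handshake error from <addr>: EOF"
		// through ErrorLog, so a filtering writer here is enough to mute it.
		ErrorLog: log.New(&filteredWriter{next: os.Stderr}, "", log.LstdFlags),
	}
	// tls.crt / tls.key are placeholder paths to the serving certificate.
	log.Fatal(srv.ListenAndServeTLS("tls.crt", "tls.key"))
}
```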
