
IPV6 check doesn't work as expected on AWS EKS #2787

Open

nc-marco opened this issue Jul 4, 2024 · 6 comments
nc-marco commented Jul 4, 2024

Description

In the current implementation, HAS_IPV6 is used to determine whether the host has an IPv6 stack, and if it does, IPv6 is enabled on the socket. Unfortunately, this prevents locust.io from working in a standard EKS cluster on AWS: although the container supports IPv6, the service resource in EKS does not support dual stack (per the AWS documentation: https://docs.aws.amazon.com/eks/latest/userguide/cni-ipv6.html) and only supports IPv4 (or only IPv6 if you disable IPv4, which is of course not the default). As a result, RPC between the master and workers does not work. I removed the lines below from the code, rebuilt the container, and now it works. Instead of having the code guess which stack to use, I would suggest making it configurable via an environment variable.

Either way, even if this isn't deemed important enough to address, I leave this here so others can hopefully find it useful.
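As a rough sketch of the environment-variable idea (the variable name LOCUST_FORCE_IPV4 is hypothetical, not an existing locust option), the IPv6 toggle could be gated like this:

```python
import os


def should_enable_ipv6(has_ipv6: bool, environ=os.environ) -> bool:
    """Return True if IPv6 should be enabled on the ZMQ socket.

    LOCUST_FORCE_IPV4 is a hypothetical override variable, not part of
    locust's real configuration.
    """
    force_ipv4 = environ.get("LOCUST_FORCE_IPV4", "").lower() in ("1", "true", "yes")
    return has_ipv6 and not force_ipv4


# The existing check would then become something like:
#   if should_enable_ipv6(HAS_IPV6):
#       self.socket.setsockopt(zmq.IPV6, 1)
```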

Command line

locust -f mylocustfile.py -

Locustfile contents

if HAS_IPV6:
    self.socket.setsockopt(zmq.IPV6, 1)

Python version

3.11.9

Locust version

2.29.1

Operating system

Linux 5.15.0-1056-aws

nc-marco added the bug label Jul 4, 2024
cyberw (Collaborator) commented Jul 4, 2024

That sounds really annoying! We could add a --ipv4 flag to force v4, but I'm thinking: wouldn't the best solution be to open two ports? At least on the server side it should be easy (accept messages on both, respond on the same one). The client side might be slightly messier, because we'd have to decide which one to send on.

Anyway, a PR for the above would be amazing, because I might not get around to it myself right now :)

nc-marco (Author) commented Jul 4, 2024

Thanks for the quick response. I was talking to a colleague today about the situation with other hyperscalers, in particular GKE, which supports dual stack for the service resource just as it should. So my feeling is that this is a bit of an edge case, and something like your first suggestion of specifying an argument to force IPv4 only would be a reasonable solution. I labeled this as a bug, but it makes perfect sense to assume that if an OS supports dual stack, it should be able to communicate with the master over either protocol, defaulting to IPv6. Furthermore, I don't see an elegant way to handle the client side, as you pointed out, because it simply can't know until it tries IPv6 and fails.

So, if you are OK with this, I can create a small PR for just the simpler solution, which would probably be appreciated only by those who need to run locust on EKS ;-).

github-actions bot commented Sep 3, 2024

This issue is stale because it has been open for 60 days with no activity. Remove the stale label or comment, or this will be closed in 10 days.

github-actions bot added the stale label Sep 3, 2024
nc-marco (Author) commented Sep 4, 2024

I had a vacation in the middle and have been rather busy with work. I hope to get a PR done within a week.

cyberw removed the stale label Sep 4, 2024
nc-marco (Author) commented
I have a small update on this. After attempting to adjust the code by adding an "ipv4-only" option, I didn't particularly like what I was seeing: I needed to add an extra parameter to both the Server and Client class init methods, with all the consequences that entails (no options are available inside the rpc subdirectory). This can be done, but I had another idea and wouldn't mind your feedback before implementing it. What if, when "master-bind-host" is set to an IPv4 address, we simply do not enable IPv6? I tried setting this variable to "0.0.0.0", but currently that doesn't do it. However, I can add a check to verify whether it is an IPv4 address and, if so, not enable IPv6 in zmqrpc.py (the default for this variable is currently "*"). Thoughts?
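A minimal sketch of such a check, using the standard-library ipaddress module (the function name is mine; the real change would live in zmqrpc.py):

```python
import ipaddress


def is_ipv4_address(host: str) -> bool:
    """True only when host parses as a literal IPv4 address.

    The wildcard "*" (locust's current default for master-bind-host)
    and hostnames do not parse, so they fall through to False and
    keep today's behavior of enabling IPv6 when available.
    """
    try:
        return isinstance(ipaddress.ip_address(host), ipaddress.IPv4Address)
    except ValueError:
        return False


# The IPv6 toggle could then become something like:
#   if HAS_IPV6 and not is_ipv4_address(bind_host):
#       self.socket.setsockopt(zmq.IPV6, 1)
```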

cyberw (Collaborator) commented Sep 25, 2024

That sounds like a reasonable way to do it, I'm completely fine with it!
