Network topology awareness #669

Open
phillebaba opened this issue Dec 17, 2024 · 5 comments

Labels
enhancement New feature or request

Comments

@phillebaba
Member

phillebaba commented Dec 17, 2024

Describe the problem to be solved

When doing lookups in the DHT, the first peer found is returned. The query does not take the network topology of the Kubernetes cluster into consideration. Most cloud providers have multiple zones per region, and network traffic within the same zone is usually faster and cheaper than cross-zone traffic. Spegel should prioritize peers in the same zone over other peers when possible.

Proposed solution to the problem

Some interesting research has already been done on the topic of partitioning.

https://research.protocol.ai/publications/enriching-kademlia-by-partitioning/monteiro2022.pdf

We need to decide on the best way to partition the DHT without adding latency to lookups.

Relates to #551 and #530

@phillebaba phillebaba added the enhancement New feature or request label Dec 17, 2024
@phillebaba phillebaba moved this to Todo in Roadmap Dec 17, 2024
@craig-seeman

I'm curious what your thoughts are on an admittedly rather naive but simple approach to topology. I could be misinterpreting the linked paper, but it sounds like this sort of feature does not currently exist in the libraries and would have to be implemented. I have not looked too deeply into the code that translates the sha key on the p2p side into how we interface with containerd, so this may not even work.

If this is the case, however, the approach I was wondering about would essentially leverage the majority of the existing code: if some sort of Spegel topology setting is enabled, advertise each image under an additional p2p key with a single Kubernetes label mixed in. This would obviously double the keyspace, but given Kubernetes best practices this count shouldn't get too out of hand.

For example:
Image: 983487d9c4b7451b0e7d282114470d3a0ad50dc5e554971a4d1cda04acde670b

What could happen is that Spegel advertises 983487d9c4b7451b0e7d282114470d3a0ad50dc5e554971a4d1cda04acde670b as it currently does, but also advertises a second key of
sha256("983487d9c4b7451b0e7d282114470d3a0ad50dc5e554971a4d1cda04acde670b" + "topology.kubernetes.io/zone: us-east-1c") = 589aa1323f0a4834bcbfe3a50a157c4fdded821c79873e2a21e08d3e36654f42

During an image pull, if the topology setting is enabled, the first lookup would be against the topology key (in this case 589aa1323f0a4834bcbfe3a50a157c4fdded821c79873e2a21e08d3e36654f42). If no provider is found in the local zone, an additional setting could determine what to do next (try a cluster-wide lookup for 983487d9c4b7451b0e7d282114470d3a0ad50dc5e554971a4d1cda04acde670b, or pull from the registry directly instead).
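
A minimal sketch of how the two keys could be derived, in Go since that is what Spegel and libp2p are written in. The zone is assumed to arrive in a NODE_ZONE environment variable (see the Downward API discussion below), and the function name and the exact string that gets hashed are hypothetical, so the resulting digest will not match the example value above:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
    )

    // zoneScopedKey derives the secondary advertisement key by hashing the image
    // digest together with the node's zone label. The encoding is a guess; Spegel
    // would need to settle on a canonical format.
    func zoneScopedKey(digest, zone string) string {
        sum := sha256.Sum256([]byte(digest + "topology.kubernetes.io/zone=" + zone))
        return hex.EncodeToString(sum[:])
    }

    func main() {
        digest := "983487d9c4b7451b0e7d282114470d3a0ad50dc5e554971a4d1cda04acde670b"
        // NODE_ZONE is assumed to be injected via the Downward API, e.g. "us-east-1c".
        zone := os.Getenv("NODE_ZONE")

        // At pull time the zone-scoped key would be queried first, falling back to
        // the cluster-wide key (or the upstream registry) if no local-zone provider
        // is found.
        fmt.Println("cluster-wide key:", digest)
        fmt.Println("zone-scoped key: ", zoneScopedKey(digest, zone))
    }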

@phillebaba
Member Author

I do not think your analysis is wrong. The paper describes a situation that is a lot more complicated than anything we are really facing. Spegel is aimed at private clusters, so it does not have the same internet-scale challenges. We only have to deal with a fixed set of zones; for most this would be three availability zones. It may be more for those running clusters across regions, but I find that to be a lot more uncommon.

While it would increase the number of advertised keys, I do think the benefit of prioritizing peers in the same zone outweighs any cost. Either way, we can easily run the benchmarks and verify that this is the case.

I have been looking around and have not found a good way to pass the topology key into the running pods. I have found a couple of issues in the Kubernetes repository but most people suggest using external projects to solve this. Are you aware of an easy way to do this?

I am open to making this an opt-in feature initially, to start evaluating the cost of the additional advertised keys. If we come up with a smarter solution down the road, we can always make changes.

@phillebaba
Member Author

Stumbled upon some more good research on this topic, which seems to be related to the paper I originally linked.

https://asc.di.fct.unl.pt/~jleitao/thesis/MonteiroMSc.pdf

@craig-seeman

craig-seeman commented Jan 14, 2025

I have been looking around and have not found a good way to pass the topology key into the running pods. I have found a couple of issues in the Kubernetes repository but most people suggest using external projects to solve this. Are you aware of an easy way to do this?

One of the primary ways we deal with this in my experience is to use the Downward API directly in the manifest and expose the value to the pod as an environment variable in the container's env section:

        env:
          - name: NODE_ZONE
            valueFrom:
              fieldRef:
                # fieldRef reads from the Pod's own metadata, so the zone label
                # must be present on the pod itself for this to resolve
                fieldPath: metadata.labels['topology.kubernetes.io/zone']

The other method using the Downward API would be to mount it as a volume that the container can read; the various fields are then exposed as files in a directory inside the container.

One thing to note though: this approach of using the Downward API can sometimes get tricky if you are reading custom labels applied by another DaemonSet and that daemon starts after your pod.

https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/

@craig-seeman

craig-seeman commented Jan 23, 2025

Hey @phillebaba, I have been thinking the past few days about #670, wondering if there is some common approach that could solve both issues. Unless my reading of the libp2p docs (and possible ChatGPT hallucinations) is incorrect, I think we may be able to use the k/v store built into the Kademlia DHT to store host information such as the AZ, as opposed to just the cid/sha.

https://github.com/libp2p/go-libp2p/blob/2a580cf7c8f53c66806100036c0d08bef0f343e0/core/routing/routing.go#L60

From what I can gather, though, there is currently no way to call FindProviders and filter on a specific host/peer k/v, so we would have to query for a list of hosts and sort through their k/v records, which would include the AZ, until we find a suitable match.

What I am also wondering with this approach: since we can see the number of providers for a specific key/image sha, if there are fewer than, say, three provider hosts we could do a secondary query to a host directly for some metric of its current bandwidth utilization. I am not sure publishing bandwidth utilization as a k/v record would work, though, since those records are not updated rapidly enough to keep up with how quickly bandwidth utilization changes.
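
To make the idea concrete, here is a rough sketch on top of the routing interfaces linked above. It assumes each peer publishes its zone under a hypothetical /spegel/zone/<peer-id> key, which would require registering a record validator for the /spegel namespace on the DHT; the Router interface and function names are illustrative rather than Spegel's actual API:

    package topology

    import (
        "context"
        "fmt"

        "github.com/ipfs/go-cid"
        "github.com/libp2p/go-libp2p/core/peer"
        "github.com/libp2p/go-libp2p/core/routing"
    )

    // Router combines provider lookups with the k/v store from core/routing;
    // *dht.IpfsDHT from go-libp2p-kad-dht satisfies both interfaces.
    type Router interface {
        routing.ContentRouting
        routing.ValueStore
    }

    // AdvertiseZone publishes this peer's availability zone under a per-peer key.
    func AdvertiseZone(ctx context.Context, r Router, self peer.ID, zone string) error {
        return r.PutValue(ctx, "/spegel/zone/"+self.String(), []byte(zone))
    }

    // SameZoneProviders looks up providers for an image digest and keeps only the
    // peers that report the same zone as the local node, falling back to the full
    // provider list when none match.
    func SameZoneProviders(ctx context.Context, r Router, key cid.Cid, localZone string) ([]peer.AddrInfo, error) {
        var all, sameZone []peer.AddrInfo
        for info := range r.FindProvidersAsync(ctx, key, 10) {
            all = append(all, info)
            zone, err := r.GetValue(ctx, "/spegel/zone/"+info.ID.String())
            if err != nil {
                continue // peer has not published a zone record
            }
            if string(zone) == localZone {
                sameZone = append(sameZone, info)
            }
        }
        if len(sameZone) > 0 {
            return sameZone, nil
        }
        if len(all) == 0 {
            return nil, fmt.Errorf("no providers found for %s", key)
        }
        return all, nil
    }

The extra GetValue per provider is the cost of this approach; the dual-key scheme discussed earlier avoids that round trip at the price of a larger keyspace.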
