Auto-start of #1
Comments
That is a good point. Indeed, redis cluster management is a bit tedious. If a self-managing way existed, I don't think you would even need a …

So if I understand you correctly, you want to do something like the following (let's assume a 1:1 master-slave ratio):
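Something along these lines, as a minimal sketch; the StatefulSet/headless-service name `redis-cluster`, the odd-follows-even pairing, and the Redis 5+ `redis-cli --cluster` syntax are all assumptions here, not anything the repo provides:

```sh
#!/bin/sh
# Hypothetical bootstrap.sh run by every pod at startup. Assumes a
# StatefulSet and headless service both named "redis-cluster" in the
# default namespace, and Redis 5+ for the redis-cli --cluster syntax.
set -e

ORDINAL="${HOSTNAME##*-}"     # e.g. redis-cluster-3 -> 3
DOMAIN="redis-cluster.default.svc.cluster.local"
resolve() { getent hosts "$1.${DOMAIN}" | awk '{print $1}'; }

if [ "$ORDINAL" -eq 0 ]; then
  # Pod 0 seeds the cluster by claiming all 16384 hash slots.
  redis-cli cluster addslots $(seq 0 16383)
else
  # Later pods add themselves through the seed (MEET wants IPs).
  redis-cli --cluster add-node \
    "$(resolve "$HOSTNAME"):6379" "$(resolve redis-cluster-0):6379"
  if [ $((ORDINAL % 2)) -eq 1 ]; then
    # Odd ordinals replicate the preceding even ordinal (1:1 ratio).
    # A real script would first wait for the gossip to settle.
    redis-cli cluster replicate \
      "$(redis-cli -h "redis-cluster-$((ORDINAL - 1)).${DOMAIN}" cluster myid)"
  fi
fi
```

Even then, a new master owns no slots until a reshard, and the "wait for gossip" step is exactly the fragile part.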
I like the idea, but it has some complications:
That being said, I wrote this how-to with the assumption that you do not scale your redis cluster up and down multiple times a day. Compared to spinning up new VMs and configuring them by hand, having to perform a handful of copy-and-paste commands to scale up your cluster whenever you need seems like a fair compromise, don't you think?
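For reference, that copy-and-paste scale-up boils down to something like this (names illustrative; with Redis 5+ the old `redis-trib.rb` commands moved under `redis-cli --cluster`):

```sh
# Names are illustrative. 1. Grow the StatefulSet:
kubectl scale statefulset redis-cluster --replicas=8

# 2. Introduce a new pod to the cluster (run from any existing member;
#    add --cluster-slave to join it as a replica instead):
kubectl exec -it redis-cluster-0 -- \
  redis-cli --cluster add-node <new-pod-ip>:6379 127.0.0.1:6379

# 3. Move some hash slots onto the new master (interactive):
kubectl exec -it redis-cluster-0 -- \
  redis-cli --cluster reshard 127.0.0.1:6379
```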
Yeah. I had not run redis in k8s before (lots of kube, lots of redis, never together), so I had never gone down the full automation path. The assumption that a human will, at least once, manually create the cluster and add/remove nodes is not exactly cloud-friendly.
Actually, I would like something a step beyond even that. FYI, this is how I automate …
We can weave master-vs-slave logic in here, or have a separate …

For the complications:
From my perspective, the biggest stumbling block to getting it fully automated is the initial cluster setup and join.
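Concretely, that initial step is the one imperative command in the whole flow; roughly (the `app=redis-cluster` label and pod names are assumptions, and older `redis-cli`/`redis-trib` versions want IPs rather than hostnames, hence the lookup):

```sh
# Collect the pod IPs (assumes the pods carry an app=redis-cluster label):
PODS=$(kubectl get pods -l app=redis-cluster \
  -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')

# One-shot cluster creation with a 1:1 master-replica split:
kubectl exec -it redis-cluster-0 -- \
  redis-cli --cluster create --cluster-replicas 1 $PODS
```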
As long as I can do them via CI (kube config files) rather than logging in, sure. I think I am going to fork your repo and make some suggestions. Good idea? You have an MIT license, so I figure you don't mind?
I don't have much time to look into your comment now, sorry. But:
Of course! That's why it's open source after all. I'm curious what you'll come up with 😉
Ugh, I am coming up against all sorts of limitations. See my SO question here, basically:
The last one is solvable with timeouts (wait for all other nodes), but that is fragile. There is a "right" solution to this, which is the Kafka model (or Consul's, but with sharding):
Essentially, there are only masters; they are self-adjusting, self-healing and self-joining. But Redis is built in a different way entirely. Sigh.
Right, that's kind of what I also encountered. Essentially, all of your points are the main reason I used StatefulSets in the first place. I admire your courage in trying to find an all-automated solution though 😉.
Yeah, but even StatefulSets don't solve it. They make the hostnames consistent and, if you map external persistent volumes (I prefer not to), consistently mounted, but the fundamental protocol and cluster problems remain. Basically, Redis is a great KV technology that was built pre-cloud.
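To make the gap concrete: the stable name is there, but it is only a name. A quick check, with illustrative names:

```sh
# With a headless service (clusterIP: None) named redis-cluster,
# each pod gets a stable DNS record that survives rescheduling:
getent hosts redis-cluster-0.redis-cluster.default.svc.cluster.local
# -> 10.x.y.z  redis-cluster-0.redis-cluster.default.svc.cluster.local
# But the IP behind the name changes on restart, and Redis Cluster
# gossips peers by IP, not by name, so the stable name alone
# doesn't fix the protocol-level problem.
```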
@sanderploegsma Does this project support k8s 1.8+?
@KeithTt not sure why you're hijacking the thread, but yeah, it should. |
This is a nice clean example of redis cluster running in k8s. The one challenge is the cluster initialization and adding/removing nodes.

Is there any clean self-managing (i.e. autonomous) way to do it? Since you are using a `StatefulSet`, you know the names of the (initial) pods will be `redis-cluster-0`, `redis-cluster-1`, etc. You probably even could do 2 `StatefulSet`s if you wanted guarantees as to which pods are master vs slave.

Is there no way to have `redis-cluster-0` automatically come up and initialize a cluster with `cluster-1` and `cluster-2`, or for that matter, just itself, and have `cluster-1` and `cluster-2` self-register to `cluster-0`? Same for expansion.

In a kube environment, having to do `kubectl exec ...` is not optimal (or recommended). I am kind of surprised that redis doesn't have any option in the config file for saying, "here are your initial cluster peers".

You kind of wish an InitContainer could do this, but they complete before the pod containers run, so that will not help. Perhaps some extension of the image, so it spins up a child process? Or a sidecar container (although that would be a one-off and shouldn't be restarted, whereas restart policies are pod-level, not container-level)? Or a `post-start` lifecycle hook?