While reviewing #368, I noticed a few points that I gathered here.
Indeed, docker-compose seems to be working fine.
While testing the helm chart, I noticed that the current version has quite strong requirements on an existing infra (see `affinity`, for instance). Many sections have already been commented out; IMO `affinity` can stay present but should also be commented out by default.
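To illustrate, the `affinity` block in `values.yaml` could ship commented out like the other optional sections. The keys and values below are purely illustrative, not the chart's actual content:

```yaml
# Optional scheduling constraints -- uncomment and adapt to your infra.
# affinity:
#   nodeAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       nodeSelectorTerms:
#         - matchExpressions:
#             - key: kubernetes.io/hostname
#               operator: In
#               values:
#                 - my-dedicated-node
```

That way a fresh cluster schedules the pods anywhere, and users with dedicated nodes just uncomment and adapt.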
Most defaults in `values.yaml` are really parity-specific.
I don't have a LoadBalancer in my cluster. It is quite easy to pass values so the shard service uses `NodePort`; making this the default may allow more users to have it work right away.
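A minimal sketch of what the shard service section of `values.yaml` could default to (the field names and port numbers here are assumptions, not the chart's actual values):

```yaml
shard:
  service:
    # NodePort works on a bare minikube; LoadBalancer needs cloud infra.
    type: NodePort
    port: 8545
    # nodePort: 30545  # optionally pin the node port (must be in 30000-32767)
```

Users who do run behind a cloud LoadBalancer can still override `type` back with a single `--set`.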
After fixing the above, almost everything started fine, except for an issue with the readiness probe of the frontend: `Readiness probe failed: Get "http://10.244.0.184:8000/": dial tcp 10.244.0.184:8000: connect: connection refused`, and the frontend is stuck in a crash loop. This issue is due to the fact that I switched to `NodePort` but the template only considers `targetPort`, so the two probes are testing on the wrong port. A quick fix is to adjust `targetPort`, but that makes it a bit hacky.
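A cleaner fix might be to name the container port in the deployment template and point both probes at the name, so they keep working regardless of the service type or port. A sketch, assuming the frontend listens on 8000 as the error message suggests:

```yaml
# deployment template (sketch): probes reference the named containerPort,
# not the service's targetPort, so changing the service type cannot break them.
containers:
  - name: frontend
    ports:
      - name: http
        containerPort: 8000
    readinessProbe:
      httpGet:
        path: /
        port: http
    livenessProbe:
      httpGet:
        path: /
        port: http
```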
IMO `values.yaml` should work on a "neutral" cluster such as a brand-new minikube. We could have a `value-parity.yml` in the repo for our infra, but the repo may not be the ideal location for such a file.
Along the same lines, I think the frontend could be exposed as `NodePort` by default.
After fixing the above, I could install the chart. However, my HTTP requests to frontend and shard failed from outside the cluster. A curl from inside the cluster, from a test pod, did respond though.
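For reference, the in-cluster check can be done with a throwaway pod along these lines (the service name `frontend` and port 8000 are assumptions based on the probe error above):

```yaml
# One-off pod that curls the frontend service from inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl
      args: ["-sv", "http://frontend:8000/"]
```

This responding while the NodePort from outside does not would point at the node-port mapping rather than the pods themselves.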