Set up test deployment on our test k8s cluster #27
I think one of the last things we'll need to do is switch the volume driver over to Ceph. I'll be providing an estimate for the amount of space our volumes will need. Persistent volumes are defined at the cluster level, which the slinky account doesn't have access to (for security reasons). An estimate of the space should help administrators decide how much storage to allocate for PVs on the cluster, which the PVCs can then claim. More information on Ceph can be found in the NCEAS Computing repository.
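As a rough sketch of that split (the claim name, namespace, and size below are assumptions, not the real estimate), an administrator provisions the PV at the cluster level while the slinky namespace only defines a claim like this:

```yaml
# Hypothetical PVC created in the slinky namespace; it binds to a PV
# that a cluster admin has provisioned ahead of time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtuoso-data        # assumed claim name
  namespace: slinky
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi          # placeholder; the real space estimate goes here
```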
@ThomasThelen @amoeba A PV is set up on the k8s dev cluster that slinky can use. The PV was created with the admin context.
The PV and PVC use NFS, which is temporary until Ceph can be set up, but this should be sufficient for testing.
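For reference, a minimal NFS-backed PV of the kind described here could look like the following; the server address, export path, and size are placeholders rather than the actual dev-cluster values:

```yaml
# Sketch of an admin-created, NFS-backed PV; all values are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: slinky-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.org   # placeholder NFS server
    path: /exports/slinky     # placeholder export path
```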
Thanks @gothub! I talked with @ThomasThelen and he's going to test it out soon.
The core services are deployed and stable; once we determine precisely how we want to expose Virtuoso, we can close this issue.
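One option for exposing Virtuoso (not decided here) would be an Ingress in front of a ClusterIP Service; the hostname and Service name below are assumptions:

```yaml
# Hypothetical Ingress routing external traffic to a Virtuoso Service
# on its default HTTP/SPARQL port (8890); host and names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtuoso
  namespace: slinky
spec:
  rules:
    - host: sparql.example.org          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: virtuoso          # assumed Service name
                port:
                  number: 8890          # Virtuoso's default HTTP/SPARQL port
```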
Linked to #41 because once it's accepted, we'll have a fully protected dev deployment that we can test on. It's also been confirmed that the graph is being populated using the GeoLink convention.
As of #3, @ThomasThelen has gotten Slinky running under Minikube, so the next step is to get a real deployment going under k8s so we can work through all the odds and ends of getting that done (i.e., Docker Hub, ingress, etc.).
Ideally, things would be hooked up well enough that I can, as a developer, make changes to services like the worker or scheduler and see those changes reflected in the test cluster relatively quickly.
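A minimal sketch of how that loop could work, assuming images are pushed to Docker Hub under a hypothetical dataone/slinky-worker tag: the Deployment always pulls its image, so pushing a new build and restarting the rollout refreshes the test cluster.

```yaml
# Sketch only: a Deployment that always pulls its image from Docker Hub,
# so a developer can push a new build and then run
#   kubectl rollout restart deployment/worker -n slinky
# to see the change on the test cluster. The image name is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: slinky
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: dataone/slinky-worker:latest  # hypothetical Docker Hub image
          imagePullPolicy: Always
```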