Set up test deployment on our test k8s cluster #27

Closed
amoeba opened this issue Mar 26, 2021 · 5 comments

amoeba commented Mar 26, 2021

As of #3, @ThomasThelen has gotten Slinky running under minikube. The next thing to work on is getting a real deployment going under k8s so we can test all the odds and ends of getting that done (i.e., Docker Hub, ingress, etc.).

Ideally, things would be hooked up well enough that I can, as a developer, make changes to services like the worker or scheduler and see those changes reflected in the test cluster relatively quickly.
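
As a rough illustration of that workflow, and only a sketch (the image name, labels, and namespace below are placeholders, not the actual Slinky manifests), a worker Deployment could pull its image from Docker Hub and re-pull on every rollout, so a newly pushed tag shows up in the test cluster quickly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: slinky-worker            # hypothetical name, for illustration only
  namespace: slinky
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slinky-worker
  template:
    metadata:
      labels:
        app: slinky-worker
    spec:
      containers:
      - name: worker
        image: example/slinky-worker:latest   # placeholder Docker Hub image
        imagePullPolicy: Always               # re-pull on each rollout so new pushes show up quickly

After pushing a new image, kubectl rollout restart deployment/slinky-worker -n slinky would pick up the change.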

amoeba added this to the 0.2.0 milestone Mar 26, 2021
@ThomasThelen

I think one of the last things we'll need to do is switch the volume driver over to Ceph. I'll be providing an estimate of the amount of space our volumes will need. Persistent volumes are defined at the cluster level, which the slinky account doesn't have access to (for security reasons), so an estimate of the space should help the administrators decide how much space to allocate for PVs on the cluster, which our PVCs can then claim.

More information on Ceph can be found in the NCEAS Computing repository.
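
To make the sizing discussion concrete, here is a rough sketch of what a claim from the slinky namespace might look like once the administrators expose a Ceph-backed StorageClass; the class name, claim name, and size are assumptions, not actual cluster values:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: virtuoso-data            # hypothetical claim name
  namespace: slinky
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd     # assumed name of an admin-provided Ceph StorageClass
  resources:
    requests:
      storage: 50Gi              # placeholder; the real number is the estimate mentioned above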

gothub commented May 1, 2021

@ThomasThelen @amoeba A PV is set up on the k8s dev cluster that slinky can use. The PV was created with the admin context.
Using the slinky kubeconfig that is in Keybase, here is an example (run on my laptop) of creating a slinky PVC that uses the PV:

avatar:slinky slaughter$ kubectl create -f pvc.yaml
persistentvolumeclaim/slinky-pvc created
avatar:slinky slaughter$ kubectl get pvc
NAME         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
slinky-pvc   Bound    nfs2-pv   50Gi       RWX                           6s
avatar:slinky slaughter$ more pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: slinky-pvc
  namespace: slinky
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set
  volumeName: nfs2-pv

The PV and PVC use NFS, which is temporary until Ceph can be set up, but this should be sufficient for testing.
Please let me know if you encounter any problems with the PV or PVC.
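
For reference, mounting that claim from a workload is just a matter of referencing it by name. A minimal sketch (the pod name, image, and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-smoke-test           # hypothetical pod, just to exercise the claim
  namespace: slinky
spec:
  containers:
  - name: shell
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data           # assumed mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: slinky-pvc      # the claim created above

kubectl exec -n slinky pvc-smoke-test -- df -h /data can then confirm the volume is mounted.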

amoeba commented May 5, 2021

Thanks @gothub! I talked with @ThomasThelen and he's going to test it out soon.

ThomasThelen commented Jun 29, 2021

The core services are deployed and stable; once we determine precisely how we want to expose Virtuoso, we can close this issue.
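
One option, sketched below purely for discussion, would be a ClusterIP Service in front of Virtuoso plus an Ingress route; the hostname, service name, and selector are assumptions, not a decision on how we'll actually expose it:

apiVersion: v1
kind: Service
metadata:
  name: virtuoso                 # hypothetical service name
  namespace: slinky
spec:
  selector:
    app: virtuoso
  ports:
  - port: 8890                   # Virtuoso's default HTTP/SPARQL port
    targetPort: 8890
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtuoso
  namespace: slinky
spec:
  rules:
  - host: sparql.example.org     # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: virtuoso
            port:
              number: 8890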

@ThomasThelen

Linked to #41 because once it's accepted, we'll have a fully protected dev deployment that we can test on. It's also been confirmed that the graph is being populated using the geolink convention.
