
Add the ability to mount multiple persistent volumes by using the EAP operator #190

Open
yersan opened this issue Mar 26, 2021 · 6 comments

yersan commented Mar 26, 2021

Overview

At the moment there is no way to mount arbitrary persistent volumes into the server pods created by the Operator.

The Operator allows configuring a persistent volume for ${jboss.server.data.dir} through the storage attribute of the Custom Resource Definition (CRD). The Persistent Volume Claim (PVC) that requests a volume binding for the server data directory is created automatically by the Operator. This volume is never shared with other pod replicas.

Users may have other requirements. For example, they may need to mount additional volumes on specific paths outside of the server directories and, optionally, share such a volume across all replicas of the server pod by using an existing Persistent Volume Claim available in the pod's namespace.

The goal of this feature is to expose the standard Kubernetes PersistentVolumeClaim and VolumeMount elements in the Operator CRD so that users can add PersistentVolumeClaims per pod and mount them into the server pod.
The current storage configuration will continue to handle only the ${jboss.server.data.dir} persistent volume.

Optionally, we could also allow adding any volume type supported by the cloud provider by configuring Volume, and make such volumes available as shared volumes.

Issue Metadata

https://issues.redhat.com/browse/EAP7-1675

Related Issues

Dev Contacts

jacopotessera (Community user)

QE Contacts

TBD

Testing By

TBD

Affected Projects or Components

WildFly Operator

Other Interested Projects

N/A

Requirements

Hard Requirements

  • The following element will be available in the CRD:
    • VolumeClaimTemplates: list of claim configurations that pods are allowed to reference.
  • The configuration available for VolumeMount will be directly available on each VolumeClaimTemplates entry.

Configuration example

spec:
  applicationImage: "....."
  replicas: 2
  volumeClaimTemplates:
  - name: log-storage
    accessModes: [ "ReadWriteOnce" ]
    storage: 1Gi
    mountPath: /var/logs

This configuration will create the following PersistentVolumeClaims:
  • log-storage-0 (always bound to the first replica)
  • log-storage-1 (always bound to the second replica)
The volume of each claim will be mounted at /var/logs.
The storage is not shared across pod replicas.
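
To illustrate the mapping, here is a minimal Go sketch of how a reconciler could translate such entries into the StatefulSet. The VolumeClaimTemplate type and the applyVolumeClaimTemplates helper are hypothetical illustrations (not existing WildFly Operator code), built on the standard k8s.io/api types:

package controller

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// VolumeClaimTemplate mirrors one entry of the proposed volumeClaimTemplates
// CRD field from the example above (hypothetical, not a released API).
type VolumeClaimTemplate struct {
	Name        string
	AccessModes []corev1.PersistentVolumeAccessMode
	Storage     resource.Quantity
	MountPath   string
}

// applyVolumeClaimTemplates copies each CR entry into the StatefulSet spec;
// the StatefulSet controller then derives one PVC per replica
// (log-storage-0, log-storage-1, ...).
func applyVolumeClaimTemplates(sts *appsv1.StatefulSet, templates []VolumeClaimTemplate) {
	for _, t := range templates {
		sts.Spec.VolumeClaimTemplates = append(sts.Spec.VolumeClaimTemplates,
			corev1.PersistentVolumeClaim{
				ObjectMeta: metav1.ObjectMeta{Name: t.Name},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: t.AccessModes,
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{corev1.ResourceStorage: t.Storage},
					},
				},
			})
		// Mount the per-replica volume into the server container at the
		// path requested in the CR.
		c := &sts.Spec.Template.Spec.Containers[0]
		c.VolumeMounts = append(c.VolumeMounts, corev1.VolumeMount{
			Name:      t.Name,
			MountPath: t.MountPath,
		})
	}
}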

Nice-to-Have Requirements

  • Also add the ability to configure Volumes on the CR to mount them as shared volumes:
    • Volumes: list of volumes that can be mounted by containers belonging to the pod.
  • The configuration available for VolumeMount will be directly available on the Volumes element.
  • It is left to the user application to deal with concurrency when two application instances access the shared volume simultaneously.

Configuration example

spec:
  applicationImage: "....."
  replicas: 2
  volumes:
  - name: shared-storage
    persistentVolumeClaim:
      claimName: shared-storage-pvc
    mountPath: /usr/share

This configuration will not create any PVC. It assumes a PVC named shared-storage-pvc already exists in the namespace where the CR is being created.
The volume of the claim will be mounted at /usr/share.
The storage is shared across all pod replicas.
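
Continuing the hypothetical controller package from the earlier sketch (same imports), a shared volume could be wired into the pod template like this; applySharedVolume is an illustrative name, not existing Operator code:

// applySharedVolume references a pre-existing PVC from the pod template, so
// every replica mounts the same claim and the volume is shared across pods.
func applySharedVolume(sts *appsv1.StatefulSet, name, claimName, mountPath string) {
	sts.Spec.Template.Spec.Volumes = append(sts.Spec.Template.Spec.Volumes,
		corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
					ClaimName: claimName, // e.g. "shared-storage-pvc", created by the user
				},
			},
		})
	c := &sts.Spec.Template.Spec.Containers[0]
	c.VolumeMounts = append(c.VolumeMounts, corev1.VolumeMount{
		Name:      name,
		MountPath: mountPath,
	})
}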

Non-Requirements

N/A

Test Plan

  • A unit test validating that the logic creates the StatefulSet with the expected information (see the sketch below).
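
A minimal sketch of such a test, assuming the hypothetical VolumeClaimTemplate type and applyVolumeClaimTemplates helper from the earlier sketch:

package controller

import (
	"testing"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func TestApplyVolumeClaimTemplates(t *testing.T) {
	// Start from a StatefulSet with a single server container, as the
	// Operator would build it before applying the CR's volume config.
	sts := &appsv1.StatefulSet{
		Spec: appsv1.StatefulSetSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "wildfly"}}},
			},
		},
	}
	applyVolumeClaimTemplates(sts, []VolumeClaimTemplate{{
		Name:        "log-storage",
		AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		Storage:     resource.MustParse("1Gi"),
		MountPath:   "/var/logs",
	}})
	// The StatefulSet must carry the claim template and the matching mount.
	if n := len(sts.Spec.VolumeClaimTemplates); n != 1 {
		t.Fatalf("expected 1 claim template, got %d", n)
	}
	if got := sts.Spec.VolumeClaimTemplates[0].Name; got != "log-storage" {
		t.Errorf("expected claim template log-storage, got %s", got)
	}
	mounts := sts.Spec.Template.Spec.Containers[0].VolumeMounts
	if len(mounts) != 1 || mounts[0].MountPath != "/var/logs" {
		t.Fatalf("expected a single mount at /var/logs, got %v", mounts)
	}
}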

Community Documentation

The user guide and the WildFlyServer CRD documentation will be updated to reflect the changes introduced by this RFE.

Release Note Content

Added the ability to mount additional volumes into the server pod.


jmesnil commented Mar 29, 2021

As a part of that RFE, should we also allow configuring volumeClaimTemplates for the StatefulSet?


jmesnil commented Mar 29, 2021

Ideally, as part of this RFE, I would like to deprecate the StorageSpec, but we would need a way to properly configure the HOME directory in the bootable JAR case to be able to mount the volume at the right location (corresponding to server.data.dir).


yersan commented Mar 29, 2021

Ideally, as part of this RFE, I would like to deprecate the StorageSpec, but we would need a way to properly configure the HOME directory in the bootable JAR case to be able to mount the volume at the right location (corresponding to server.data.dir).

One thing we have to pay attention to when making server.data.dir configurable is that this directory must not be shared across server replicas; each pod should get its own directory. For this reason, I initially saw the StorageSpec as a good way to keep this under control and dedicated to the server data storage only, although its name is not very descriptive for this single purpose.

If we deprecate it in favor of volumes/volume mounts controlled by the users, we should avoid letting users choose any existing PVC and keep the configuration under control to prevent unwanted situations.


yersan commented Apr 6, 2021

As a part of that RFE, should we also allow configuring volumeClaimTemplates for the StatefulSet?

@jmesnil It would be useful to the users as well. We could then cover two use cases here:

  1. Be able to share the same PVC across all StatefulSet instances. A nice-to-have here would be a VolumeClaim configuration available in the CRD as well, so users who want this use case could define the PVC configuration directly in the CRD without having to create the PVC manually on the cluster.
  2. Be able to create PVCs via volumeClaimTemplates. With volumeClaimTemplates, each storage will be independent and not shared across server instances.

We can cover both on the same RFE.


jmesnil commented Apr 22, 2021

For this RFE, we should focus on #2 to have a separate persistent volume for each pod.

Shared storage (#1) might be useful in general, but it is better addressed with something on top of the raw volumes (a DB, a shared cache).


yersan commented Jun 3, 2021

@jmesnil I added the shared storage as a nice-to-have.
