Commit triggered by a change on the main branch of helm-charts-dev
gr4n0t4 committed Jun 17, 2021
1 parent 9fa55d2 commit d03e377
Showing 88 changed files with 2,320 additions and 100 deletions.
23 changes: 23 additions & 0 deletions charts/ades/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
23 changes: 23 additions & 0 deletions charts/ades/Chart.yaml
@@ -0,0 +1,23 @@
apiVersion: v2
name: ades
description: A Helm chart for the ADES

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.8

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.3.5
156 changes: 156 additions & 0 deletions charts/ades/README.md
@@ -0,0 +1,156 @@
# Helm Chart for the ADES (Application Deployment and Execution Service)

## Prerequisites

* This chart requires Docker Engine 1.8+ on any of its supported platforms. Please see the vendor requirements [here for more information](https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker).
* At least 2GB of RAM. Make sure to assign enough memory to the Docker VM if you are running on Docker for Mac or Windows.

## Chart Components

* Creates an ADES deployment
* Creates a Kubernetes Service on the specified port (default: 80)
* Creates a `processing-manager` ADES service account with an elevated role that allows it to create namespaces and related resources

## Important note about the processing manager

The ADES provisions a new namespace for each processing job submitted. To do so, it uses a dedicated service account created during deployment. This service account has admin privileges and is called `<release-name>-processing-manager`.

## Installing the Chart

You can install the chart with the release name `ades` in the `eoepca` namespace as shown below.

```console
$ helm install ades charts/ades --namespace eoepca
...
```

> Note - If you do not specify a name, helm will select a name for you.

### Stage-in/Out with Stars

By default, the CWL values for stage-in and stage-out are not set, so the default stage-in and stage-out from the [`cwl-wrapper`](https://github.com/EOEPCA/cwl-wrapper) project are used. It is strongly recommended to install the default stage-in and stage-out contained in this repository.
This can be done by installing or upgrading the chart with:

```console
helm upgrade --install ades charts/ades/ --namespace eoepca --set-file workflowExecutor.stagein.cwl=charts/ades/files/cwl/stagein/terradue_stars_t2_latest.cwl --set-file workflowExecutor.stageout.cwl=charts/ades/files/cwl/stageout/terradue_stars_latest.cwl
```
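Equivalently, the same CWL files can be set from a custom values file. The sketch below is hypothetical: the inline layout under `workflowExecutor.stagein.cwl` / `workflowExecutor.stageout.cwl` mirrors the `--set-file` flags above, and the comments stand in for the full CWL files in this repository.

```yaml
# values-stars.yaml - hypothetical values file mirroring the --set-file flags above
workflowExecutor:
  stagein:
    cwl: |
      # full contents of charts/ades/files/cwl/stagein/terradue_stars_t2_latest.cwl
  stageout:
    cwl: |
      # full contents of charts/ades/files/cwl/stageout/terradue_stars_latest.cwl
```

It would then be applied with `helm upgrade --install ades charts/ades/ --namespace eoepca -f values-stars.yaml`.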

These stage-in and stage-out steps include the [Stars](https://github.com/Terradue/Stars) CLI, which reads EOEPCA catalogue references and fetches the referenced assets. During stage-in, the data are also harvested to create a [STAC](https://github.com/radiantearth/stac-spec) catalog describing the staged assets.

### Installed Components

You can use `kubectl get` to view all of the installed components.

```console
$ kubectl get all -l app.kubernetes.io/instance=ades -n eoepca
NAME READY STATUS RESTARTS AGE
pod/ades-66fc8f5566-w7456 2/2 Running 0 6d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ades ClusterIP 172.30.89.159 <none> 80/TCP 8d

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/ades 1 1 1 1 8d

NAME DESIRED CURRENT READY AGE
replicaset.apps/ades-6669bcbc5d 0 0 0 8d
replicaset.apps/ades-66fc8f5566 1 1 1 7d

NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/ades-5w2ww ades.eoepca.com / ades http edge/Redirect None
```

## Connecting to the ADES

1. Run the following command to get the OpenAPI document:

```console
$ curl -H 'Accept: application/json' https://ades-cpe.terradue.com/terradue/wps3/api
```

## Values

The configuration parameters in this section control the resources requested and utilized by the ADES instance.

| Parameter | Description | Default |
| --------------------------------------- | ---------------------------------------------------------------------------------------------- | -------------------------------- |
| clusterAdminRoleName | Name of the role binding for the ADES service account that provisions resources for processing | `cluster-admin` |
| useKubeProxy | Whether the ADES interacts with the Kubernetes cluster via proxy. If `false`, the `workflowExecutor.kubeconfig` file location must be provided | `true` |
| workflowExecutor.kubeconfig | Kube config file used by the ADES to connect to the cluster where resources for processing are provisioned | `files/kubeconfig` |
| workflowExecutor.inputs | Key/value dictionary of input values passed to all nodes of the application workflow. Keys are prefixed with 'ADES_', e.g. 'APP: ades' becomes 'ADES_APP: ades' | `[Empty dictionary]` |
| workflowExecutor.main/stagein/stageout/rulez | Data structure defining the CWL parameters used by [`cwl-wrapper`](https://github.com/EOEPCA/cwl-wrapper) | `empty` |
| workflowExecutor.processingStorageClass | Kubernetes storage class used to provision volumes for processing. Must support ReadWriteMany | `glusterfs-storage` |
| workflowExecutor.processingVolumeTmpSize | Size of the volume for the processing result of one workflow node output | `5Gi` |
| workflowExecutor.processingVolumeOutputSize | Size of the volume for the processing result of the whole workflow output | `10Gi` |
| workflowExecutor.processingMaxRam | Total maximum RAM pool available for all pods running concurrently | `16Gi` |
| workflowExecutor.processingMaxCores | Total maximum CPU cores pool available for all pods running concurrently | `8` |
| workflowExecutor.processingKeepWorkspace | Whether to keep the processing workspace after the job completes | `false` |
| workflowExecutor.stageincwl | Stage-in CWL workflow file path | `files/stageincwl.cwl` |
| workflowExecutor.imagePullSecrets | ImagePullSecrets is an optional list of references to secrets for the processing namespace to use for pulling any of the images used by the processing pods. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod | `[]` |
| wps.maincfgtpl | Main config file template for WPS interface | `files/main.cfg.tpl` |
| wps.usePep | Use the Policy Enforcement Point for registering resources | `false` |
| wps.pepBaseUrl | Policy Enforcement Point Base Url | `https://pep.eoepca.terradue.com` |
| persistence.enabled | Persist the user and processing data of the ADES | `true` |
| persistence.existingUserDataClaim | Identify an existing Claim to be used for the User Data Directory | `Commented Out` |
| persistence.existingProcServicesClaim | Identify an existing Claim to be used for the Processing data directory | `Commented Out` |
| persistence.storageClass | Storage Class to be used | `standard` |
| persistence.userDataAccessMode | Data Access Mode to be used for the user data Directory | `ReadWriteOnce` |
| persistence.userDataSize | PVC Size for user data Directory | `10Gi` |
| persistence.procServicesAccessMode | Data Access Mode to be used for the processing data Directory | `ReadWriteOnce` |
| persistence.procServicesSize | PVC Size for processing data Directory | `5Gi` |
| tolerations | List of node taints to tolerate | `[]` |
| affinity | Map of node/pod affinities | `{}` |
| podSecurityContext | SecurityContext to apply to the pod | `{}` |
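As an illustration of the parameters above, a minimal values fragment might look as follows. This is a sketch: the input key `APP` is only an example, and the other values simply restate the documented defaults.

```yaml
workflowExecutor:
  inputs:
    # each key is exposed to every workflow node with an 'ADES_' prefix,
    # e.g. 'APP: ades' becomes 'ADES_APP: ades'
    APP: ades
  processingStorageClass: glusterfs-storage   # must support ReadWriteMany
  processingMaxRam: 16Gi
  processingMaxCores: 8
```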

## Liveness and Readiness

The ADES instance has liveness and readiness checks specified.

## Resources

You can specify the resource limits for this chart in the values.yaml file. Make sure to comment out or remove the curly brackets from the values.yaml file before specifying resource limits.
Example:

```yaml
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 1
memory: 2Gi
```

## Persistence Examples

Persistence in this chart can be enabled by specifying `persistence.enabled=true`. The path to the user and processing data can be customized to fit different requirements.

* Example 1 - Enable persistence in values.yaml without specifying claim
> Note - This is useful for local development in a minikube environment

```yaml
persistence:
enabled: true
# existingUserDataClaim:
# existingProcServicesClaim:
# storageClass: "-"
userDataAccessMode: ReadWriteOnce
userDataSize: 5Gi
procServicesAccessMode: ReadWriteOnce
procServicesSize: 2Gi
```

* Example 2 - Enable persistence in values.yaml with existing claim
> Note - This is useful for production environments where persistent volumes and claims already exist.

```yaml
persistence:
enabled: true
existingUserDataClaim: pvc-ades-userdata
existingProcServicesClaim: pvc-ades-processingdata
# storageClass: "-"
# userDataAccessMode: ReadWriteOnce
# userDataSize: 1Gi
# procServicesAccessMode: ReadWriteOnce
# procServicesSize: 1Gi
```
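The existing claims referenced in Example 2 could be created beforehand with a manifest along these lines. This is a sketch: the claim name, namespace, storage class, and size are illustrative, chosen to match the examples above.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ades-userdata   # referenced by persistence.existingUserDataClaim
  namespace: eoepca
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```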
18 changes: 18 additions & 0 deletions charts/ades/files/cwl/stagein/eoepca_stage-in_0_2.cwl
@@ -0,0 +1,18 @@
baseCommand: stage-in
class: CommandLineTool
hints:
DockerRequirement:
dockerPull: eoepca/stage-in:0.2
id: stagein
arguments:
- prefix: -t
position: 1
valueFrom: "./"

inputs: {}
outputs: {}
requirements:
EnvVarRequirement:
envDef:
PATH: /opt/anaconda/envs/env_stagein/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ResourceRequirement: {}
48 changes: 48 additions & 0 deletions charts/ades/files/cwl/stagein/eoepca_stage-in_0_9.cwl
@@ -0,0 +1,48 @@

baseCommand: stage-in
arguments: ['-t', './']
class: CommandLineTool
hints:
DockerRequirement:
dockerPull: eoepca/stage-in:0.9
id: stagein
inputs:
stage_in_username:
inputBinding:
position: 1
prefix: -u
type: string?
stage_in_password:
inputBinding:
position: 2
prefix: -p
type: string?
stage_in_s3_endpoint:
inputBinding:
position: 3
prefix: -e
type: string?
stage_in_s3_region:
inputBinding:
position: 4
prefix: -r
type: string?
stage_in_s3_signature_version:
inputBinding:
position: 5
prefix: -s
type: string?
input_reference:
inputBinding:
position: 6
type: string[]
outputs:
results:
outputBinding:
glob: .
type: Any
requirements:
EnvVarRequirement:
envDef:
PATH: /opt/anaconda/envs/env_stagein/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ResourceRequirement: {}
34 changes: 34 additions & 0 deletions charts/ades/files/cwl/stagein/terradue_stars_latest.cwl
@@ -0,0 +1,34 @@
cwlVersion: v1.0
baseCommand: Stars
doc: "Run Stars for staging input data"
class: CommandLineTool
hints:
DockerRequirement:
dockerPull: terradue/stars:latest
id: stars
arguments:
- copy
- -v
- -rel
- -r
- '4'
- -o
- ./
inputs:
ADES_STAGEIN_AWS_SERVICEURL:
type: string?
ADES_STAGEIN_AWS_ACCESS_KEY_ID:
type: string?
ADES_STAGEIN_AWS_SECRET_ACCESS_KEY:
type: string?
outputs: {}
requirements:
EnvVarRequirement:
envDef:
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# AWS__Profile: $(inputs.aws_profile)
# AWS__ProfilesLocation: $(inputs.aws_profiles_location.path)
AWS__ServiceURL: $(inputs.ADES_STAGEIN_AWS_SERVICEURL)
AWS_ACCESS_KEY_ID: $(inputs.ADES_STAGEIN_AWS_ACCESS_KEY_ID)
AWS_SECRET_ACCESS_KEY: $(inputs.ADES_STAGEIN_AWS_SECRET_ACCESS_KEY)
ResourceRequirement: {}
35 changes: 35 additions & 0 deletions charts/ades/files/cwl/stagein/terradue_stars_t2_latest.cwl
@@ -0,0 +1,35 @@
cwlVersion: v1.0
baseCommand: Stars
doc: "Run Stars for staging input data"
class: CommandLineTool
hints:
DockerRequirement:
dockerPull: terradue/stars-t2:0.6.18.19
id: stars
arguments:
- copy
- -v
- -rel
- -r
- '4'
- -o
- ./
- --harvest
inputs:
ADES_STAGEIN_AWS_SERVICEURL:
type: string?
ADES_STAGEIN_AWS_ACCESS_KEY_ID:
type: string?
ADES_STAGEIN_AWS_SECRET_ACCESS_KEY:
type: string?
outputs: {}
requirements:
EnvVarRequirement:
envDef:
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# AWS__Profile: $(inputs.aws_profile)
# AWS__ProfilesLocation: $(inputs.aws_profiles_location.path)
AWS__ServiceURL: $(inputs.ADES_STAGEIN_AWS_SERVICEURL)
AWS_ACCESS_KEY_ID: $(inputs.ADES_STAGEIN_AWS_ACCESS_KEY_ID)
AWS_SECRET_ACCESS_KEY: $(inputs.ADES_STAGEIN_AWS_SECRET_ACCESS_KEY)
ResourceRequirement: {}
49 changes: 49 additions & 0 deletions charts/ades/files/cwl/stageout/eoepca_stage-out_0_2.cwl
@@ -0,0 +1,49 @@
class: CommandLineTool
baseCommand: stage-out
inputs:
job:
type: string
inputBinding:
position: 1
prefix: --job
valueFrom: $( inputs.job )

ADES_STAGEOUT_STORAGE_HOST:
type: string
inputBinding:
position: 2
prefix: --store-host


ADES_STAGEOUT_STORAGE_USERNAME:
type: string
inputBinding:
position: 3
prefix: --store-username


ADES_STAGEOUT_STORAGE_APIKEY:
type: string
inputBinding:
position: 4
prefix: --store-apikey


outputfile:
type: string
inputBinding:
position: 5
prefix: --outputfile
valueFrom: $( inputs.outputfile )

outputs: {}
requirements:
InlineJavascriptRequirement: {}
EnvVarRequirement:
envDef:
PATH: /opt/anaconda/envs/env_stageout/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

ResourceRequirement: {}
hints:
DockerRequirement:
dockerPull: eoepca/stage-out:0.2