Helm with dashboard changes #4

Closed · wants to merge 25 commits
Commits
53fc39a
Helm charts for kubernetes components of mvd applications.
farhin23 Oct 12, 2023
c797e70
Helm charts moved to separate directory.
farhin23 Oct 13, 2023
8175869
Script files for building docker images and installing helm charts.
farhin23 Oct 13, 2023
a02c98e
Update instructions in README file.
farhin23 Oct 13, 2023
4c53eeb
Update instructions for Helm in README file.
farhin23 Oct 13, 2023
4812388
Update k8s-resource folder
farhin23 Nov 3, 2023
3d290b0
Implement Helm charts for company data dashboard
farhin23 Nov 3, 2023
2796c49
Add commands for installing data dashboards
farhin23 Nov 3, 2023
3c86d23
Implement ingress for company data dashboard
farhin23 Nov 16, 2023
1a9a42d
Implement ingress for company and update corresponding "managementApi…
farhin23 Nov 16, 2023
6fc47c5
Update azurite service, declare NodePort
farhin23 Nov 16, 2023
938e54c
Update company and company-dashboard manifest files, disable ingress …
farhin23 Nov 29, 2023
8aea229
Update app.config file management and catalog API endpoint
farhin23 Nov 29, 2023
2828369
Add kind config file
farhin23 Nov 29, 2023
e711810
Update README.md for deploying MVD with Kind
farhin23 Nov 29, 2023
0c7890e
Implement Ingress for companies(connector)
farhin23 Feb 9, 2024
5cf458f
Implement Ingress for company dashboards
farhin23 Feb 9, 2024
2228e6a
Add command files
farhin23 Feb 9, 2024
2dab017
fix: Allow write permission for volumes. Fixes 'file transfer termina…
farhin23 Feb 29, 2024
a42e81e
Update formatting in ConfigMaps
farhin23 Feb 29, 2024
d804dbf
Implement configuration for debugging purpose
farhin23 Mar 5, 2024
4163011
Update README.md
farhin23 Mar 12, 2024
2382287
Update README.md
farhin23 Mar 12, 2024
42d74ed
Update docker image version format
farhin23 Mar 12, 2024
aae628b
Update README.md - title and tools description
farhin23 Mar 12, 2024
8 changes: 8 additions & 0 deletions launchers/connector/build.gradle.kts
@@ -85,3 +85,11 @@ tasks.withType<com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar> {
    dependsOn(distTar, distZip)
    mustRunAfter(distTar, distZip)
}

tasks.withType<JavaCompile> {
    options.isDebug = true
    options.compilerArgs.add("-g")
}
114 changes: 114 additions & 0 deletions system-tests/helm/README.md
@@ -0,0 +1,114 @@
# MVD on Kubernetes
We have demonstrated a containerized deployment of the MVD in [system-tests/README.md](../README.md). In this section,
we deploy the MVD on [Kubernetes](https://kubernetes.io/docs/home/) and use [Helm](https://helm.sh/docs/) to manage all
the Kubernetes YAML files.


## Install Tools
For the deployment we will need:
* A Kubernetes cluster, for which we have used [kind](https://kind.sigs.k8s.io/) (version 0.20.0). Follow the official [user guide](https://kind.sigs.k8s.io/docs/user/quick-start/)
to install `kind` in your local environment.
* [kubectl](https://kubernetes.io/docs/reference/kubectl/), to communicate with the Kubernetes cluster. Install `kubectl`
following the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/).
* Helm, to manage our Kubernetes components. We have used Helm 3 (version v3.14.2). For the installation,
follow the [instructions](https://helm.sh/docs/intro/install/) on the official website (a sketch of typical install commands follows this list).
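
For reference, one way to install these tools on a Linux amd64 machine, following the tools' official installation docs (adjust the OS/architecture segments of the URLs for your platform):
```bash
# kind v0.20.0
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

# kubectl (latest stable release)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl

# Helm 3 (the script installs the latest Helm 3; pin it if you need exactly v3.14.2)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```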


## MVD build tasks
Build `MVD` by running the following command from the root of the `MVD` project folder:
```bash
./gradlew build
```
Then execute the following commands from the `MVD` root folder to build the connector JAR and the registration service JAR:
```bash
./gradlew -DuseFsVault="true" :launchers:connector:shadowJar
./gradlew -DuseFsVault="true" :launchers:registrationservice:shadowJar
```


## MVD DataDashboard
Clone the repository [edc-dashboard](https://github.com/FraunhoferISST/edc-dashboard) and check out
the branch `helm_dashboard_changes`.
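
For example:
```bash
git clone https://github.com/FraunhoferISST/edc-dashboard.git
cd edc-dashboard
git checkout helm_dashboard_changes
```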

## Create Cluster
- Navigate to the helm directory ([/system-tests/helm](../../system-tests/helm)): `cd system-tests/helm/`

- Set the environment variable `MVD_UI_PATH` to the path of the DataDashboard repository.
```bash
export MVD_UI_PATH="/path/to/mvd-datadashboard"
```
- Run the following command to build the necessary images from [docker-compose.yml](./docker-compose.yml):
```bash
docker compose -f docker-compose.yml build
```
- Execute the following script to create a Kubernetes cluster:
```bash
./kind-run.sh
```
[kind-run.sh](./kind-run.sh) is a bash script that contains the commands to:
* create a cluster with the configuration defined in the [kind-cluster.yaml](./kind-cluster.yaml) file,
* load the Docker images into the cluster, and
* deploy the NGINX ingress controller (a minimal sketch of such a script follows this list).
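
As an illustration only (the committed [kind-run.sh](./kind-run.sh) is authoritative; the cluster name and the ingress manifest URL below are assumptions), such a script boils down to:
```bash
#!/usr/bin/env bash
set -euo pipefail

# Create the cluster from the config file in this directory (cluster name assumed)
kind create cluster --name mvd --config kind-cluster.yaml

# Load the locally built images into the kind nodes
kind load docker-image --name mvd \
  registration-service:v0.2.0 edc-connector:v0.2.0 cli-tools:v0.2.0 \
  edc-connector-dashboard-company1:v0.2.0 \
  edc-connector-dashboard-company2:v0.2.0 \
  edc-connector-dashboard-company3:v0.2.0

# Deploy the NGINX ingress controller for kind
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```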


## Run MVD
Execute the following command to check whether the NGINX ingress controller is ready:
```bash
kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller
```
Once the condition is met, execute the following command:
```bash
./run-mvd.sh
```
The [run-mvd.sh](./run-mvd.sh) script contains the commands that install the Helm charts, which deploy the Kubernetes
components of the MVD into our cluster.
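
Conceptually the script amounts to installing every chart under [helm-charts](./helm-charts); a sketch only (the literal script also defines the release names and the install order):
```bash
# Install each chart in helm-charts/, using the directory name as the release name
for chart in helm-charts/*/; do
  helm install "$(basename "$chart")" "$chart"
done
```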


Check whether the `cli-tools` container has registered all participants successfully: run `kubectl get pods`, copy the
name of the `cli-tools` pod, and execute `kubectl logs <cli-tools-pod>`. If the log shows that all the participants
(e.g. `company1`, `company2`, `company3`) are `ONBOARDED`, then all participants have been registered successfully.
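
The two steps can be combined into one command; this assumes the pod name starts with `cli-tools`:
```bash
kubectl logs "$(kubectl get pods -o name | grep cli-tools | head -n 1)"
```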


### Company DataDashboards
All the company dashboards can be accessed at the following URLs (a quick reachability check follows the list):
* company1-dashboard: <http://localhost/company1-datadashboard/>
* company2-dashboard: <http://localhost/company2-datadashboard/>
* company3-dashboard: <http://localhost/company3-datadashboard/>
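
To verify that the ingress routes respond before opening a browser, a simple check such as the following can be used:
```bash
for c in company1 company2 company3; do
  printf '%s: ' "$c"
  curl -s -o /dev/null -w '%{http_code}\n' "http://localhost/${c}-datadashboard/"
done
```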

Initially it may take some time to load all the data. Once everything has loaded properly,
each company will have two assets in its `Assets` tab. Company1 and company2 will have six
assets in the `Catalog Browser`, while company3 will display three assets in its `Catalog Browser`.


### Run A Standard Scenario Locally

1. Create a test document for company1:

- Follow the instructions in the `Run A Standard Scenario Locally` section of the root [README.md](https://github.com/FraunhoferISST/edc-mvd/blob/cc5cc02d8ca0ee69052ca765f611abe3ad82f5f8/README.md) to connect
to the storage account of company1.
- Replace `localhost:10000` with `localhost:31000`. If you are using a connection string,
then use:
```bash
DefaultEndpointsProtocol=http;AccountName=company1assets;AccountKey=key1;BlobEndpoint=http://127.0.0.1:31000/company1assets;
```

- Follow the instructions there to create a container and add a test file named `test-document.txt` (an Azure CLI example follows).
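
If you prefer the Azure CLI over Azure Storage Explorer, the container and the test file can be created as follows; the container name `src-container` is only an example, use whatever the root README's scenario expects:
```bash
CONN='DefaultEndpointsProtocol=http;AccountName=company1assets;AccountKey=key1;BlobEndpoint=http://127.0.0.1:31000/company1assets;'

echo "Test file for the MVD transfer scenario" > test-document.txt

# Container name is only an example
az storage container create --name src-container --connection-string "$CONN"
az storage blob upload --container-name src-container --name test-document.txt \
  --file test-document.txt --connection-string "$CONN"
```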

2. Request the file from company2:

* Open the dashboard of company2: <http://localhost/company2-datadashboard/>
* Go to `Catalog Browser` and select `Negotiate` on the asset `test-document_company1`
* Go to `Contracts` and click `Transfer` on the negotiated contract
* Select `AzureStorage` from the dropdown and click `Start transfer`
* Wait for the transfer-complete message

3. Verify if the transfer was successful:
* Connect to the storage account of company2; the process is the same as for company1.
Use the account name `company2assets`
and the account key `key2`. If using a connection string, then use:
```bash
DefaultEndpointsProtocol=http;AccountName=company2assets;AccountKey=key2;BlobEndpoint=http://127.0.0.1:31000/company2assets;
```

* If the transfer was successful, there will be a new container under `Blob containers` containing the files
`test-document.txt` and `.complete` (a CLI check follows).
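
The check can also be done with the Azure CLI instead of Azure Storage Explorer; the destination container name is generated during the transfer, so list the containers first:
```bash
CONN='DefaultEndpointsProtocol=http;AccountName=company2assets;AccountKey=key2;BlobEndpoint=http://127.0.0.1:31000/company2assets;'

# Find the newly created destination container
az storage container list --connection-string "$CONN" --output table

# Then list its blobs; expect test-document.txt and .complete
az storage blob list --container-name <destination-container> --connection-string "$CONN" --output table
```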
41 changes: 41 additions & 0 deletions system-tests/helm/docker-compose.yml
@@ -0,0 +1,41 @@
services:

  # Dataspace registration service authority.
  registration-service:
    build:
      context: ../../launchers/registrationservice
      args:
        JVM_ARGS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5008"
    image: registration-service:v0.2.0

  edc-connector:
    build:
      context: ../../launchers/connector
    image: edc-connector:v0.2.0

  cli-tools:
    build:
      context: ../resources/cli-tools
    image: cli-tools:v0.2.0


  # connector-dashboards
  edc-connector-dashboard-company1:
    build:
      context: ${MVD_UI_PATH}
      args:
        BASE_PATH: "/company1-datadashboard/"
    image: edc-connector-dashboard-company1:v0.2.0
  edc-connector-dashboard-company2:
    build:
      context: ${MVD_UI_PATH}
      args:
        BASE_PATH: "/company2-datadashboard/"
    image: edc-connector-dashboard-company2:v0.2.0
  edc-connector-dashboard-company3:
    build:
      context: ${MVD_UI_PATH}
      args:
        BASE_PATH: "/company3-datadashboard/"
    image: edc-connector-dashboard-company3:v0.2.0

23 changes: 23 additions & 0 deletions system-tests/helm/helm-charts/azurite/.helmignore
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
24 changes: 24 additions & 0 deletions system-tests/helm/helm-charts/azurite/Chart.yaml
@@ -0,0 +1,24 @@
apiVersion: v2
name: azurite
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
22 changes: 22 additions & 0 deletions system-tests/helm/helm-charts/azurite/templates/NOTES.txt
@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "azurite.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "azurite.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "azurite.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "azurite.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}
62 changes: 62 additions & 0 deletions system-tests/helm/helm-charts/azurite/templates/_helpers.tpl
@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "azurite.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "azurite.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "azurite.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "azurite.labels" -}}
helm.sh/chart: {{ include "azurite.chart" . }}
{{ include "azurite.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "azurite.selectorLabels" -}}
app.kubernetes.io/name: {{ include "azurite.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "azurite.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "azurite.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
@@ -0,0 +1,8 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "azurite.fullname" . }}-configmap
  labels:
    {{- include "azurite.labels" . | nindent 4 }}
data:
  AZURITE_ACCOUNTS: "company1assets:key1;company2assets:key2;company3assets:key3"
28 changes: 28 additions & 0 deletions system-tests/helm/helm-charts/azurite/templates/deployment.yaml
@@ -0,0 +1,28 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "azurite.fullname" . }}
  labels:
    {{- include "azurite.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "azurite.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "azurite.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
          envFrom:
            - configMapRef:
                name: {{ include "azurite.fullname" . }}-configmap
32 changes: 32 additions & 0 deletions system-tests/helm/helm-charts/azurite/templates/hpa.yaml
@@ -0,0 +1,32 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "azurite.fullname" . }}
  labels:
    {{- include "azurite.labels" . | nindent 4 }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "azurite.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
{{- end }}
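
The azurite templates above read their settings from the chart's values file, which is not part of this hunk. As a hedged illustration of the keys they consume (the key names come from the templates; the image repository value is an assumption), an install with explicit overrides could look like:
```bash
helm install azurite ./helm-charts/azurite \
  --set image.repository=mcr.microsoft.com/azure-storage/azurite \
  --set image.pullPolicy=IfNotPresent \
  --set service.type=NodePort \
  --set service.port=10000 \
  --set autoscaling.enabled=false \
  --set replicaCount=1
```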