deploy (#35)
* Dev/provider (#5)

* Add keyrock

* Increase chart version

* Add PDP

* Add kong

* Adding AS

* Add AS to DSC values

* Add participant label

* Change issuer version and add data volume

* Fix data volume

* Test AS pre-release

* Change to latest AS release

* Switch to default PDC values

* Rename folder of PDP

* Make DID CM optional

* Renaming walt-id chart

* Renaming default URLs and secret names for walt chart name change

* verifier using did registry (#6)

* Verifier using DID Registry (#8)

* Allow to disable certain apps when deploying with Helm and various fixes for plain Helm deployment with Ingress (#10)

* Allow to disable certain apps when deploying with Helm

* Adding example values file

* Add waltId ingress

* Updating walt-id config and adding keycloak

* Update doc

* Adding verifier

* Add TIL

* Remove doubled PDP app

* Adding Keyrock and dsba-pdp

* Adding kong

* Adding AS

* Extend doc

* Update examples/service-provider-ips/README.md

Fix typo

Co-authored-by: Tim Smyth <[email protected]>

---------

Co-authored-by: Tim Smyth <[email protected]>
Co-authored-by: Tim Smyth <[email protected]>

* Updated images of keycloak-vc-issuer and waltid (#11)

* Update values.yaml

* Update values.yaml (#14)

* Add TMForum APIs (#13)

* Add TMForum APIs

* Remove spec URL

* Switching to Test-Image

---------

Co-authored-by: Stefan Wiedemann <[email protected]>

* Change TMForum chart (#17)

* Add TMForum APIs

* Remove spec URL

* Switching to Test-Image

* Change TMForum chart

---------

Co-authored-by: Stefan Wiedemann <[email protected]>

* enable the proxy (#18)

* Update values.yaml (#20)

* Update values.yaml (#22)

* Extend documentation (#30)

* Extend documentation

* typo

* Extend doc for providing config parameters (#32)

* Extend documentation

* typo

* Extend doc for providing config parameters

* Update README.md

Co-authored-by: Tim Smyth <[email protected]>

---------

Co-authored-by: Tim Smyth <[email protected]>

* Integration with AWS Garnet (#33)

* Adding folder for AWS STF

* Add TOC

* Fix TOC

* rename aws-smart-territory-framework to aws-garnet in file structure

* add content structure to AWS Garnet integration example documentation

* add placeholder EKS nginx Ingress Controller Configuration

* add resources to help deploy eks cluster

* clean up unused resources

* add steps to create eks cluster

* add steps to deploy nginx ingress controller

* restructure readme separating 2 possible configurations

* add modified cdk stack for deployment of aws garnet iot module only

* add steps to deploy isolated aws garnet iot module and integrate to amazon eks cluster

* fix scenario image order

* improve diagram image quality

* fix diagram order

* fix diagram order

* add useful kubectl scripts for debugging

* add separate structures for scenario 1 and scenario 2

* add instructions for scenario2 deployment

* fix scenario 2 disable orion deployment

* add links to internal files in project structure

* add podLogs placeholder for doc links

* Update ToC link

---------

Co-authored-by: EC2 Default User <[email protected]>
Co-authored-by: asanode-aws <[email protected]>

* Update values.yaml

* Added redis caching support (#34)

Co-authored-by: Stefan Wiedemann <[email protected]>

---------

Co-authored-by: Dennis Wendland <[email protected]>
Co-authored-by: Tim Smyth <[email protected]>
Co-authored-by: Tim Smyth <[email protected]>
Co-authored-by: beknazaresenbek <[email protected]>
Co-authored-by: EC2 Default User <[email protected]>
Co-authored-by: asanode-aws <[email protected]>
7 people authored Nov 27, 2023
1 parent 538a884 commit 43fda0f
Showing 93 changed files with 12,628 additions and 12 deletions.
37 changes: 33 additions & 4 deletions README.md
@@ -1,19 +1,48 @@
# FIWARE Data Space Connector

Connector bundling all components
The FIWARE Data Space Connector is an integrated suite of components implementing the DSBA Technical Convergence recommendations, which every organization participating in a data space should deploy in order to "connect" to the data space.

## Deployment with Helm
This repository provides the charts and deployment recipes.

Even thought a gitops-approach, following the app-of-apps pattern, with [ArgoCD](https://argo-cd.readthedocs.io/en/stable/), is the preferred way to deploy the Data-Space-Connector, not everyone has has it available. Therefor, the Data-Space-Connector is also provided as an [Umbrella-Chart](https://helm.sh/docs/howto/charts_tips_and_tricks/#complex-charts-with-many-dependencies), containing all the sub-charts and their dependencies.
More extensive documentation about the connector and the data space flows it supports can be found at the
FIWARE [data-space-connector repository](https://github.com/FIWARE/data-space-connector).



## Deployment


### Deployment with ArgoCD

The FIWARE Data Space Connector is a [Helm](https://helm.sh) chart designed for a gitops-approach, following
the [app-of-apps pattern](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping) with [ArgoCD](https://argo-cd.readthedocs.io/en/stable/).
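If you register the connector with Argo CD yourself rather than through the deployment action described below, the registration could look like the following sketch; the repository URL, path, revision and namespace are illustrative assumptions to adapt to your setup:

```shell
# Illustrative sketch only: register the umbrella chart as an Argo CD application.
# Repository URL, path, revision and namespace are assumptions, not a documented setup.
argocd app create data-space-connector \
  --repo https://github.com/FIWARE-Ops/data-space-connector \
  --path data-space-connector \
  --revision main \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace <TARGET_NAMESPACE> \
  --values values.yaml
```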

This repository already provides a [deployment Github action](.github/workflows/deploy.yaml) compatible with OpenShift clusters. It performs deployments out of
a branch created in the format `deploy/<TARGET_NAMESPACE>` and pulls the `values.yaml` from a specified gitops repository. The action requires the
environment variables `OPENSHIFT_SERVER` and `OPENSHIFT_TOKEN`, specifying the OpenShift target URL and access token, respectively.
To deploy, simply fork this repository, adapt the configuration of the action to your setup and set the necessary variables. After a
`deploy/<TARGET_NAMESPACE>` branch is created, the action performs the deployment to the specified namespace.
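For example, with an illustrative namespace name, triggering a deployment from a fork could look like:

```shell
# "my-namespace" is an illustrative target namespace
git checkout -b deploy/my-namespace
git push origin deploy/my-namespace  # pushing the branch triggers the deployment action
```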

For a different cluster flavor, the GitHub action needs to be adapted accordingly before use.


### Deployment with Helm

Even though a gitops-approach following the app-of-apps pattern with ArgoCD is the preferred way to deploy the Data-Space-Connector, not everyone has it available. Therefore, the Data-Space-Connector is also provided as an [Umbrella-Chart](https://helm.sh/docs/howto/charts_tips_and_tricks/#complex-charts-with-many-dependencies), containing all the sub-charts and their dependencies.

The chart is available at the repository ```https://fiware-ops.github.io/data-space-connector/```. You can install it via:

```shell
# add the repo
helm repo add dsc https://fiware-ops.github.io/data-space-connector/
# install the chart
helm install dsc/data-space-connector
helm install <DeploymentName> dsc/data-space-connector -n <Namespace> -f values.yaml
```
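Configuration changes can later be rolled out with a standard Helm upgrade, for example:

```shell
helm upgrade <DeploymentName> dsc/data-space-connector -n <Namespace> -f values.yaml
```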
**Note:** due to the app-of-apps structure of the connector and the dependencies between its components, a deployment without any configuration values will not work. Make sure to provide a
`values.yaml` file for the deployment, specifying all necessary parameters. This includes the parameters of the connected data space (e.g., trust anchor endpoints), DNS information (Ingress or OpenShift Route parameters),
the structure and type of the required VCs, the internal hostnames of the different connector components, and the configuration of the DID and keys/certs.
Also have a look at the [examples](#examples).

The chart also contains the [argo-cd application templates](./data-space-connector/templates/), so it can also be used to generate argo-deployments. In plain Helm deployments, this should be disabled in the values.yaml:
```yaml
# (diff truncated in this view)
```
2 changes: 1 addition & 1 deletion applications/dsba-pdp/values.yaml
@@ -12,7 +12,7 @@ dsba-pdp:
pullPolicy: Always
repository: quay.io/fiware/dsba-pdp
# includes the http policy support
tag: 1.1.0-pre-30
tag: 1.1.0

# Log level
logLevel: TRACE
6 changes: 3 additions & 3 deletions applications/tm-forum-api/Chart.yaml
@@ -3,10 +3,10 @@ name: tm-forum-api
description: Chart holder for argo-cd

type: application
version: 0.0.9
appVersion: "0.4.1"
version: 0.0.4
appVersion: "0.13.2"

dependencies:
- name: tm-forum-api
version: 0.2.2
version: 0.2.3
repository: https://fiware.github.io/helm-charts
4 changes: 4 additions & 0 deletions applications/tm-forum-api/values.yaml
@@ -71,3 +71,7 @@ tm-forum-api:
- name: service-catalog
image: tmforum-service-catalog
basePath: /tmf-api/serviceCatalogManagement/v4

# redis caching
redis:
enabled: false
2 changes: 1 addition & 1 deletion applications/verifier/values.yaml
@@ -5,7 +5,7 @@ vcverifier:
# Image
image:
repository: quay.io/fiware/vcverifier
tag: 2.1.0
tag: 2.5.0
pullPolicy: Always

# Logging
35 changes: 32 additions & 3 deletions examples/README.md
@@ -1,9 +1,22 @@
# Examples

Different examples for the deployment of the FIWARE Data Space Connector
Different examples for the deployment of the FIWARE Data Space Connector, as well as the integration with
other frameworks.

<details>
<summary><strong>Contents</strong></summary>

## IPS Service Provider (helm)
- [Deployment of service providers](#deployment-of-service-providers)
- [IPS Service Provider (helm)](#ips-service-provider-helm)
- [Packet Delivery Company (ArgoCD)](#packet-delivery-company-argocd)
- [Integration with AWS Garnet Framework](#integration-with-aws-garnet-framework-formerly-aws-smart-territory-framework)

</details>


## Deployment of service providers

### IPS Service Provider (helm)

This is an example of a data service provider, offering a fictitious digital packet delivery service as a company called `IPS`.
@@ -13,11 +26,12 @@ access to the entities of certain delivery orders.

The example uses plain helm for the deployment.

More information can be found here:
* [./service-provider-ips](./service-provider-ips)



## Packet Delivery Company (ArgoCD)
### Packet Delivery Company (ArgoCD)

This is an example of a data service provider called Packet Delivery Company (PDC).

@@ -26,3 +40,18 @@ Basically, it's identical to IPS above, but deployment is performed via

The configuration can be found at the
[fiware-gitops repository](https://github.com/FIWARE-Ops/fiware-gitops/tree/master/aws/dsba/packet-delivery/data-space-connector).




## Integration with AWS Garnet Framework (formerly AWS Smart Territory Framework)

This is an example of a data service provider that is integrated with the
[AWS Garnet Framework (formerly AWS STF)](https://github.com/aws-samples/aws-stf).

In general, this example deploys a data service provider based on the Data Space Connector,
but integrates the FIWARE Context Broker from the STF.

More information can be found here:
* [./aws-garnet](./aws-garnet)

147 changes: 147 additions & 0 deletions examples/aws-garnet/README.md
@@ -0,0 +1,147 @@
# Integration with AWS Garnet Framework

## Overview

AWS Garnet Framework is an open-source framework aimed at simplifying the creation and operation of interoperable platforms across diverse domains, including Smart Cities, Energy, Agriculture, and more.
Compliant with the NGSI-LD open standard and harnessing NGSI-LD compliant Smart Data Models, this solution promotes openness and efficiency.
At its core, AWS Garnet Framework integrates the FIWARE Context Broker, an integral component that facilitates data management.
[In the official project GitHub repository](https://github.com/aws-samples/aws-stf-core), you'll find the necessary resources to deploy both the FIWARE Context Broker and the Garnet IoT module as separate AWS Cloud Development Kit (CDK) nested stacks, offering a flexible and modular approach to enhance and integrate existing solutions over time.

For the context of Data Spaces, the AWS Garnet Framework can be extended with the capabilities of the FIWARE Data Space Connector, which can instrument an existing deployment of the FIWARE Context Broker, as seen in other examples of this repository.

This example provides the procedure to deploy the packet delivery service provider named IPS on AWS. This deployment pattern can be reused to implement data space use cases requiring the infrastructure of the FIWARE Data Space Connector.

## Prerequisites

This deployment example covers two possible initial infrastructure configurations:

* 1/ No existing AWS Garnet Framework deployment in the AWS Account

![Target Architecture for a fresh deployment of AWS Garnet Framework with the DS Connector](./static-assets/garnet-ds-connector-scenario1.png)

* 2/ Existing AWS Garnet Framework deployment in the AWS Account with a Context Broker on AWS ECS Fargate

![Target Architecture for extending the deployment of an existing AWS Garnet Framework](./static-assets/garnet-ds-connector-scenario2.png)

In either case, an Amazon EKS cluster is needed to deploy the Data Space Connector. If there is an existing Amazon EKS cluster in your AWS account, it can be leveraged for this deployment and no additional cluster needs to be created. The next steps walk through deploying a new cluster from scratch for the connector deployment.

### Amazon EKS Cluster Creation
If a dedicated Kubernetes cluster is to be created for the deployment of the FIWARE Data Space Connector, it is recommended to follow the instructions for creating a new Amazon EKS cluster in the [official Amazon EKS Immersion Workshop](https://catalog.workshops.aws/eks-immersionday/en-US/introduction#confirm-eks-setup).

#### AWS EKS Cluster Setup with Fargate Profile

* Assign environment variables to choose the deployment parameters
```shell
export AWS_REGION=eu-west-1
export ekscluster_name="fiware-dsc-cluster"
```

* Create the VPC to host the Amazon EKS cluster on your AWS Account - update the `eks-vpc-3az.yaml` file to select the desired region for your deployment

```shell
aws cloudformation deploy --stack-name "eks-vpc" --template-file "./yaml/eks-vpc-3az.yaml" --capabilities CAPABILITY_NAMED_IAM
```

* Store the VPC ID in an environment variable

```shell
export vpc_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=eks-vpc | jq -r '.Vpcs[].VpcId')
echo $vpc_ID
```

* Export the Subnet ID, CIDR, and Subnet Name to a text file for tracking

```shell
aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)'
echo $vpc_ID > vpc_subnet.txt
aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' >> vpc_subnet.txt
cat vpc_subnet.txt
```

* Store the VPC ID and subnet IDs as environment variables to be used in the next steps

```shell
export PublicSubnet01=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PublicSubnet01/{print $1}')
export PublicSubnet02=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PublicSubnet02/{print $1}')
export PublicSubnet03=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PublicSubnet03/{print $1}')
export PrivateSubnet01=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PrivateSubnet01/{print $1}')
export PrivateSubnet02=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PrivateSubnet02/{print $1}')
export PrivateSubnet03=$(aws ec2 describe-subnets --filter Name=vpc-id,Values=$vpc_ID | jq -r '.Subnets[]|.SubnetId+" "+.CidrBlock+" "+(.Tags[]|select(.Key=="Name").Value)' | awk '/eks-vpc-PrivateSubnet03/{print $1}')
echo "export vpc_ID=${vpc_ID}" | tee -a ~/.bash_profile
echo "export PublicSubnet01=${PublicSubnet01}" | tee -a ~/.bash_profile
echo "export PublicSubnet02=${PublicSubnet02}" | tee -a ~/.bash_profile
echo "export PublicSubnet03=${PublicSubnet03}" | tee -a ~/.bash_profile
echo "export PrivateSubnet01=${PrivateSubnet01}" | tee -a ~/.bash_profile
echo "export PrivateSubnet02=${PrivateSubnet02}" | tee -a ~/.bash_profile
echo "export PrivateSubnet03=${PrivateSubnet03}" | tee -a ~/.bash_profile
source ~/.bash_profile
```

* Use the provided script `eks-cluster-fargateProfiler.sh` [available in this repository](./scripts/eks-cluster-fargateProfiler.sh) to populate your resource IDs and instantiate the Amazon EKS cluster template

```shell
chmod +x ./scripts/eks-cluster-fargateProfiler.sh
./scripts/eks-cluster-fargateProfiler.sh
```

* Create the Amazon EKS Cluster with Fargate Profile using `eksctl`

```shell
eksctl create cluster --config-file=./yaml/eks-cluster-3az.yaml
```

* Create an IAM Identity Mapping to access your Amazon EKS cluster metadata using the AWS Console

```shell
eksctl create iamidentitymapping --cluster fiware-dsc-cluster --arn arn:aws:iam::<YOUR-AWS-ACCOUNT_ID>:role/<YOUR-AWS-ROLE-FOR-ACCESSING-CONSOLE> --group system:masters --username admin
```
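* (Optional) Verify that the mapping was created by listing the cluster's identity mappings:

```shell
# lists the IAM identity mappings of the cluster created above
eksctl get iamidentitymapping --cluster fiware-dsc-cluster --region ${AWS_REGION}
```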

* Check that your cluster is running properly once the AWS CloudFormation stack creation is complete

```shell
kubectl get svc
```

* Configuring an OIDC identity provider (IdP) for the EKS cluster allows you to use AWS IAM roles for Kubernetes service accounts; this requires an IAM OIDC provider in the cluster. Run the command below to associate one with the cluster.

```shell
eksctl utils associate-iam-oidc-provider --region ${AWS_REGION} --cluster fiware-dsc-cluster --approve
```
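* (Optional) As a quick check, compare the cluster's OIDC issuer with the IAM OIDC providers registered in the account:

```shell
# the issuer URL printed here should appear among the account's IAM OIDC providers
aws eks describe-cluster --name fiware-dsc-cluster --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers
```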

#### (OPTIONAL) Install the [AWS Load Balancer Controller](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) add-on to manage ingress configuration
The AWS Load Balancer Controller is a Kubernetes add-on that manages the AWS Elastic Load Balancers (ELB) used by a Kubernetes cluster.
The controller:

* Provisions a new AWS Application Load Balancer (ALB) when a Kubernetes Ingress is created.
* Provisions a new AWS Network Load Balancer (NLB) when a Kubernetes Service of type `LoadBalancer` is created.

It is recommended to follow the official AWS documentation to install the AWS Load Balancer Controller add-on. The step-by-step procedure is available at [this link](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html).
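As a condensed, non-authoritative sketch of that guide (assuming the cluster name used above and an IAM service account for the controller created as described in the AWS documentation), the Helm installation looks roughly like:

```shell
# Sketch following the official AWS guide; verify against the linked documentation.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=fiware-dsc-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```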

#### nginx Ingress Controller Configuration
In AWS, a Network Load Balancer (NLB) is used to expose the Ingress-Nginx Controller behind a Service of `Type=LoadBalancer`. It is advised to follow the [official installation guide](https://kubernetes.github.io/ingress-nginx/deploy/#aws) for the next steps.
A short version of the procedure is reproduced below for a quick setup:

* Create an AWS IAM policy for the Ingress Controller using the provided file `./policies/aws-lbc-iam_policy.json`. The JSON file can also be found [here](https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json).

```shell
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://./policies/aws-lbc-iam_policy.json
```

* Create an IAM Role and ServiceAccount for the AWS Load Balancer controller

```shell
eksctl create iamserviceaccount --cluster=fiware-dsc-cluster --namespace=kube-system --name=ingress-nginx-controller --attach-policy-arn=arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy --override-existing-serviceaccounts --region ${AWS_REGION} --approve
```

* Deploy the Kubernetes Service for the nginx Ingress Controller on your cluster using the provided file `./yaml/nginx-ingress-controller.yaml`. The default deployment manifest is also available at [this link](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml).

```shell
kubectl apply -n kube-system -f ./yaml/nginx-ingress-controller.yaml
```
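* Once the controller is running, the NLB hostname to use for DNS configuration can be read from the controller's Service; the Service name and namespace below are assumptions matching the manifest above:

```shell
# prints the external hostname of the NLB fronting the ingress controller
kubectl get svc -n kube-system ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```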

## Next Steps
Once your Amazon EKS cluster is ready, head to the step-by-step procedure that best describes your current environment:

* [1/ No existing AWS Garnet Framework deployment in the AWS Account](./scenario-1-deployment/)

* [2/ Existing AWS Garnet Framework deployment in the AWS Account with a Context Broker on AWS ECS Fargate](./scenario-2-deployment/)
