This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Deprecate develop branch (#140)
* Feature/zabbix (#110)
* Documentation links fix (#95)
* fix the password to connect to Adminer (#99)
* Update logging doc  (#103)
* fix #105 (#115)
* Feature/rancher proxmox (#117)
* Documentation/binderhub (#112)
* group management (#113)
* Feature/tests (#123) - Setup the testing framework for fadi. Add automated testing of the services using Jest and Puppeteer, test cases and scenarios specifications and implementation.
* Usermanagement documentation (Nifi) + Tensorflow use case (#130)
* NiFi - LDAP Documentation
* Feature/seldon - ML models management (#122)
* Add new flag to helm repo add to overwrite the cetic chart repo if already present (#133)
* Add zakaria2905 to contributors
* Userguide update (#135)
* Monitoring and various documentation fixes (#111)
* Update INSTALL.md
* CI/CD with minikube
* ldap documentation
* elastic-stack ldap documentation
* Details on JHub LDAP documentation
* Helm 3 - Remove deprecated tiller ref, updated traefik install version
* Feature/zabbix (#110)
* Documentation links fix (#95)
* fix the password to connect to Adminer (#99)
* Update logging doc  (#103)
* Zabbix doc: cetic/helm-fadi#27
* fix #105 (#115)
* fix #121

Co-authored-by: Sebastien Dupont <[email protected]>
Co-authored-by: Amen Ayadi <[email protected]>
Co-authored-by: Alexandre Nuttinck <[email protected]>
Co-authored-by: Faiez Zalila <[email protected]>
Co-authored-by: Sellto <[email protected]>
Co-authored-by: Faiez Zalila <[email protected]>
Co-authored-by: Rami Sellami <[email protected]>
8 people authored Nov 15, 2021
1 parent 9478d7f commit 0bdb378
Showing 119 changed files with 7,050 additions and 70 deletions.
13 changes: 12 additions & 1 deletion .all-contributorsrc
@@ -118,7 +118,18 @@
"contributions": [
"review"
]
}
},
{
"login": "zakaria2905",
"name": "zakaria.hajja",
"avatar_url": "https://avatars.githubusercontent.com/u/48456087?v=4",
"profile": "https://github.com/zakaria2905",
"contributions": [
"code",
"doc"
]
},

],
"contributorsPerLine": 6,
"projectName": "fadi",
10 changes: 9 additions & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -21,7 +21,7 @@ A clear and concise description of what the bug is.
Provide the environment in which the bug has happened (minikube on a workstation, full fledged Kubernetes cluster, ...)

* **OS** (e.g. from `/etc/os-release`)
* **VM driver** (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName)
* **VM driver** (e.g. `cat ~/.minikube/machines/minikube/config.json | grep DriverName`)
* **Minikube version** (e.g. `minikube version`)

**What happened**:
@@ -34,6 +34,14 @@ Provide the environment in which the bug has happened

**Output of `minikube logs` (if applicable)**:

**Output of `kubectl` for pods and events**:

```bash
kubectl get events --all-namespaces
kubectl get events -n fadi
kubectl get pods -n fadi
kubectl logs fadi-nifi
```

**Anything else we need to know**:

4 changes: 3 additions & 1 deletion .gitignore
@@ -48,4 +48,6 @@ teardown.log
*.tgz

# https://github.com/ekalinin/github-markdown-toc
gh-md-toc
gh-md-toc

.vscode
12 changes: 12 additions & 0 deletions .gitlab-ci.sample.yml
@@ -9,6 +9,7 @@ stages:
- tf_plan
- tf_apply
- deployWithHelm
- test

variables:
KUBECONFIG: /etc/deploy/config
@@ -132,3 +133,14 @@ deployWithHelm:
url: http://$PROJECT
only:
- master

test:
stage: test
image: ceticasbl/puppeteer-jest
script:
- cd tests/
- npm run test
tags:
- docker
only:
- develop
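
The test job above can presumably also be run outside the CI pipeline (a sketch, assuming Node.js and npm are installed locally and that the dependencies declared in `tests/` install cleanly):

```shell
# Run the FADI end-to-end tests locally, mirroring the CI "test" stage
cd tests/
npm install     # install Jest, Puppeteer and other test dependencies
npm run test    # same entry point as the CI job
```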
20 changes: 16 additions & 4 deletions FAQ.md
@@ -10,9 +10,13 @@ FAQ - Frequently asked questions

In case you encounter an issue with FADI, have a feature request or any other question, feel free to [open an issue](https://github.com/cetic/fadi/issues/new/choose).

## How can I extend FADI?

FADI relies on Helm to integrate the various service together. To add another service to the stack, you can package it inside a [Helm chart](https://helm.sh/docs/howto/) and [add it to your own FADI chart](helm/README.md).
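
As a rough sketch (the chart name `my-service`, its version, and the repository URL are illustrative assumptions, not taken from the FADI docs), declaring an extra service as a dependency of your own umbrella chart and redeploying could look like:

```shell
# Append a dependency entry to the umbrella chart (Helm 3 Chart.yaml style)
cat >> Chart.yaml <<'EOF'
  - name: my-service
    version: 1.0.0
    repository: https://example.com/charts
EOF

# Re-resolve chart dependencies and upgrade the release
helm dependency update
helm upgrade --install fadi . --namespace fadi
```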

## Why "FADI"?

FADI is the acronym for "Framework d'Analyse de Données Industrielles" ("A Framework for Industrial Data Analysis")
FADI is the acronym for "Framework for Automating the Deployment and orchestration of container-based Infrastructures"

## FADI is not working

@@ -22,23 +26,31 @@ Please make sure the following steps have been taken beforehand:

* update Minikube to the latest version
* update Helm to the latest version
* check the logs (`minikube logs`) for any suspicious error message
* check the logs for any suspicious error message:

```bash
minikube logs
kubectl get events --all-namespaces
kubectl get events -n fadi
kubectl get pods -n fadi
kubectl logs fadi-nifi
```

## OSx - slow installation

**Note for Mac users:** you need to change the network interface in the Minikube VM: in the VirtualBox GUI, go to `minikube->Configuration->Network->Interface 1->advanced` and change `Interface Type` to `PCnet-FAST III` (the Minikube VM should be shut down in order to be able to change the network interface: `minikube stop`).

## Windows Installation

This is still not totally supported, some guidelines here #55
Windows support for the Minikube installation should work but is not tested frequently.

## How to configure external access to the deployed services?

When deploying on a generic Kubernetes cluster, you will want to make the services accessible from the outside.

See

* https://github.com/cetic/fadi/blob/feature/documentation/doc/REVERSEPROXY.md for the reverse proxy configuration guide
* [doc/REVERSEPROXY.md](doc/REVERSEPROXY.md) for the reverse proxy configuration guide
* https://github.com/cetic/fadi/issues/81 for port forwarding instructions

In a Minikube setting, make sure the ingress plugin is enabled (`minikube addons enable ingress`), and populate your `/etc/hosts` file accordingly.
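
On Minikube, a minimal sketch of that setup (the hostnames below are illustrative assumptions — use the ones defined in your ingress rules):

```shell
# Enable the ingress controller in Minikube
minikube addons enable ingress

# Point the ingress hostnames at the Minikube VM IP
echo "$(minikube ip)  nifi.fadi.minikube grafana.fadi.minikube" | sudo tee -a /etc/hosts
```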
15 changes: 11 additions & 4 deletions INSTALL.md
@@ -26,8 +26,6 @@ The deployment of the FADI stack is achieved with:
* [Helm v3](https://helm.sh/).
* [Kubernetes](https://kubernetes.io/).

![](doc/images/architecture/helm-architecture.png)

## 1. Local installation

This type of installation provides a quick way to test the platform, and also to adapt it to your needs.
@@ -70,7 +68,7 @@ To get the Kubernetes dashboard, type:
minikube dashboard
```

This will open a browser window with the [Kubernetes Dashboard](http://127.0.0.1:40053/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/), it should look like this:
This will open a browser window with the Kubernetes Dashboard:

![Minikube initial dashboard](doc/images/installation/minikube_dashboard.png)

@@ -195,7 +193,16 @@ It is also possible to create the Kubernetes cluster in command line, see: https
## 4. Troubleshooting

* Installation logs are located in the `helm/deploy.log` file.
* Enable local monitoring in minikube: `minikube addons enable metrics-server`
* Check the Minikube and Kubernetes logs:
```bash
minikube logs
kubectl get events --all-namespaces
kubectl get events -n fadi
kubectl get pods -n fadi
kubectl logs fadi-nifi-xxxxx -n fadi
```
* Enable [metrics server](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server) in minikube: `minikube addons enable metrics-server`
* The [FAQ](FAQ.md) provides some guidance on common issues
* For Windows users, please refer to the following [issue](https://github.com/cetic/fadi/issues/55).

## 5. Continuous integration (CI) and deployment (CD)
4 changes: 3 additions & 1 deletion USERGUIDE.md
@@ -348,7 +348,9 @@ Choose `Minimal environment` and click on `Spawn`.

![Jupyter processing](examples/basic/images/spark_results.png)

For more information on how to use Superset, see the [official Jupyter documentation](https://jupyter.readthedocs.io/en/latest/)

For more information on how to use Jupyter, see the [official Jupyter documentation](https://jupyter.readthedocs.io/en/latest/)


## 7. Summary

6 changes: 3 additions & 3 deletions doc/MONITORING.md
@@ -1,9 +1,9 @@
Montoring
==========
Monitoring
=======

<p align="left";>
<a href="https://www.elastic.co" alt="elk">
<img src="images/logos/zabbix_logo.png" align="center" alt="ELK logo" width="200px" />
<img src="images/logos/zabbix_logo.png" align="center" alt="Zabbix logo" width="200px" />
</a>
</p>

174 changes: 174 additions & 0 deletions doc/RANCHER_PROXMOX.md
@@ -0,0 +1,174 @@
Deploy FADI with Rancher and Proxmox
=============

* [0. Prerequisites](#0-prerequisites)
* [1. Upload RancherOS ISO on Proxmox](#1-upload-rancheros-iso-on-proxmox)
* [2. Add Proxmox docker-machine driver to Rancher](#2-add-proxmox-docker-machine-driver-to-rancher)
* [3. Create the Kubernetes cluster with Rancher](#3-create-the-kubernetes-cluster-with-rancher)
  * [Create Node Template](#create-node-template)
  * [Create the Kubernetes Cluster](#create-the-kubernetes-cluster)
  * [Adding Nodes to the Cluster](#adding-nodes-to-the-cluster)
* [4. Persistent storage configuration](#4-persistent-storage-configuration)
  * [StorageOS](#storageos)
  * [Longhorn](#longhorn)
  * [NFS Server Provisioner](#nfs-server-provisioner)
  * [Manually](#manually)
* [5. Control Cluster from local workstation](#5-control-cluster-from-local-workstation)


This page provides information on how to create a Kubernetes cluster on the [Proxmox](https://www.proxmox.com/en/) IaaS provider using [Rancher](https://rancher.com/).

<a href="https://www.proxmox.com/" title="ProxMox"> <img src="images/logos/Proxmox.png" width="150px" alt="Proxmox" /></a>

> "Proxmox VE is a complete open-source platform for enterprise virtualization. With the built-in web interface you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools on a single solution."

<a href="https://rancher.com/" title="Rancher"> <img src="images/logos/rancher.png" width="150px" alt="Rancher" /></a>

> "Rancher is open source software that combines everything an organization needs to adopt and run containers in production. Built on Kubernetes, Rancher makes it easy for DevOps teams to test, deploy and manage their applications."

## 0. Prerequisites

This documentation assumes the following prerequisites are met:

* a [Proxmox installation](https://pve.proxmox.com/wiki/Installation) on your self hosted infrastructure
* a [Rancher installation](https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/advanced/) that can access the Proxmox cluster; this can run in another Kubernetes cluster to provide high availability, or simply in a virtual machine somewhere on your infrastructure.

## 1. Upload RancherOS ISO on Proxmox

First, download the [rancheros-proxmoxve-autoformat.iso](https://github.com/rancher/os/releases/latest) image and upload it to one of your Proxmox nodes.

## 2. Add Proxmox docker-machine driver to Rancher

Then, you need to allow Rancher to access Proxmox. We contributed to upgrading an existing [docker-machine driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to make it compatible with Rancher.

To add [this driver](https://github.com/lnxbil/docker-machine-driver-proxmox-ve/releases/download/v3/docker-machine-driver-proxmoxve.linux-amd64) to your Rancher installation, follow these steps:

![Proxmox driver](images/installation/proxmoxdriver.gif)

## 3. Create the Kubernetes cluster with Rancher

After connecting to Rancher, follow these steps:

### Create Node Template

This is where you define the templates to use for the nodes (both master and worker nodes). To do so, go to `profile (top right corner)` > `Node templates` > `Add Template`:

Choose `Proxmoxve`:

![Proxmoxve](images/installation/Proxmoxve.png)

and then fill in the rest of the fields:

* IP of the Proxmox host: `proxmoxHost`,
* username/password: `proxmoxUserName`, `proxmoxUserPassword`,
* storage of the image file: `vmImageFile`, which in our case is `local:iso/rancheros-proxmoxve-autoformat.iso`,
* resources you want to allocate to your node: `nodevmCpuCores`, `vmMemory`, `vmStorageSize`.

### Create the Kubernetes Cluster

To create your cluster of virtual machines on Proxmox:

`Cluster` > `Add Cluster` > `Proxmoxve`

You will need to give a name to your cluster, then specify the nodes in the cluster:

* at first, you may want to start with **one master node**,
* give it a name,
* choose the template created earlier for that node,
* tick all 3 boxes for `etcd`, `Control Plane` and `Worker`,
* choose the Kubernetes version,
* and finally click `create`.

> You will have to wait for the `VM creation`, `RancherOS install` and `IP address retrieval` steps, which might take a while.

### Adding Nodes to the Cluster

Once the master node gets its IP address, go to `Cluster` > `Edit Cluster` and add another worker node; untick the `Worker` box for the master node and tick it for the new worker node. It should look something like this:
![Proxmoxve](images/installation/workernode.png)

If a second (or further) node (master or worker) is needed, you can add another one with a different template by following the same steps. You can also add as many nodes as you want using the same template by going to `YourCluster (not global)` > `nodes` > `+`, which adds another node of the same kind:

![Proxmoxve](images/installation/addnode.png)

## 4. Persistent storage configuration

Once all your nodes are up and running, it is time to deploy your services. But before you do, you need to set the default storage class that will provision the persistent volumes.

Several approaches are possible to manage persistent storage. We describe a few of them below and leave it to you to choose the one that best meets your requirements.

### StorageOS

<a href="https://www.storageos.com/" title="storageos"> <img src="images/logos/storageos.svg" width="150px" alt="storageos" /></a>

> *StorageOS is a cloud native storage solution that delivers persistent container storage for your stateful applications in production.
Dynamically provision highly available persistent volumes by simply deploying StorageOS anywhere with a single container.*

To deploy the `StorageOS` volume plugin, go to `YourCluster (not global)` > `system` > `apps` > `launch` and search for `StorageOS`. Make sure all the fields are filled in correctly, as in the following screenshot:

![StorageOSConfig](images/installation/StorageOS.png)

Now, launch it 🚀.

A short animation recaps all these steps:

![StorageOSGuide](images/installation/StorageOSGuide.gif)

Launching apps usually takes several minutes, so expect to wait a little.

StorageOS is a very good turnkey solution. However, the basic license only allows allocating a maximum of 50Gi.

![StorageOS limits](images/installation/StorageOS_limits.png)

Finally, all that remains is to define the **StorageOS** StorageClass as the default one. To do this, go to `Storage` > `StorageClass`, click on the menu (the three little dots on the right side), then click `Set as Default`.

This procedure is shown in the animation below:

![StorageClass](images/installation/StorageClassDefault.gif)

### Longhorn

<a href="https://github.com/longhorn/longhorn" title="longhorn"> <img src="images/logos/longhorn.png" width="150px" alt="longhorn" /></a>

> *Longhorn is a distributed block storage system for Kubernetes. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.*

This tool is very powerful and based on iSCSI technology. Unfortunately, it is not yet supported by RancherOS (the operating system used in this example).

We reported the bugs and problems encountered in two open GitHub issues:

[https://github.com/rancher/os/issues/2937](https://github.com/rancher/os/issues/2937)
[https://github.com/longhorn/longhorn/issues/828](https://github.com/longhorn/longhorn/issues/828)

### NFS Server Provisioner

<a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/volumes-and-storage/examples/nfs/" title="nfs"> <img src="images/logos/nfs.jpg" width="150px" alt="nfs" /></a>

>*The Network File System (NFS) is a client/server application that lets a computer user view and optionally store and update files on a remote computer as though they were on the user's own computer.
NFS Server Provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.*

This solution is very easy to deploy and set up; a basic installation does not require any particular configuration. This plugin supports both the deployment of the NFS server and the management of persistent volumes.

One caveat is that the NFS server is attached to a single node: if that node crashes, data may be lost.

To add this plugin to your cluster, go to `Apps` and click on `Launch`. In the `Search bar`, type `nfs-provisioner`.

![images/installation/nfsapp.png](images/installation/nfsapp.png)

Select the plugin and click the `launch` button 🚀.

### Manually

It is also possible to create the persistent volumes manually. This approach offers complete control of the volumes but is less flexible. If you choose this option, please refer to the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/).
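
A minimal sketch of a manually created volume (the name, capacity, and `hostPath` are illustrative assumptions; a `hostPath` volume is only suitable for single-node testing):

```shell
# Create a PersistentVolume by hand (illustrative values)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fadi-manual-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/fadi-manual-pv
EOF
```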

## 5. Control Cluster from local workstation

There are several ways to interact with your cluster using the `kubectl` command line tool.

First, **Rancher** offers a restricted terminal where only this tool is available. To access it, go to the monitoring page of your cluster and click on the `Launch kubectl` button.

![images/installation/ranchermonitoring.png](images/installation/ranchermonitoring.png)

![images/installation/rancherkubectl.png](images/installation/rancherkubectl.png)

The second approach is to use the `kubectl` tool on your own machine. To do so, go to the monitoring page of your cluster again and click on `Kubeconfig File`. Copy and paste its contents into the `~/.kube/config` file on your machine.
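
A minimal sketch of that second approach (the downloaded file name is an assumption):

```shell
# Install the kubeconfig exported by Rancher and check cluster access
mkdir -p ~/.kube
cp ~/Downloads/cluster.yaml ~/.kube/config   # or paste the Kubeconfig File content manually

# Verify that the cluster answers
kubectl get nodes
```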

> **You can now use your cluster created with Rancher and deployed on Proxmox, enjoy!**
7 changes: 6 additions & 1 deletion doc/README.md
@@ -5,6 +5,11 @@ FADI Documentation
* [Users management](USERMANAGEMENT.md) - user identification and authorization (LDAP, RBAC, ...)
* [Reverse proxy](REVERSEPROXY.md) - Traefik reverse proxy configuration
* [Security](SECURITY.md) - SSL setup
* [Testing](/tests/README.md) - tests for the FADI framework
* [TSimulus](TSIMULUS.md) - how to simulate sensors and generate realistic data with [TSimulus](https://github.com/cetic/TSimulus)

* [Machine learning models management](SELDON.md) - how to package and score machine learning models using [Seldon Core](https://www.seldon.io/tech/products/core/)
* [Sample self-hosted infrastructure](RANCHER_PROXMOX.md) - How to install FADI on a self hosted infrastructure using
* [Proxmox](https://www.proxmox.com/en/) as a self-hosted private cloud (IaaS) provider. It provides virtual machines for the various Kubernetes nodes.
* [Rancher](https://rancher.com/what-is-rancher/what-rancher-adds-to-kubernetes/) to manage (install, provision, maintain, upgrade, ...) several Kubernetes clusters, e.g. when needing several environments on various IaaS providers or several well separated tenant installations, or doing airgapped installations on premises.

For tutorials and examples, see the [examples section](../examples/README.md)
