From f1c20b6988081db942ea4bf06713897332130ea9 Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 15:59:15 -0500 Subject: [PATCH 1/8] added info --- .pre-commit-config.yaml | 4 +- docs/get-started/allocation/coldfront.md | 30 ++-- .../best-practices-for-harvard.md | 20 +-- .../best-practices/best-practices.md | 50 +++--- .../billing-process-for-harvard.md | 10 +- .../cost-billing/how-pricing-works.md | 20 +-- .../explore-the-jupyterlab-environment.md | 18 +-- .../model-serving-in-the-rhoai.md | 74 ++++----- .../testing-model-in-the-rhoai.md | 70 ++++----- .../using-projects-the-rhoai.md | 32 ++-- .../get-started/rhoai-overview.md | 24 +-- docs/openshift-ai/index.md | 20 +-- .../the-rhoai-dashboard-overview.md | 14 +- ...jupyter-notebook-use-gpus-aiml-modeling.md | 6 +- ...ss-s3-data-then-download-and-analyze-it.md | 18 +-- .../scaling-and-performance-guide.md | 80 +++++----- .../decommission-openshift-resources.md | 16 +- .../get-started/openshift-overview.md | 120 +++++++-------- docs/openshift/index.md | 26 ++-- .../access-the-openshift-web-console.md | 8 +- .../openshift/logging-in/the-openshift-cli.md | 8 +- .../logging-in/web-console-overview.md | 14 +- .../access-and-security/create-a-key-pair.md | 6 +- .../access-and-security/security-groups.md | 38 ++--- .../domain-names-for-your-vms.md | 12 +- .../set-up-a-private-network.md | 12 +- .../how-to-build-windows-image.md | 12 +- .../openstack/backup/backup-with-snapshots.md | 26 ++-- .../bastion-host-based-ssh/index.md | 26 ++-- .../create-a-Windows-VM.md | 16 +- .../create-and-connect-to-the-VM/flavors.md | 6 +- .../launch-a-VM.md | 18 +-- .../ssh-to-the-VM.md | 30 ++-- .../using-vpn/sshuttle/index.md | 4 +- .../using-vpn/wireguard/index.md | 16 +- .../data-transfer/data-transfer-from-to-vm.md | 110 ++++++------- .../decommission-openstack-resources.md | 16 +- docs/openstack/index.md | 66 ++++---- .../access-the-openstack-dashboard.md | 8 +- .../logging-in/dashboard-overview.md | 58 +++---- docs/openstack/management/vm-management.md | 48 +++--- .../launch-a-VM-using-openstack-CLI.md | 10 +- docs/openstack/openstack-cli/openstack-CLI.md | 28 ++-- .../attach-the-volume-to-an-instance.md | 4 +- .../create-an-empty-volume.md | 4 +- .../persistent-storage/delete-volumes.md | 4 +- .../persistent-storage/detach-a-volume.md | 4 +- .../persistent-storage/extending-volume.md | 8 +- .../format-and-mount-the-volume.md | 14 +- .../mount-the-object-storage.md | 102 ++++++------- .../persistent-storage/object-storage.md | 90 +++++------ .../persistent-storage/transfer-a-volume.md | 28 ++-- docs/openstack/persistent-storage/volumes.md | 32 ++-- docs/other-tools/CI-CD/CI-CD-pipeline.md | 18 +-- .../setup-github-actions-pipeline.md | 22 +-- .../jenkins/setup-jenkins-CI-CD-pipeline.md | 76 ++++----- docs/other-tools/apache-spark/spark.md | 110 ++++++------- docs/other-tools/index.md | 8 +- docs/other-tools/kubernetes/k0s.md | 50 +++--- .../k3s/k3s-ha-cluster-using-k3d.md | 2 +- .../kubernetes/k3s/k3s-ha-cluster.md | 44 +++--- .../kubernetes/k3s/k3s-using-k3d.md | 16 +- .../kubernetes/k3s/k3s-using-k3sup.md | 12 +- docs/other-tools/kubernetes/k3s/k3s.md | 114 +++++++------- docs/other-tools/kubernetes/kind.md | 20 +-- .../kubeadm/HA-clusters-with-kubeadm.md | 144 +++++++++--------- .../single-master-clusters-with-kubeadm.md | 102 ++++++------- docs/other-tools/kubernetes/kubespray.md | 96 ++++++------ docs/other-tools/kubernetes/microk8s.md | 56 +++---- docs/other-tools/kubernetes/minikube.md | 96 ++++++------ 
.../nfs/nfs-server-client-setup.md | 32 ++-- nerc-theme/main.html | 6 +- 72 files changed, 1281 insertions(+), 1281 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index fad79e48..c9fe717d 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -12,7 +12,7 @@ repos: - id: remove-tabs - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.6.0 + rev: v5.0.0 hooks: - id: trailing-whitespace args: [--markdown-linebreak-ext=md] @@ -37,7 +37,7 @@ repos: exclude: .*.param.yaml - repo: https://github.com/igorshubovych/markdownlint-cli - rev: v0.41.0 + rev: v0.43.0 hooks: - id: markdownlint args: [-c, .markdownlint.yaml, --fix] diff --git a/docs/get-started/allocation/coldfront.md b/docs/get-started/allocation/coldfront.md index 43aef008..6966c462 100644 --- a/docs/get-started/allocation/coldfront.md +++ b/docs/get-started/allocation/coldfront.md @@ -30,28 +30,28 @@ can see an administrative view of it as [described here](../allocation/allocation-details.md#pi-and-manager-view) and can do the following tasks: -- **Only PI** can add a new project and archive any existing project(s) +- **Only PI** can add a new project and archive any existing project(s) -- Manage existing projects +- Manage existing projects -- Request allocations that fall under projects in NERC's resources such as clusters, - cloud resources, servers, storage, and software licenses +- Request allocations that fall under projects in NERC's resources such as clusters, + cloud resources, servers, storage, and software licenses -- Add/remove user access to/from allocated resources who is a member of the project - without requiring system administrator interaction +- Add/remove user access to/from allocated resources who is a member of the project + without requiring system administrator interaction -- Elevate selected users to 'manager' status, allowing them to handle some of the - PI asks such as request new resource allocations, add/remove users to/from resource - allocations, add project data such as grants and publications +- Elevate selected users to 'manager' status, allowing them to handle some of the + PI asks such as request new resource allocations, add/remove users to/from resource + allocations, add project data such as grants and publications -- Monitor resource utilization such as storage and cloud usage +- Monitor resource utilization such as storage and cloud usage -- Receive email notifications for expiring/renewing access to resources as well - as notifications when allocations change status - i.e. Active, Active (Needs - Renewal), Denied, Expired +- Receive email notifications for expiring/renewing access to resources as well + as notifications when allocations change status - i.e. Active, Active (Needs + Renewal), Denied, Expired -- Provide information such as grants, publications, and other reportable data for - periodic review by center director to demonstrate need for the resources +- Provide information such as grants, publications, and other reportable data for + periodic review by center director to demonstrate need for the resources ## How to login to NERC's ColdFront? 
diff --git a/docs/get-started/best-practices/best-practices-for-harvard.md b/docs/get-started/best-practices/best-practices-for-harvard.md index b853e7bb..ae7f2016 100644 --- a/docs/get-started/best-practices/best-practices-for-harvard.md +++ b/docs/get-started/best-practices/best-practices-for-harvard.md @@ -100,9 +100,9 @@ Attackers often review running code on the server to see if they can obtain any sensitive credentials that may have been included in each script. To better manage your credentials, we recommend either using: -- [1password Credential Manager](https://1password.com/) +- [1password Credential Manager](https://1password.com/) -- [AWS Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) +- [AWS Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) #### Not Running the Application as the Root/Superuser @@ -136,13 +136,13 @@ and we recommend the following ways to strengthen your SSH accounts 1. Disable password only logins - - In file `/etc/ssh/sshd_config` change `PasswordAuthentication` to `no` to - disable tunneled clear text passwords i.e. `PasswordAuthentication no`. + - In file `/etc/ssh/sshd_config` change `PasswordAuthentication` to `no` to + disable tunneled clear text passwords i.e. `PasswordAuthentication no`. - - Uncomment the permit empty passwords option in the second line, and, if - needed, change `yes` to `no` i.e. `PermitEmptyPasswords no`. + - Uncomment the permit empty passwords option in the second line, and, if + needed, change `yes` to `no` i.e. `PermitEmptyPasswords no`. - - Then run `service ssh restart`. + - Then run `service ssh restart`. 2. Use SSH keys with passwords enabled on them @@ -191,11 +191,11 @@ In the event you suspect a security issue has occurred or wanted someone to supp a security assessment, please feel free to reach out to the HUIT Internet Security and Data Privacy group, specifically the Operations & Engineering team. -- [Email Harvard ITSEC-OPS](mailto:itsec-ops@harvard.edu) +- [Email Harvard ITSEC-OPS](mailto:itsec-ops@harvard.edu) -- [Service Queue](https://harvard.service-now.com/ithelp?id=submit_ticket&sys_id=3f1dd0320a0a0b99000a53f7604a2ef9) +- [Service Queue](https://harvard.service-now.com/ithelp?id=submit_ticket&sys_id=3f1dd0320a0a0b99000a53f7604a2ef9) -- [Harvard HUIT Slack](https://harvard-huit.slack.com) Channel: **#isdp-public** +- [Harvard HUIT Slack](https://harvard-huit.slack.com) Channel: **#isdp-public** ## Further References diff --git a/docs/get-started/best-practices/best-practices.md b/docs/get-started/best-practices/best-practices.md index 2bd7257f..9e438918 100644 --- a/docs/get-started/best-practices/best-practices.md +++ b/docs/get-started/best-practices/best-practices.md @@ -35,33 +35,33 @@ containers, workloads, and any code or data generated by the platform. All NERC users are responsible for their use of NERC services, which include: -- Following the best practices for security on NERC services. Please review your - institutional guidelines [next](best-practices-for-my-institution.md). +- Following the best practices for security on NERC services. Please review your + institutional guidelines [next](best-practices-for-my-institution.md). -- Complying with security policies regarding VMs and containers. NERC admins are - not responsible for maintaining or deploying VMs or containers created by PIs - for their projects. See Harvard University and Boston University policies - [here](https://nerc.mghpcc.org/privacy-and-security/). 
We will be adding more - institutions under this page soon. Without prior notice, NERC reserves the right - to shut down any VM or container that is causing internal or external problems - or violating these policies. +- Complying with security policies regarding VMs and containers. NERC admins are + not responsible for maintaining or deploying VMs or containers created by PIs + for their projects. See Harvard University and Boston University policies + [here](https://nerc.mghpcc.org/privacy-and-security/). We will be adding more + institutions under this page soon. Without prior notice, NERC reserves the right + to shut down any VM or container that is causing internal or external problems + or violating these policies. -- Adhering to institutional restrictions and compliance policies around the data - they upload and provide access to/from NERC. At NERC, we only offer users to - store internal data in which information is chosen to keep confidential but the - disclosure of which would not cause material harm to you, your users and your - institution. Your institution may have already classified and categorized data - and implemented security policies and guidance for each category. If your project - includes sensitive data and information then you might need to contact NERC's - admin as soon as possible to discuss other potential options. +- Adhering to institutional restrictions and compliance policies around the data + they upload and provide access to/from NERC. At NERC, we only offer users to + store internal data in which information is chosen to keep confidential but the + disclosure of which would not cause material harm to you, your users and your + institution. Your institution may have already classified and categorized data + and implemented security policies and guidance for each category. If your project + includes sensitive data and information then you might need to contact NERC's + admin as soon as possible to discuss other potential options. -- [Backups and/or snapshots](../../openstack/backup/backup-with-snapshots.md) - are the user's responsibility for volumes/data, configurations, objects, and - their state, which are useful in the case when users accidentally delete/lose - their data. NERC admins cannot recover lost data. In addition, while NERC stores - data with high redundancy to deal with computer or disk failures, PIs should - ensure they have off-site backups for disaster recovery, e.g., to deal with - occasional disruptions and outages due to the natural disasters that impact the - MGHPCC data center. +- [Backups and/or snapshots](../../openstack/backup/backup-with-snapshots.md) + are the user's responsibility for volumes/data, configurations, objects, and + their state, which are useful in the case when users accidentally delete/lose + their data. NERC admins cannot recover lost data. In addition, while NERC stores + data with high redundancy to deal with computer or disk failures, PIs should + ensure they have off-site backups for disaster recovery, e.g., to deal with + occasional disruptions and outages due to the natural disasters that impact the + MGHPCC data center. --- diff --git a/docs/get-started/cost-billing/billing-process-for-harvard.md b/docs/get-started/cost-billing/billing-process-for-harvard.md index 5f1c7aff..f95afc27 100644 --- a/docs/get-started/cost-billing/billing-process-for-harvard.md +++ b/docs/get-started/cost-billing/billing-process-for-harvard.md @@ -32,11 +32,11 @@ Please follow these two steps to ensure proper billing setup: !!! 
abstract "What if you already have an existing Customer Code?" - Please note that if you already have an existing active NERC account, you - need to provide your HUIT Customer Code to NERC. If you think your department - may already have a HUIT account but you don’t know the corresponding Customer - Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) - to get the required Customer Code. + Please note that if you already have an existing active NERC account, you + need to provide your HUIT Customer Code to NERC. If you think your department + may already have a HUIT account but you don’t know the corresponding Customer + Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) + to get the required Customer Code. 2. During the Resource Allocation review and approval process, we will utilize the HUIT "Customer Code" provided by the PI in step #1 to align it with the approved diff --git a/docs/get-started/cost-billing/how-pricing-works.md b/docs/get-started/cost-billing/how-pricing-works.md index 5b25d5ef..9baf6da3 100644 --- a/docs/get-started/cost-billing/how-pricing-works.md +++ b/docs/get-started/cost-billing/how-pricing-works.md @@ -54,11 +54,11 @@ of the base SU for the maximum resource they reserve. **GPU SU Example:** -- A Project or VM with: +- A Project or VM with: `1 A100 GPU, 24 vCPUs, 95MiB RAM, 199.2hrs` -- Will be charged: +- Will be charged: `1 A100 GPU SUs x 200hrs (199.2 rounded up) x $1.803` @@ -66,11 +66,11 @@ of the base SU for the maximum resource they reserve. **OpenStack CPU SU Example:** -- A Project or VM with: +- A Project or VM with: `3 vCPU, 20 GiB RAM, 720hrs (24hr x 30days)` -- Will be charged: +- Will be charged: `5 CPU SUs due to the extra RAM (20GiB vs. 12GiB(3 x 4GiB)) x 720hrs x $0.013` @@ -91,7 +91,7 @@ of the base SU for the maximum resource they reserve. **OpenShift CPU SU Example:** -- Project with 3 Pods with: +- Project with 3 Pods with: i. `1 vCPU, 3 GiB RAM, 720hrs (24hr*30days)` @@ -99,7 +99,7 @@ of the base SU for the maximum resource they reserve. iii. `2 vCPU, 4 GiB RAM, 720hrs (24hr*30days)` -- Project Will be charged: +- Project Will be charged: `RoundUP(Sum(` @@ -161,11 +161,11 @@ provisioned until it is deleted. **Storage Example 1:** -- Volume or VM with: +- Volume or VM with: `500GiB for 699.2hrs` -- Will be charged: +- Will be charged: `.5 Storage TiB SU (.5 TiB x 700hrs) x $0.009 TiB/hr` @@ -173,11 +173,11 @@ provisioned until it is deleted. **Storage Example 2:** -- Volume or VM with: +- Volume or VM with: `10TiB for 720hrs (24hr x 30days)` -- Will be charged: +- Will be charged: `10 Storage TiB SU (10TiB x 720 hrs) x $0.009 TiB/hr` diff --git a/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md b/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md index f3d2c173..2105c38f 100644 --- a/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md +++ b/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md @@ -83,15 +83,15 @@ And a cell where we have entered some Python code: ![Jupyter Cell With Python Code](images/jupyter-cell-with-code.png) -- Code cells contain Python code that can be run interactively. It means that you - can modify the code, then run it, but only for this cell, not for the whole - content of the notebook! The code will not run on your computer or in the browser, - but directly in the environment you are connected to NERC RHOAI. 
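As a side note on the `how-pricing-works.md` examples above, the service-unit arithmetic can be sanity-checked with a few lines of Python (for instance, in a notebook cell like the one pictured). This is only an illustrative sketch: the $0.013 and $1.803 rates and the 1 vCPU / 4 GiB CPU-SU bundle are taken from the examples in this patch and are not authoritative.

```python
import math

def cpu_su_charge(vcpus, ram_gib, hours, rate_per_su_hr=0.013):
    """Estimate an OpenStack CPU SU charge.

    A CPU SU bundles 1 vCPU with 4 GiB RAM, so the SU count is driven by
    whichever resource (CPU or RAM) needs more SUs; hours are rounded up.
    The rate defaults to the example value above and may change.
    """
    sus = max(vcpus, math.ceil(ram_gib / 4))
    return sus * math.ceil(hours) * rate_per_su_hr

# OpenStack CPU SU example above: 3 vCPU, 20 GiB RAM, 720 hrs
# -> 5 SUs x 720 hrs x $0.013
print(cpu_su_charge(3, 20, 720))      # 46.80

# GPU SU example above: 1 A100 SU x 200 hrs (199.2 rounded up) x $1.803
print(1 * math.ceil(199.2) * 1.803)   # 360.60
```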
- -- To run a code cell, you simply select it (select the cell, or on the left side - of it), and select the Run/Play button from the toolbar (you can also press - `CTRL+Enter` to run a cell, or `Shift+Enter` to run the cell and automatically - select the following one). +- Code cells contain Python code that can be run interactively. It means that you + can modify the code, then run it, but only for this cell, not for the whole + content of the notebook! The code will not run on your computer or in the browser, + but directly in the environment you are connected to NERC RHOAI. + +- To run a code cell, you simply select it (select the cell, or on the left side + of it), and select the Run/Play button from the toolbar (you can also press + `CTRL+Enter` to run a cell, or `Shift+Enter` to run the cell and automatically + select the following one). The Run button on the toolbar: diff --git a/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md b/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md index ca3d2648..2f715713 100644 --- a/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md @@ -4,9 +4,9 @@ To run a **model server** and **deploy a model** on it, you need to have: -- Select the correct data science project and create workbench, see [Populate - the data science project](using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see [Populate + the data science project](using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. ## Create a data connection @@ -20,17 +20,17 @@ Data connections are configurations for remote data location. Within this window enter the information about the S3-compatible object bucket where the model is stored. Enter the following information: -- **Name**: The name you want to give to the data connection. +- **Name**: The name you want to give to the data connection. -- **Access Key**: The access key to the bucket. +- **Access Key**: The access key to the bucket. -- **Secret Key**: The secret for the access key. +- **Secret Key**: The secret for the access key. -- **Endpoint**: The endpoint to connect to the storage. +- **Endpoint**: The endpoint to connect to the storage. -- **Region**: The region to connect to the storage. +- **Region**: The region to connect to the storage. -- **Bucket**: The name of the bucket. +- **Bucket**: The name of the bucket. **NOTE**: However, you are not required to use the S3 service from **Amazon Web Services (AWS)**. Any S3-compatible storage i.e. NERC OpenStack Container (Ceph), @@ -88,24 +88,24 @@ following details: ![Configure A New Model Server](images/configure-a-new-model-server.png) -- **Model server name** +- **Model server name** -- **Serving runtime**: either "OpenVINO Model Server" or "OpenVINO Model Server - (Supports GPUs)" +- **Serving runtime**: either "OpenVINO Model Server" or "OpenVINO Model Server + (Supports GPUs)" -- **Number of model server replicas**: This is the number of instances of the - model server engine that you want to deploy. You can scale it up as needed, - depending on the number of requests you will receive. +- **Number of model server replicas**: This is the number of instances of the + model server engine that you want to deploy. 
You can scale it up as needed, + depending on the number of requests you will receive. -- **Model server size**: This is the amount of resources, CPU, and RAM that will - be allocated to your server. Select the appropriate configuration for size and - the complexity of your model. +- **Model server size**: This is the amount of resources, CPU, and RAM that will + be allocated to your server. Select the appropriate configuration for size and + the complexity of your model. -- **Model route**: Check this box if you want the serving endpoint (the model serving - API) to be accessible outside of the OpenShift cluster through an external route. +- **Model route**: Check this box if you want the serving endpoint (the model serving + API) to be accessible outside of the OpenShift cluster through an external route. -- **Token authorization**: Check this box if you want to secure or restrict access - to the model by forcing requests to provide an authorization token. +- **Token authorization**: Check this box if you want to secure or restrict access + to the model by forcing requests to provide an authorization token. After adding and selecting options within the **Add model server** pop-up window, click **Add** to create the model server. @@ -146,17 +146,17 @@ initiate the Deploy model pop-up window as shown below: Enter the following information for your new model: -- **Model Name**: The name you want to give to your model (e.g., "coolstore"). +- **Model Name**: The name you want to give to your model (e.g., "coolstore"). -- **Model framework (name-version)**: The framework used to save this model. - At this time, OpenVINO IR or ONNX or Tensorflow are supported. +- **Model framework (name-version)**: The framework used to save this model. + At this time, OpenVINO IR or ONNX or Tensorflow are supported. -- **Model location**: Select the data connection that you created to store the - model. Alternatively, you can create another data connection directly from this - menu. +- **Model location**: Select the data connection that you created to store the + model. Alternatively, you can create another data connection directly from this + menu. -- **Folder path**: If your model is not located at the root of the bucket of your - data connection, you must enter the path to the folder it is in. +- **Folder path**: If your model is not located at the root of the bucket of your + data connection, you must enter the path to the folder it is in. For our example project, let's name the **Model** as "coolstore", select "onnx-1" for the framework, select the Data location you created before for the @@ -184,16 +184,16 @@ for the gRPC and the REST URLs for the inference endpoints as shown below: **Notes:** -- The REST URL displayed is only the base address of the endpoint. You must - append `/v2/models/name-of-your-model/infer` to it to have the full address. - Example: `http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer` +- The REST URL displayed is only the base address of the endpoint. You must + append `/v2/models/name-of-your-model/infer` to it to have the full address. + Example: `http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer` -- The full documentation of the API (REST and gRPC) is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md). +- The full documentation of the API (REST and gRPC) is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md). 
-- The gRPC proto file for the Model Server is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto). +- The gRPC proto file for the Model Server is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto). -- If you have exposed the model through an external route, the Inference endpoint - displays the full URL that you can copy. +- If you have exposed the model through an external route, the Inference endpoint + displays the full URL that you can copy. !!! note "Important Note" diff --git a/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md b/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md index 0b30b181..40b375ae 100644 --- a/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md @@ -10,17 +10,17 @@ we can test it. ![Jupyter Hub Control Panel Menu](images/juyter-hub-control-panel-menu.png) -- In your project in JupyterLab, open the notebook `03_remote_inference.ipynb` - and follow the instructions to see how the model can be queried. +- In your project in JupyterLab, open the notebook `03_remote_inference.ipynb` + and follow the instructions to see how the model can be queried. -- Update the `grpc_url` as [noted before](model-serving-in-the-rhoai.md#deploy-the-model) - for the **the grpc URL value** from the deployed model on the NERC RHOAI Model - server. +- Update the `grpc_url` as [noted before](model-serving-in-the-rhoai.md#deploy-the-model) + for the **the grpc URL value** from the deployed model on the NERC RHOAI Model + server. ![Change grpc URL Value](images/change-grpc-url-value.png) -- Once you've completed the notebook's instructions, the object detection model - can isolate and recognize T-shirts, bottles, and hats in pictures, as shown below: +- Once you've completed the notebook's instructions, the object detection model + can isolate and recognize T-shirts, bottles, and hats in pictures, as shown below: ![Model Test to Detect Objects In An Image](images/model-test-object-detection.png) @@ -68,36 +68,36 @@ environment, or directly [here](https://github.com/nerc-project/nerc_rhoai_mlops To deploy the Pre-Post Processing Service service and the Application: -- From your [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org/), - navigate to your project corresponding to the _NERC RHOAI Data Science Project_ - and select the "Import YAML" button, represented by the "+" icon in the top - navigation bar as shown below: +- From your [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org/), + navigate to your project corresponding to the _NERC RHOAI Data Science Project_ + and select the "Import YAML" button, represented by the "+" icon in the top + navigation bar as shown below: ![YAML Add Icon](images/yaml-upload-plus-icon.png) -- Verify that you selected the correct project. +- Verify that you selected the correct project. ![Correct Project Selected for YAML Editor](images/project-verify-yaml-editor.png) -- Copy/Paste the content of the file `pre_post_processor_deployment.yaml` inside - the opened YAML editor. If you have named your model **coolstore** as instructed, - you're good to go. If not, modify the value on **[line # 35](https://github.com/nerc-project/nerc_rhoai_mlops/blob/33b3b7fa7448756f3defb3d6ae793524d1c5ff14/deployment/pre_post_processor_deployment.yaml#L35C23-L35C32)** - with the name you set. 
You can then click the **Create** button as shown below: +- Copy/Paste the content of the file `pre_post_processor_deployment.yaml` inside + the opened YAML editor. If you have named your model **coolstore** as instructed, + you're good to go. If not, modify the value on **[line # 35](https://github.com/nerc-project/nerc_rhoai_mlops/blob/33b3b7fa7448756f3defb3d6ae793524d1c5ff14/deployment/pre_post_processor_deployment.yaml#L35C23-L35C32)** + with the name you set. You can then click the **Create** button as shown below: ![YAML Editor Add Pre-Post Processing Service Content](images/pre_post_processor_deployment-yaml-content.png) -- Once Resource is successfully created, you will see the following screen: +- Once Resource is successfully created, you will see the following screen: ![Resources successfully created Importing More YAML](images/yaml-import-new-content.png) -- Click on "Import more YAML" and Copy/Paste the content of the file `intelligent_application_deployment.yaml` - inside the opened YAML editor. Nothing to change here, you can then click the - **Create** button as shown below: +- Click on "Import more YAML" and Copy/Paste the content of the file `intelligent_application_deployment.yaml` + inside the opened YAML editor. Nothing to change here, you can then click the + **Create** button as shown below: ![YAML Editor Pre-Post Processing Service Content](images/intelligent_application_deployment-yaml-content.png) -- If both deployments are successful, you will be able to see both of them grouped - under "intelligent-application" on the **Topology View** menu, as shown below: +- If both deployments are successful, you will be able to see both of them grouped + under "intelligent-application" on the **Topology View** menu, as shown below: ![Intelligent Application Under Topology](images/intelligent_application-topology.png) @@ -112,18 +112,18 @@ You have first to allow it to use your camera, this is the interface you get: You have: -- The current view of your camera. +- The current view of your camera. -- A button to take a picture as shown here: +- A button to take a picture as shown here: ![Capture Camera Image](images/capture-camera-image.png) -- A button to switch from front to rear camera if you are using a phone: +- A button to switch from front to rear camera if you are using a phone: ![Switch Camera View](images/switch-camera-view.png) -- A **QR code** that you can use to quickly open the application on a phone - (much easier than typing the URL!): +- A **QR code** that you can use to quickly open the application on a phone + (much easier than typing the URL!): ![QR code](images/QR-code.png) @@ -137,14 +137,14 @@ below: There are two parameters you can change on this application: -- On the `ia-frontend` Deployment, you can modify the `DISPLAY_BOX` environment - variable from `true` to `false`. It will hide the bounding box and the inference - score, so that you get only the coupon flying over the item. +- On the `ia-frontend` Deployment, you can modify the `DISPLAY_BOX` environment + variable from `true` to `false`. It will hide the bounding box and the inference + score, so that you get only the coupon flying over the item. -- On the `ia-inference` Deployment, the one used for pre-post processing, you - can modify the `COUPON_VALUE` environment variable. The format is simply an - Array with the value of the coupon for the 3 classes: bottle, hat, shirt. As - you see, these values could be adjusted in real time, and this could even be - based on another ML model! 
+- On the `ia-inference` Deployment, the one used for pre-post processing, you + can modify the `COUPON_VALUE` environment variable. The format is simply an + Array with the value of the coupon for the 3 classes: bottle, hat, shirt. As + you see, these values could be adjusted in real time, and this could even be + based on another ML model! --- diff --git a/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md b/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md index 7b6d8c15..3974e29f 100644 --- a/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md @@ -27,17 +27,17 @@ details page, as shown below: Within the data science project, you can add the following configuration options: -- **Workbenches**: Development environments within your project where you can access - notebooks and generate models. +- **Workbenches**: Development environments within your project where you can access + notebooks and generate models. -- **Cluster storage**: Storage for your project in your OpenShift cluster. +- **Cluster storage**: Storage for your project in your OpenShift cluster. -- **Data connections**: A list of data sources that your project uses. +- **Data connections**: A list of data sources that your project uses. -- **Pipelines**: A list of created and configured pipeline servers. +- **Pipelines**: A list of created and configured pipeline servers. -- **Models and model servers**: A list of models and model servers that your project - uses. +- **Models and model servers**: A list of models and model servers that your project + uses. As you can see in the project's details figure, our selected data science project currently has no workbenches, storage, data connections, pipelines, or model servers. @@ -58,23 +58,23 @@ On the Create workbench page, complete the following information. **Note**: Not all fields are required. -- Name +- Name -- Description +- Description -- Notebook image (Image selection) +- Notebook image (Image selection) -- Deployment size (Container size and Number of GPUs) +- Deployment size (Container size and Number of GPUs) -- Environment variables +- Environment variables -- Cluster storage name +- Cluster storage name -- Cluster storage description +- Cluster storage description -- Persistent storage size +- Persistent storage size -- Data connections +- Data connections !!! tip "How to specify CPUs, Memory, and GPUs for your JupyterLab workbench?" diff --git a/docs/openshift-ai/get-started/rhoai-overview.md b/docs/openshift-ai/get-started/rhoai-overview.md index b5a24491..01b9361a 100644 --- a/docs/openshift-ai/get-started/rhoai-overview.md +++ b/docs/openshift-ai/get-started/rhoai-overview.md @@ -20,18 +20,18 @@ graphics processing unit (GPU) resources. Recent enhancements to Red Hat OpenShift AI include: -- Implementation **Deployment pipelines** for monitoring AI/ML experiments and - automating ML workflows accelerate the iteration process for data scientists - and developers of intelligent applications. This integration facilitates swift - iteration on machine learning projects and embeds automation into application - deployment and updates. - -- **Model serving** now incorporates GPU assistance for inference tasks and custom - model serving runtimes, enhancing inference performance and streamlining the - deployment of foundational models. 
- -- With **Model monitoring**, organizations can oversee performance and operational - metrics through a centralized dashboard, enhancing management capabilities. +- Implementation **Deployment pipelines** for monitoring AI/ML experiments and + automating ML workflows accelerate the iteration process for data scientists + and developers of intelligent applications. This integration facilitates swift + iteration on machine learning projects and embeds automation into application + deployment and updates. + +- **Model serving** now incorporates GPU assistance for inference tasks and custom + model serving runtimes, enhancing inference performance and streamlining the + deployment of foundational models. + +- With **Model monitoring**, organizations can oversee performance and operational + metrics through a centralized dashboard, enhancing management capabilities. ## Red Hat OpenShift AI ecosystem diff --git a/docs/openshift-ai/index.md b/docs/openshift-ai/index.md index 7e1f4446..b411871a 100644 --- a/docs/openshift-ai/index.md +++ b/docs/openshift-ai/index.md @@ -9,29 +9,29 @@ the list below. ## NERC OpenShift AI Getting Started -- [NERC Red Hat OpenShift AI (RHOAI) Overview](get-started/rhoai-overview.md) - **<<-- Start Here** +- [NERC Red Hat OpenShift AI (RHOAI) Overview](get-started/rhoai-overview.md) + **<<-- Start Here** ## NERC OpenShift AI dashboard -- [Access the NERC's OpenShift AI dashboard](logging-in/access-the-rhoai-dashboard.md) +- [Access the NERC's OpenShift AI dashboard](logging-in/access-the-rhoai-dashboard.md) -- [The NERC's OpenShift AI dashboard Overview](logging-in/the-rhoai-dashboard-overview.md) +- [The NERC's OpenShift AI dashboard Overview](logging-in/the-rhoai-dashboard-overview.md) ## Using Data Science Project in the NERC RHOAI -- [Using Your Data Science Project (DSP)](data-science-project/using-projects-the-rhoai.md) +- [Using Your Data Science Project (DSP)](data-science-project/using-projects-the-rhoai.md) -- [Explore the JupyterLab Environment](data-science-project/explore-the-jupyterlab-environment.md) +- [Explore the JupyterLab Environment](data-science-project/explore-the-jupyterlab-environment.md) -- [Model Serving in the NERC RHOAI](data-science-project/model-serving-in-the-rhoai.md) +- [Model Serving in the NERC RHOAI](data-science-project/model-serving-in-the-rhoai.md) -- [Test the Model in the NERC RHOAI](data-science-project/testing-model-in-the-rhoai.md) +- [Test the Model in the NERC RHOAI](data-science-project/testing-model-in-the-rhoai.md) ## Other Example Projects -- [How to access, download, and analyze data for S3 usage](other-projects/how-access-s3-data-then-download-and-analyze-it.md) +- [How to access, download, and analyze data for S3 usage](other-projects/how-access-s3-data-then-download-and-analyze-it.md) -- [Configure a Jupyter Notebook to use GPUs for AI/ML modeling](other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md) +- [Configure a Jupyter Notebook to use GPUs for AI/ML modeling](other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md) --- diff --git a/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md b/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md index d11483cd..beafa00e 100644 --- a/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md +++ b/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md @@ -4,10 +4,10 @@ In the NERC's RHOAI dashboard, you can see multiple links on your left hand side 1. 
**Applications**: - - _Enabled_: Launch your enabled applications, view documentation, or get - started with quick start instructions and tasks. + - _Enabled_: Launch your enabled applications, view documentation, or get + started with quick start instructions and tasks. - - _Explore_: View optional applications for your RHOAI instance. + - _Explore_: View optional applications for your RHOAI instance. **NOTE**: Most of them are disabled by default on NERC RHOAI right now. @@ -26,11 +26,11 @@ In the NERC's RHOAI dashboard, you can see multiple links on your left hand side 3. **Data Science Pipelines**: - - _Pipelines_: Manage your pipelines for a specific project selected from the - dropdown menu. + - _Pipelines_: Manage your pipelines for a specific project selected from the + dropdown menu. - - _Runs_: Manage and view your runs for a specific project selected from the - dropdown menu. + - _Runs_: Manage and view your runs for a specific project selected from the + dropdown menu. 4. **Model Serving**: Manage and view the health and performance of your deployed models across different projects corresponding to your NERC-OCP (OpenShift) diff --git a/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md b/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md index b1be38de..aa718e72 100644 --- a/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md +++ b/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md @@ -4,9 +4,9 @@ Prepare your Jupyter notebook server for using a GPU, you need to have: -- Select the correct data science project and create workbench, see - [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see + [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. Please ensure that you start your Jupyter notebook server with options as depicted in the following configuration screen. This screen provides you with the opportunity diff --git a/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md b/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md index 3a8f4953..2c65ab11 100644 --- a/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md +++ b/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md @@ -4,9 +4,9 @@ Prepare your Jupyter notebook server for using a GPU, you need to have: -- Select the correct data science project and create workbench, see - [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see + [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. Please ensure that you start your Jupyter notebook server with options as depicted in the following configuration screen. This screen provides you with the opportunity @@ -73,11 +73,11 @@ content section of the environment, on the right. 
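The S3 steps that the notebook described below performs (connect to an S3-compatible bucket, download a CSV into the `datasets` folder, and rename it) correspond roughly to the following boto3 sketch. It is hypothetical and minimal: the endpoint, credentials, bucket, and object key are placeholders, not the notebook's actual code.

```python
import os
import boto3

# All values below are placeholders -- substitute your own endpoint,
# credentials, bucket, and object key (e.g. from environment variables).
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.org",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

os.makedirs("datasets", exist_ok=True)
# Download the CSV and save it under the name the later notebook steps expect.
s3.download_file("my-bucket", "truckdata.csv", "datasets/newtruckdata.csv")
```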
Run each cell in the notebook, using the _Shift-Enter_ key combination, and pay attention to the execution results. Using this notebook, we will: -- Make a connection to an AWS S3 storage bucket +- Make a connection to an AWS S3 storage bucket -- Download a CSV file into the "datasets" folder +- Download a CSV file into the "datasets" folder -- Rename the downloaded CSV file to "newtruckdata.csv" +- Rename the downloaded CSV file to "newtruckdata.csv" ### View your new CSV file @@ -93,11 +93,11 @@ The file contains the data you will analyze and perform some analytics. Since you now have data, you can open the next Jupyter notebook, `simpleCalc.ipynb`, and perform the following operations: -- Create a dataframe. +- Create a dataframe. -- Perform simple total and average calculations. +- Perform simple total and average calculations. -- Print the calculation results. +- Print the calculation results. ## Analyzing your S3 data access run results diff --git a/docs/openshift/applications/scaling-and-performance-guide.md b/docs/openshift/applications/scaling-and-performance-guide.md index f47d95e9..912d2909 100644 --- a/docs/openshift/applications/scaling-and-performance-guide.md +++ b/docs/openshift/applications/scaling-and-performance-guide.md @@ -83,10 +83,10 @@ below: CPU and memory can be specified in a couple of ways: -- Resource **requests** and _limits_ are optional parameters specified at the container - level. OpenShift computes a Pod's request and limit as the sum of requests and - limits across all of its containers. OpenShift then uses these parameters for - scheduling and resource allocation decisions. +- Resource **requests** and _limits_ are optional parameters specified at the container + level. OpenShift computes a Pod's request and limit as the sum of requests and + limits across all of its containers. OpenShift then uses these parameters for + scheduling and resource allocation decisions. The **request** value specifies the min value you will be guaranteed. The request value is also used by the scheduler to assign pods to nodes. @@ -102,13 +102,13 @@ CPU and memory can be specified in a couple of ways: !!! note "Important Information" - If a Pod's total requests are not available on a single node, then the Pod - will remain in a *Pending* state (i.e. not running) until these resources - become available. + If a Pod's total requests are not available on a single node, then the Pod + will remain in a *Pending* state (i.e. not running) until these resources + become available. -- The **limit** value specifies the max value you can consume. Limit is the value - applications should be tuned to use. Pods will be memory, CPU throttled when - they exceed their available memory and CPU limit. +- The **limit** value specifies the max value you can consume. Limit is the value + applications should be tuned to use. Pods will be memory, CPU throttled when + they exceed their available memory and CPU limit. CPU is measured in units called millicores, where 1000 millicores ("m") = 1 vCPU or 1 Core. Each node in a cluster inspects the operating system to determine the @@ -253,8 +253,8 @@ Click on the component node to see the _Overview_ panel to the right. 
Use the **Details** tab to: -- Scale your pods using the up and down arrows to increase or decrease the number - of pods or instances of the application manually as shown below: +- Scale your pods using the up and down arrows to increase or decrease the number + of pods or instances of the application manually as shown below: ![Scale the Pod Count](images/pod-scale-count-arrow.png) @@ -264,30 +264,30 @@ Use the **Details** tab to: ![Edit the Pod Count](images/scale-pod-count.png) -- Check the Labels, Annotations, and Status of the application. +- Check the Labels, Annotations, and Status of the application. Click the **Resources** tab to: -- See the list of all the pods, view their status, access logs, and click on the - pod to see the pod details. +- See the list of all the pods, view their status, access logs, and click on the + pod to see the pod details. -- See the builds, their status, access logs, and start a new build if needed. +- See the builds, their status, access logs, and start a new build if needed. -- See the services and routes used by the component. +- See the services and routes used by the component. Click the **Observe** tab to: -- See the metrics to see CPU usage, Memory usage and Bandwidth consumption. +- See the metrics to see CPU usage, Memory usage and Bandwidth consumption. -- See the Events. +- See the Events. !!! note "Detailed Monitoring your project and application metrics" - On the left navigation panel of the **Developer** perspective, click - **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. - For more information about Monitoring project and application metrics - using the Developer perspective, please - [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). + On the left navigation panel of the **Developer** perspective, click + **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. + For more information about Monitoring project and application metrics + using the Developer perspective, please + [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). ## Scaling manually @@ -389,42 +389,42 @@ maximum numbers to maintain the specified CPU utilization across all pods. #### To create an HPA in the web console -- In the **Topology** view, click the node to reveal the side pane. +- In the **Topology** view, click the node to reveal the side pane. -- From the _Actions_ drop-down list, select **Add HorizontalPodAutoscaler** as - shown below: +- From the _Actions_ drop-down list, select **Add HorizontalPodAutoscaler** as + shown below: ![Horizontal Pod Autoscaler Popup](images/add-hpa-popup.png) -- This will open the **Add HorizontalPodAutoscaler** form as shown below: +- This will open the **Add HorizontalPodAutoscaler** form as shown below: ![Horizontal Pod Autoscaler Form](images/hpa-form.png) !!! note "Configure via: Form or YAML View" - While creating or editing the horizontal pod autoscaler in the web console, - you can switch from **Form view** to **YAML view**. + While creating or editing the horizontal pod autoscaler in the web console, + you can switch from **Form view** to **YAML view**. -- From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum - pod limits, the CPU and memory usage, and click **Save**. 
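The steps above create the HPA through the web console form; the same HorizontalPodAutoscaler object can also be created programmatically. The snippet below is a minimal, hypothetical sketch using the Kubernetes Python client: it assumes you already have a valid kubeconfig (for example after `oc login`), and the deployment name `my-app`, namespace `my-project`, replica range, and CPU threshold are placeholders rather than values from this guide.

```python
from kubernetes import client, config

# Assumes a valid kubeconfig is present; all names and limits are placeholders.
config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="my-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="my-app"
        ),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=75,  # scale out above 75% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="my-project", body=hpa
)
```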
+- From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum + pod limits, the CPU and memory usage, and click **Save**. #### To edit an HPA in the web console -- In the **Topology** view, click the node to reveal the side pane. +- In the **Topology** view, click the node to reveal the side pane. -- From the **Actions** drop-down list, select **Edit HorizontalPodAutoscaler** - to open the **Edit Horizontal Pod Autoscaler** form. +- From the **Actions** drop-down list, select **Edit HorizontalPodAutoscaler** + to open the **Edit Horizontal Pod Autoscaler** form. -- From the **Edit Horizontal Pod Autoscaler** form, edit the minimum and maximum - pod limits and the CPU and memory usage, and click **Save**. +- From the **Edit Horizontal Pod Autoscaler** form, edit the minimum and maximum + pod limits and the CPU and memory usage, and click **Save**. #### To remove an HPA in the web console -- In the **Topology** view, click the node to reveal the side panel. +- In the **Topology** view, click the node to reveal the side panel. -- From the **Actions** drop-down list, select **Remove HorizontalPodAutoscaler**. +- From the **Actions** drop-down list, select **Remove HorizontalPodAutoscaler**. -- In the confirmation pop-up window, click **Remove** to remove the HPA. +- In the confirmation pop-up window, click **Remove** to remove the HPA. !!! tip "Best Practices" diff --git a/docs/openshift/decommission/decommission-openshift-resources.md b/docs/openshift/decommission/decommission-openshift-resources.md index 35d010bf..8b82b07d 100644 --- a/docs/openshift/decommission/decommission-openshift-resources.md +++ b/docs/openshift/decommission/decommission-openshift-resources.md @@ -5,16 +5,16 @@ below. ## Prerequisite -- **Backup**: Back up any critical data or configurations stored on the resources - that going to be decommissioned. This ensures that important information is not - lost during the process. +- **Backup**: Back up any critical data or configurations stored on the resources + that going to be decommissioned. This ensures that important information is not + lost during the process. -- **Kubernetes Objects (Resources)**: Please review all OpenShift Kubernetes Objects - (Resources) to ensure they are not actively used and ready to be decommissioned. +- **Kubernetes Objects (Resources)**: Please review all OpenShift Kubernetes Objects + (Resources) to ensure they are not actively used and ready to be decommissioned. -- Install and configure the **OpenShift CLI (oc)**, see [How to Setup the - OpenShift CLI Tools](../logging-in/setup-the-openshift-cli.md) - for more information. +- Install and configure the **OpenShift CLI (oc)**, see [How to Setup the + OpenShift CLI Tools](../logging-in/setup-the-openshift-cli.md) + for more information. ## Delete all Data Science Project resources from the NERC's Red Hat OpenShift AI diff --git a/docs/openshift/get-started/openshift-overview.md b/docs/openshift/get-started/openshift-overview.md index 376d9b59..68d2860f 100644 --- a/docs/openshift/get-started/openshift-overview.md +++ b/docs/openshift/get-started/openshift-overview.md @@ -15,65 +15,65 @@ OpenShift is a container orchestration platform that provides a number of compon and tools to help you build, deploy, and manage applications. Here are some of the basic components of OpenShift: -- **Project**: A project is a logical grouping of resources in the NERC's OpenShift - platform that provides isolation from others resources. 
- -- **Nodes**: Nodes are the physical or virtual machines that run the applications - and services in your OpenShift cluster. - -- **Image**: An image is a non-changing, definition of file structures and programs - for running an application. - -- **Container**: A container is an instance of an image with the addition of other - operating system components such as networking and running programs. Containers - are used to run applications and services in OpenShift. - -- **Pods**: Pods are the smallest deployable units defined, deployed, and managed - in OpenShift, that group related one or more containers that need to share resources. - -- **Services**: Services are logical representations of a set of pods that provide - a network endpoint for access to the application or service. Services can be - used to load balance traffic across multiple pods, and they can be accessed - using a stable DNS name. Services are assigned an IP address and port and proxy - connections to backend pods. This allows the pods to change while the connection - details of the service remain consistent. - -- **Volume**: A volume is a persistent file space available to pods and containers - for storing data. Containers are immutable and therefore upon a restart any - contents are cleared and reset to the original state of the image used to create - the container. Volumes provide storage space for files that need to persist - through container restarts. - -- **Routes**: Routes can be used to expose services to external clients to connections - outside the platform. A route is assigned a name in DNS when set up to make it - easily accessible. They can be configured with custom hostnames and TLS certificates. - -- **Replication Controllers**: A replication controller (rc) is a built-in mechanism - that ensures a defined number of pods are running at all times. An asset that - indicates how many pod replicas are required to run at a time. If a pod unexpectedly - quits or is deleted, a new copy of the pod is created and started. Additionally, - if more pods are running than the defined number, the replication controller - will delete the extra pods to get down to the defined number. - -- **Namespace**: A Namespace is a way to logically isolate resources within the - Cluster. In our case every project gets an unique namespace. - -- **Role-based access control (RBAC)**: A key security control to ensure that cluster - users and workloads have only access to resources required to execute their roles. - -- **Deployment Configurations**: A deployment configuration (dc) is an extension - of a replication controller that is used to push out a new version of application - code. Deployment configurations are used to define the process of deploying - applications and services to OpenShift. Deployment configurations - can be used to specify the number of replicas, the resources required by the - application, and the deployment strategy to use. - -- **Application URL Components**: When an application developer adds an application - to a project, a unique DNS name is created for the application via a Route. All - application DNS names will have a hyphen separator between your application name - and your unique project namespace. If the application is a web application, this - DNS name is also used for the URL to access the application. All names are in - the form of `-.apps.shift.nerc.mghpcc.org`. - For example: `mytestapp-mynamespace.apps.shift.nerc.mghpcc.org`. 
+- **Project**: A project is a logical grouping of resources in the NERC's OpenShift + platform that provides isolation from others resources. + +- **Nodes**: Nodes are the physical or virtual machines that run the applications + and services in your OpenShift cluster. + +- **Image**: An image is a non-changing, definition of file structures and programs + for running an application. + +- **Container**: A container is an instance of an image with the addition of other + operating system components such as networking and running programs. Containers + are used to run applications and services in OpenShift. + +- **Pods**: Pods are the smallest deployable units defined, deployed, and managed + in OpenShift, that group related one or more containers that need to share resources. + +- **Services**: Services are logical representations of a set of pods that provide + a network endpoint for access to the application or service. Services can be + used to load balance traffic across multiple pods, and they can be accessed + using a stable DNS name. Services are assigned an IP address and port and proxy + connections to backend pods. This allows the pods to change while the connection + details of the service remain consistent. + +- **Volume**: A volume is a persistent file space available to pods and containers + for storing data. Containers are immutable and therefore upon a restart any + contents are cleared and reset to the original state of the image used to create + the container. Volumes provide storage space for files that need to persist + through container restarts. + +- **Routes**: Routes can be used to expose services to external clients to connections + outside the platform. A route is assigned a name in DNS when set up to make it + easily accessible. They can be configured with custom hostnames and TLS certificates. + +- **Replication Controllers**: A replication controller (rc) is a built-in mechanism + that ensures a defined number of pods are running at all times. An asset that + indicates how many pod replicas are required to run at a time. If a pod unexpectedly + quits or is deleted, a new copy of the pod is created and started. Additionally, + if more pods are running than the defined number, the replication controller + will delete the extra pods to get down to the defined number. + +- **Namespace**: A Namespace is a way to logically isolate resources within the + Cluster. In our case every project gets an unique namespace. + +- **Role-based access control (RBAC)**: A key security control to ensure that cluster + users and workloads have only access to resources required to execute their roles. + +- **Deployment Configurations**: A deployment configuration (dc) is an extension + of a replication controller that is used to push out a new version of application + code. Deployment configurations are used to define the process of deploying + applications and services to OpenShift. Deployment configurations + can be used to specify the number of replicas, the resources required by the + application, and the deployment strategy to use. + +- **Application URL Components**: When an application developer adds an application + to a project, a unique DNS name is created for the application via a Route. All + application DNS names will have a hyphen separator between your application name + and your unique project namespace. If the application is a web application, this + DNS name is also used for the URL to access the application. 
All names are in + the form of `-.apps.shift.nerc.mghpcc.org`. + For example: `mytestapp-mynamespace.apps.shift.nerc.mghpcc.org`. --- diff --git a/docs/openshift/index.md b/docs/openshift/index.md index 03f8ce2d..918ec616 100644 --- a/docs/openshift/index.md +++ b/docs/openshift/index.md @@ -8,41 +8,41 @@ the list below. ## OpenShift Getting Started -- [OpenShift Overview](get-started/openshift-overview.md) - **<<-- Start Here** +- [OpenShift Overview](get-started/openshift-overview.md) + **<<-- Start Here** ## OpenShift Web Console -- [Access the NERC's OpenShift Web Console](logging-in/access-the-openshift-web-console.md) -- [Web Console Overview](logging-in/web-console-overview.md) +- [Access the NERC's OpenShift Web Console](logging-in/access-the-openshift-web-console.md) +- [Web Console Overview](logging-in/web-console-overview.md) ## OpenShift command-line interface (CLI) Tools -- [OpenShift CLI Tools Overview](logging-in/the-openshift-cli.md) -- [How to Setup the OpenShift CLI Tools](logging-in/setup-the-openshift-cli.md) +- [OpenShift CLI Tools Overview](logging-in/the-openshift-cli.md) +- [How to Setup the OpenShift CLI Tools](logging-in/setup-the-openshift-cli.md) ## Creating Your First Application on OpenShift -- [Creating A Sample Application](applications/creating-a-sample-application.md) +- [Creating A Sample Application](applications/creating-a-sample-application.md) -- [Creating Your Own Developer Catalog Service](applications/creating-your-own-developer-catalog-service.md) +- [Creating Your Own Developer Catalog Service](applications/creating-your-own-developer-catalog-service.md) ## Editing Applications -- [Editing your applications](applications/editing-applications.md) +- [Editing your applications](applications/editing-applications.md) -- [Scaling and Performance Guide](applications/scaling-and-performance-guide.md) +- [Scaling and Performance Guide](applications/scaling-and-performance-guide.md) ## Storage -- [Storage Overview](storage/storage-overview.md) +- [Storage Overview](storage/storage-overview.md) ## Deleting Applications -- [Deleting your applications](applications/deleting-applications.md) +- [Deleting your applications](applications/deleting-applications.md) ## Decommission OpenShift Resources -- [Decommission OpenShift Resources](decommission/decommission-openshift-resources.md) +- [Decommission OpenShift Resources](decommission/decommission-openshift-resources.md) --- diff --git a/docs/openshift/logging-in/access-the-openshift-web-console.md b/docs/openshift/logging-in/access-the-openshift-web-console.md index e986ebf5..147e4a95 100644 --- a/docs/openshift/logging-in/access-the-openshift-web-console.md +++ b/docs/openshift/logging-in/access-the-openshift-web-console.md @@ -20,13 +20,13 @@ Next, you will be redirected to CILogon welcome page as shown below: MGHPCC Shared Services (MSS) Keycloak will request approval of access to the following information from the user: -- Your CILogon user identifier +- Your CILogon user identifier -- Your name +- Your name -- Your email address +- Your email address -- Your username and affiliation from your identity provider +- Your username and affiliation from your identity provider which are required in order to allow access your account on NERC's OpenStack web console. 
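Once CILogon has approved access in the browser, the same account can also be used
from the command line with the `oc` client covered in the next section. The sketch
below is illustrative only — the token and API endpoint are placeholders, and the
exact `oc login` command can be copied from the web console's "Copy login command"
menu:

```sh
# Log in to the cluster using a token copied from the web console
# (both the token and the server URL below are placeholders)
oc login --token=sha256~<your-api-token> --server=https://<openshift-api-endpoint>:6443

# Confirm which user and project you are currently using
oc whoami
oc project
```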
diff --git a/docs/openshift/logging-in/the-openshift-cli.md b/docs/openshift/logging-in/the-openshift-cli.md index 3f2b6f8c..9e4d71fb 100644 --- a/docs/openshift/logging-in/the-openshift-cli.md +++ b/docs/openshift/logging-in/the-openshift-cli.md @@ -9,12 +9,12 @@ a command-line tool called _oc_. The OpenShift CLI is ideal in the following situations: -- Working directly with project source code +- Working directly with project source code -- Scripting OpenShift Container Platform operations +- Scripting OpenShift Container Platform operations -- Managing projects while restricted by bandwidth resources and the web console - is unavailable +- Managing projects while restricted by bandwidth resources and the web console + is unavailable It is recommended that developers should be comfortable with simple command-line tasks and the the NERC's OpenShift command-line tool. diff --git a/docs/openshift/logging-in/web-console-overview.md b/docs/openshift/logging-in/web-console-overview.md index 0bf9c5a7..d8170ac8 100644 --- a/docs/openshift/logging-in/web-console-overview.md +++ b/docs/openshift/logging-in/web-console-overview.md @@ -56,9 +56,9 @@ administrators and cluster administrators can view the Administrator perspective !!! note "Important Note" -The default web console perspective that is shown depends on the role of the -user. The **Administrator** perspective is displayed by default if the user is -recognized as an administrator. + The default web console perspective that is shown depends on the role of the + user. The **Administrator** perspective is displayed by default if the user + is recognized as an administrator. ### About the Developer perspective in the web console @@ -67,8 +67,8 @@ services, and databases. !!! info "Important Note" -The default view for the OpenShift Container Platform web console is the **Developer** -perspective. + The default view for the OpenShift Container Platform web console is the **Developer** + perspective. The web console provides a comprehensive set of tools for managing your projects and applications. @@ -82,8 +82,8 @@ located on top navigation as shown below: !!! info "Important Note" -You can identify the currently selected project with **tick** mark and also -you can click on **star** icon to keep the project under your **Favorites** list. + You can identify the currently selected project with **tick** mark and also + you can click on **star** icon to keep the project under your **Favorites** list. ## Navigation Menu diff --git a/docs/openstack/access-and-security/create-a-key-pair.md b/docs/openstack/access-and-security/create-a-key-pair.md index b9c083fe..63c6c619 100644 --- a/docs/openstack/access-and-security/create-a-key-pair.md +++ b/docs/openstack/access-and-security/create-a-key-pair.md @@ -96,9 +96,9 @@ You can now skip ahead to [Adding the key to an ssh-agent](#adding-your-ssh-key- To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
To create OpenStack keypair using the CLI, do this: diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index c530dcb9..5221238b 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -71,16 +71,16 @@ dialog box. Enter the following values: -- Rule: SSH +- Rule: SSH -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 !!! note "Note" - To accept requests from a particular range of IP addresses, specify the IP - address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the + IP address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have SSH port 22 open for requests @@ -99,13 +99,13 @@ choose "ALL ICMP" from the dropdown. In the Add Rule dialog box, enter the following values: -- Rule: All ICMP +- Rule: All ICMP -- Direction: Ingress +- Direction: Ingress -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 ![Adding ICMP - ping in Security Group Rules](images/ping_icmp_security_rule.png) @@ -135,16 +135,16 @@ Choose "RDP" from the Rule dropdown option as shown below: Enter the following values: -- Rule: RDP +- Rule: RDP -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 -!!! note "Note" + !!! note "Note" - To accept requests from a particular range of IP addresses, specify the IP - address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the + IP address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have RDP port 3389 open for requests @@ -154,15 +154,15 @@ from any IP address. ## Editing Existing Security Group and Adding New Security Rules -- Navigate to Security Groups: +- Navigate to Security Groups: Navigate to _Project -> Network -> Security Groups_. -- Select the Security Group: +- Select the Security Group: Choose the security group to which you want to add new rules. -- Add New Rule: +- Add New Rule: Look for an option to add a new rule within the selected security group. @@ -173,7 +173,7 @@ from any IP address. ![Add New Security Rules](images/sg_new_rule.png) -- Save Changes: +- Save Changes: Save the changes to apply the new security rules to the selected security group. diff --git a/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md b/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md index 02732f75..94d08498 100644 --- a/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md +++ b/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md @@ -154,16 +154,16 @@ its administrative access) Please fill out the following information on this popup box: -- Scheme: _http_ +- Scheme: _http_ -- Forward Hostname/IP: - _``_ +- Forward Hostname/IP: + _``_ -- Forward Port: _``_ +- Forward Port: _``_ -- Enable all toggles i.e. Cache Assets, Block Common Exploits, Websockets Support +- Enable all toggles i.e. 
Cache Assets, Block Common Exploits, Websockets Support -- Access List: _Publicly Accessible_ +- Access List: _Publicly Accessible_ For your reference, you can review your selection should looks like below with your own Domain Name and other settings: diff --git a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md index 7d72b70c..91eb3b08 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md @@ -20,7 +20,7 @@ side of the screen. In the Create Network dialog box, specify the following values. -- Network tab: +- Network tab: Network Name: Specify a name to identify the network. @@ -33,7 +33,7 @@ In the Create Network dialog box, specify the following values. ![Create a Network](images/create_network.png) -- Subnet tab: +- Subnet tab: You do not have to specify a subnet when you create a network, but if you do not specify a subnet, the network can not be attached to an instance. @@ -44,9 +44,9 @@ In the Create Network dialog box, specify the following values. networks, you should use IP addresses which fall within the ranges that are specifically reserved for private networks: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 + 10.0.0.0/8 + 172.16.0.0/12 + 192.168.0.0/16 In the example below, we configure a network containing addresses 192.168.0.1 to 192.168.0.255 using CIDR 192.168.0.0/24 @@ -62,7 +62,7 @@ In the Create Network dialog box, specify the following values. Disable Gateway: Select this check box to disable a gateway IP address. -- Subnet Details tab +- Subnet Details tab Enable DHCP: Select this check box to enable DHCP so that your VM instances will automatically be assigned an IP on the subnet. diff --git a/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md b/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md index 5df1d680..6a045946 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md @@ -58,7 +58,7 @@ b. Download the signed **VirtIO drivers** ISO file from the [Fedora website](htt c. Install [Virtual Machine Manager](https://virt-manager.org/download/) on your local Windows 10 machine using WSL: -- **Enable WSL on your local Windows 10 subsystem for Linux:** +- **Enable WSL on your local Windows 10 subsystem for Linux:** The steps given here are straightforward, however, before following them make sure on Windows 10, you have WSL enabled and have at least Ubuntu @@ -66,12 +66,12 @@ local Windows 10 machine using WSL: that then see our tutorial on [how to enable WSL and install Ubuntu over it](https://www.how2shout.com/how-to/enable-windows-subsystem-linux-feature.html). -- **Download and install MobaXterm:** +- **Download and install MobaXterm:** **MobaXterm** is a free application that can be downloaded using [this link](https://mobaxterm.mobatek.net/download-home-edition.html). After downloading, install it like any other normal Windows software. -- **Open MobaXterm and run WSL Linux:** +- **Open MobaXterm and run WSL Linux:** As you open this advanced terminal for Windows 10, WSL installed Ubuntu app will show on the left side panel of it. 
Double click on that to start @@ -79,14 +79,14 @@ local Windows 10 machine using WSL: ![MobaXterm WSL Ubuntu-20.04 LTS](images/a.mobaxterm_ubuntu_WSL.png) -- **Install Virt-Manager:** +- **Install Virt-Manager:** ```sh sudo apt update sudo apt install virt-manager ``` -- **Run Virtual Machine Manager:** +- **Run Virtual Machine Manager:** Start the Virtual Machine Manager running this command on the opened terminal: `virt-manager` as shown below: @@ -97,7 +97,7 @@ local Windows 10 machine using WSL: ![Virt-Manager interface](images/0.virtual-manager.png) -- **Connect QEMU/KVM user session on Virt-Manager:** +- **Connect QEMU/KVM user session on Virt-Manager:** ![Virt-Manager Add Connection](images/0.0.add_virtual_connection.png) diff --git a/docs/openstack/backup/backup-with-snapshots.md b/docs/openstack/backup/backup-with-snapshots.md index 37b0c84a..50c62a08 100644 --- a/docs/openstack/backup/backup-with-snapshots.md +++ b/docs/openstack/backup/backup-with-snapshots.md @@ -3,13 +3,13 @@ When you start a new instance, you can choose the Instance Boot Source from the following list: -- boot from image +- boot from image -- boot from instance snapshot +- boot from instance snapshot -- boot from volume +- boot from volume -- boot from volume snapshot +- boot from volume snapshot In its default configuration, when the instance is launched from an **Image** or an **Instance Snapshot**, the choice for utilizing persistent storage is configured @@ -34,12 +34,12 @@ data. This mainly serves two purposes: -- _As a backup mechanism:_ save the main disk of your instance to an image in - Horizon dashboard under _Project -> Compute -> Images_ and later boot a new instance - from this image with the saved data. +- _As a backup mechanism:_ save the main disk of your instance to an image in + Horizon dashboard under _Project -> Compute -> Images_ and later boot a new instance + from this image with the saved data. -- _As a templating mechanism:_ customise and upgrade a base image and save it to - use as a template for new instances. +- _As a templating mechanism:_ customise and upgrade a base image and save it to + use as a template for new instances. !!! info "Considerations: using Instance snapshots" @@ -57,8 +57,8 @@ This mainly serves two purposes: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To snapshot an instance to an image using the CLI, do this: @@ -164,8 +164,8 @@ Also, it consumes **less storage space** compared to instance snapshots. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
To snapshot an instance to an image using the CLI, do this: diff --git a/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md b/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md index 2f8b6d7c..630aba45 100644 --- a/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md @@ -27,13 +27,13 @@ enabling SSH access to the private instances. Before trying to access instances from the outside world using SSH tunneling via Bastion Host, you need to make sure you have followed these steps: -- You followed the instruction in [Create a Key Pair](../../access-and-security/create-a-key-pair.md) - to set up a public ssh key. You can use the same key for both the bastion - host and the remote instances, or different keys; you'll just need to ensure - that the keys are loaded by ssh-agent appropriately so they can be used as - needed. Please read [this instruction](../../access-and-security/create-a-key-pair.md#adding-your-ssh-key-to-the-ssh-agent) - on how to add ssh-agent and load your private key using ssh-add command to - access the bastion host. +- You followed the instruction in [Create a Key Pair](../../access-and-security/create-a-key-pair.md) + to set up a public ssh key. You can use the same key for both the bastion + host and the remote instances, or different keys; you'll just need to ensure + that the keys are loaded by ssh-agent appropriately so they can be used as + needed. Please read [this instruction](../../access-and-security/create-a-key-pair.md#adding-your-ssh-key-to-the-ssh-agent) + on how to add ssh-agent and load your private key using ssh-add command to + access the bastion host. **Verify you have an SSH agent running. This should match whatever you built your cluster with.** @@ -54,11 +54,11 @@ ssh-add path/to/private/key ssh -A @ ``` -- Your public ssh-key was selected (in the Access and Security tab) while - [launching the instance](../launch-a-VM.md). +- Your public ssh-key was selected (in the Access and Security tab) while + [launching the instance](../launch-a-VM.md). -- Add two Security Groups, one will be used by the Bastion host and another one - will be used by any private instances. +- Add two Security Groups, one will be used by the Bastion host and another one + will be used by any private instances. ![Security Groups](images/security_groups.png) @@ -80,8 +80,8 @@ Group option as shown below: ![Private Instances Security Group](images/private_instances_sg.png) -- [Assign a Floating IP](../assign-a-floating-IP.md) to the Bastion host instance - in order to access it from outside world. +- [Assign a Floating IP](../assign-a-floating-IP.md) to the Bastion host instance + in order to access it from outside world. Make a note of the Floating IP you have associated to your instance. diff --git a/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md b/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md index 1f3fb49d..e7cb14e7 100644 --- a/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md +++ b/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md @@ -7,9 +7,9 @@ Windows virtual machine, similar steps can be used on other types of virtual machines. 
The following steps show how to create a virtual machine which boots from an external volume: -- Create a volume with source data from the image +- Create a volume with source data from the image -- Launch a VM with that volume as the system disk +- Launch a VM with that volume as the system disk !!! note "Recommendations" @@ -48,9 +48,9 @@ for the size of the volume as shown below: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To create a volume from image using the CLI, do this: @@ -168,9 +168,9 @@ Attach a Floating IP to your instance: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To launch an instance from existing bootable volume using the CLI, do this: diff --git a/docs/openstack/create-and-connect-to-the-VM/flavors.md b/docs/openstack/create-and-connect-to-the-VM/flavors.md index 86a6c41d..cf1c2541 100644 --- a/docs/openstack/create-and-connect-to-the-VM/flavors.md +++ b/docs/openstack/create-and-connect-to-the-VM/flavors.md @@ -310,9 +310,9 @@ process. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. If you want to change the **flavor** that is bound to a VM, then you can run the following openstack client commands, here we are changing flavor of an existing diff --git a/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md b/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md index 8c79ee33..f63b7fbf 100644 --- a/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md +++ b/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md @@ -2,12 +2,12 @@ **Prerequisites**: -- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md) - to set up a public ssh key. +- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md) + to set up a public ssh key. -- Make sure you have added rules in the - [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to - allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to + allow **ssh** using Port 22 access to the instance. ## Using Horizon dashboard @@ -46,13 +46,13 @@ Double check that in the dropdown "Select Boot Source". 
When you start a new instance, you can choose the Instance Boot Source from the following list: -- boot from image +- boot from image -- boot from instance snapshot +- boot from instance snapshot -- boot from volume +- boot from volume -- boot from volume snapshot +- boot from volume snapshot In its default configuration, when the instance is launched from an **Image** or an **Instance Snapshot**, the choice for utilizing persistent storage is configured diff --git a/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md b/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md index 506642ce..7d11a490 100644 --- a/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md +++ b/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md @@ -4,18 +4,18 @@ Before trying to access instances from the outside world, you need to make sure you have followed these steps: -- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md) - to set up a public ssh key. +- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md) + to set up a public ssh key. -- Your public ssh-key has selected (in "Key Pair" tab) while - [launching the instance](launch-a-VM.md). +- Your public ssh-key has selected (in "Key Pair" tab) while + [launching the instance](launch-a-VM.md). -- [Assign a Floating IP](assign-a-floating-IP.md) to the instance in order to - access it from outside world. +- [Assign a Floating IP](assign-a-floating-IP.md) to the instance in order to + access it from outside world. -- Make sure you have added rules in the - [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to - allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to + allow **ssh** using Port 22 access to the instance. !!! info "How to update New Security Group(s) on any running VM?" @@ -33,17 +33,17 @@ In our example, the IP is `199.94.60.66`. Default usernames for all the base images are: -- **all Ubuntu images**: ubuntu +- **all Ubuntu images**: ubuntu -- **all AlmaLinux images**: almalinux +- **all AlmaLinux images**: almalinux -- **all Rocky Linux images**: rocky +- **all Rocky Linux images**: rocky -- **all Fedora images**: fedora +- **all Fedora images**: fedora -- **all Debian images**: debian +- **all Debian images**: debian -- **all RHEL images**: cloud-user +- **all RHEL images**: cloud-user !!! warning "Removed Centos Images" diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md index 43566e52..52ecb585 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md @@ -81,7 +81,7 @@ sudo dnf install sshuttle It is also possible to install into a **virtualenv** as _a non-root user_. 
-- From PyPI: +- From PyPI: ```sh virtualenv -p python3 /tmp/sshuttle @@ -89,7 +89,7 @@ virtualenv -p python3 /tmp/sshuttle pip install sshuttle ``` -- Clone: +- Clone: ```sh virtualenv -p python3 /tmp/sshuttle diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md index 17bfd550..7f11de82 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md @@ -102,11 +102,11 @@ file and share it to the new client. It would be kind of pointless to have our VPN server allow anyone to connect. This is where our public & private keys come into play. -- Each **client's \*\*public\*\* key** needs to be added to the - **SERVER'S** configuration file +- Each **client's \*\*public\*\* key** needs to be added to the + **SERVER'S** configuration file -- The **server's \*\*public\*\* key** added to the **CLIENT'S** - configuration file +- The **server's \*\*public\*\* key** added to the **CLIENT'S** + configuration file ### Useful commands @@ -126,10 +126,10 @@ To deactivate config: `wg-quick down /path/to/file_name.config` !!! note "Important Note" - You need to contact your project administrator to get your own WireGUard - configuration file (file with .conf extension). Download it and Keep it in - your local machine so in next steps we can use this configuration client - profile file. + You need to contact your project administrator to get your own WireGUard + configuration file (file with .conf extension). Download it and Keep it in + your local machine so in next steps we can use this configuration client + profile file. A WireGuard client or compatible software is needed to connect to the WireGuard VPN server. Please install[one of these clients](https://www.wireguard.com/install/) diff --git a/docs/openstack/data-transfer/data-transfer-from-to-vm.md b/docs/openstack/data-transfer/data-transfer-from-to-vm.md index 46e9835e..9cd136e6 100644 --- a/docs/openstack/data-transfer/data-transfer-from-to-vm.md +++ b/docs/openstack/data-transfer/data-transfer-from-to-vm.md @@ -254,9 +254,9 @@ given NERC VM. To run the `rclone` commands, you need to have: -- To run the `rclone` commands you will need to have `rclone` installed. - See [Downloading and Installing the latest version of Rclone](https://rclone.org/downloads/) - for more information. +- To run the `rclone` commands you will need to have `rclone` installed. + See [Downloading and Installing the latest version of Rclone](https://rclone.org/downloads/) + for more information. ### Configuring Rclone @@ -396,33 +396,33 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. **Prerequisites**: -- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) - for more information. +- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) + for more information. -- Go to WinSCP menu and open "View > Preferences". +- Go to WinSCP menu and open "View > Preferences". -- When the "Preferences" dialog window appears, select "Transfer" in the options - on the left pane. +- When the "Preferences" dialog window appears, select "Transfer" in the options + on the left pane. -- Click on the "Edit" button. +- Click on the "Edit" button. 
-- Then, in the popup dialog box, review the "Common options" group and uncheck - the "Preserve timestamp" option as shown below: +- Then, in the popup dialog box, review the "Common options" group and uncheck + the "Preserve timestamp" option as shown below: ![Disable Preserve TimeStamp](images/winscp-preferences-perserve-timestamp-disable.png) #### Configuring WinSCP -- Click on the "New Tab" button as shown below: +- Click on the "New Tab" button as shown below: ![Login](images/winscp-new-tab.png) -- Select either **"SFTP"** or **"SCP"** from the "File protocol" dropdown options - as shown below: +- Select either **"SFTP"** or **"SCP"** from the "File protocol" dropdown options + as shown below: ![Choose SFTP or SCP File Protocol](images/choose_SFTP_or_SCP_protocol.png) -- Provide the following required information: +- Provide the following required information: **"File protocol"**: Choose either "**"SFTP"** or **"SCP"**" @@ -434,24 +434,24 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Password"**: "``" -- Change Authentication Options +- Change Authentication Options Before saving, click the "Advanced" button. In the "Advanced Site Settings", under "SSH >> Authentication" settings, check @@ -462,12 +462,12 @@ from the file picker. !!! tip "Helpful Tip" - You can save your above configured site with some preferred name by - clicking the "Save" button and then giving a proper name to your site. - This prevents needing to manually enter all of your configuration again the - next time you need to use WinSCP. + You can save your above configured site with some preferred name by + clicking the "Save" button and then giving a proper name to your site. + This prevents needing to manually enter all of your configuration again the + next time you need to use WinSCP. - ![Save Site WinSCP](images/winscp-save-site.png) + ![Save Site WinSCP](images/winscp-save-site.png) #### Using WinSCP @@ -493,20 +493,20 @@ connections to servers, enterprise file sharing, and various cloud storage platf **Prerequisites**: -- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) - for more information. +- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) + for more information. 
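If you prefer a plain terminal over a GUI client such as WinSCP or Cyberduck, the
same transfers can be done with the standard OpenSSH tools. The following is a
minimal sketch only — the key path, file name, and Floating IP are placeholders,
and `ubuntu` should be replaced with the default username for your image:

```sh
# Copy a local file to the home directory of the remote VM
scp -i ~/.ssh/<your-private-key> <local-file> ubuntu@<Floating-IP>:~/

# Or open an interactive SFTP session to browse and transfer files
sftp -i ~/.ssh/<your-private-key> ubuntu@<Floating-IP>
```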
#### Configuring Cyberduck -- Click on the "Open Connection" button as shown below: +- Click on the "Open Connection" button as shown below: ![Open Connection](images/cyberduck-open-connection-new.png) -- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: +- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: ![Choose Amazon S3](images/cyberduck-select-sftp-or-ftp.png) -- Provide the following required information: +- Provide the following required information: **"Server"**: "``" @@ -516,17 +516,17 @@ connections to servers, enterprise file sharing, and various cloud storage platf !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user **"Password"**: "``" @@ -555,25 +555,25 @@ computer (shared drives, Dropbox, etc.) **Prerequisites**: -- Filezilla installed, see - [Download and Install the latest version of the Filezilla](https://wiki.filezilla-project.org/Client_Installation) - for more information. +- Filezilla installed, see + [Download and Install the latest version of the Filezilla](https://wiki.filezilla-project.org/Client_Installation) + for more information. #### Configuring Filezilla -- Click on "Site Manager" icon as shown below: +- Click on "Site Manager" icon as shown below: ![Site Manager](images/filezilla-new-site.png) -- Click on "New Site" as shown below: +- Click on "New Site" as shown below: ![Click New Site](images/filezilla-click-new-site.png) -- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: +- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: ![Select Protocol](images/filezilla-sftp-or-ftp.png) -- Provide the following required information: +- Provide the following required information: **"Server"**: "``" @@ -585,20 +585,20 @@ computer (shared drives, Dropbox, etc.) !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Key file"**: "Browse and choose the appropriate SSH Private Key from you local machine that has corresponding Public Key attached to your VM" diff --git a/docs/openstack/decommission/decommission-openstack-resources.md b/docs/openstack/decommission/decommission-openstack-resources.md index 4b442f99..92fbc0b8 100644 --- a/docs/openstack/decommission/decommission-openstack-resources.md +++ b/docs/openstack/decommission/decommission-openstack-resources.md @@ -5,16 +5,16 @@ below. 
## Prerequisite -- **Backup**: Back up any critical data or configurations stored on the resources - that going to be decommissioned. This ensures that important information is not - lost during the process. You can refer to [this guide](../data-transfer/data-transfer-from-to-vm.md) - to initiate and carry out data transfer to and from the virtual machine. +- **Backup**: Back up any critical data or configurations stored on the resources + that going to be decommissioned. This ensures that important information is not + lost during the process. You can refer to [this guide](../data-transfer/data-transfer-from-to-vm.md) + to initiate and carry out data transfer to and from the virtual machine. -- **Shutdown Instances**: If applicable, [Shut Off any running instances](../management/vm-management.md#stopping-and-starting) - to ensure they are not actively processing data during decommissioning. +- **Shutdown Instances**: If applicable, [Shut Off any running instances](../management/vm-management.md#stopping-and-starting) + to ensure they are not actively processing data during decommissioning. -- Setup **OpenStack CLI**, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- Setup **OpenStack CLI**, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. ## Delete all VMs diff --git a/docs/openstack/index.md b/docs/openstack/index.md index 329b4ca7..9943c091 100644 --- a/docs/openstack/index.md +++ b/docs/openstack/index.md @@ -10,62 +10,62 @@ the list below. ## Logging In -- [Access the OpenStack Dashboard](logging-in/access-the-openstack-dashboard.md) - **<<-- Start Here** -- [Dashboard Overview](logging-in/dashboard-overview.md) +- [Access the OpenStack Dashboard](logging-in/access-the-openstack-dashboard.md) + **<<-- Start Here** +- [Dashboard Overview](logging-in/dashboard-overview.md) ## Access and Security -- [Security Groups](access-and-security/security-groups.md) -- [Create a Key Pair](access-and-security/create-a-key-pair.md) +- [Security Groups](access-and-security/security-groups.md) +- [Create a Key Pair](access-and-security/create-a-key-pair.md) ## Create & Connect to the VM -- [Launch a VM](create-and-connect-to-the-VM/launch-a-VM.md) -- [Create a Windows VM](create-and-connect-to-the-VM/create-a-Windows-VM.md) -- [Available Images](create-and-connect-to-the-VM/images.md) -- [Available NOVA Flavors](create-and-connect-to-the-VM/flavors.md) -- [Assign a Floating IP](create-and-connect-to-the-VM/assign-a-floating-IP.md) -- [SSH to the VM](create-and-connect-to-the-VM/ssh-to-the-VM.md) +- [Launch a VM](create-and-connect-to-the-VM/launch-a-VM.md) +- [Create a Windows VM](create-and-connect-to-the-VM/create-a-Windows-VM.md) +- [Available Images](create-and-connect-to-the-VM/images.md) +- [Available NOVA Flavors](create-and-connect-to-the-VM/flavors.md) +- [Assign a Floating IP](create-and-connect-to-the-VM/assign-a-floating-IP.md) +- [SSH to the VM](create-and-connect-to-the-VM/ssh-to-the-VM.md) ## OpenStack CLI -- [OpenStack CLI](openstack-cli/openstack-CLI.md) -- [Launch a VM using OpenStack CLI](openstack-cli/launch-a-VM-using-openstack-CLI.md) +- [OpenStack CLI](openstack-cli/openstack-CLI.md) +- [Launch a VM using OpenStack CLI](openstack-cli/launch-a-VM-using-openstack-CLI.md) ## Persistent Storage ### Block Storage/ Volumes/ Cinder -- [Block Storage/ Volumes/ Cinder](persistent-storage/volumes.md) -- [Create an empty 
volume](persistent-storage/create-an-empty-volume.md) -- [Attach the volume to an instance](persistent-storage/attach-the-volume-to-an-instance.md) -- [Format and Mount the Volume](persistent-storage/format-and-mount-the-volume.md) -- [Detach a Volume](persistent-storage/detach-a-volume.md) -- [Delete Volumes](persistent-storage/delete-volumes.md) -- [Extending Volume](persistent-storage/extending-volume.md) -- [Transfer a Volume](persistent-storage/transfer-a-volume.md) +- [Block Storage/ Volumes/ Cinder](persistent-storage/volumes.md) +- [Create an empty volume](persistent-storage/create-an-empty-volume.md) +- [Attach the volume to an instance](persistent-storage/attach-the-volume-to-an-instance.md) +- [Format and Mount the Volume](persistent-storage/format-and-mount-the-volume.md) +- [Detach a Volume](persistent-storage/detach-a-volume.md) +- [Delete Volumes](persistent-storage/delete-volumes.md) +- [Extending Volume](persistent-storage/extending-volume.md) +- [Transfer a Volume](persistent-storage/transfer-a-volume.md) ### Object Storage/ Swift -- [Object Storage/ Swift](persistent-storage/object-storage.md) -- [Mount The Object Storage](persistent-storage/mount-the-object-storage.md) +- [Object Storage/ Swift](persistent-storage/object-storage.md) +- [Mount The Object Storage](persistent-storage/mount-the-object-storage.md) ## Data Transfer -- [Data Transfer To/ From NERC VM](data-transfer/data-transfer-from-to-vm.md) +- [Data Transfer To/ From NERC VM](data-transfer/data-transfer-from-to-vm.md) ## Backup your instance and data -- [Backup with snapshots](backup/backup-with-snapshots.md) +- [Backup with snapshots](backup/backup-with-snapshots.md) ## VM Management -- [VM Management](management/vm-management.md) +- [VM Management](management/vm-management.md) ## Decommission OpenStack Resources -- [Decommission OpenStack Resources](decommission/decommission-openstack-resources.md) +- [Decommission OpenStack Resources](decommission/decommission-openstack-resources.md) --- @@ -75,23 +75,23 @@ the list below. 
## Setting Up Your Own Network -- [Set up your own Private Network](advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md) -- [Create a Router](advanced-openstack-topics/setting-up-a-network/create-a-router.md) +- [Set up your own Private Network](advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md) +- [Create a Router](advanced-openstack-topics/setting-up-a-network/create-a-router.md) ## Domain or Host Name for your VM -- [Domain Name System (DNS)](advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md) +- [Domain Name System (DNS)](advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md) ## Using Terraform to provision NERC resources -- [Terraform on NERC](advanced-openstack-topics/terraform/terraform-on-NERC.md) +- [Terraform on NERC](advanced-openstack-topics/terraform/terraform-on-NERC.md) ## Python SDK -- [Python SDK](advanced-openstack-topics/python-sdk/python-SDK.md) +- [Python SDK](advanced-openstack-topics/python-sdk/python-SDK.md) ## Setting Up Your Own Images -- [Microsoft Windows image](advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md) +- [Microsoft Windows image](advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md) --- diff --git a/docs/openstack/logging-in/access-the-openstack-dashboard.md b/docs/openstack/logging-in/access-the-openstack-dashboard.md index 2ceeb2c0..176a978f 100644 --- a/docs/openstack/logging-in/access-the-openstack-dashboard.md +++ b/docs/openstack/logging-in/access-the-openstack-dashboard.md @@ -19,13 +19,13 @@ Next, you will be redirected to CILogon welcome page as shown below: MGHPCC Shared Services (MSS) Keycloak will request approval of access to the following information from the user: -- Your CILogon user identifier +- Your CILogon user identifier -- Your name +- Your name -- Your email address +- Your email address -- Your username and affiliation from your identity provider +- Your username and affiliation from your identity provider which are required in order to allow access your account on NERC's OpenStack dashboard. diff --git a/docs/openstack/logging-in/dashboard-overview.md b/docs/openstack/logging-in/dashboard-overview.md index e7697b3f..79f1edc0 100644 --- a/docs/openstack/logging-in/dashboard-overview.md +++ b/docs/openstack/logging-in/dashboard-overview.md @@ -10,7 +10,7 @@ Beneath that you can see six panels in larger print: "Project", "Compute", Navigate: Project -> Project -- API Access: View API endpoints. +- API Access: View API endpoints. ![Project API Access](images/project_API_access.png) @@ -18,75 +18,75 @@ Navigate: Project -> Project Navigate: Project -> Compute -- Overview: View reports for the project. +- Overview: View reports for the project. ![Compute dashboard](images/horizon_dashboard.png) -- Instances: View, launch, create a snapshot from, stop, pause, or reboot - instances, or connect to them through VNC. +- Instances: View, launch, create a snapshot from, stop, pause, or reboot + instances, or connect to them through VNC. -- Images: View images and instance snapshots created by project users, plus any - images that are publicly available. Create, edit, and delete images, and launch - instances from images and snapshots. +- Images: View images and instance snapshots created by project users, plus any + images that are publicly available. Create, edit, and delete images, and launch + instances from images and snapshots. 
-- Key Pairs: View, create, edit, import, and delete key pairs. +- Key Pairs: View, create, edit, import, and delete key pairs. -- Server Groups: View, create, edit, and delete server groups. +- Server Groups: View, create, edit, and delete server groups. ## Volume Panel Navigate: Project -> Volume -- Volumes: View, create, edit, delete volumes, and accept volume trnasfer. +- Volumes: View, create, edit, delete volumes, and accept volume trnasfer. -- Backups: View, create, edit, and delete backups. +- Backups: View, create, edit, and delete backups. -- Snapshots: View, create, edit, and delete volume snapshots. +- Snapshots: View, create, edit, and delete volume snapshots. -- Groups: View, create, edit, and delete groups. +- Groups: View, create, edit, and delete groups. -- Group Snapshots: View, create, edit, and delete group snapshots. +- Group Snapshots: View, create, edit, and delete group snapshots. ## Network Panel Navigate: Project -> Network -- Network Topology: View the network topology. +- Network Topology: View the network topology. ![Network Topology](images/network_topology.png) -- Networks: Create and manage public and private networks. +- Networks: Create and manage public and private networks. -- Routers: Create and manage routers. +- Routers: Create and manage routers. -- Security Groups: View, create, edit, and delete security groups and security - group rules.. +- Security Groups: View, create, edit, and delete security groups and security + group rules.. -- Load Balancers: View, create, edit, and delete load balancers. +- Load Balancers: View, create, edit, and delete load balancers. -- Floating IPs: Allocate an IP address to or release it from a project. +- Floating IPs: Allocate an IP address to or release it from a project. -- Trunks: View, create, edit, and delete trunk. +- Trunks: View, create, edit, and delete trunk. ## Orchestration Panel Navigate: Project->Orchestration -- Stacks: Use the REST API to orchestrate multiple composite cloud applications. +- Stacks: Use the REST API to orchestrate multiple composite cloud applications. -- Resource Types: view various resources types and their details. +- Resource Types: view various resources types and their details. -- Template Versions: view different heat templates. +- Template Versions: view different heat templates. -- Template Generator: GUI to generate and save template using drag and drop resources. +- Template Generator: GUI to generate and save template using drag and drop resources. ## Object Store Panel Navigate: Project->Object Store -- Containers: Create and manage containers and objects. In future you would use - this tab to [create Swift object storage](../persistent-storage/object-storage.md) - for your projects on a need basis. +- Containers: Create and manage containers and objects. In future you would use + this tab to [create Swift object storage](../persistent-storage/object-storage.md) + for your projects on a need basis. ![Swift Object Containers](images/object_containers.png) diff --git a/docs/openstack/management/vm-management.md b/docs/openstack/management/vm-management.md index 8fa2d09b..e6027e51 100644 --- a/docs/openstack/management/vm-management.md +++ b/docs/openstack/management/vm-management.md @@ -134,20 +134,20 @@ openstack server restart my-vm ## Create Snapshot -- Click _Action -> Create Snapshot_. +- Click _Action -> Create Snapshot_. -- Instances must have status `Active`, `Suspended`, or `Shutoff` to create snapshot. 
+- Instances must have status `Active`, `Suspended`, or `Shutoff` to create snapshot. -- This creates an image template from a VM instance also known as "Instance Snapshot" - as [described here](../backup/backup-with-snapshots.md#create-and-use-instance-snapshots). +- This creates an image template from a VM instance also known as "Instance Snapshot" + as [described here](../backup/backup-with-snapshots.md#create-and-use-instance-snapshots). -- The menu will automatically shift to _Project -> Compute -> Images_ once the - image is created. +- The menu will automatically shift to _Project -> Compute -> Images_ once the + image is created. -- The sole distinction between an _image_ directly uploaded to the image data - service, [glance](https://docs.openstack.org/glance) and an _image_ generated - through a snapshot is that the snapshot-created image possesses additional - properties in the glance database and defaults to being **private**. +- The sole distinction between an _image_ directly uploaded to the image data + service, [glance](https://docs.openstack.org/glance) and an _image_ generated + through a snapshot is that the snapshot-created image possesses additional + properties in the glance database and defaults to being **private**. !!! info "Glance Image Service" @@ -290,8 +290,8 @@ There are other options available if you wish to keep the virtual machine for future usage. These do, however, continue to use quota for the project even though the VM is not running. -- **Snapshot the VM** to keep an offline copy of the virtual machine that can be - performed as [described here](../backup/backup-with-snapshots.md#how-to-create-an-instance-snapshot). +- **Snapshot the VM** to keep an offline copy of the virtual machine that can be + performed as [described here](../backup/backup-with-snapshots.md#how-to-create-an-instance-snapshot). If however, the virtual machine is no longer required and no data on the associated system or ephemeral disk needs to be preserved, the following command @@ -315,24 +315,24 @@ Click _Action -> Delete Instance_. the deletion process, as failure to do so may lead to data corruption in both your data and the associated volume. -- If the instance is using [Ephemeral disk](../persistent-storage/volumes.md#ephemeral-disk): - It stops and removes the instance along with the ephemeral disk. - **All data will be permanently lost!** +- If the instance is using [Ephemeral disk](../persistent-storage/volumes.md#ephemeral-disk): + It stops and removes the instance along with the ephemeral disk. + **All data will be permanently lost!** -- If the instance is using [Volume-backed disk](../persistent-storage/volumes.md#volumes): - It stops and removes the instance. If **"Delete Volume on Instance Delete"** - was explicitely set to **Yes**, **All data will be permanently lost!**. If set - to **No** (which is default selected while launching an instance), the volume - may be used to boot a new instance, though any data stored in memory will be - permanently lost. For more in-depth information on making your VM setup and - data persistent, you can explore the details [here](../persistent-storage/volumes.md#how-do-you-make-your-vm-setup-and-data-persistent). +- If the instance is using [Volume-backed disk](../persistent-storage/volumes.md#volumes): + It stops and removes the instance. If **"Delete Volume on Instance Delete"** + was explicitely set to **Yes**, **All data will be permanently lost!**. 
If set + to **No** (which is default selected while launching an instance), the volume + may be used to boot a new instance, though any data stored in memory will be + permanently lost. For more in-depth information on making your VM setup and + data persistent, you can explore the details [here](../persistent-storage/volumes.md#how-do-you-make-your-vm-setup-and-data-persistent). -- Status will briefly change to **Deleting** while the instance is being removed. +- Status will briefly change to **Deleting** while the instance is being removed. The quota associated with this virtual machine will be returned to the project and you can review and verify that looking at your [OpenStack dashboard overview](../logging-in/dashboard-overview.md#compute-panel). -- Navigate to _Project -> Compute -> Overview_. +- Navigate to _Project -> Compute -> Overview_. --- diff --git a/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md b/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md index d97a076d..5076c026 100644 --- a/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md +++ b/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md @@ -3,15 +3,15 @@ First find the following details using openstack command, we would required these details during the creation of virtual machine. -- Flavor +- Flavor -- Image +- Image -- Network +- Network -- Security Group +- Security Group -- Key Name +- Key Name Get the flavor list using below openstack command: diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md index 50176ef2..b1d3fa37 100644 --- a/docs/openstack/openstack-cli/openstack-CLI.md +++ b/docs/openstack/openstack-cli/openstack-CLI.md @@ -24,23 +24,23 @@ appropriate environment variables. You can download the environment file with the credentials from the [OpenStack dashboard](https://stack.nerc.mghpcc.org/dashboard/identity/application_credentials/). -- Log in to the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org), choose - the project for which you want to download the OpenStack RC file. +- Log in to the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org), choose + the project for which you want to download the OpenStack RC file. -- Navigate to _Identity -> Application Credentials_. +- Navigate to _Identity -> Application Credentials_. -- Click on "Create Application Credential" button and provide a **Name** and **Roles** - for the application credential. All other fields are optional and leaving the - "Secret" field empty will set it to autogenerate (recommended). +- Click on "Create Application Credential" button and provide a **Name** and **Roles** + for the application credential. All other fields are optional and leaving the + "Secret" field empty will set it to autogenerate (recommended). ![OpenStackClient Credentials Setup](images/openstack_cli_cred.png) !!! note "Important Note" - Please note that an application credential is only valid for a single - project, and to access multiple projects you need to create an application - credential for each. You can switch projects by clicking on the project name - at the top right corner and choosing from the dropdown under "Project". + Please note that an application credential is only valid for a single + project, and to access multiple projects you need to create an application + credential for each. You can switch projects by clicking on the project name + at the top right corner and choosing from the dropdown under "Project". 
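If you already have a working CLI session for another project (for example, from
a previously downloaded openrc file), an application credential can also be created
without the dashboard. This is a minimal sketch only, with `my-cli-cred` as a
placeholder name:

```sh
# Create an application credential in the currently scoped project.
# Note the id and secret printed in the output: the secret is shown only once.
openstack application credential create my-cli-cred
```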
After clicking "Create Application Credential" button, the **ID** and **Secret** will be displayed and you will be prompted to `Download openrc file` @@ -95,13 +95,13 @@ For more information on configuring the OpenStackClient please see the Generally, the OpenStack terminal client offers the following methods: -- **list**: Lists information about objects currently in the cloud. +- **list**: Lists information about objects currently in the cloud. -- **show**: Displays information about a single object currently in the cloud. +- **show**: Displays information about a single object currently in the cloud. -- **create**: Creates a new object in the cloud. +- **create**: Creates a new object in the cloud. -- **set**: Edits an existing object in the cloud. +- **set**: Edits an existing object in the cloud. To test that you have everything configured, try out some commands. The following command lists all the images available to your project: diff --git a/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md b/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md index 33a603bd..4e46bae1 100644 --- a/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md +++ b/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md @@ -31,8 +31,8 @@ Make note of the device name of your volume. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To attach the volume to an instance using the CLI, do this: diff --git a/docs/openstack/persistent-storage/create-an-empty-volume.md b/docs/openstack/persistent-storage/create-an-empty-volume.md index 6d90b2bc..7b9da48d 100644 --- a/docs/openstack/persistent-storage/create-an-empty-volume.md +++ b/docs/openstack/persistent-storage/create-an-empty-volume.md @@ -43,8 +43,8 @@ A set of volume_image meta data is also copied from the image service. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To create a volume using the CLI, do this: diff --git a/docs/openstack/persistent-storage/delete-volumes.md b/docs/openstack/persistent-storage/delete-volumes.md index c868d9e3..262b61e2 100644 --- a/docs/openstack/persistent-storage/delete-volumes.md +++ b/docs/openstack/persistent-storage/delete-volumes.md @@ -30,8 +30,8 @@ confirm the action. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
To delete a volume using the CLI, do this: diff --git a/docs/openstack/persistent-storage/detach-a-volume.md b/docs/openstack/persistent-storage/detach-a-volume.md index 0157afcb..c9a9bd37 100644 --- a/docs/openstack/persistent-storage/detach-a-volume.md +++ b/docs/openstack/persistent-storage/detach-a-volume.md @@ -33,8 +33,8 @@ This will popup the following interface to proceed: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. #### Using the openstack client diff --git a/docs/openstack/persistent-storage/extending-volume.md b/docs/openstack/persistent-storage/extending-volume.md index 1927f6c3..93d766ef 100644 --- a/docs/openstack/persistent-storage/extending-volume.md +++ b/docs/openstack/persistent-storage/extending-volume.md @@ -6,9 +6,9 @@ VM and in **"Available"** status. The steps are as follows: -- Extend the volume to its new size +- Extend the volume to its new size -- Extend the filesystem to its new size +- Extend the filesystem to its new size ## Using Horizon dashboard @@ -28,8 +28,8 @@ Specify, the new extened size in GiB: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. ### Using the openstack client diff --git a/docs/openstack/persistent-storage/format-and-mount-the-volume.md b/docs/openstack/persistent-storage/format-and-mount-the-volume.md index e07b884b..51ff5f7a 100644 --- a/docs/openstack/persistent-storage/format-and-mount-the-volume.md +++ b/docs/openstack/persistent-storage/format-and-mount-the-volume.md @@ -127,22 +127,22 @@ partition style (GPT or MBR), see [Compare partition styles - GPT and MBR](https Format the New Volume: -- Select and hold (or right-click) the unallocated space of the new disk. +- Select and hold (or right-click) the unallocated space of the new disk. -- Select "New Simple Volume" and follow the wizard to create a new partition. +- Select "New Simple Volume" and follow the wizard to create a new partition. ![Windows Simple Volume Wizard Start](images/win_disk_simple_volume.png) -- Choose the file system (usually NTFS for Windows). +- Choose the file system (usually NTFS for Windows). -- Assign a drive letter or mount point. +- Assign a drive letter or mount point. Complete Formatting: -- Complete the wizard to format the new volume. +- Complete the wizard to format the new volume. 
-- Once formatting is complete, the new volume should be visible in File Explorer - as shown below: +- Once formatting is complete, the new volume should be visible in File Explorer + as shown below: ![Windows Simple Volume Wizard Start](images/win_new_drive.png) diff --git a/docs/openstack/persistent-storage/mount-the-object-storage.md b/docs/openstack/persistent-storage/mount-the-object-storage.md index d219acde..aed80104 100644 --- a/docs/openstack/persistent-storage/mount-the-object-storage.md +++ b/docs/openstack/persistent-storage/mount-the-object-storage.md @@ -5,11 +5,11 @@ We are using following setting for this purpose to mount the object storage to an NERC OpenStack VM: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- Setup and enable your S3 API credentials: +- Setup and enable your S3 API credentials: To access the API credentials, you must login through the OpenStack Dashboard and navigate to "Projects > API Access" where you can download the "Download @@ -52,14 +52,14 @@ parts are `EC2_ACCESS_KEY` and `EC2_SECRET_KEY`, keep them noted. openstack ec2 credentials create -- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: - `source *-openrc.sh` command. Sourcing the RC File will set the required environment - variables. +- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: + `source *-openrc.sh` command. Sourcing the RC File will set the required environment + variables. -- Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file - and uncomment "user_allow_other" option. +- Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file + and uncomment "user_allow_other" option. - sudo nano /etc/fuse.conf + sudo nano /etc/fuse.conf The output going to look like this: @@ -147,31 +147,31 @@ The object storage container i.e. "bucket1" will be mounted in the directory `~/ In this command, -- `mount-s3` is the Mountpoint for Amazon S3 package as installed in `/usr/bin/` - path we don't need to specify the full path. +- `mount-s3` is the Mountpoint for Amazon S3 package as installed in `/usr/bin/` + path we don't need to specify the full path. -- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. - `[nerc]`. +- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. + `[nerc]`. -- `--endpoint-url` corresponds to the Object Storage endpoint url for NERC Object - Storage. You don't need to modify this url. +- `--endpoint-url` corresponds to the Object Storage endpoint url for NERC Object + Storage. You don't need to modify this url. -- `--allow-other`: Allows other users to access the mounted filesystem. This is - particularly useful when multiple users need to access the mounted S3 bucket. - Only allowed if `user_allow_other` is set in `/etc/fuse.conf`. +- `--allow-other`: Allows other users to access the mounted filesystem. This is + particularly useful when multiple users need to access the mounted S3 bucket. + Only allowed if `user_allow_other` is set in `/etc/fuse.conf`. 
-- `--force-path-style`: Forces the use of path-style URLs when accessing the S3 - bucket. This is necessary when working with certain S3-compatible storage services - that do not support virtual-hosted-style URLs. +- `--force-path-style`: Forces the use of path-style URLs when accessing the S3 + bucket. This is necessary when working with certain S3-compatible storage services + that do not support virtual-hosted-style URLs. -- `--debug`: Enables debug mode, providing additional information about the mounting - process. +- `--debug`: Enables debug mode, providing additional information about the mounting + process. -- `bucket1` is the name of the container which contains the NERC Object Storage - resources. +- `bucket1` is the name of the container which contains the NERC Object Storage + resources. -- `~/bucket1` is the location of the folder in which you want to mount the Object - Storage filesystem. +- `~/bucket1` is the location of the folder in which you want to mount the Object + Storage filesystem. !!! tip "Important Note" @@ -436,25 +436,25 @@ The object storage container i.e. "bucket1" will be mounted in the directory `~/ In this command, -- `goofys` is the goofys binary as we already copied this in `/usr/bin/` path we - don't need to specify the full path. +- `goofys` is the goofys binary as we already copied this in `/usr/bin/` path we + don't need to specify the full path. -- `-o` stands for goofys options, and is handled differently. +- `-o` stands for goofys options, and is handled differently. -- `allow_other` Allows goofys with option `allow_other` only allowed if `user_allow_other` - is set in `/etc/fuse.conf`. +- `allow_other` Allows goofys with option `allow_other` only allowed if `user_allow_other` + is set in `/etc/fuse.conf`. -- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. - `[nerc]`. +- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. + `[nerc]`. -- `--endpoint` corresponds to the Object Storage endpoint url for NERC Object Storage. - You don't need to modify this url. +- `--endpoint` corresponds to the Object Storage endpoint url for NERC Object Storage. + You don't need to modify this url. -- `bucket1` is the name of the container which contains the NERC Object Storage - resources. +- `bucket1` is the name of the container which contains the NERC Object Storage + resources. -- `~/bucket1` is the location of the folder in which you want to mount the Object - Storage filesystem. +- `~/bucket1` is the location of the folder in which you want to mount the Object + Storage filesystem. In order to test whether the mount was successful, navigate to the directory in which you mounted the NERC container repository, for example: @@ -869,11 +869,11 @@ Verify, if the container is mounted successfully: A JuiceFS file system consists of two parts: -- **Object Storage:** Used for data storage. +- **Object Storage:** Used for data storage. -- **Metadata Engine:** A database used for storing metadata. In this case, we will - use a durable [**Redis**](https://redis.io/) in-memory database service that - provides extremely fast performance. +- **Metadata Engine:** A database used for storing metadata. In this case, we will + use a durable [**Redis**](https://redis.io/) in-memory database service that + provides extremely fast performance. 
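Before walking through the detailed setup, the overall flow is: format a JuiceFS file system against
the Redis metadata engine and the NERC object storage, then mount it. The following is only a
condensed sketch with placeholder values (bucket name, EC2 keys, and Redis password); each step is
covered properly in the sections below.

```sh
# Sketch only - replace the placeholders with your own values.
# 1. Format a JuiceFS file system backed by the object storage and Redis.
#    (Adjust the --bucket URL form if your setup expects a different style.)
juicefs format \
    --storage s3 \
    --bucket https://stack.nerc.mghpcc.org:13808/<your-bucket> \
    --access-key <EC2_ACCESS_KEY> \
    --secret-key <EC2_SECRET_KEY> \
    redis://:<redis-password>@127.0.0.1:6379/1 \
    myjfs

# 2. Mount it in the background on a local directory:
mkdir -p ~/jfs
juicefs mount -d redis://:<redis-password>@127.0.0.1:6379/1 ~/jfs
```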
#### Installation of the JuiceFS client @@ -921,7 +921,7 @@ init system, change this to `systemd` as shown here: ![Redis Server Config](images/redis-server-config.png) -- Binding to localhost: +- Binding to localhost: By default, Redis is only accessible from `localhost`. We need to verify that by locating this line by running: @@ -977,9 +977,9 @@ Also, check that binding to `localhost` is working fine by running the following !!! warning "Important Note" - The `netstat` command may not be available on your system by default. If - this is the case, you can install it (along with a number of other handy - networking tools) with the following command: `sudo apt install net-tools`. + The `netstat` command may not be available on your system by default. If + this is the case, you can install it (along with a number of other handy + networking tools) with the following command: `sudo apt install net-tools`. ##### Configuring a Redis Password @@ -1273,9 +1273,9 @@ After JuiceFS has been successfully formatted, follow this guide to clean up. JuiceFS client provides the destroy command to completely destroy a file system, which will result in: -- Deletion of all metadata entries of this file system +- Deletion of all metadata entries of this file system -- Deletion of all data blocks of this file system +- Deletion of all data blocks of this file system Use this command in the following format: diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index 3ac2b366..a75309bf 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -135,8 +135,8 @@ This will deactivate the public URL of the container and then it will show "Disa To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. #### Some Object Storage management examples @@ -253,18 +253,18 @@ To check the space used by a specific container This is a python client for the Swift API. There's a [Python API](https://github.com/openstack/python-swiftclient) (the `swiftclient` module), and a command-line script (`swift`). -- This example uses a `Python3` virtual environment, but you are free to choose - any other method to create a local virtual environment like `Conda`. +- This example uses a `Python3` virtual environment, but you are free to choose + any other method to create a local virtual environment like `Conda`. - python3 -m venv venv + python3 -m venv venv !!! note "Choosing Correct Python Interpreter" - Make sure you are able to use `python` or `python3` or **`py -3`** (For - Windows Only) to create a directory named `venv` (or whatever name you - specified) in your current working directory. + Make sure you are able to use `python` or `python3` or **`py -3`** (For + Windows Only) to create a directory named `venv` (or whatever name you + specified) in your current working directory. -- Activate the virtual environment by running: +- Activate the virtual environment by running: **on Linux/Mac:** `source venv/bin/activate` @@ -272,12 +272,12 @@ This is a python client for the Swift API. 
There's a [Python API](https://github #### Install [Python Swift Client page at PyPi](https://pypi.org/project/python-swiftclient/) -- Once virtual environment is activated, install `python-swiftclient` and `python-keystoneclient` +- Once virtual environment is activated, install `python-swiftclient` and `python-keystoneclient` pip install python-swiftclient python-keystoneclient -- Swift authenticates using a user, tenant, and key, which map to your OpenStack - username, project,and password. +- Swift authenticates using a user, tenant, and key, which map to your OpenStack + username, project,and password. For this, you need to download the **"NERC's OpenStack RC File"** with the credentials for your NERC project from the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org/). @@ -425,11 +425,11 @@ to access object storage on your NERC project. To run the `s3` or `s3api` commands, you need to have: -- AWS CLI installed, see - [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) - for more information. +- AWS CLI installed, see + [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) + for more information. -- The NERC's Swift End Point URL: `https://stack.nerc.mghpcc.org:13808` +- The NERC's Swift End Point URL: `https://stack.nerc.mghpcc.org:13808` !!! note "Understand these Amazon S3 terms" @@ -482,9 +482,9 @@ While clicking on "EC2 Credentials", this will download a file **zip file** incl openstack ec2 credentials create -- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: - `source *-openrc.sh` command. Sourcing the RC File will set the required environment - variables. +- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: + `source *-openrc.sh` command. Sourcing the RC File will set the required environment + variables. Then run aws configuration command which requires the `EC2_ACCESS_KEY` and `EC2_SECRET_KEY` keys that you noted from `ec2rc.sh` file (during the **"Configuring @@ -526,8 +526,8 @@ directory `~/.aws/config` with the ec2 profile and credentials as shown below: !!! note "Information" - We need to have a profile that you use must have permissions to allow - the AWS operations can be performed. + We need to have a profile that you use must have permissions to allow + the AWS operations can be performed. #### Listing buckets using **aws-cli** @@ -628,8 +628,8 @@ the S3 protocol. **Prerequisites**: -- S3cmd installed, see [Download and Install the latest version of the S3cmd](https://s3tools.org/download) - for more information. +- S3cmd installed, see [Download and Install the latest version of the S3cmd](https://s3tools.org/download) + for more information. #### Configuring s3cmd @@ -785,9 +785,9 @@ NERC's containers. To run the `rclone` commands, you need to have: -- `rclone` installed, see - [Downloading and Installing the latest version of the Rclone](https://rclone.org/downloads/) - for more information. +- `rclone` installed, see + [Downloading and Installing the latest version of the Rclone](https://rclone.org/downloads/) + for more information. #### Configuring Rclone @@ -1023,32 +1023,32 @@ using FTP, FTPS, SCP, SFTP, WebDAV or S3 file transfer protocols. **Prerequisites**: -- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) - for more information. 
+- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) + for more information. -- Go to WinSCP menu and open "Options > Preferences". +- Go to WinSCP menu and open "Options > Preferences". -- When the "Preferences" dialog window appears, select "Transfer" in the options - on the left pane. +- When the "Preferences" dialog window appears, select "Transfer" in the options + on the left pane. -- Click on "Edit" button. +- Click on "Edit" button. -- Then, on shown popup dialog box review the "Common options" group, uncheck the - "Preserve timestamp" option as shown below: +- Then, on shown popup dialog box review the "Common options" group, uncheck the + "Preserve timestamp" option as shown below: ![Disable Preserve TimeStamp](images/winscp-perserve-timestamp-disable.png) #### Configuring WinSCP -- Click on "New Session" tab button as shown below: +- Click on "New Session" tab button as shown below: ![Login](images/winscp-new-session.png) -- Select **"Amazon S3"** from the "File protocol" dropdown options as shown below: +- Select **"Amazon S3"** from the "File protocol" dropdown options as shown below: ![Choose Amazon S3 File Protocol](images/choose_S3_protocol.png) -- Provide the following required endpoint information: +- Provide the following required endpoint information: **"Host name"**: "stack.nerc.mghpcc.org" @@ -1062,9 +1062,9 @@ respectively. !!! note "Helpful Tips" - You can save your above configured session with some preferred name by - clicking the "Save" button and then giving a proper name to your session. - So that next time you don't need to again manually enter all your configuration. + You can save your above configured session with some preferred name by + clicking the "Save" button and then giving a proper name to your session. + So that next time you don't need to again manually enter all your configuration. #### Using WinSCP @@ -1088,20 +1088,20 @@ servers, enterprise file sharing, and cloud storage. **Prerequisites**: -- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) - for more information. +- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) + for more information. #### Configuring Cyberduck -- Click on "Open Connection" tab button as shown below: +- Click on "Open Connection" tab button as shown below: ![Open Connection](images/cyberduck-open-connection.png) -- Select **"Amazon S3"** from the dropdown options as shown below: +- Select **"Amazon S3"** from the dropdown options as shown below: ![Choose Amazon S3](images/cyberduck-select-Amazon-s3.png) -- Provide the following required endpoint information: +- Provide the following required endpoint information: **"Server"**: "stack.nerc.mghpcc.org" diff --git a/docs/openstack/persistent-storage/transfer-a-volume.md b/docs/openstack/persistent-storage/transfer-a-volume.md index 1847b87d..f5e5b776 100644 --- a/docs/openstack/persistent-storage/transfer-a-volume.md +++ b/docs/openstack/persistent-storage/transfer-a-volume.md @@ -75,12 +75,12 @@ below: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
### Using the openstack client -- Identifying volume to transfer in your source project +- Identifying volume to transfer in your source project openstack volume list +---------------------------+-----------+-----------+------+-------------+ @@ -89,7 +89,7 @@ openstack volume list | d8a5da4c-...-8b6678ce4936 | my-volume | available | 100 | | +---------------------------+-----------+-----------+------+-------------+ -- Create the transfer request +- Create the transfer request openstack volume transfer request create my-volume +------------+--------------------------------------+ @@ -104,14 +104,14 @@ openstack volume transfer request create my-volume !!! tip "Pro Tip" - If your volume name includes spaces, you need to enclose them in quotes, - i.e. `""`. - For example: `openstack volume transfer request create "My Volume"` + If your volume name includes spaces, you need to enclose them in quotes, + i.e. `""`. + For example: `openstack volume transfer request create "My Volume"` -- The volume can be checked as in the transfer status using - `openstack volume transfer request list` as follows and the volume is in status - `awaiting-transfer` while running `openstack volume show ` - as shown below: +- The volume can be checked as in the transfer status using + `openstack volume transfer request list` as follows and the volume is in status + `awaiting-transfer` while running `openstack volume show ` + as shown below: openstack volume transfer request list +---------------------------+------+--------------------------------------+ @@ -130,8 +130,8 @@ openstack volume show my-volume | status | awaiting-transfer | +------------------------------+--------------------------------------+ -- The user of the destination project can authenticate and receive the authentication - key reported above. The transfer can then be initiated. +- The user of the destination project can authenticate and receive the authentication + key reported above. The transfer can then be initiated. openstack volume transfer request accept --auth-key b92d98fec2766582 a16494cf-cfa0-47f6-b606-62573357922a +-----------+--------------------------------------+ @@ -142,7 +142,7 @@ openstack volume transfer request accept --auth-key b92d98fec2766582 a16494cf-cf | volume_id | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 | +-----------+--------------------------------------+ -- And the results confirmed in the volume list for the destination project. +- And the results confirmed in the volume list for the destination project. openstack volume list +---------------------------+-----------+-----------+------+-------------+ diff --git a/docs/openstack/persistent-storage/volumes.md b/docs/openstack/persistent-storage/volumes.md index ca11ab1c..fe710f44 100644 --- a/docs/openstack/persistent-storage/volumes.md +++ b/docs/openstack/persistent-storage/volumes.md @@ -44,31 +44,31 @@ another project as [described here](../persistent-storage/transfer-a-volume.md). Some uses for volumes: -- Persistent data storage for ephemeral instances. +- Persistent data storage for ephemeral instances. -- Transfer of data between projects +- Transfer of data between projects -- Bootable image where disk changes persist +- Bootable image where disk changes persist -- Mounting the disk of one instance to another for troubleshooting +- Mounting the disk of one instance to another for troubleshooting ## How do you make your VM setup and data persistent? 
-- By default, when the instance is launched from an **Image** or an - **Instance Snapshot**, the choice for utilizing persistent storage is configured - by selecting the **Yes** option for **"Create New Volume"**. It's crucial to - note that this configuration automatically creates persistent block storage - in the form of a Volume instead of using Ephemeral disk, which appears in - the "Volumes" list in the Horizon dashboard: _Project -> Volumes -> Volumes_. +- By default, when the instance is launched from an **Image** or an + **Instance Snapshot**, the choice for utilizing persistent storage is configured + by selecting the **Yes** option for **"Create New Volume"**. It's crucial to + note that this configuration automatically creates persistent block storage + in the form of a Volume instead of using Ephemeral disk, which appears in + the "Volumes" list in the Horizon dashboard: _Project -> Volumes -> Volumes_. ![Instance Persistent Storage Option](images/instance-persistent-storage-option.png) -- By default, the setting for **"Delete Volume on Instance Delete"** is configured - to use **No**. This setting ensures that the volume created during the launch - of a virtual machine remains persistent and won't be deleted alongside the - instance unless explicitly chosen as "Yes". Such instances boot from a - **bootable volume**, utilizing an existing volume listed in the - _Project -> Volumes -> Volumes_ menu. +- By default, the setting for **"Delete Volume on Instance Delete"** is configured + to use **No**. This setting ensures that the volume created during the launch + of a virtual machine remains persistent and won't be deleted alongside the + instance unless explicitly chosen as "Yes". Such instances boot from a + **bootable volume**, utilizing an existing volume listed in the + _Project -> Volumes -> Volumes_ menu. To minimize the risk of potential data loss, we highly recommend consistently [creating backups through snapshots](../backup/backup-with-snapshots.md). diff --git a/docs/other-tools/CI-CD/CI-CD-pipeline.md b/docs/other-tools/CI-CD/CI-CD-pipeline.md index 76298584..acc0e113 100644 --- a/docs/other-tools/CI-CD/CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/CI-CD-pipeline.md @@ -9,19 +9,19 @@ pipelines are a practice focused on improving software delivery using automation The steps that form a CI/CD pipeline are distinct subsets of tasks that are grouped into a pipeline stage. Typical pipeline stages include: -- **Build** - The stage where the application is compiled. +- **Build** - The stage where the application is compiled. -- **Test** - The stage where code is tested. Automation here can save both time - and effort. +- **Test** - The stage where code is tested. Automation here can save both time + and effort. -- **Release** - The stage where the application is delivered to the central repository. +- **Release** - The stage where the application is delivered to the central repository. -- **Deploy** - In this stage code is deployed to production environment. +- **Deploy** - In this stage code is deployed to production environment. -- **Validation and compliance** - The steps to validate a build are determined - by the needs of your organization. Image security scanning, security scanning - and code analysis of applications ensure the quality of images and written application's - code. +- **Validation and compliance** - The steps to validate a build are determined + by the needs of your organization. 
Image security scanning, security scanning + and code analysis of applications ensure the quality of images and written application's + code. ![CI/CD Pipeline Stages](images/ci-cd-flow.png) _Figure: CI/CD Pipeline Stages_ diff --git a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md index 76a15d8f..76168e72 100644 --- a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md +++ b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md @@ -23,7 +23,7 @@ workflow. ## Deploy an Application to your NERC OpenShift Project -- **Prerequisites** +- **Prerequisites** You must have at least one active **NERC-OCP (OpenShift)** type resource allocation. You can refer to [this documentation](../../../get-started/allocation/requesting-an-allocation.md#request-a-new-openshift-resource-allocation-for-an-openshift-project) @@ -77,21 +77,21 @@ workflow. 7. Enable and Update GitHub Actions Pipeline on your own forked repo: - - Enable the OpenShift Workflow in the Actions tab of in your GitHub repository. + - Enable the OpenShift Workflow in the Actions tab of in your GitHub repository. - - Update the provided sample OpenShift workflow YAML file i.e. `openshift.yml`, - which is located at "`https://github.com//simple-node-app/actions/workflows/openshift.yml`". + - Update the provided sample OpenShift workflow YAML file i.e. `openshift.yml`, + which is located at "`https://github.com//simple-node-app/actions/workflows/openshift.yml`". !!! info "Very Important Information" - Workflow execution on OpenShift pipelines follows these steps: + Workflow execution on OpenShift pipelines follows these steps: - 1. Checkout your repository - 2. Perform a container image build - 3. Push the built image to the GitHub Container Registry (GHCR) or - your preferred Registry - 4. Log in to your NERC OpenShift cluster's project space - 5. Create an OpenShift app from the image and expose it to the internet + 1. Checkout your repository + 2. Perform a container image build + 3. Push the built image to the GitHub Container Registry (GHCR) or + your preferred Registry + 4. Log in to your NERC OpenShift cluster's project space + 5. Create an OpenShift app from the image and expose it to the internet 8. Edit the top-level 'env' section as marked with '🖊️' if the defaults are not suitable for your project. diff --git a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md index f4207c29..0e89ec4b 100644 --- a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md @@ -16,20 +16,20 @@ _Figure: CI/CD Pipeline To Deploy To Kubernetes Cluster Using Jenkins on NERC_ ## Setup a Jenkins Server VM -- Launch 1 Linux machine based on `ubuntu-20.04-x86_64` and `cpu-su.2` flavor with - 2vCPU, 8GB RAM, and 20GB storage. +- Launch 1 Linux machine based on `ubuntu-20.04-x86_64` and `cpu-su.2` flavor with + 2vCPU, 8GB RAM, and 20GB storage. -- Make sure you have added rules in the - [Security Groups](../../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the instance. 
-- Setup a new Security Group with the following rules exposing **port 8080** and - attach it to your new instance. +- Setup a new Security Group with the following rules exposing **port 8080** and + attach it to your new instance. ![Jenkins Server Security Group](images/security_groups_jenkins.png) -- [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to your new instance so that you will be able to ssh into this machine: +- [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to your new instance so that you will be able to ssh into this machine: ssh ubuntu@ -A -i @@ -43,16 +43,16 @@ Upon successfully SSH accessing the machine, execute the following dependencies: Run the following steps as non-root user i.e. **ubuntu**. -- Update the repositories and packages: +- Update the repositories and packages: sudo apt-get update && sudo apt-get upgrade -y -- Turn off `swap` +- Turn off `swap` swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` sudo apt-get update && sudo apt-get install -y apt-transport-https curl @@ -60,12 +60,12 @@ Upon successfully SSH accessing the machine, execute the following dependencies: ## Download and install the latest version of **Docker CE** -- Download and install Docker CE: +- Download and install Docker CE: curl -fsSL https://get.docker.com -o get-docker.sh sudo sh get-docker.sh -- Configure the Docker daemon: +- Configure the Docker daemon: sudo usermod -aG docker $USER && newgrp docker @@ -75,23 +75,23 @@ Upon successfully SSH accessing the machine, execute the following dependencies: **kubectl**: the command line util to talk to your cluster. -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ apt-key add - -- add kubernetes apt repo +- add kubernetes apt repo cat < Manage Plugins" as shown below: +- Jenkins has a wide range of plugin options. From your Jenkins dashboard navigate + to "Manage Jenkins > Manage Plugins" as shown below: ![Jenkins Plugin Installation](images/plugins-installation.png) @@ -171,8 +171,8 @@ copy and paste on the web GUI on the browser. ## Create the required Credentials -- Create a global credential for your Docker Hub Registry by providing the username - and password that will be used by the Jenkins pipelines: +- Create a global credential for your Docker Hub Registry by providing the username + and password that will be used by the Jenkins pipelines: 1. Click on the "Manage Jenkins" menu and then click on the "Manage Credentials" link as shown below: @@ -188,8 +188,8 @@ copy and paste on the web GUI on the browser. ![Adding Credentials](images/add-credentials.png) -- First, add the **'DockerHub'** credentials as 'Username with password' with the - ID `dockerhublogin`. +- First, add the **'DockerHub'** credentials as 'Username with password' with the + ID `dockerhublogin`. a. Select the Kind "Username with password" from the dropdown options. @@ -200,9 +200,9 @@ copy and paste on the web GUI on the browser. ![Docker Hub Credentials](images/docker-hub-credentials.png) -- Config the **'Kubeconfig'** credentials as 'Secret file' that holds Kubeconfig - file from K8s master i.e. 
located at `/etc/kubernetes/admin.conf` with the ID - 'kubernetes' +- Config the **'Kubeconfig'** credentials as 'Secret file' that holds Kubeconfig + file from K8s master i.e. located at `/etc/kubernetes/admin.conf` with the ID + 'kubernetes' a. Click on the "Add Credentials" button in the left pane. @@ -247,8 +247,8 @@ To create a fork of the example `nodeapp` repository: ## Modify the Jenkins Declarative Pipeline Script file -- Modify the provided ‘**Jenkinsfile**’ to specify your own Docker Hub account - and github repository as specified in "``" and "``". +- Modify the provided ‘**Jenkinsfile**’ to specify your own Docker Hub account + and github repository as specified in "``" and "``". !!! warning "Very Important Information" @@ -261,7 +261,7 @@ To create a fork of the example `nodeapp` repository: password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID assigned for the Kubeconfig credential file. -- Below is an example of a Jenkins declarative Pipeline Script file: +- Below is an example of a Jenkins declarative Pipeline Script file: pipeline { @@ -328,8 +328,8 @@ To create a fork of the example `nodeapp` repository: ## Setup a Pipeline -- Once you review the provided **Jenkinsfile** and understand the stages, - you can now create a pipeline to trigger it on your newly setup Jenkins server: +- Once you review the provided **Jenkinsfile** and understand the stages, + you can now create a pipeline to trigger it on your newly setup Jenkins server: a. Click on the "New Item" link. @@ -360,10 +360,10 @@ To create a fork of the example `nodeapp` repository: ## How to manually Trigger the Pipeline -- Finally, click on the **"Build Now"** menu link on right side navigation that - will triggers the Pipeline process i.e. Build docker image, Push Image to your - Docker Hub Registry and Pull the image from Docker Registry, Remove local Docker - images and then Deploy to K8s Cluster as shown below: +- Finally, click on the **"Build Now"** menu link on right side navigation that + will triggers the Pipeline process i.e. Build docker image, Push Image to your + Docker Hub Registry and Pull the image from Docker Registry, Remove local Docker + images and then Deploy to K8s Cluster as shown below: ![Jenkins Pipeline Build Now](images/jenkins-pipeline-build.png) diff --git a/docs/other-tools/apache-spark/spark.md b/docs/other-tools/apache-spark/spark.md index 7d0610d3..0478e1f7 100644 --- a/docs/other-tools/apache-spark/spark.md +++ b/docs/other-tools/apache-spark/spark.md @@ -27,16 +27,16 @@ and Scala applications using the IP address of the master VM. ### Setup a Master VM -- To create a master VM for the first time, ensure that the "Image" dropdown option - is selected. In this example, we selected **ubuntu-22.04-x86_64** and the `cpu-su.2` - flavor is being used. +- To create a master VM for the first time, ensure that the "Image" dropdown option + is selected. In this example, we selected **ubuntu-22.04-x86_64** and the `cpu-su.2` + flavor is being used. -- Make sure you have added rules in the - [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the instance. 
-- [Assign a Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to your new instance so that you will be able to ssh into this machine: +- [Assign a Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to your new instance so that you will be able to ssh into this machine: ```sh ssh ubuntu@ -A -i @@ -48,14 +48,14 @@ and Scala applications using the IP address of the master VM. ssh ubuntu@199.94.61.4 -A -i cloud.key ``` -- Upon successfully accessing the machine, execute the following dependencies: +- Upon successfully accessing the machine, execute the following dependencies: ```sh sudo apt-get -y update sudo apt install default-jre -y ``` -- Download and install Scala: +- Download and install Scala: ```sh wget https://downloads.lightbend.com/scala/2.13.10/scala-2.13.10.deb @@ -68,7 +68,7 @@ and Scala applications using the IP address of the master VM. Installing Scala means installing various command-line tools such as the Scala compiler and build tools. -- Download and unpack Apache Spark: +- Download and unpack Apache Spark: ```sh SPARK_VERSION="3.4.2" @@ -86,7 +86,7 @@ and Scala applications using the IP address of the master VM. exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). -- Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: +- Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: ```sh ssh-keygen -t rsa @@ -113,30 +113,30 @@ and Scala applications using the IP address of the master VM. +----[SHA256]-----+ ``` -- Copy and append the contents of **SSH public key** i.e. `~/.ssh/id_rsa.pub` to - the `~/.ssh/authorized_keys` file. +- Copy and append the contents of **SSH public key** i.e. `~/.ssh/id_rsa.pub` to + the `~/.ssh/authorized_keys` file. ### Create a Volume Snapshot of the master VM -- Once you're logged in to NERC's Horizon dashboard. You need to **Shut Off** the - master vm before creating a volume snapshot. +- Once you're logged in to NERC's Horizon dashboard. You need to **Shut Off** the + master vm before creating a volume snapshot. Click _Action -> Shut Off Instance_. Status will change to `Shutoff`. -- Then, create a snapshot of its attached volume by clicking on the "Create snapshot" - from the _Project -> Volumes -> Volumes_ as [described here](../../openstack/backup/backup-with-snapshots.md#volume-snapshots). +- Then, create a snapshot of its attached volume by clicking on the "Create snapshot" + from the _Project -> Volumes -> Volumes_ as [described here](../../openstack/backup/backup-with-snapshots.md#volume-snapshots). ### Create Two Worker Instances from the Volume Snapshot -- Once a snapshot is created and is in "Available" status, you can view and manage - it under the Volumes menu in the Horizon dashboard under Volume Snapshots. +- Once a snapshot is created and is in "Available" status, you can view and manage + it under the Volumes menu in the Horizon dashboard under Volume Snapshots. Navigate to _Project -> Volumes -> Snapshots_. -- You have the option to directly launch this volume as an instance by clicking - on the arrow next to "Create Volume" and selecting "Launch as Instance". +- You have the option to directly launch this volume as an instance by clicking + on the arrow next to "Create Volume" and selecting "Launch as Instance". 
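If you prefer the command line over the dashboard for this step, a rough equivalent with the
OpenStack client looks like the sketch below. The snapshot ID, flavor, network, key pair, and
security group are all placeholders, and the `--snapshot` option requires a reasonably recent
python-openstackclient.

```sh
# Sketch only - all names and IDs below are placeholders.
openstack volume snapshot list

openstack server create \
    --flavor <worker-flavor> \
    --snapshot <snapshot-id> \
    --network <your-network> \
    --key-name <your-keypair> \
    --security-group <your-security-group> \
    --min 2 --max 2 \
    spark-worker
```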
**NOTE:** Specify **Count: 2** to launch 2 instances using the volume snapshot as shown below: @@ -155,21 +155,21 @@ Additionally, during launch, you will have the option to choose your preferred flavor for the worker nodes, which can differ from the master VM based on your computational requirements. -- Navigate to _Project -> Compute -> Instances_. +- Navigate to _Project -> Compute -> Instances_. -- Restart the shutdown master VM, click _Action -> Start Instance_. +- Restart the shutdown master VM, click _Action -> Start Instance_. -- The final set up for our Spark cluster looks like this, with 1 master node and - 2 worker nodes: +- The final set up for our Spark cluster looks like this, with 1 master node and + 2 worker nodes: ![Spark Cluster VMs](images/spark-nodes.png) ### Configure Spark on the Master VM -- SSH login into the master VM again. +- SSH login into the master VM again. -- Update the `/etc/hosts` file to specify all three hostnames with their corresponding - internal IP addresses. +- Update the `/etc/hosts` file to specify all three hostnames with their corresponding + internal IP addresses. ```sh sudo nano /etc/hosts @@ -202,18 +202,18 @@ computational requirements. 192.168.0.136 worker2 ``` -- Verify that you can SSH into both worker nodes by using `ssh worker1` and - `ssh worker2` from the Spark master node's terminal. +- Verify that you can SSH into both worker nodes by using `ssh worker1` and + `ssh worker2` from the Spark master node's terminal. -- Copy the sample configuration file for the Spark: +- Copy the sample configuration file for the Spark: ```sh cd /usr/local/spark/conf/ cp spark-env.sh.template spark-env.sh ``` -- Update the environment variables file i.e. `spark-env.sh` to include the following - information: +- Update the environment variables file i.e. `spark-env.sh` to include the following + information: ```sh export SPARK_MASTER_HOST='' @@ -237,15 +237,15 @@ computational requirements. echo "export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64" >> spark-env.sh ``` -- Source the changed environment variables file i.e. `spark-env.sh`: +- Source the changed environment variables file i.e. `spark-env.sh`: ```sh source spark-env.sh ``` -- Create a file named `slaves` in the Spark configuration directory (i.e., - `/usr/local/spark/conf/`) that specifies all 3 hostnames (nodes) as specified - in `/etc/hosts`: +- Create a file named `slaves` in the Spark configuration directory (i.e., + `/usr/local/spark/conf/`) that specifies all 3 hostnames (nodes) as specified + in `/etc/hosts`: ```sh sudo cat slaves @@ -256,9 +256,9 @@ computational requirements. ## Run the Spark cluster from the Master VM -- SSH into the master VM again if you are not already logged in. +- SSH into the master VM again if you are not already logged in. -- You need to run the Spark cluster from `/usr/local/spark`: +- You need to run the Spark cluster from `/usr/local/spark`: ```sh cd /usr/local/spark @@ -283,9 +283,9 @@ that you can use to monitor the status and resource consumption of your Spark cl Apache Spark provides different web UIs: **Master web UI**, **Worker web UI**, and **Application web UI**. -- You can connect to the **Master web UI** using - [SSH Port Forwarding, aka SSH Tunneling](https://www.ssh.com/academy/ssh/tunneling-example) - i.e. **Local Port Forwarding** from your local machine's terminal by running: +- You can connect to the **Master web UI** using + [SSH Port Forwarding, aka SSH Tunneling](https://www.ssh.com/academy/ssh/tunneling-example) + i.e. 
**Local Port Forwarding** from your local machine's terminal by running: ```sh ssh -N -L :localhost:8080 @ -i @@ -301,19 +301,19 @@ that you can use to monitor the status and resource consumption of your Spark cl ssh -N -L 8080:localhost:8080 ubuntu@199.94.61.4 -i ~/.ssh/cloud.key ``` -- Once the SSH Tunneling is successful, please do not close or stop the terminal - where you are running the SSH Tunneling. Instead, log in to the Master web UI - using your web browser: `http://localhost:` i.e. `http://localhost:8080`. +- Once the SSH Tunneling is successful, please do not close or stop the terminal + where you are running the SSH Tunneling. Instead, log in to the Master web UI + using your web browser: `http://localhost:` i.e. `http://localhost:8080`. The Master web UI offers an overview of the Spark cluster, showcasing the following details: -- Master URL and REST URL -- Available CPUs and memory for the Spark cluster -- Status and allocated resources for each worker -- Details on active and completed applications, including their status, resources, - and duration -- Details on active and completed drivers, including their status and resources +- Master URL and REST URL +- Available CPUs and memory for the Spark cluster +- Status and allocated resources for each worker +- Details on active and completed applications, including their status, resources, + and duration +- Details on active and completed drivers, including their status and resources The Master web UI appears as shown below when you navigate to `http://localhost:` i.e. `http://localhost:8080` from your web browser: @@ -326,7 +326,7 @@ resources for both the Spark cluster and individual applications. ## Preparing Jobs for Execution and Examination -- To run jobs from `/usr/local/spark`, execute the following commands: +- To run jobs from `/usr/local/spark`, execute the following commands: ```sh cd /usr/local/spark @@ -339,7 +339,7 @@ resources for both the Spark cluster and individual applications. [downloaded and installed previously](#setup-a-master-vm) as the value of `SPARK_VERSION` in the above script. -- **Single Node Job:** +- **Single Node Job:** Let's quickly start to run a simple job: @@ -347,7 +347,7 @@ resources for both the Spark cluster and individual applications. ./bin/spark-submit --driver-memory 2g --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.13-$SPARK_VERSION.jar 50 ``` -- **Cluster Mode Job:** +- **Cluster Mode Job:** Let's submit a longer and more complex job with many tasks that will be distributed among the multi-node cluster, and then view the Master web UI: diff --git a/docs/other-tools/index.md b/docs/other-tools/index.md index 97fef9f4..8c11a703 100644 --- a/docs/other-tools/index.md +++ b/docs/other-tools/index.md @@ -1,8 +1,8 @@ # Kubernetes -- [Kubernetes Overview](kubernetes/kubernetes.md) +- [Kubernetes Overview](kubernetes/kubernetes.md) -- [K8s Flavors Comparision](kubernetes/comparisons.md) +- [K8s Flavors Comparision](kubernetes/comparisons.md) ## i. **Kubernetes Development environment** @@ -38,7 +38,7 @@ ## CI/ CD Tools -- [CI/CD Overview](CI-CD/CI-CD-pipeline.md) +- [CI/CD Overview](CI-CD/CI-CD-pipeline.md) 1. 
Using Jenkins @@ -52,6 +52,6 @@ ## Apache Spark -- [Apache Spark](apache-spark/spark.md) +- [Apache Spark](apache-spark/spark.md) --- diff --git a/docs/other-tools/kubernetes/k0s.md b/docs/other-tools/kubernetes/k0s.md index fcfa5aab..ae12d59e 100644 --- a/docs/other-tools/kubernetes/k0s.md +++ b/docs/other-tools/kubernetes/k0s.md @@ -2,26 +2,26 @@ ## Key Features -- Available as a single static binary -- Offers a self-hosted, isolated control plane -- Supports a variety of storage backends, including etcd, SQLite, MySQL (or any - compatible), and PostgreSQL. -- Offers an Elastic control plane -- Vanilla upstream Kubernetes -- Supports custom container runtimes (containerd is the default) -- Supports custom Container Network Interface (CNI) plugins (calico is the default) -- Supports x86_64 and arm64 +- Available as a single static binary +- Offers a self-hosted, isolated control plane +- Supports a variety of storage backends, including etcd, SQLite, MySQL (or any + compatible), and PostgreSQL. +- Offers an Elastic control plane +- Vanilla upstream Kubernetes +- Supports custom container runtimes (containerd is the default) +- Supports custom Container Network Interface (CNI) plugins (calico is the default) +- Supports x86_64 and arm64 ## Pre-requisite We will need 1 VM to create a single node kubernetes cluster using `k0s`. We are using following setting for this purpose: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -39,23 +39,23 @@ We are using following setting for this purpose: Run the below command on the Ubuntu VM: -- SSH into **k0s** machine +- SSH into **k0s** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Download k0s: +- Download k0s: ```sh curl -sSLf https://get.k0s.sh | sudo sh ``` -- Install k0s as a service: +- Install k0s as a service: ```sh k0s install controller --single @@ -68,13 +68,13 @@ Run the below command on the Ubuntu VM: INFO[2021-10-12 01:46:01] Installing k0s service ``` -- Start `k0s` as a service: +- Start `k0s` as a service: ```sh k0s start ``` -- Check service, logs and `k0s` status: +- Check service, logs and `k0s` status: ```sh k0s status @@ -85,7 +85,7 @@ Run the below command on the Ubuntu VM: Workloads: true ``` -- Access your cluster using `kubectl`: +- Access your cluster using `kubectl`: ```sh k0s kubectl get nodes @@ -107,19 +107,19 @@ Run the below command on the Ubuntu VM: ## Uninstall k0s -- Stop the service: +- Stop the service: ```sh sudo k0s stop ``` -- Execute the `k0s reset` command - cleans up the installed system service, data - directories, containers, mounts and network namespaces. +- Execute the `k0s reset` command - cleans up the installed system service, data + directories, containers, mounts and network namespaces. 
```sh sudo k0s reset ``` -- Reboot the system +- Reboot the system --- diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md index 967c011a..2d060355 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md @@ -21,7 +21,7 @@ Here, `--server 3`: specifies requests three nodes to be created with the role s and `--image rancher/k3s:latest`: specifies the K3s image to be used here we are using `latest` -- Switch context to the new cluster: +- Switch context to the new cluster: ```sh kubectl config use-context k3d-k3s-default diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md index 0112ddfd..74486ea0 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md @@ -79,15 +79,15 @@ curl -sfL https://get.k3s.io | sh -s - server \ --tls-san ``` -- Verify all master nodes are visible to one another: +- Verify all master nodes are visible to one another: ```sh sudo k3s kubectl get node ``` -- Generate **token** from one of the K3s Master VMs: - You need to extract a token from the master that will be used to join the nodes - to the control plane by running following command on one of the K3s master node: +- Generate **token** from one of the K3s Master VMs: + You need to extract a token from the master that will be used to join the nodes + to the control plane by running following command on one of the K3s master node: ```sh sudo cat /var/lib/rancher/k3s/server/node-token @@ -127,7 +127,7 @@ sudo systemctl stop k3s **The third server will take over at this point.** -- To restart servers manually: +- To restart servers manually: ```sh sudo systemctl restart k3s @@ -139,11 +139,11 @@ sudo systemctl stop k3s Your local development machine must have installed `kubectl`. -- Copy kubernetes config to your local machine: - Copy the `kubeconfig` file's content located at the K3s master node at `/etc/rancher/k3s/k3s.yaml` - to your local machine's `~/.kube/config` file. Before saving, please change the - cluster server path from **127.0.0.1** to **``**. This - will allow your local machine to see the cluster nodes: +- Copy kubernetes config to your local machine: + Copy the `kubeconfig` file's content located at the K3s master node at `/etc/rancher/k3s/k3s.yaml` + to your local machine's `~/.kube/config` file. Before saving, please change the + cluster server path from **127.0.0.1** to **``**. This + will allow your local machine to see the cluster nodes: ```sh kubectl get nodes @@ -162,7 +162,7 @@ to use for _Installation_: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml ``` -- Dashboard RBAC Configuration: +- Dashboard RBAC Configuration: `dashboard.admin-user.yml` @@ -191,7 +191,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a namespace: kubernetes-dashboard ``` -- Deploy the `admin-user` configuration: +- Deploy the `admin-user` configuration: ```sh sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml @@ -199,17 +199,17 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a !!! 
note "Important Note" - If you're doing this from your local development machine, remove `sudo k3s` - and just use `kubectl`) + If you're doing this from your local development machine, remove + `sudo k3s` and just use `kubectl`) -- Get bearer **token** +- Get bearer **token** ```sh sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token \ | grep ^token ``` -- Start _dashboard_ locally: +- Start _dashboard_ locally: ```sh sudo k3s kubectl proxy @@ -223,13 +223,13 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a ## Deploying Nginx using deployment -- Create a deployment `nginx.yaml`: +- Create a deployment `nginx.yaml`: ```sh vi nginx.yaml ``` -- Copy and paste the following content in `nginx.yaml`: +- Copy and paste the following content in `nginx.yaml`: ```sh apiVersion: apps/v1 @@ -259,7 +259,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a sudo k3s kubectl apply -f nginx.yaml ``` -- Verify the nginx pod is in **Running** state: +- Verify the nginx pod is in **Running** state: ```sh sudo k3s kubectl get pods --all-namespaces @@ -277,13 +277,13 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a kubectl get pods -A -o wide ``` -- Scale the pods to available agents: +- Scale the pods to available agents: ```sh sudo k3s kubectl scale --replicas=2 deploy/mysite ``` -- View all deployment status: +- View all deployment status: ```sh sudo k3s kubectl get deploy mysite @@ -292,7 +292,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a mysite 2/2 2 2 85s ``` -- Delete the nginx deployment and pod: +- Delete the nginx deployment and pod: ```sh sudo k3s kubectl delete -f nginx.yaml diff --git a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md index fe2ffaaa..ca720590 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md @@ -16,14 +16,14 @@ Availability clusters just with few commands. ## Install **Docker** -- Install container runtime - **docker** +- Install container runtime - **docker** ```sh apt-get install docker.io -y ``` -- Configure the Docker daemon, in particular to use systemd for the management - of the container’s cgroups +- Configure the Docker daemon, in particular to use systemd for the management + of the container’s cgroups ```sh cat < @@ -70,7 +70,7 @@ k3sup --help k3sup install --ip $SERVER_IP --user $USER ``` -- On _Agent_ Node: +- On _Agent_ Node: Next join one or more `agents` to the cluster: @@ -133,7 +133,7 @@ k3sup join --user root --server-ip $LB_IP --ip $AGENT2 \ There will be a kubeconfig file created in the current working directory with the IP address of the LoadBalancer set for kubectl to use. 
-- Check the nodes have joined: +- Check the nodes have joined: ```sh export KUBECONFIG=`pwd`/kubeconfig diff --git a/docs/other-tools/kubernetes/k3s/k3s.md b/docs/other-tools/kubernetes/k3s/k3s.md index a744ebd1..f2bd4d70 100644 --- a/docs/other-tools/kubernetes/k3s/k3s.md +++ b/docs/other-tools/kubernetes/k3s/k3s.md @@ -2,24 +2,24 @@ ## Features -- Lightweight certified K8s distro +- Lightweight certified K8s distro -- Built for production operations +- Built for production operations -- 40MB binary, 250MB memeory consumption +- 40MB binary, 250MB memeory consumption -- Single process w/ integrated K8s master, Kubelet, and containerd +- Single process w/ integrated K8s master, Kubelet, and containerd -- Supports not only `etcd` to hold the cluster state, but also `SQLite` - (for single-node, simpler setups) or external DBs like `MySQL` and `PostgreSQL` +- Supports not only `etcd` to hold the cluster state, but also `SQLite` + (for single-node, simpler setups) or external DBs like `MySQL` and `PostgreSQL` -- Open source project +- Open source project ## Components and architecure ![K3s Components and architecure](../images/k3s_architecture.png) -- High-Availability K3s Server with an External DB: +- High-Availability K3s Server with an External DB: ![K3s Components and architecure](../images/k3s_high_availability.png) or, ![K3s Components and architecure](../images/k3s_ha_architecture.jpg) @@ -32,16 +32,16 @@ We will need 1 control-plane(master) and 2 worker nodes to create a single control-plane kubernetes cluster using `k3s`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also - [assign Floating IP](../../../openstack/../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to the master node. +- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also + [assign Floating IP](../../../openstack/../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to the master node. -- 2 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. +- 2 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../../openstack/../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. ## Networking @@ -59,18 +59,18 @@ on each node. If you plan on achieving high availability with **embedded etcd**, server nodes must be accessible to each other on ports **2379** and **2380**. -- Create 1 security group with appropriate [Inbound Rules for K3s Server Nodes](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking) - that will be used by all 3 nodes: +- Create 1 security group with appropriate [Inbound Rules for K3s Server Nodes](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking) + that will be used by all 3 nodes: ![Inbound Rules for K3s Server Nodes](../images/k3s_security_group.png) !!! 
note "Important Note" - The VXLAN overlay networking port on nodes should not be exposed to the world - as it opens up your cluster network to be accessed by anyone. Run your nodes - behind a firewall/security group that disables access to port **8472**. + The VXLAN overlay networking port on nodes should not be exposed to the world + as it opens up your cluster network to be accessed by anyone. Run your nodes + behind a firewall/security group that disables access to port **8472**. -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -86,25 +86,25 @@ must be accessible to each other on ports **2379** and **2380**. In this step, you will setup the following nodes: -- k3s-master +- k3s-master -- k3s-worker1 +- k3s-worker1 -- k3s-worker2 +- k3s-worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Switch as root: `sudo su` +- Switch as root: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh apt-get update && apt-get install -y apt-transport-https curl @@ -114,14 +114,14 @@ The below steps will be performed on all the above mentioned nodes: ## Install **Docker** -- Install container runtime - **docker** +- Install container runtime - **docker** ```sh apt-get install docker.io -y ``` -- Configure the Docker daemon, in particular to use systemd for the management - of the container’s cgroups +- Configure the Docker daemon, in particular to use systemd for the management + of the container’s cgroups ```sh cat < ``` -- We have successfully deployed nginx web-proxy on k3s. Go to browser, visit `http://` - i.e. to check the nginx default page. +- We have successfully deployed nginx web-proxy on k3s. Go to browser, visit `http://` + i.e. to check the nginx default page. ## Upgrade K3s Using the Installation Script diff --git a/docs/other-tools/kubernetes/kind.md b/docs/other-tools/kubernetes/kind.md index c4712153..4e7720fc 100644 --- a/docs/other-tools/kubernetes/kind.md +++ b/docs/other-tools/kubernetes/kind.md @@ -5,11 +5,11 @@ We will need 1 VM to create a single node kubernetes cluster using `kind`. We are using following setting for this purpose: -- 1 Linux machine, `almalinux-9-x86_64`, `cpu-su.2` flavor with 2vCPU, 8GB RAM, - 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `almalinux-9-x86_64`, `cpu-su.2` flavor with 2vCPU, 8GB RAM, + 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. 
-- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -27,11 +27,11 @@ We are using following setting for this purpose: Run the below command on the AlmaLinux VM: -- SSH into **kind** machine +- SSH into **kind** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Execute the below command to initialize the cluster: +- Execute the below command to initialize the cluster: Please remove `container-tools` module that includes stable versions of podman, buildah, skopeo, runc, conmon, etc as well as dependencies and will be removed @@ -62,7 +62,7 @@ sudo install -o root -g root -m 0755 kubectl /usr/bin/kubectl chmod +x /usr/bin/kubectl ``` -- Test to ensure that the `kubectl` is installed: +- Test to ensure that the `kubectl` is installed: ```sh kubectl version --client @@ -88,7 +88,7 @@ kind version kind v0.11.1 go1.16.4 linux/amd64 ``` -- To communicate with cluster, just give the cluster name as a context in kubectl: +- To communicate with cluster, just give the cluster name as a context in kubectl: ```sh kind create cluster --name k8s-kind-cluster1 @@ -108,7 +108,7 @@ kind v0.11.1 go1.16.4 linux/amd64 Have a nice day! 👋 ``` -- Get the cluster details: +- Get the cluster details: ```sh kubectl cluster-info --context kind-k8s-kind-cluster1 diff --git a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md index 020fdba2..ca16f8a0 100644 --- a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md @@ -2,13 +2,13 @@ ## Objectives -- Install a multi control-plane(master) Kubernetes cluster +- Install a multi control-plane(master) Kubernetes cluster -- Install a Pod network on the cluster so that your Pods can talk to each other +- Install a Pod network on the cluster so that your Pods can talk to each other -- Deploy and test a sample app +- Deploy and test a sample app -- Deploy K8s Dashboard to view all cluster's components +- Deploy K8s Dashboard to view all cluster's components ## Components and architecure @@ -25,21 +25,21 @@ You will need 2 control-plane(master node) and 2 worker nodes to create a multi-master kubernetes cluster using `kubeadm`. You are going to use the following set up for this purpose: -- 2 Linux machines for master, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. +- 2 Linux machines for master, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. -- 2 Linux machines for worker, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage - also - [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to both of the worker nodes. +- 2 Linux machines for worker, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage - also + [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to both of the worker nodes. -- 1 Linux machine for loadbalancer, `ubuntu-20.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. 
+- 1 Linux machine for loadbalancer, `ubuntu-20.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to setup SSH to your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to setup SSH to your remote VMs. -- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: @@ -49,7 +49,7 @@ ii. To be used by the worker nodes: ![Worker node ports and protocols](../images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -105,23 +105,23 @@ outside of the cluster and interacts with the cluster using ports. You have 2 master nodes. Which means the user can connect to either of the 2 apiservers. The loadbalancer will be used to loadbalance between the 2 apiservers. -- Login to the loadbalancer node +- Login to the loadbalancer node -- Switch as root - `sudo su` +- Switch as root - `sudo su` -- Update your repository and your system +- Update your repository and your system ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Install haproxy +- Install haproxy ```sh sudo apt-get install haproxy -y ``` -- Edit haproxy configuration +- Edit haproxy configuration ```sh vi /etc/haproxy/haproxy.cfg @@ -159,13 +159,13 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver Here - **master1** and **master2** are the hostnames of the master nodes and **10.138.0.15** and **10.138.0.16** are the corresponding internal IP addresses. -- Ensure haproxy config file is correctly formatted: +- Ensure haproxy config file is correctly formatted: ```sh haproxy -c -q -V -f /etc/haproxy/haproxy.cfg ``` -- Restart and Verify haproxy +- Restart and Verify haproxy ```sh systemctl restart haproxy @@ -203,44 +203,44 @@ does things like starting pods and containers. 
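Before bootstrapping the control plane, it can help to confirm that HAProxy is actually up and listening on the front-end port configured above. The following is a small sanity check, assuming it is run as root on the load balancer node; the back-end servers will still show as down at this stage because the masters have not been bootstrapped yet:

```sh
# Confirm the haproxy service restarted cleanly.
systemctl is-active haproxy

# Confirm haproxy is listening on the kube-apiserver front-end port (6443).
ss -ltnp | grep ':6443'
```

Seeing HAProxy bound to port **6443** here means that any later `connection refused` errors point at the master nodes or security groups rather than at the load balancer itself.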
In this step, you will install kubelet and kubeadm on the below nodes -- master1 +- master1 -- master2 +- master2 -- worker1 +- worker1 -- worker2 +- worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 4 machines +- SSH into all the 4 machines -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y apt-transport-https curl ``` -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases ```sh curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ``` -- add kubernetes apt repo +- add kubernetes apt repo ```sh cat < --delete-emptydir-data --force --ignore-daemonsets ``` -- Before removing the node, reset the state installed by kubeadm: +- Before removing the node, reset the state installed by kubeadm: ```sh kubeadm reset @@ -1008,7 +1008,7 @@ kubectl drain --delete-emptydir-data --force --ignore-daemonsets ipvsadm -C ``` -- Now remove the node: +- Now remove the node: ```sh kubectl delete node diff --git a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md index 7239bdc2..9918f323 100644 --- a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md @@ -2,13 +2,13 @@ ## Objectives -- Install a single control-plane(master) Kubernetes cluster +- Install a single control-plane(master) Kubernetes cluster -- Install a Pod network on the cluster so that your Pods can talk to each other +- Install a Pod network on the cluster so that your Pods can talk to each other -- Deploy and test a sample app +- Deploy and test a sample app -- Deploy K8s Dashboard to view all cluster's components +- Deploy K8s Dashboard to view all cluster's components ## Components and architecure @@ -22,17 +22,17 @@ We will need 1 control-plane(master) and 2 worker node to create a single control-plane kubernetes cluster using `kubeadm`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-20.04-x86_64`, `cpu-su.2` flavor with 2vCPU, - 8GB RAM, 20GB storage. +- 1 Linux machine for master, `ubuntu-20.04-x86_64`, `cpu-su.2` flavor with 2vCPU, + 8GB RAM, 20GB storage. -- 2 Linux machines for worker, `ubuntu-20.04-x86_64`, `cpu-su.1` flavor with 1vCPU, - 4GB RAM, 20GB storage - also [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to both of the worker nodes. +- 2 Linux machines for worker, `ubuntu-20.04-x86_64`, `cpu-su.1` flavor with 1vCPU, + 4GB RAM, 20GB storage - also [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to both of the worker nodes. -- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. 
-- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: @@ -42,7 +42,7 @@ ii. To be used by the worker nodes: ![Worker node ports and protocols](../images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -91,42 +91,42 @@ does things like starting pods and containers. In this step, you will install kubelet and kubeadm on the below nodes -- master +- master -- worker1 +- worker1 -- worker2 +- worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y apt-transport-https curl ``` -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases ```sh curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ``` -- add kubernetes apt repo +- add kubernetes apt repo ```sh cat < @@ -405,20 +405,20 @@ control plane. Now that you have initialized the master - you can now work on bootstrapping the worker nodes. -- SSH into **worker1** and **worker2** +- SSH into **worker1** and **worker2** -- Switch to root user on both the machines: `sudo su` +- Switch to root user on both the machines: `sudo su` -- Check the output given by the init command on **master** to join worker node: +- Check the output given by the init command on **master** to join worker node: ```sh kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \ --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5 ``` -- Execute the above command on both the nodes: +- Execute the above command on both the nodes: -- Your output should look like: +- Your output should look like: ```sh This node has joined the cluster: @@ -431,7 +431,7 @@ worker nodes. ## Validate all cluster components and nodes are visible on all nodes -- Verify the cluster +- Verify the cluster ```sh kubectl get nodes @@ -609,11 +609,11 @@ For your example, You will going to setup [K8dash/Skooner](https://github.com/skooner-k8s/skooner) to view a dashboard that shows all your K8s cluster components. -- SSH into `master` node +- SSH into `master` node -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Apply available deployment by running the following command: +- Apply available deployment by running the following command: ```sh kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-nodeport.yaml @@ -663,19 +663,19 @@ Setup the **Service Account Token** to access the Skooner Dashboard: The first (and easiest) option is to create a dedicated service account. 
Run the following commands: -- Create the service account in the current namespace (we assume default) +- Create the service account in the current namespace (we assume default) ```sh kubectl create serviceaccount skooner-sa ``` -- Give that service account root on the cluster +- Give that service account root on the cluster ```sh kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa ``` -- Create a secret that was created to hold the token for the SA: +- Create a secret that was created to hold the token for the SA: ```sh kubectl apply -f - < --delete-emptydir-data --force --ignore-daemonsets ``` -- Before removing the node, reset the state installed by kubeadm: +- Before removing the node, reset the state installed by kubeadm: ```sh kubeadm reset @@ -800,7 +800,7 @@ kubectl drain --delete-emptydir-data --force --ignore-daemonsets ipvsadm -C ``` -- Now remove the node: +- Now remove the node: ```sh kubectl delete node diff --git a/docs/other-tools/kubernetes/kubespray.md b/docs/other-tools/kubernetes/kubespray.md index 453647a2..58a11dde 100644 --- a/docs/other-tools/kubernetes/kubespray.md +++ b/docs/other-tools/kubernetes/kubespray.md @@ -6,22 +6,22 @@ We will need 1 control-plane(master) and 1 worker node to create a single control-plane kubernetes cluster using `Kubespray`. We are using following setting for this purpose: -- 1 Linux machine for Ansible master, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. +- 1 Linux machine for Ansible master, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to the master node. +- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - + also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to the master node. -- 1 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. +- 1 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. -- To allow SSH from **Ansible master** to all **other nodes**: [Read more here](../../openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md#adding-other-peoples-ssh-keys-to-the-instance) - Generate SSH key for Ansible master node using: +- To allow SSH from **Ansible master** to all **other nodes**: [Read more here](../../openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md#adding-other-peoples-ssh-keys-to-the-instance) + Generate SSH key for Ansible master node using: ```sh ssh-keygen -t rsa @@ -54,7 +54,7 @@ for this purpose: end of `~/.ssh/authorized_keys` file of the other master and worker nodes. This will allow `ssh ` from the Ansible master node's terminal. 
-- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: ![Control plane ports and protocols](images/control_plane_ports_protocols.png) @@ -62,7 +62,7 @@ for this purpose: ii. To be used by the worker nodes: ![Worker node ports and protocols](images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -78,25 +78,25 @@ for this purpose: In this step, you will update packages and disable `swap` on the all 3 nodes: -- 1 Ansible Master Node - ansible_master +- 1 Ansible Master Node - ansible_master -- 1 Kubernetes Master Node - kubspray_master +- 1 Kubernetes Master Node - kubspray_master -- 1 Kubernetes Worker Node - kubspray_worker1 +- 1 Kubernetes Worker Node - kubspray_worker1 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Switch as root: `sudo su` +- Switch as root: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a @@ -110,13 +110,13 @@ The below steps will be performed on all the above mentioned nodes: Run the below command on the master node i.e. `master` that you want to setup as control plane. -- SSH into **ansible_master** machine +- SSH into **ansible_master** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Execute the below command to initialize the cluster: +- Execute the below command to initialize the cluster: -- Install Python3 and upgrade pip to pip3: +- Install Python3 and upgrade pip to pip3: ```sh apt install python3-pip -y @@ -125,26 +125,26 @@ control plane. pip -V ``` -- Clone the _Kubespray_ git repository: +- Clone the _Kubespray_ git repository: ```sh git clone https://github.com/kubernetes-sigs/kubespray.git cd kubespray ``` -- Install dependencies from `requirements.txt`: +- Install dependencies from `requirements.txt`: ```sh pip install -r requirements.txt ``` -- Copy `inventory/sample` as `inventory/mycluster` +- Copy `inventory/sample` as `inventory/mycluster` ```sh cp -rfp inventory/sample inventory/mycluster ``` -- Update Ansible inventory file with inventory builder: +- Update Ansible inventory file with inventory builder: This step is little trivial because we need to update `hosts.yml` with the nodes IP. @@ -176,7 +176,7 @@ control plane. DEBUG: adding host node2 to group kube_node ``` -- After running the above commands do verify the `hosts.yml` and its content: +- After running the above commands do verify the `hosts.yml` and its content: ```sh cat inventory/mycluster/hosts.yml @@ -215,18 +215,18 @@ control plane. 
hosts: {} ``` -- Review and change parameters under `inventory/mycluster/group_vars` +- Review and change parameters under `inventory/mycluster/group_vars` ```sh cat inventory/mycluster/group_vars/all/all.yml cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml ``` -- It can be useful to set the following two variables to **true** in - `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: `kubeconfig_localhost` - (to make a copy of `kubeconfig` on the host that runs Ansible in - `{ inventory_dir }/artifacts`) and `kubectl_localhost` - (to download `kubectl` onto the host that runs Ansible in `{ bin_dir }`). +- It can be useful to set the following two variables to **true** in + `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: `kubeconfig_localhost` + (to make a copy of `kubeconfig` on the host that runs Ansible in + `{ inventory_dir }/artifacts`) and `kubectl_localhost` + (to download `kubectl` onto the host that runs Ansible in `{ bin_dir }`). !!! note "Very Important" @@ -235,10 +235,10 @@ control plane. `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. -- Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. - The option `--become` is required, as for example writing SSL keys in `/etc/`, - installing packages and interacting with various `systemd` daemons. Without - `--become` the playbook will fail to run! +- Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. + The option `--become` is required, as for example writing SSL keys in `/etc/`, + installing packages and interacting with various `systemd` daemons. Without + `--become` the playbook will fail to run! ```sh ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml @@ -253,7 +253,7 @@ control plane. ## Install **kubectl** on Kubernetes master node .i.e. `kubspray_master` -- Install kubectl binary +- Install kubectl binary ```sh snap install kubectl --classic @@ -261,7 +261,7 @@ control plane. This outputs: `kubectl 1.26.1 from Canonical✓ installed` -- Now verify the kubectl version: +- Now verify the kubectl version: ```sh kubectl version -o yaml @@ -271,7 +271,7 @@ control plane. ## Validate all cluster components and nodes are visible on all nodes -- Verify the cluster +- Verify the cluster ```sh kubectl get nodes @@ -285,8 +285,8 @@ control plane. ## Deploy A [Hello Minikube Application](minikube.md#deploy-a-hello-minikube-application) -- Use the kubectl create command to create a Deployment that manages a Pod. The - Pod runs a Container based on the provided Docker image. +- Use the kubectl create command to create a Deployment that manages a Pod. The + Pod runs a Container based on the provided Docker image. ```sh kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4 @@ -298,7 +298,7 @@ control plane. service/hello-minikube exposed ``` -- View the deployments information: +- View the deployments information: ```sh kubectl get deployments @@ -307,7 +307,7 @@ control plane. hello-minikube 1/1 1 1 50s ``` -- View the port information: +- View the port information: ```sh kubectl get svc hello-minikube @@ -316,7 +316,7 @@ control plane. 
hello-minikube LoadBalancer 10.233.35.126 8080:30723/TCP 40s ``` -- Expose the service locally +- Expose the service locally ```sh kubectl port-forward svc/hello-minikube 30723:8080 diff --git a/docs/other-tools/kubernetes/microk8s.md b/docs/other-tools/kubernetes/microk8s.md index 8a9ab5e3..b5ab508b 100644 --- a/docs/other-tools/kubernetes/microk8s.md +++ b/docs/other-tools/kubernetes/microk8s.md @@ -5,11 +5,11 @@ We will need 1 VM to create a single node kubernetes cluster using `microk8s`. We are using following setting for this purpose: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -27,29 +27,29 @@ We are using following setting for this purpose: Run the below command on the Ubuntu VM: -- SSH into **microk8s** machine +- SSH into **microk8s** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Install MicroK8s: +- Install MicroK8s: ```sh sudo snap install microk8s --classic ``` -- Check the status while Kubernetes starts +- Check the status while Kubernetes starts ```sh microk8s status --wait-ready ``` -- Turn on the services you want: +- Turn on the services you want: ```sh microk8s enable dns dashboard @@ -59,7 +59,7 @@ Run the below command on the Ubuntu VM: `microk8s disable ` turns off a service. For example other useful services are: `microk8s enable registry istio storage` -- Start using Kubernetes +- Start using Kubernetes ```sh microk8s kubectl get all --all-namespaces @@ -70,8 +70,8 @@ Run the below command on the Ubuntu VM: upstream kubectl, you can also drive other Kubernetes clusters with it by pointing to the respective kubeconfig file via the `--kubeconfig` argument. -- Access the [Kubernetes dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) - UI: +- Access the [Kubernetes dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) + UI: ![Microk8s Dashboard Ports](images/microk8s_dashboard_ports.png) @@ -82,15 +82,15 @@ Run the below command on the Ubuntu VM: !!! note "Note" - Another way to access the default token to be used for the dashboard access - can be retrieved with: + Another way to access the default token to be used for the dashboard access + can be retrieved with: - ```sh - token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) - microk8s kubectl -n kube-system describe secret $token - ``` + ```sh + token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) + microk8s kubectl -n kube-system describe secret $token + ``` -- Keep running the kubernetes-dashboad on Proxy to access it via web browser: +- Keep running the kubernetes-dashboad on Proxy to access it via web browser: ```sh microk8s dashboard-proxy @@ -103,11 +103,11 @@ Run the below command on the Ubuntu VM: !!! 
note "Important" - This tells us the IP address of the Dashboard and the port. The values assigned - to your Dashboard will differ. Please note the displayed **PORT** and - the **TOKEN** that are required to access the kubernetes-dashboard. Make - sure, the exposed **PORT** is opened in Security Groups for the instance - following [this guide](../../openstack/access-and-security/security-groups.md). + This tells us the IP address of the Dashboard and the port. The values assigned + to your Dashboard will differ. Please note the displayed **PORT** and + the **TOKEN** that are required to access the kubernetes-dashboard. Make + sure, the exposed **PORT** is opened in Security Groups for the instance + following [this guide](../../openstack/access-and-security/security-groups.md). This will show the token to login to the Dashbord shown on the url with NodePort. @@ -162,19 +162,19 @@ i.e. to check the nginx default page. ## Deploy A Sample Nginx Application -- Create an alias: +- Create an alias: ```sh alias mkctl="microk8s kubectl" ``` -- Create a deployment, in this case **Nginx**: +- Create a deployment, in this case **Nginx**: ```sh mkctl create deployment --image nginx my-nginx ``` -- To access the deployment we will need to expose it: +- To access the deployment we will need to expose it: ```sh mkctl expose deployment my-nginx --port=80 --type=NodePort diff --git a/docs/other-tools/kubernetes/minikube.md b/docs/other-tools/kubernetes/minikube.md index 10c2de55..45d08503 100644 --- a/docs/other-tools/kubernetes/minikube.md +++ b/docs/other-tools/kubernetes/minikube.md @@ -2,24 +2,24 @@ ## Minimum system requirements for minikube -- 2 GB RAM or more -- 2 CPU / vCPUs or more -- 20 GB free hard disk space or more -- Docker / Virtual Machine Manager – KVM & VirtualBox. Docker, Hyperkit, Hyper-V, - KVM, Parallels, Podman, VirtualBox, or VMWare are examples of container or virtual - machine managers. +- 2 GB RAM or more +- 2 CPU / vCPUs or more +- 20 GB free hard disk space or more +- Docker / Virtual Machine Manager – KVM & VirtualBox. Docker, Hyperkit, Hyper-V, + KVM, Parallels, Podman, VirtualBox, or VMWare are examples of container or virtual + machine managers. ## Pre-requisite We will need 1 VM to create a single node kubernetes cluster using `minikube`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also - [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also + [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -41,15 +41,15 @@ Run the below command on the Ubuntu VM: Run the following steps as non-root user i.e. **ubuntu**. 
-- SSH into **minikube** machine +- SSH into **minikube** machine -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Install `curl`, `wget`, and `apt-transport-https` +- Install `curl`, `wget`, and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y curl wget apt-transport-https @@ -59,14 +59,14 @@ Run the below command on the Ubuntu VM: ## Download and install the latest version of **Docker CE** -- Download and install Docker CE: +- Download and install Docker CE: ```sh curl -fsSL https://get.docker.com -o get-docker.sh sudo sh get-docker.sh ``` -- Configure the Docker daemon: +- Configure the Docker daemon: ```sh sudo usermod -aG docker $USER && newgrp docker @@ -76,7 +76,7 @@ Run the below command on the Ubuntu VM: ## Install **kubectl** -- Install kubectl binary +- Install kubectl binary **kubectl**: the command line util to talk to your cluster. @@ -90,7 +90,7 @@ Run the below command on the Ubuntu VM: kubectl 1.26.1 from Canonical✓ installed ``` -- Now verify the kubectl version: +- Now verify the kubectl version: ```sh sudo kubectl version -o yaml @@ -105,7 +105,7 @@ To run containers in Pods, Kubernetes uses a [container runtime](https://kuberne By default, Kubernetes uses the **Container Runtime Interface (CRI)** to interface with your chosen container runtime. -- Install container runtime - **containerd** +- Install container runtime - **containerd** The first thing to do is configure the persistent loading of the necessary `containerd` modules. This forwarding IPv4 and letting iptables see bridged @@ -121,7 +121,7 @@ with your chosen container runtime. sudo modprobe br_netfilter ``` -- Ensure `net.bridge.bridge-nf-call-iptables` is set to `1` in your sysctl config: +- Ensure `net.bridge.bridge-nf-call-iptables` is set to `1` in your sysctl config: ```sh # sysctl params required by setup, params persist across reboots @@ -132,20 +132,20 @@ with your chosen container runtime. EOF ``` -- Apply sysctl params without reboot: +- Apply sysctl params without reboot: ```sh sudo sysctl --system ``` -- Install the necessary dependencies with: +- Install the necessary dependencies with: ```sh sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates ``` -- The `containerd.io` packages in DEB and RPM formats are distributed by Docker. - Add the required GPG key with: +- The `containerd.io` packages in DEB and RPM formats are distributed by Docker. + Add the required GPG key with: ```sh curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - @@ -177,7 +177,7 @@ with your chosen container runtime. ## Installing minikube -- Install minikube +- Install minikube ```sh curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb @@ -192,7 +192,7 @@ with your chosen container runtime. chmod +x /usr/bin/minikube ``` -- Verify the Minikube installation: +- Verify the Minikube installation: ```sh minikube version @@ -201,7 +201,7 @@ with your chosen container runtime. commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3 ``` -- Install conntrack: +- Install conntrack: Kubernetes 1.26.1 requires conntrack to be installed in root's path: @@ -209,7 +209,7 @@ with your chosen container runtime. 
sudo apt-get install -y conntrack ``` -- Start minikube: +- Start minikube: As we are already stated in the beginning that we would be using docker as base for minikue, so start the minikube with the docker driver, @@ -245,7 +245,7 @@ with your chosen container runtime. Perfect, above confirms that minikube cluster has been configured and started successfully. -- Run below minikube command to check status: +- Run below minikube command to check status: ```sh minikube status @@ -258,7 +258,7 @@ with your chosen container runtime. kubeconfig: Configured ``` -- Run following kubectl command to verify the cluster info and node status: +- Run following kubectl command to verify the cluster info and node status: ```sh kubectl cluster-info @@ -276,7 +276,7 @@ with your chosen container runtime. minikube Ready control-plane,master 5m v1.26.1 ``` -- To see the kubectl configuration use the command: +- To see the kubectl configuration use the command: ```sh kubectl config view @@ -286,7 +286,7 @@ with your chosen container runtime. ![Minikube config view](images/minikube_config.png) -- Get minikube addon details: +- Get minikube addon details: ```sh minikube addons list @@ -301,7 +301,7 @@ with your chosen container runtime. minikube addons enable ``` -- Enable minikube dashboard addon: +- Enable minikube dashboard addon: ```sh minikube dashboard @@ -315,7 +315,7 @@ with your chosen container runtime. http://127.0.0.1:40783/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ ``` -- To view minikube dashboard url: +- To view minikube dashboard url: ```sh minikube dashboard --url @@ -326,7 +326,7 @@ with your chosen container runtime. http://127.0.0.1:42669/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ ``` -- Expose Dashboard on **NodePort** instead of **ClusterIP**: +- Expose Dashboard on **NodePort** instead of **ClusterIP**: -- Check the current port for `kubernetes-dashboard`: @@ -358,7 +358,7 @@ with your chosen container runtime. ## Deploy A Sample Nginx Application -- Create a deployment, in this case **Nginx**: +- Create a deployment, in this case **Nginx**: A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only @@ -366,7 +366,7 @@ with your chosen container runtime. restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. -- Let's check if the Kubernetes cluster is up and running: +- Let's check if the Kubernetes cluster is up and running: ```sh kubectl get all --all-namespaces @@ -378,7 +378,7 @@ with your chosen container runtime. kubectl create deployment --image nginx my-nginx ``` -- To access the deployment we will need to expose it: +- To access the deployment we will need to expose it: ```sh kubectl expose deployment my-nginx --port=80 --type=NodePort @@ -431,15 +431,15 @@ with your chosen container runtime. ## Deploy A Hello Minikube Application -- Use the kubectl create command to create a Deployment that manages a Pod. The - Pod runs a Container based on the provided Docker image. +- Use the kubectl create command to create a Deployment that manages a Pod. The + Pod runs a Container based on the provided Docker image. 
```sh kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4 kubectl expose deployment hello-minikube --type=NodePort --port=8080 ``` -- View the port information: +- View the port information: ```sh kubectl get svc hello-minikube @@ -469,39 +469,39 @@ kubectl delete deployment hello-minikube ## Managing Minikube Cluster -- To stop the minikube, run +- To stop the minikube, run ```sh minikube stop ``` -- To delete the single node cluster: +- To delete the single node cluster: ```sh minikube delete ``` -- To Start the minikube, run +- To Start the minikube, run ```sh minikube start ``` -- Remove the Minikube configuration and data directories: +- Remove the Minikube configuration and data directories: ```sh rm -rf ~/.minikube rm -rf ~/.kube ``` -- If you have installed any Minikube related packages, remove them: +- If you have installed any Minikube related packages, remove them: ```sh sudo apt remove -y conntrack ``` -- In case you want to start the minikube with higher resource like 8 GB RM and - 4 CPU then execute following commands one after the another. +- In case you want to start the minikube with higher resource like 8 GB RM and + 4 CPU then execute following commands one after the another. ```sh minikube config set cpus 4 diff --git a/docs/other-tools/nfs/nfs-server-client-setup.md b/docs/other-tools/nfs/nfs-server-client-setup.md index ab9acb3d..1869700e 100644 --- a/docs/other-tools/nfs/nfs-server-client-setup.md +++ b/docs/other-tools/nfs/nfs-server-client-setup.md @@ -10,20 +10,20 @@ allowing them to access and work with the files it contains. We are using the following configuration to set up the NFS server and client on Ubuntu-based NERC OpenStack VMs: -- 1 Linux machine for the **NFS Server**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor - with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). - Please note the NFS Server's Internal IP i.e. `` - i.e. `192.168.0.73` in this example. +- 1 Linux machine for the **NFS Server**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor + with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). + Please note the NFS Server's Internal IP i.e. `` + i.e. `192.168.0.73` in this example. -- 1 Linux machine for the **NFS Client**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor - with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). +- 1 Linux machine for the **NFS Client**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor + with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). -- ssh access to both machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to both machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. -- Create a security group with a rule that opens **Port 2049** (the default - _NFS_ port) for file sharing. Update Security Group to the **NFS Server** VM - only following [this reference](../../openstack/access-and-security/security-groups.md#update-security-groups-to-a-running-vm). +- Create a security group with a rule that opens **Port 2049** (the default + _NFS_ port) for file sharing. 
Update Security Group to the **NFS Server** VM + only following [this reference](../../openstack/access-and-security/security-groups.md#update-security-groups-to-a-running-vm). ## Installing and configuring NFS Server @@ -150,9 +150,9 @@ Ubuntu-based NERC OpenStack VMs: **Explanation:** - - **rw**: Read and write access. - - **sync**: Changes are written to disk immediately. - - **no_subtree_check**: Avoid permission issues for subdirectories. + - **rw**: Read and write access. + - **sync**: Changes are written to disk immediately. + - **no_subtree_check**: Avoid permission issues for subdirectories. !!! info "Other Options for Directory Permissions for the NFS share directory" @@ -310,13 +310,13 @@ example.hostname.com:/srv /opt/example nfs rsize=8192,wsize=8192,timeo=14,intr ## Test the Setup -- On the **NFS Server**, write a test file: +- On the **NFS Server**, write a test file: ```sh echo "Hello from NFS Server" | sudo tee /mnt/nfs_share/test.txt ``` -- On the **NFS Client**, verify the file is accessible: +- On the **NFS Client**, verify the file is accessible: ```sh cat /mnt/nfs_clientshare/test.txt diff --git a/nerc-theme/main.html b/nerc-theme/main.html index c9dc6f40..a2356f6f 100644 --- a/nerc-theme/main.html +++ b/nerc-theme/main.html @@ -1,12 +1,12 @@ {% extends "base.html" %} {% block announce %}
-
Upcoming Multi-Day NERC OpenStack Platform Version Upgrade
+
Upcoming NERC Network Equipment and Switch Maintenance
- (Dec 12, 2024 8:00 AM ET - Dec 14, 2024 8:00 PM ET) + (Tuesday Jan 7, 2025 9 AM ET - Wednesday Jan 8, 2025 9 AM ET) [Timeline and info] From f443b24d1ca88cf4b1159172de1a7b7e920f029d Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 17:30:48 -0500 Subject: [PATCH 2/8] clean build --- .../billing-process-for-harvard.md | 10 +- .../scaling-and-performance-guide.md | 20 +-- .../logging-in/web-console-overview.md | 14 +- .../access-and-security/security-groups.md | 10 +- .../set-up-a-private-network.md | 6 +- .../using-vpn/wireguard/index.md | 8 +- .../data-transfer/data-transfer-from-to-vm.md | 54 +++--- docs/openstack/openstack-cli/openstack-CLI.md | 8 +- .../mount-the-object-storage.md | 8 +- .../persistent-storage/object-storage.md | 18 +- .../persistent-storage/transfer-a-volume.md | 6 +- .../setup-github-actions-pipeline.md | 16 +- .../jenkins/setup-jenkins-CI-CD-pipeline.md | 160 +++++++++--------- docs/other-tools/apache-spark/spark.md | 50 +++--- .../kubernetes/k3s/k3s-ha-cluster.md | 4 +- .../kubernetes/k3s/k3s-using-k3d.md | 6 +- docs/other-tools/kubernetes/k3s/k3s.md | 6 +- .../kubeadm/HA-clusters-with-kubeadm.md | 86 +++++----- .../single-master-clusters-with-kubeadm.md | 60 +++---- docs/other-tools/kubernetes/kubespray.md | 12 +- docs/other-tools/kubernetes/microk8s.md | 22 +-- docs/other-tools/kubernetes/minikube.md | 36 ++-- nerc-theme/main.html | 6 +- 23 files changed, 313 insertions(+), 313 deletions(-) diff --git a/docs/get-started/cost-billing/billing-process-for-harvard.md b/docs/get-started/cost-billing/billing-process-for-harvard.md index f95afc27..5f1c7aff 100644 --- a/docs/get-started/cost-billing/billing-process-for-harvard.md +++ b/docs/get-started/cost-billing/billing-process-for-harvard.md @@ -32,11 +32,11 @@ Please follow these two steps to ensure proper billing setup: !!! abstract "What if you already have an existing Customer Code?" - Please note that if you already have an existing active NERC account, you - need to provide your HUIT Customer Code to NERC. If you think your department - may already have a HUIT account but you don’t know the corresponding Customer - Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) - to get the required Customer Code. + Please note that if you already have an existing active NERC account, you + need to provide your HUIT Customer Code to NERC. If you think your department + may already have a HUIT account but you don’t know the corresponding Customer + Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) + to get the required Customer Code. 2. During the Resource Allocation review and approval process, we will utilize the HUIT "Customer Code" provided by the PI in step #1 to align it with the approved diff --git a/docs/openshift/applications/scaling-and-performance-guide.md b/docs/openshift/applications/scaling-and-performance-guide.md index 912d2909..be829347 100644 --- a/docs/openshift/applications/scaling-and-performance-guide.md +++ b/docs/openshift/applications/scaling-and-performance-guide.md @@ -102,9 +102,9 @@ CPU and memory can be specified in a couple of ways: !!! note "Important Information" - If a Pod's total requests are not available on a single node, then the Pod - will remain in a *Pending* state (i.e. not running) until these resources - become available. + If a Pod's total requests are not available on a single node, then the Pod + will remain in a *Pending* state (i.e. 
not running) until these resources + become available. - The **limit** value specifies the max value you can consume. Limit is the value applications should be tuned to use. Pods will be memory, CPU throttled when @@ -283,11 +283,11 @@ Click the **Observe** tab to: !!! note "Detailed Monitoring your project and application metrics" - On the left navigation panel of the **Developer** perspective, click - **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. - For more information about Monitoring project and application metrics - using the Developer perspective, please - [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). + On the left navigation panel of the **Developer** perspective, click + **Observe** to see the Dashboard, Metrics, Alerts, and Events for your project. + For more information about Monitoring project and application metrics + using the Developer perspective, please + [read this](https://docs.openshift.com/container-platform/4.10/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective.html). ## Scaling manually @@ -402,8 +402,8 @@ maximum numbers to maintain the specified CPU utilization across all pods. !!! note "Configure via: Form or YAML View" - While creating or editing the horizontal pod autoscaler in the web console, - you can switch from **Form view** to **YAML view**. + While creating or editing the horizontal pod autoscaler in the web console, + you can switch from **Form view** to **YAML view**. - From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum pod limits, the CPU and memory usage, and click **Save**. diff --git a/docs/openshift/logging-in/web-console-overview.md b/docs/openshift/logging-in/web-console-overview.md index d8170ac8..0bf9c5a7 100644 --- a/docs/openshift/logging-in/web-console-overview.md +++ b/docs/openshift/logging-in/web-console-overview.md @@ -56,9 +56,9 @@ administrators and cluster administrators can view the Administrator perspective !!! note "Important Note" - The default web console perspective that is shown depends on the role of the - user. The **Administrator** perspective is displayed by default if the user - is recognized as an administrator. +The default web console perspective that is shown depends on the role of the +user. The **Administrator** perspective is displayed by default if the user is +recognized as an administrator. ### About the Developer perspective in the web console @@ -67,8 +67,8 @@ services, and databases. !!! info "Important Note" - The default view for the OpenShift Container Platform web console is the **Developer** - perspective. +The default view for the OpenShift Container Platform web console is the **Developer** +perspective. The web console provides a comprehensive set of tools for managing your projects and applications. @@ -82,8 +82,8 @@ located on top navigation as shown below: !!! info "Important Note" - You can identify the currently selected project with **tick** mark and also - you can click on **star** icon to keep the project under your **Favorites** list. +You can identify the currently selected project with **tick** mark and also +you can click on **star** icon to keep the project under your **Favorites** list. 
## Navigation Menu diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index 5221238b..9ff4d3ca 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -79,8 +79,8 @@ Enter the following values: !!! note "Note" - To accept requests from a particular range of IP addresses, specify the - IP address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the IP + address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have SSH port 22 open for requests @@ -141,10 +141,10 @@ Enter the following values: - CIDR: 0.0.0.0/0 - !!! note "Note" +!!! note "Note" - To accept requests from a particular range of IP addresses, specify the - IP address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the IP + address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have RDP port 3389 open for requests diff --git a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md index 91eb3b08..77bce7cd 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md @@ -44,9 +44,9 @@ In the Create Network dialog box, specify the following values. networks, you should use IP addresses which fall within the ranges that are specifically reserved for private networks: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 + 10.0.0.0/8 + 172.16.0.0/12 + 192.168.0.0/16 In the example below, we configure a network containing addresses 192.168.0.1 to 192.168.0.255 using CIDR 192.168.0.0/24 diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md index 7f11de82..5849849a 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md @@ -126,10 +126,10 @@ To deactivate config: `wg-quick down /path/to/file_name.config` !!! note "Important Note" - You need to contact your project administrator to get your own WireGUard - configuration file (file with .conf extension). Download it and Keep it in - your local machine so in next steps we can use this configuration client - profile file. + You need to contact your project administrator to get your own WireGUard + configuration file (file with .conf extension). Download it and Keep it in + your local machine so in next steps we can use this configuration client + profile file. A WireGuard client or compatible software is needed to connect to the WireGuard VPN server. Please install[one of these clients](https://www.wireguard.com/install/) diff --git a/docs/openstack/data-transfer/data-transfer-from-to-vm.md b/docs/openstack/data-transfer/data-transfer-from-to-vm.md index 9cd136e6..e17b29e8 100644 --- a/docs/openstack/data-transfer/data-transfer-from-to-vm.md +++ b/docs/openstack/data-transfer/data-transfer-from-to-vm.md @@ -434,20 +434,20 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. !!! 
info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Password"**: "``" @@ -462,12 +462,12 @@ from the file picker. !!! tip "Helpful Tip" - You can save your above configured site with some preferred name by - clicking the "Save" button and then giving a proper name to your site. - This prevents needing to manually enter all of your configuration again the - next time you need to use WinSCP. + You can save your above configured site with some preferred name by + clicking the "Save" button and then giving a proper name to your site. + This prevents needing to manually enter all of your configuration again the + next time you need to use WinSCP. - ![Save Site WinSCP](images/winscp-save-site.png) + ![Save Site WinSCP](images/winscp-save-site.png) #### Using WinSCP @@ -516,17 +516,17 @@ connections to servers, enterprise file sharing, and various cloud storage platf !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user **"Password"**: "``" @@ -585,20 +585,20 @@ computer (shared drives, Dropbox, etc.) !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Key file"**: "Browse and choose the appropriate SSH Private Key from you local machine that has corresponding Public Key attached to your VM" diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md index b1d3fa37..552f5980 100644 --- a/docs/openstack/openstack-cli/openstack-CLI.md +++ b/docs/openstack/openstack-cli/openstack-CLI.md @@ -37,10 +37,10 @@ You can download the environment file with the credentials from the [OpenStack d !!! 
note "Important Note" - Please note that an application credential is only valid for a single - project, and to access multiple projects you need to create an application - credential for each. You can switch projects by clicking on the project name - at the top right corner and choosing from the dropdown under "Project". + Please note that an application credential is only valid for a single + project, and to access multiple projects you need to create an application + credential for each. You can switch projects by clicking on the project name + at the top right corner and choosing from the dropdown under "Project". After clicking "Create Application Credential" button, the **ID** and **Secret** will be displayed and you will be prompted to `Download openrc file` diff --git a/docs/openstack/persistent-storage/mount-the-object-storage.md b/docs/openstack/persistent-storage/mount-the-object-storage.md index aed80104..76efd12f 100644 --- a/docs/openstack/persistent-storage/mount-the-object-storage.md +++ b/docs/openstack/persistent-storage/mount-the-object-storage.md @@ -59,7 +59,7 @@ parts are `EC2_ACCESS_KEY` and `EC2_SECRET_KEY`, keep them noted. - Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file and uncomment "user_allow_other" option. - sudo nano /etc/fuse.conf + sudo nano /etc/fuse.conf The output going to look like this: @@ -977,9 +977,9 @@ Also, check that binding to `localhost` is working fine by running the following !!! warning "Important Note" - The `netstat` command may not be available on your system by default. If - this is the case, you can install it (along with a number of other handy - networking tools) with the following command: `sudo apt install net-tools`. + The `netstat` command may not be available on your system by default. If + this is the case, you can install it (along with a number of other handy + networking tools) with the following command: `sudo apt install net-tools`. ##### Configuring a Redis Password diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index a75309bf..7c17e5cb 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -256,13 +256,13 @@ This is a python client for the Swift API. There's a [Python API](https://github - This example uses a `Python3` virtual environment, but you are free to choose any other method to create a local virtual environment like `Conda`. - python3 -m venv venv + python3 -m venv venv !!! note "Choosing Correct Python Interpreter" - Make sure you are able to use `python` or `python3` or **`py -3`** (For - Windows Only) to create a directory named `venv` (or whatever name you - specified) in your current working directory. + Make sure you are able to use `python` or `python3` or **`py -3`** (For + Windows Only) to create a directory named `venv` (or whatever name you + specified) in your current working directory. - Activate the virtual environment by running: @@ -526,8 +526,8 @@ directory `~/.aws/config` with the ec2 profile and credentials as shown below: !!! note "Information" - We need to have a profile that you use must have permissions to allow - the AWS operations can be performed. + We need to have a profile that you use must have permissions to allow + the AWS operations can be performed. #### Listing buckets using **aws-cli** @@ -1062,9 +1062,9 @@ respectively. !!! 
note "Helpful Tips" - You can save your above configured session with some preferred name by - clicking the "Save" button and then giving a proper name to your session. - So that next time you don't need to again manually enter all your configuration. + You can save your above configured session with some preferred name by + clicking the "Save" button and then giving a proper name to your session. + So that next time you don't need to again manually enter all your configuration. #### Using WinSCP diff --git a/docs/openstack/persistent-storage/transfer-a-volume.md b/docs/openstack/persistent-storage/transfer-a-volume.md index f5e5b776..951e0e25 100644 --- a/docs/openstack/persistent-storage/transfer-a-volume.md +++ b/docs/openstack/persistent-storage/transfer-a-volume.md @@ -104,9 +104,9 @@ openstack volume transfer request create my-volume !!! tip "Pro Tip" - If your volume name includes spaces, you need to enclose them in quotes, - i.e. `""`. - For example: `openstack volume transfer request create "My Volume"` + If your volume name includes spaces, you need to enclose them in quotes, + i.e. `""`. + For example: `openstack volume transfer request create "My Volume"` - The volume can be checked as in the transfer status using `openstack volume transfer request list` as follows and the volume is in status diff --git a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md index 76168e72..d66acece 100644 --- a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md +++ b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md @@ -84,14 +84,14 @@ workflow. !!! info "Very Important Information" - Workflow execution on OpenShift pipelines follows these steps: - - 1. Checkout your repository - 2. Perform a container image build - 3. Push the built image to the GitHub Container Registry (GHCR) or - your preferred Registry - 4. Log in to your NERC OpenShift cluster's project space - 5. Create an OpenShift app from the image and expose it to the internet + Workflow execution on OpenShift pipelines follows these steps: + + 1. Checkout your repository + 2. Perform a container image build + 3. Push the built image to the GitHub Container Registry (GHCR) or + your preferred Registry + 4. Log in to your NERC OpenShift cluster's project space + 5. Create an OpenShift app from the image and expose it to the internet 8. Edit the top-level 'env' section as marked with '🖊️' if the defaults are not suitable for your project. 
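For orientation, the workflow steps listed above correspond roughly to the manual `oc` commands sketched below; the API URL, token, project, and image names are placeholders for illustration only, not values defined by this pipeline.

```sh
# Log in to the NERC OpenShift cluster (placeholder token and API URL)
oc login --token="$OPENSHIFT_TOKEN" --server="$OPENSHIFT_SERVER"

# Switch to your NERC project (namespace)
oc project <your-nerc-project>

# Create an app from the image pushed to GHCR and expose it to the internet
oc new-app ghcr.io/<github-username>/<app-name>:latest --name=<app-name>
oc expose service/<app-name>
```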
diff --git a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md index 0e89ec4b..9020eb1c 100644 --- a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md @@ -31,11 +31,11 @@ _Figure: CI/CD Pipeline To Deploy To Kubernetes Cluster Using Jenkins on NERC_ - [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) to your new instance so that you will be able to ssh into this machine: - ssh ubuntu@ -A -i + ssh ubuntu@ -A -i For example: - ssh ubuntu@199.94.60.4 -A -i cloud.key + ssh ubuntu@199.94.60.4 -A -i cloud.key Upon successfully SSH accessing the machine, execute the following dependencies: @@ -45,16 +45,16 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Update the repositories and packages: - sudo apt-get update && sudo apt-get upgrade -y + sudo apt-get update && sudo apt-get upgrade -y - Turn off `swap` - swapoff -a - sudo sed -i '/ swap / s/^/#/' /etc/fstab + swapoff -a + sudo sed -i '/ swap / s/^/#/' /etc/fstab - Install `curl` and `apt-transport-https` - sudo apt-get update && sudo apt-get install -y apt-transport-https curl + sudo apt-get update && sudo apt-get install -y apt-transport-https curl --- @@ -62,12 +62,12 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Download and install Docker CE: - curl -fsSL https://get.docker.com -o get-docker.sh - sudo sh get-docker.sh + curl -fsSL https://get.docker.com -o get-docker.sh + sudo sh get-docker.sh - Configure the Docker daemon: - sudo usermod -aG docker $USER && newgrp docker + sudo usermod -aG docker $USER && newgrp docker --- @@ -77,23 +77,23 @@ Upon successfully SSH accessing the machine, execute the following dependencies: - Download the Google Cloud public signing key and add key to verify releases - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ - apt-key add - + curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ + apt-key add - - add kubernetes apt repo - cat <`" and "``" - with your actual DockerHub and GitHub usernames, respectively. Also, - ensure that the global credentials IDs mentioned above match those used - during the credential saving steps mentioned earlier. For instance, - `dockerhublogin` corresponds to the **DockerHub** ID saved during the - credential saving process for your Docker Hub Registry's username and - password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID - assigned for the Kubeconfig credential file. + You need to replace "``" and "``" + with your actual DockerHub and GitHub usernames, respectively. Also, + ensure that the global credentials IDs mentioned above match those used + during the credential saving steps mentioned earlier. For instance, + `dockerhublogin` corresponds to the **DockerHub** ID saved during the + credential saving process for your Docker Hub Registry's username and + password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID + assigned for the Kubeconfig credential file. 
- Below is an example of a Jenkins declarative Pipeline Script file: pipeline { - environment { - dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" - dockerImage = "" - } - - agent any - - stages { - - stage('Checkout Source') { - steps { - git branch: 'main', url: 'https://github.com//nodeapp.git' - } - } - - stage('Build image') { - steps{ - script { - dockerImage = docker.build dockerimagename - } - } - } - - stage('Pushing Image') { - environment { - registryCredential = 'dockerhublogin' - } - steps{ - script { - docker.withRegistry('https://registry.hub.docker.com', registryCredential){ - dockerImage.push() - } - } - } - } - - stage('Docker Remove Image') { - steps { - sh "docker rmi -f ${dockerimagename}" - sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" - } - } - - stage('Deploying App to Kubernetes') { - steps { - sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" - withKubeConfig([credentialsId: 'kubernetes']) { - sh 'kubectl apply -f deploymentservice.yml' - } - } - } - } + environment { + dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" + dockerImage = "" + } + + agent any + + stages { + + stage('Checkout Source') { + steps { + git branch: 'main', url: 'https://github.com//nodeapp.git' + } + } + + stage('Build image') { + steps{ + script { + dockerImage = docker.build dockerimagename + } + } + } + + stage('Pushing Image') { + environment { + registryCredential = 'dockerhublogin' + } + steps{ + script { + docker.withRegistry('https://registry.hub.docker.com', registryCredential){ + dockerImage.push() + } + } + } + } + + stage('Docker Remove Image') { + steps { + sh "docker rmi -f ${dockerimagename}" + sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" + } + } + + stage('Deploying App to Kubernetes') { + steps { + sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" + withKubeConfig([credentialsId: 'kubernetes']) { + sh 'kubectl apply -f deploymentservice.yml' + } + } + } + } } !!! question "Other way to Generate Pipeline Jenkinsfile" - You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** - link shown when you create a new Pipeline when clicking the "New Item" menu - link. + You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** + link shown when you create a new Pipeline when clicking the "New Item" menu + link. ## Setup a Pipeline diff --git a/docs/other-tools/apache-spark/spark.md b/docs/other-tools/apache-spark/spark.md index 0478e1f7..d9905720 100644 --- a/docs/other-tools/apache-spark/spark.md +++ b/docs/other-tools/apache-spark/spark.md @@ -65,8 +65,8 @@ and Scala applications using the IP address of the master VM. !!! note "Note" - Installing Scala means installing various command-line tools such as the - Scala compiler and build tools. + Installing Scala means installing various command-line tools such as the + Scala compiler and build tools. - Download and unpack Apache Spark: @@ -81,10 +81,10 @@ and Scala applications using the IP address of the master VM. !!! warning "Very Important Note" - Please ensure you are using the latest Spark version by modifying the - `SPARK_VERSION` in the above script. Additionally, verify that the version - exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` - as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). 
+ Please ensure you are using the latest Spark version by modifying the + `SPARK_VERSION` in the above script. Additionally, verify that the version + exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` + as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). - Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: @@ -145,11 +145,11 @@ and Scala applications using the IP address of the master VM. !!! note "Naming, Security Group and Flavor for Worker Nodes" - You can specify the "Instance Name" as "spark-worker", and for each instance, - it will automatically append incremental values at the end, such as - `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached - the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the worker instances. + You can specify the "Instance Name" as "spark-worker", and for each instance, + it will automatically append incremental values at the end, such as + `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached + the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the worker instances. Additionally, during launch, you will have the option to choose your preferred flavor for the worker nodes, which can differ from the master VM based on your @@ -189,8 +189,8 @@ computational requirements. !!! danger "Very Important Note" - Make sure to use `>>` instead of `>` to avoid overwriting the existing content - and append the new content at the end of the file. + Make sure to use `>>` instead of `>` to avoid overwriting the existing content + and append the new content at the end of the file. For example, the end of the `/etc/hosts` file looks like this: @@ -222,13 +222,13 @@ computational requirements. !!! tip "Environment Variables" - Executing this command: `readlink -f $(which java)` will display the path - to the current Java setup in your VM. For example: - `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the - last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set - it as the `JAVA_HOME` environment variable. - Learn more about other Spark settings that can be configured through environment - variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). + Executing this command: `readlink -f $(which java)` will display the path + to the current Java setup in your VM. For example: + `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the + last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set + it as the `JAVA_HOME` environment variable. + Learn more about other Spark settings that can be configured through environment + variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). For example: @@ -269,8 +269,8 @@ computational requirements. !!! info "How to Stop All Spark Cluster" - To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` - command from `/usr/local/spark`. + To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` + command from `/usr/local/spark`. ## Connect to the Spark WebUI @@ -335,9 +335,9 @@ resources for both the Spark cluster and individual applications. !!! 
warning "Very Important Note" - Please ensure you are using the same Spark version that you have - [downloaded and installed previously](#setup-a-master-vm) as the value - of `SPARK_VERSION` in the above script. + Please ensure you are using the same Spark version that you have + [downloaded and installed previously](#setup-a-master-vm) as the value + of `SPARK_VERSION` in the above script. - **Single Node Job:** diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md index 74486ea0..4752f35f 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md @@ -199,8 +199,8 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a !!! note "Important Note" - If you're doing this from your local development machine, remove - `sudo k3s` and just use `kubectl`) + If you're doing this from your local development machine, remove `sudo k3s` + and just use `kubectl`) - Get bearer **token** diff --git a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md index ca720590..a18250af 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md @@ -46,15 +46,15 @@ Availability clusters just with few commands. **kubectl**: the command line util to talk to your cluster. - snap install kubectl --classic + snap install kubectl --classic This outputs: - kubectl 1.26.1 from Canonical✓ installed + kubectl 1.26.1 from Canonical✓ installed - Now verify the kubectl version: - kubectl version -o yaml + kubectl version -o yaml --- diff --git a/docs/other-tools/kubernetes/k3s/k3s.md b/docs/other-tools/kubernetes/k3s/k3s.md index f2bd4d70..7d04aff7 100644 --- a/docs/other-tools/kubernetes/k3s/k3s.md +++ b/docs/other-tools/kubernetes/k3s/k3s.md @@ -66,9 +66,9 @@ must be accessible to each other on ports **2379** and **2380**. !!! note "Important Note" - The VXLAN overlay networking port on nodes should not be exposed to the world - as it opens up your cluster network to be accessed by anyone. Run your nodes - behind a firewall/security group that disables access to port **8472**. + The VXLAN overlay networking port on nodes should not be exposed to the world + as it opens up your cluster network to be accessed by anyone. Run your nodes + behind a firewall/security group that disables access to port **8472**. - setup Unique hostname to each machine using the following command: diff --git a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md index ca16f8a0..5c4d0a93 100644 --- a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md @@ -142,7 +142,7 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - 6443 is the default port of **kube-apiserver** + 6443 is the default port of **kube-apiserver** ```sh backend be-apiserver @@ -183,8 +183,8 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - If you see failures for `master1` and `master2` connectivity, you can ignore - them for time being as you have not yet installed anything on the servers. + If you see failures for `master1` and `master2` connectivity, you can ignore + them for time being as you have not yet installed anything on the servers. 
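As a quick sanity check, assuming `<loadbalancer-ip>` stands in for the loadbalancer node's IP, you can confirm HAProxy is listening now and repeat the same checks once the master nodes have been initialized:

```sh
# Verify HAProxy is running and its frontend port is reachable
sudo systemctl status haproxy
nc -zv <loadbalancer-ip> 6443

# After kubeadm init completes on the masters, the API server should answer through the LB
curl -k https://<loadbalancer-ip>:6443/version
```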
--- @@ -352,10 +352,10 @@ same in `master2`. !!! danger "Configuring the kubelet cgroup driver" - From 1.22 onwards, if you do not set the `cgroupDriver` field under - `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do - not need to do anything here by default but if you want you change it you can - refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + From 1.22 onwards, if you do not set the `cgroupDriver` field under + `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do + not need to do anything here by default but if you want you change it you can + refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). - Execute the below command to initialize the cluster: @@ -377,12 +377,12 @@ same in `master2`. !!! note "Important Note" - `--pod-network-cidr` value depends upon what CNI plugin you going to use so - need to be very careful while setting this CIDR values. In our case, you are - going to use **Flannel** CNI network plugin so you will use: - `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI - network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` and - if you are opted to use **Weave Net** no need to pass this parameter. + `--pod-network-cidr` value depends upon what CNI plugin you going to use so + need to be very careful while setting this CIDR values. In our case, you are + going to use **Flannel** CNI network plugin so you will use: + `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI + network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` and + if you are opted to use **Weave Net** no need to pass this parameter. For example, our `Flannel` CNI network plugin based kubeadm init command with _loadbalancer node_ with internal IP: `192.168.0.167` look like below: @@ -454,12 +454,12 @@ same in `master2`. !!! warning "Warning" - Kubeadm signs the certificate in the admin.conf to have - `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a - break-glass, super user group that bypasses the authorization layer - (e.g. RBAC). Do not share the admin.conf file with anyone and instead - grant users custom permissions by generating them a kubeconfig file using - the `kubeadm kubeconfig user` command. + Kubeadm signs the certificate in the admin.conf to have + `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a + break-glass, super user group that bypasses the authorization layer + (e.g. RBAC). Do not share the admin.conf file with anyone and instead + grant users custom permissions by generating them a kubeconfig file using + the `kubeadm kubeconfig user` command. B. Setup a new control plane (master) i.e. `master2` by running following command on **master2** node: @@ -480,9 +480,9 @@ same in `master2`. !!! note "Important Note" - **Your output will be different than what is provided here. While - performing the rest of the demo, ensure that you are executing the - command provided by your output and dont copy and paste from here.** + **Your output will be different than what is provided here. While + performing the rest of the demo, ensure that you are executing the + command provided by your output and dont copy and paste from here.** If you do not have the token, you can get it by running the following command on the control-plane node: @@ -618,11 +618,11 @@ kubeconfig and `kubectl`. !!! 
note "Important Note" - If you havent setup ssh connection between master node and loadbalancer, you - can manually copy the contents of the file `/etc/kubernetes/admin.conf` from - `master1` node and then paste it to `$HOME/.kube/config` file on the - loadbalancer node. Ensure that the kubeconfig file path is - **`$HOME/.kube/config`** on the loadbalancer node. + If you havent setup ssh connection between master node and loadbalancer, you + can manually copy the contents of the file `/etc/kubernetes/admin.conf` from + `master1` node and then paste it to `$HOME/.kube/config` file on the + loadbalancer node. Ensure that the kubeconfig file path is + **`$HOME/.kube/config`** on the loadbalancer node. - Provide appropriate ownership to the copied file @@ -638,21 +638,21 @@ kubeconfig and `kubectl`. **kubectl**: the command line util to talk to your cluster. - snap install kubectl --classic + snap install kubectl --classic This outputs: - kubectl 1.26.1 from Canonical✓ installed + kubectl 1.26.1 from Canonical✓ installed - Verify the cluster - kubectl get nodes + kubectl get nodes - NAME STATUS ROLES AGE VERSION - master1 NotReady control-plane,master 21m v1.26.1 - master2 NotReady control-plane,master 15m v1.26.1 - worker1 Ready 9m17s v1.26.1 - worker2 Ready 9m25s v1.26.1 + NAME STATUS ROLES AGE VERSION + master1 NotReady control-plane,master 21m v1.26.1 + master2 NotReady control-plane,master 15m v1.26.1 + worker1 Ready 9m17s v1.26.1 + worker2 Ready 9m25s v1.26.1 --- @@ -900,14 +900,14 @@ following commands: !!! info "Information" - Since 1.22, this type of Secret is no longer used to mount credentials into - Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) - is recommended instead of using service account token Secret objects. Tokens - obtained from the _TokenRequest API_ are more secure than ones stored in Secret - objects, because they have a bounded lifetime and are not readable by other API - clients. You can use the `kubectl create token` command to obtain a token from - the TokenRequest API. For example: `kubectl create token skooner-sa`, where - `skooner-sa` is service account name. + Since 1.22, this type of Secret is no longer used to mount credentials into + Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) + is recommended instead of using service account token Secret objects. Tokens + obtained from the _TokenRequest API_ are more secure than ones stored in Secret + objects, because they have a bounded lifetime and are not readable by other API + clients. You can use the `kubectl create token` command to obtain a token from + the TokenRequest API. For example: `kubectl create token skooner-sa`, where + `skooner-sa` is service account name. - Find the secret that was created to hold the token for the SA diff --git a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md index 9918f323..27a29278 100644 --- a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md @@ -226,10 +226,10 @@ with your chosen container runtime. !!! 
danger "Configuring the kubelet cgroup driver" - From 1.22 onwards, if you do not set the `cgroupDriver` field under - `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do - not need to do anything here by default but if you want you change it you - can refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). + From 1.22 onwards, if you do not set the `cgroupDriver` field under + `KubeletConfiguration`, `kubeadm` will default it to `systemd`. So you do + not need to do anything here by default but if you want you change it you + can refer to [this documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/). --- @@ -252,14 +252,14 @@ control plane. !!! note "Important Note" - Please make sure you replace the correct IP of the node with - `` which is the Internal IP of master node. - `--pod-network-cidr` value depends upon what CNI plugin you going to use - so need to be very careful while setting this CIDR values. In our case, - you are going to use **Flannel** CNI network plugin so you will use: - `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI - network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` - and if you are opted to use **Weave Net** no need to pass this parameter. + Please make sure you replace the correct IP of the node with + `` which is the Internal IP of master node. + `--pod-network-cidr` value depends upon what CNI plugin you going to use + so need to be very careful while setting this CIDR values. In our case, + you are going to use **Flannel** CNI network plugin so you will use: + `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI + network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` + and if you are opted to use **Weave Net** no need to pass this parameter. For example, our `Flannel` CNI network plugin based kubeadm init command with _master node_ with internal IP: `192.168.0.167` look like below: @@ -334,12 +334,12 @@ control plane. !!! warning "Warning" - Kubeadm signs the certificate in the admin.conf to have - `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a - break-glass, super user group that bypasses the authorization layer - (e.g. RBAC). Do not share the admin.conf file with anyone and instead - grant users custom permissions by generating them a kubeconfig file using - the `kubeadm kubeconfig user` command. + Kubeadm signs the certificate in the admin.conf to have + `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a + break-glass, super user group that bypasses the authorization layer + (e.g. RBAC). Do not share the admin.conf file with anyone and instead + grant users custom permissions by generating them a kubeconfig file using + the `kubeadm kubeconfig user` command. B. Join worker nodes running following command on individual worker nodes: @@ -350,9 +350,9 @@ control plane. !!! note "Important Note" - **Your output will be different than what is provided here. While - performing the rest of the demo, ensure that you are executing the - command provided by your output and dont copy and paste from here.** + **Your output will be different than what is provided here. 
While + performing the rest of the demo, ensure that you are executing the + command provided by your output and dont copy and paste from here.** If you do not have the token, you can get it by running the following command on the control-plane node: @@ -691,15 +691,15 @@ following commands: !!! info "Information" - Since 1.22, this type of Secret is no longer used to mount credentials into - Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) - is recommended instead of using service account token Secret objects. Tokens - obtained from the _TokenRequest API_ are more secure than ones stored in - Secret objects, because they have a bounded lifetime and are not readable - by other API clients. You can use the `kubectl create token` command to - obtain a token from the TokenRequest API. For example: - `kubectl create token skooner-sa`, where `skooner-sa` is service account - name. + Since 1.22, this type of Secret is no longer used to mount credentials into + Pods, and obtaining tokens via the [TokenRequest API](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/) + is recommended instead of using service account token Secret objects. Tokens + obtained from the _TokenRequest API_ are more secure than ones stored in + Secret objects, because they have a bounded lifetime and are not readable + by other API clients. You can use the `kubectl create token` command to + obtain a token from the TokenRequest API. For example: + `kubectl create token skooner-sa`, where `skooner-sa` is service account + name. - Find the secret that was created to hold the token for the SA diff --git a/docs/other-tools/kubernetes/kubespray.md b/docs/other-tools/kubernetes/kubespray.md index 58a11dde..fe774f6e 100644 --- a/docs/other-tools/kubernetes/kubespray.md +++ b/docs/other-tools/kubernetes/kubespray.md @@ -230,10 +230,10 @@ control plane. !!! note "Very Important" - As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** - the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: - `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will - _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. + As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** + the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: + `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will + _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. - Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. The option `--become` is required, as for example writing SSL keys in `/etc/`, @@ -246,8 +246,8 @@ control plane. !!! note "Note" - Running ansible playbook takes little time because it depends on the network - bandwidth also. + Running ansible playbook takes little time because it depends on the network + bandwidth also. --- diff --git a/docs/other-tools/kubernetes/microk8s.md b/docs/other-tools/kubernetes/microk8s.md index b5ab508b..0cf5f652 100644 --- a/docs/other-tools/kubernetes/microk8s.md +++ b/docs/other-tools/kubernetes/microk8s.md @@ -82,13 +82,13 @@ Run the below command on the Ubuntu VM: !!! 
note "Note" - Another way to access the default token to be used for the dashboard access - can be retrieved with: + Another way to access the default token to be used for the dashboard access + can be retrieved with: - ```sh - token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) - microk8s kubectl -n kube-system describe secret $token - ``` + ```sh + token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) + microk8s kubectl -n kube-system describe secret $token + ``` - Keep running the kubernetes-dashboad on Proxy to access it via web browser: @@ -103,11 +103,11 @@ Run the below command on the Ubuntu VM: !!! note "Important" - This tells us the IP address of the Dashboard and the port. The values assigned - to your Dashboard will differ. Please note the displayed **PORT** and - the **TOKEN** that are required to access the kubernetes-dashboard. Make - sure, the exposed **PORT** is opened in Security Groups for the instance - following [this guide](../../openstack/access-and-security/security-groups.md). + This tells us the IP address of the Dashboard and the port. The values assigned + to your Dashboard will differ. Please note the displayed **PORT** and + the **TOKEN** that are required to access the kubernetes-dashboard. Make + sure, the exposed **PORT** is opened in Security Groups for the instance + following [this guide](../../openstack/access-and-security/security-groups.md). This will show the token to login to the Dashbord shown on the url with NodePort. diff --git a/docs/other-tools/kubernetes/minikube.md b/docs/other-tools/kubernetes/minikube.md index 45d08503..379e41ce 100644 --- a/docs/other-tools/kubernetes/minikube.md +++ b/docs/other-tools/kubernetes/minikube.md @@ -220,30 +220,30 @@ with your chosen container runtime. !!! note "Note" - - To check the internal IP, run the `minikube ip` command. + - To check the internal IP, run the `minikube ip` command. - - By default, Minikube uses the driver most relevant to the host OS. To - use a different driver, set the `--driver` flag in `minikube start`. For - example, to use others or none instead of Docker, run - `minikube start --driver=none`. To persistent configuration so that - you to run minikube start without explicitly passing i.e. in global scope - the `--vm-driver docker` flag each time, run: - `minikube config set vm-driver docker`. + - By default, Minikube uses the driver most relevant to the host OS. To + use a different driver, set the `--driver` flag in `minikube start`. For + example, to use others or none instead of Docker, run + `minikube start --driver=none`. To persistent configuration so that + you to run minikube start without explicitly passing i.e. in global scope + the `--vm-driver docker` flag each time, run: + `minikube config set vm-driver docker`. 
- - Other start options: - `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` + - Other start options: + `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` - - In case you want to start minikube with customize resources and want installer - to automatically select the driver then you can run following command, - `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true - --kubernetes-version=stable --memory=6g` + - In case you want to start minikube with customize resources and want installer + to automatically select the driver then you can run following command, + `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true + --kubernetes-version=stable --memory=6g` - Output would like below: + Output would like below: - ![Minikube sucessfully started](images/minikube_started.png) + ![Minikube sucessfully started](images/minikube_started.png) - Perfect, above confirms that minikube cluster has been configured and started - successfully. + Perfect, above confirms that minikube cluster has been configured and started + successfully. - Run below minikube command to check status: diff --git a/nerc-theme/main.html b/nerc-theme/main.html index a2356f6f..c9dc6f40 100644 --- a/nerc-theme/main.html +++ b/nerc-theme/main.html @@ -1,12 +1,12 @@ {% extends "base.html" %} {% block announce %}
-
Upcoming NERC Network Equipment and Switch Maintenance
+
Upcoming Multi-Day NERC OpenStack Platform Version Upgrade
- (Tuesday Jan 7, 2025 9 AM ET - Wednesday Jan 8, 2025 9 AM ET) + (Dec 12, 2024 8:00 AM ET - Dec 14, 2024 8:00 PM ET) [Timeline and info] From f5bf07ca76bff6f2d07264fbadceba1713e513bb Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 18:16:31 -0500 Subject: [PATCH 3/8] removed prettier package --- .pre-commit-config.yaml | 10 - .prettierrc.yaml | 2 +- .yamllint.yaml | 52 ++-- docs/get-started/allocation/coldfront.md | 30 +-- .../best-practices-for-harvard.md | 20 +- .../best-practices/best-practices.md | 50 ++-- .../cost-billing/how-pricing-works.md | 20 +- .../explore-the-jupyterlab-environment.md | 18 +- .../model-serving-in-the-rhoai.md | 74 +++--- .../testing-model-in-the-rhoai.md | 70 +++--- .../using-projects-the-rhoai.md | 32 +-- .../get-started/rhoai-overview.md | 24 +- docs/openshift-ai/index.md | 20 +- .../the-rhoai-dashboard-overview.md | 14 +- ...jupyter-notebook-use-gpus-aiml-modeling.md | 6 +- ...ss-s3-data-then-download-and-analyze-it.md | 18 +- .../scaling-and-performance-guide.md | 60 ++--- .../decommission-openshift-resources.md | 16 +- .../get-started/openshift-overview.md | 120 ++++----- docs/openshift/index.md | 26 +- .../access-the-openshift-web-console.md | 8 +- .../openshift/logging-in/the-openshift-cli.md | 8 +- .../access-and-security/create-a-key-pair.md | 6 +- .../access-and-security/security-groups.md | 28 +-- .../domain-names-for-your-vms.md | 12 +- .../set-up-a-private-network.md | 12 +- .../how-to-build-windows-image.md | 12 +- .../openstack/backup/backup-with-snapshots.md | 26 +- .../bastion-host-based-ssh/index.md | 26 +- .../create-a-Windows-VM.md | 16 +- .../create-and-connect-to-the-VM/flavors.md | 6 +- .../launch-a-VM.md | 18 +- .../ssh-to-the-VM.md | 30 +-- .../using-vpn/sshuttle/index.md | 4 +- .../using-vpn/wireguard/index.md | 8 +- .../data-transfer/data-transfer-from-to-vm.md | 56 ++--- .../decommission-openstack-resources.md | 16 +- docs/openstack/index.md | 66 ++--- .../access-the-openstack-dashboard.md | 8 +- .../logging-in/dashboard-overview.md | 58 ++--- docs/openstack/management/vm-management.md | 48 ++-- .../launch-a-VM-using-openstack-CLI.md | 10 +- docs/openstack/openstack-cli/openstack-CLI.md | 20 +- .../attach-the-volume-to-an-instance.md | 4 +- .../create-an-empty-volume.md | 4 +- .../persistent-storage/delete-volumes.md | 4 +- .../persistent-storage/detach-a-volume.md | 4 +- .../persistent-storage/extending-volume.md | 8 +- .../format-and-mount-the-volume.md | 14 +- .../mount-the-object-storage.md | 96 +++---- .../persistent-storage/object-storage.md | 74 +++--- .../persistent-storage/transfer-a-volume.md | 22 +- docs/openstack/persistent-storage/volumes.md | 32 +-- docs/other-tools/CI-CD/CI-CD-pipeline.md | 18 +- .../setup-github-actions-pipeline.md | 8 +- .../jenkins/setup-jenkins-CI-CD-pipeline.md | 236 +++++++++--------- docs/other-tools/apache-spark/spark.md | 160 ++++++------ docs/other-tools/index.md | 8 +- docs/other-tools/kubernetes/k0s.md | 50 ++-- .../k3s/k3s-ha-cluster-using-k3d.md | 2 +- .../kubernetes/k3s/k3s-ha-cluster.md | 44 ++-- .../kubernetes/k3s/k3s-using-k3d.md | 22 +- .../kubernetes/k3s/k3s-using-k3sup.md | 12 +- docs/other-tools/kubernetes/k3s/k3s.md | 114 ++++----- docs/other-tools/kubernetes/kind.md | 20 +- .../kubeadm/HA-clusters-with-kubeadm.md | 230 ++++++++--------- .../single-master-clusters-with-kubeadm.md | 162 ++++++------ docs/other-tools/kubernetes/kubespray.md | 108 ++++---- docs/other-tools/kubernetes/microk8s.md | 56 ++--- docs/other-tools/kubernetes/minikube.md | 132 
+++++----- .../nfs/nfs-server-client-setup.md | 32 +-- 71 files changed, 1427 insertions(+), 1433 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index c9fe717d..966b4138 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -42,16 +42,6 @@ repos: - id: markdownlint args: [-c, .markdownlint.yaml, --fix] - - repo: https://github.com/pre-commit/mirrors-prettier - rev: v4.0.0-alpha.8 - hooks: - - id: prettier - args: - - --ignore-path - - .prettierignore - - --config - - .prettierrc.yaml - - repo: https://github.com/Yelp/detect-secrets rev: v1.5.0 hooks: diff --git a/.prettierrc.yaml b/.prettierrc.yaml index 1aaeb8bf..5101d9dc 100644 --- a/.prettierrc.yaml +++ b/.prettierrc.yaml @@ -1,2 +1,2 @@ -printWidth: 100 +printWidth: 80 tabWidth: 4 diff --git a/.yamllint.yaml b/.yamllint.yaml index 690cfca6..435cdb40 100644 --- a/.yamllint.yaml +++ b/.yamllint.yaml @@ -1,26 +1,30 @@ -extends: default +.yamllint: + extends: default -ignore: | - *.param.yaml + ignore: | + *.param.yaml -rules: - braces: - level: error - max-spaces-inside: 1 # To format with Prettier - comments: - level: error - min-spaces-from-content: 1 # To be compatible with C++ and Python - document-start: - level: error - present: false # Don't need document start markers - line-length: disable # Delegate to Prettier - indentation: - indent-sequences: whatever - hyphens: - max-spaces-after: 4 - truthy: - level: error - check-keys: false # To allow 'on' of GitHub Actions - quoted-strings: - level: error - required: only-when-needed # To keep consistent style + rules: + braces: + level: error + max-spaces-inside: 1 # Ensure consistent spacing inside braces + comments: + level: error + min-spaces-from-content: 1 # Keep compatibility with C++ and Python + document-start: + level: error + present: false # Document start markers not needed + line-length: + level: warning + max: 80 # Enforce a maximum line length + indentation: + spaces: 4 # Set tab width to 4 spaces + indent-sequences: whatever + hyphens: + max-spaces-after: 4 + truthy: + level: error + check-keys: false # Allow 'on' for GitHub Actions + quoted-strings: + level: error + required: only-when-needed # Ensure consistent style for strings diff --git a/docs/get-started/allocation/coldfront.md b/docs/get-started/allocation/coldfront.md index 6966c462..43aef008 100644 --- a/docs/get-started/allocation/coldfront.md +++ b/docs/get-started/allocation/coldfront.md @@ -30,28 +30,28 @@ can see an administrative view of it as [described here](../allocation/allocation-details.md#pi-and-manager-view) and can do the following tasks: -- **Only PI** can add a new project and archive any existing project(s) +- **Only PI** can add a new project and archive any existing project(s) -- Manage existing projects +- Manage existing projects -- Request allocations that fall under projects in NERC's resources such as clusters, - cloud resources, servers, storage, and software licenses +- Request allocations that fall under projects in NERC's resources such as clusters, + cloud resources, servers, storage, and software licenses -- Add/remove user access to/from allocated resources who is a member of the project - without requiring system administrator interaction +- Add/remove user access to/from allocated resources who is a member of the project + without requiring system administrator interaction -- Elevate selected users to 'manager' status, allowing them to handle some of the - PI asks such as request new resource allocations, add/remove users to/from 
resource - allocations, add project data such as grants and publications +- Elevate selected users to 'manager' status, allowing them to handle some of the + PI asks such as request new resource allocations, add/remove users to/from resource + allocations, add project data such as grants and publications -- Monitor resource utilization such as storage and cloud usage +- Monitor resource utilization such as storage and cloud usage -- Receive email notifications for expiring/renewing access to resources as well - as notifications when allocations change status - i.e. Active, Active (Needs - Renewal), Denied, Expired +- Receive email notifications for expiring/renewing access to resources as well + as notifications when allocations change status - i.e. Active, Active (Needs + Renewal), Denied, Expired -- Provide information such as grants, publications, and other reportable data for - periodic review by center director to demonstrate need for the resources +- Provide information such as grants, publications, and other reportable data for + periodic review by center director to demonstrate need for the resources ## How to login to NERC's ColdFront? diff --git a/docs/get-started/best-practices/best-practices-for-harvard.md b/docs/get-started/best-practices/best-practices-for-harvard.md index ae7f2016..b853e7bb 100644 --- a/docs/get-started/best-practices/best-practices-for-harvard.md +++ b/docs/get-started/best-practices/best-practices-for-harvard.md @@ -100,9 +100,9 @@ Attackers often review running code on the server to see if they can obtain any sensitive credentials that may have been included in each script. To better manage your credentials, we recommend either using: -- [1password Credential Manager](https://1password.com/) +- [1password Credential Manager](https://1password.com/) -- [AWS Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) +- [AWS Secrets](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) #### Not Running the Application as the Root/Superuser @@ -136,13 +136,13 @@ and we recommend the following ways to strengthen your SSH accounts 1. Disable password only logins - - In file `/etc/ssh/sshd_config` change `PasswordAuthentication` to `no` to - disable tunneled clear text passwords i.e. `PasswordAuthentication no`. + - In file `/etc/ssh/sshd_config` change `PasswordAuthentication` to `no` to + disable tunneled clear text passwords i.e. `PasswordAuthentication no`. - - Uncomment the permit empty passwords option in the second line, and, if - needed, change `yes` to `no` i.e. `PermitEmptyPasswords no`. + - Uncomment the permit empty passwords option in the second line, and, if + needed, change `yes` to `no` i.e. `PermitEmptyPasswords no`. - - Then run `service ssh restart`. + - Then run `service ssh restart`. 2. Use SSH keys with passwords enabled on them @@ -191,11 +191,11 @@ In the event you suspect a security issue has occurred or wanted someone to supp a security assessment, please feel free to reach out to the HUIT Internet Security and Data Privacy group, specifically the Operations & Engineering team. 
-- [Email Harvard ITSEC-OPS](mailto:itsec-ops@harvard.edu) +- [Email Harvard ITSEC-OPS](mailto:itsec-ops@harvard.edu) -- [Service Queue](https://harvard.service-now.com/ithelp?id=submit_ticket&sys_id=3f1dd0320a0a0b99000a53f7604a2ef9) +- [Service Queue](https://harvard.service-now.com/ithelp?id=submit_ticket&sys_id=3f1dd0320a0a0b99000a53f7604a2ef9) -- [Harvard HUIT Slack](https://harvard-huit.slack.com) Channel: **#isdp-public** +- [Harvard HUIT Slack](https://harvard-huit.slack.com) Channel: **#isdp-public** ## Further References diff --git a/docs/get-started/best-practices/best-practices.md b/docs/get-started/best-practices/best-practices.md index 9e438918..2bd7257f 100644 --- a/docs/get-started/best-practices/best-practices.md +++ b/docs/get-started/best-practices/best-practices.md @@ -35,33 +35,33 @@ containers, workloads, and any code or data generated by the platform. All NERC users are responsible for their use of NERC services, which include: -- Following the best practices for security on NERC services. Please review your - institutional guidelines [next](best-practices-for-my-institution.md). +- Following the best practices for security on NERC services. Please review your + institutional guidelines [next](best-practices-for-my-institution.md). -- Complying with security policies regarding VMs and containers. NERC admins are - not responsible for maintaining or deploying VMs or containers created by PIs - for their projects. See Harvard University and Boston University policies - [here](https://nerc.mghpcc.org/privacy-and-security/). We will be adding more - institutions under this page soon. Without prior notice, NERC reserves the right - to shut down any VM or container that is causing internal or external problems - or violating these policies. +- Complying with security policies regarding VMs and containers. NERC admins are + not responsible for maintaining or deploying VMs or containers created by PIs + for their projects. See Harvard University and Boston University policies + [here](https://nerc.mghpcc.org/privacy-and-security/). We will be adding more + institutions under this page soon. Without prior notice, NERC reserves the right + to shut down any VM or container that is causing internal or external problems + or violating these policies. -- Adhering to institutional restrictions and compliance policies around the data - they upload and provide access to/from NERC. At NERC, we only offer users to - store internal data in which information is chosen to keep confidential but the - disclosure of which would not cause material harm to you, your users and your - institution. Your institution may have already classified and categorized data - and implemented security policies and guidance for each category. If your project - includes sensitive data and information then you might need to contact NERC's - admin as soon as possible to discuss other potential options. +- Adhering to institutional restrictions and compliance policies around the data + they upload and provide access to/from NERC. At NERC, we only offer users to + store internal data in which information is chosen to keep confidential but the + disclosure of which would not cause material harm to you, your users and your + institution. Your institution may have already classified and categorized data + and implemented security policies and guidance for each category. 
If your project + includes sensitive data and information then you might need to contact NERC's + admin as soon as possible to discuss other potential options. -- [Backups and/or snapshots](../../openstack/backup/backup-with-snapshots.md) - are the user's responsibility for volumes/data, configurations, objects, and - their state, which are useful in the case when users accidentally delete/lose - their data. NERC admins cannot recover lost data. In addition, while NERC stores - data with high redundancy to deal with computer or disk failures, PIs should - ensure they have off-site backups for disaster recovery, e.g., to deal with - occasional disruptions and outages due to the natural disasters that impact the - MGHPCC data center. +- [Backups and/or snapshots](../../openstack/backup/backup-with-snapshots.md) + are the user's responsibility for volumes/data, configurations, objects, and + their state, which are useful in the case when users accidentally delete/lose + their data. NERC admins cannot recover lost data. In addition, while NERC stores + data with high redundancy to deal with computer or disk failures, PIs should + ensure they have off-site backups for disaster recovery, e.g., to deal with + occasional disruptions and outages due to the natural disasters that impact the + MGHPCC data center. --- diff --git a/docs/get-started/cost-billing/how-pricing-works.md b/docs/get-started/cost-billing/how-pricing-works.md index 9baf6da3..5b25d5ef 100644 --- a/docs/get-started/cost-billing/how-pricing-works.md +++ b/docs/get-started/cost-billing/how-pricing-works.md @@ -54,11 +54,11 @@ of the base SU for the maximum resource they reserve. **GPU SU Example:** -- A Project or VM with: +- A Project or VM with: `1 A100 GPU, 24 vCPUs, 95MiB RAM, 199.2hrs` -- Will be charged: +- Will be charged: `1 A100 GPU SUs x 200hrs (199.2 rounded up) x $1.803` @@ -66,11 +66,11 @@ of the base SU for the maximum resource they reserve. **OpenStack CPU SU Example:** -- A Project or VM with: +- A Project or VM with: `3 vCPU, 20 GiB RAM, 720hrs (24hr x 30days)` -- Will be charged: +- Will be charged: `5 CPU SUs due to the extra RAM (20GiB vs. 12GiB(3 x 4GiB)) x 720hrs x $0.013` @@ -91,7 +91,7 @@ of the base SU for the maximum resource they reserve. **OpenShift CPU SU Example:** -- Project with 3 Pods with: +- Project with 3 Pods with: i. `1 vCPU, 3 GiB RAM, 720hrs (24hr*30days)` @@ -99,7 +99,7 @@ of the base SU for the maximum resource they reserve. iii. `2 vCPU, 4 GiB RAM, 720hrs (24hr*30days)` -- Project Will be charged: +- Project Will be charged: `RoundUP(Sum(` @@ -161,11 +161,11 @@ provisioned until it is deleted. **Storage Example 1:** -- Volume or VM with: +- Volume or VM with: `500GiB for 699.2hrs` -- Will be charged: +- Will be charged: `.5 Storage TiB SU (.5 TiB x 700hrs) x $0.009 TiB/hr` @@ -173,11 +173,11 @@ provisioned until it is deleted. 
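As a rough worked example of the formula above, 0.5 TiB x 700 hrs x $0.009 per TiB-hour comes to approximately `$3.15` for that billing period.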
**Storage Example 2:** -- Volume or VM with: +- Volume or VM with: `10TiB for 720hrs (24hr x 30days)` -- Will be charged: +- Will be charged: `10 Storage TiB SU (10TiB x 720 hrs) x $0.009 TiB/hr` diff --git a/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md b/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md index 2105c38f..f3d2c173 100644 --- a/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md +++ b/docs/openshift-ai/data-science-project/explore-the-jupyterlab-environment.md @@ -83,15 +83,15 @@ And a cell where we have entered some Python code: ![Jupyter Cell With Python Code](images/jupyter-cell-with-code.png) -- Code cells contain Python code that can be run interactively. It means that you - can modify the code, then run it, but only for this cell, not for the whole - content of the notebook! The code will not run on your computer or in the browser, - but directly in the environment you are connected to NERC RHOAI. - -- To run a code cell, you simply select it (select the cell, or on the left side - of it), and select the Run/Play button from the toolbar (you can also press - `CTRL+Enter` to run a cell, or `Shift+Enter` to run the cell and automatically - select the following one). +- Code cells contain Python code that can be run interactively. It means that you + can modify the code, then run it, but only for this cell, not for the whole + content of the notebook! The code will not run on your computer or in the browser, + but directly in the environment you are connected to NERC RHOAI. + +- To run a code cell, you simply select it (select the cell, or on the left side + of it), and select the Run/Play button from the toolbar (you can also press + `CTRL+Enter` to run a cell, or `Shift+Enter` to run the cell and automatically + select the following one). The Run button on the toolbar: diff --git a/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md b/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md index 2f715713..ca3d2648 100644 --- a/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/model-serving-in-the-rhoai.md @@ -4,9 +4,9 @@ To run a **model server** and **deploy a model** on it, you need to have: -- Select the correct data science project and create workbench, see [Populate - the data science project](using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see [Populate + the data science project](using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. ## Create a data connection @@ -20,17 +20,17 @@ Data connections are configurations for remote data location. Within this window enter the information about the S3-compatible object bucket where the model is stored. Enter the following information: -- **Name**: The name you want to give to the data connection. +- **Name**: The name you want to give to the data connection. -- **Access Key**: The access key to the bucket. +- **Access Key**: The access key to the bucket. -- **Secret Key**: The secret for the access key. +- **Secret Key**: The secret for the access key. -- **Endpoint**: The endpoint to connect to the storage. +- **Endpoint**: The endpoint to connect to the storage. -- **Region**: The region to connect to the storage. +- **Region**: The region to connect to the storage. 
-- **Bucket**: The name of the bucket. +- **Bucket**: The name of the bucket. **NOTE**: However, you are not required to use the S3 service from **Amazon Web Services (AWS)**. Any S3-compatible storage i.e. NERC OpenStack Container (Ceph), @@ -88,24 +88,24 @@ following details: ![Configure A New Model Server](images/configure-a-new-model-server.png) -- **Model server name** +- **Model server name** -- **Serving runtime**: either "OpenVINO Model Server" or "OpenVINO Model Server - (Supports GPUs)" +- **Serving runtime**: either "OpenVINO Model Server" or "OpenVINO Model Server + (Supports GPUs)" -- **Number of model server replicas**: This is the number of instances of the - model server engine that you want to deploy. You can scale it up as needed, - depending on the number of requests you will receive. +- **Number of model server replicas**: This is the number of instances of the + model server engine that you want to deploy. You can scale it up as needed, + depending on the number of requests you will receive. -- **Model server size**: This is the amount of resources, CPU, and RAM that will - be allocated to your server. Select the appropriate configuration for size and - the complexity of your model. +- **Model server size**: This is the amount of resources, CPU, and RAM that will + be allocated to your server. Select the appropriate configuration for size and + the complexity of your model. -- **Model route**: Check this box if you want the serving endpoint (the model serving - API) to be accessible outside of the OpenShift cluster through an external route. +- **Model route**: Check this box if you want the serving endpoint (the model serving + API) to be accessible outside of the OpenShift cluster through an external route. -- **Token authorization**: Check this box if you want to secure or restrict access - to the model by forcing requests to provide an authorization token. +- **Token authorization**: Check this box if you want to secure or restrict access + to the model by forcing requests to provide an authorization token. After adding and selecting options within the **Add model server** pop-up window, click **Add** to create the model server. @@ -146,17 +146,17 @@ initiate the Deploy model pop-up window as shown below: Enter the following information for your new model: -- **Model Name**: The name you want to give to your model (e.g., "coolstore"). +- **Model Name**: The name you want to give to your model (e.g., "coolstore"). -- **Model framework (name-version)**: The framework used to save this model. - At this time, OpenVINO IR or ONNX or Tensorflow are supported. +- **Model framework (name-version)**: The framework used to save this model. + At this time, OpenVINO IR or ONNX or Tensorflow are supported. -- **Model location**: Select the data connection that you created to store the - model. Alternatively, you can create another data connection directly from this - menu. +- **Model location**: Select the data connection that you created to store the + model. Alternatively, you can create another data connection directly from this + menu. -- **Folder path**: If your model is not located at the root of the bucket of your - data connection, you must enter the path to the folder it is in. +- **Folder path**: If your model is not located at the root of the bucket of your + data connection, you must enter the path to the folder it is in. 
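Before clicking **Deploy**, it can be useful to confirm that the model files actually sit under the folder path you plan to enter. One way to check is with the AWS CLI against your S3-compatible storage; every value below (keys, endpoint, bucket, and folder path) is a placeholder to be taken from your own data connection:

```sh
# All values are placeholders; copy them from your data connection settings.
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"

# List the objects stored under the folder path you intend to use for the model.
aws s3 ls --endpoint-url "<endpoint>" "s3://<bucket>/<folder-path>/" --recursive
```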
For our example project, let's name the **Model** as "coolstore", select "onnx-1" for the framework, select the Data location you created before for the @@ -184,16 +184,16 @@ for the gRPC and the REST URLs for the inference endpoints as shown below: **Notes:** -- The REST URL displayed is only the base address of the endpoint. You must - append `/v2/models/name-of-your-model/infer` to it to have the full address. - Example: `http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer` +- The REST URL displayed is only the base address of the endpoint. You must + append `/v2/models/name-of-your-model/infer` to it to have the full address. + Example: `http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer` -- The full documentation of the API (REST and gRPC) is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md). +- The full documentation of the API (REST and gRPC) is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/required_api.md). -- The gRPC proto file for the Model Server is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto). +- The gRPC proto file for the Model Server is [available here](https://github.com/kserve/kserve/blob/master/docs/predict-api/v2/grpc_predict_v2.proto). -- If you have exposed the model through an external route, the Inference endpoint - displays the full URL that you can copy. +- If you have exposed the model through an external route, the Inference endpoint + displays the full URL that you can copy. !!! note "Important Note" diff --git a/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md b/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md index 40b375ae..0b30b181 100644 --- a/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/testing-model-in-the-rhoai.md @@ -10,17 +10,17 @@ we can test it. ![Jupyter Hub Control Panel Menu](images/juyter-hub-control-panel-menu.png) -- In your project in JupyterLab, open the notebook `03_remote_inference.ipynb` - and follow the instructions to see how the model can be queried. +- In your project in JupyterLab, open the notebook `03_remote_inference.ipynb` + and follow the instructions to see how the model can be queried. -- Update the `grpc_url` as [noted before](model-serving-in-the-rhoai.md#deploy-the-model) - for the **the grpc URL value** from the deployed model on the NERC RHOAI Model - server. +- Update the `grpc_url` as [noted before](model-serving-in-the-rhoai.md#deploy-the-model) + for the **the grpc URL value** from the deployed model on the NERC RHOAI Model + server. 
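As a quick way to exercise the REST endpoint noted earlier, you can send a request that follows the v2 inference protocol from a terminal inside the cluster (for example, the workbench terminal), since the URL is an internal service address. The input name, shape, and data below are placeholders that only illustrate the request structure; the notebook shows how to build the real payload the coolstore model expects:

```sh
# Placeholder payload: demonstrates the v2 /infer request structure only.
# The model's actual input tensor name, shape, and data are prepared in the notebook.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"name": "<input-name>", "shape": [1, 4], "datatype": "FP32", "data": [0.1, 0.2, 0.3, 0.4]}]}' \
  http://modelmesh-serving.model-serving:8008/v2/models/coolstore/infer
```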
![Change grpc URL Value](images/change-grpc-url-value.png) -- Once you've completed the notebook's instructions, the object detection model - can isolate and recognize T-shirts, bottles, and hats in pictures, as shown below: +- Once you've completed the notebook's instructions, the object detection model + can isolate and recognize T-shirts, bottles, and hats in pictures, as shown below: ![Model Test to Detect Objects In An Image](images/model-test-object-detection.png) @@ -68,36 +68,36 @@ environment, or directly [here](https://github.com/nerc-project/nerc_rhoai_mlops To deploy the Pre-Post Processing Service service and the Application: -- From your [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org/), - navigate to your project corresponding to the _NERC RHOAI Data Science Project_ - and select the "Import YAML" button, represented by the "+" icon in the top - navigation bar as shown below: +- From your [NERC's OpenShift Web Console](https://console.apps.shift.nerc.mghpcc.org/), + navigate to your project corresponding to the _NERC RHOAI Data Science Project_ + and select the "Import YAML" button, represented by the "+" icon in the top + navigation bar as shown below: ![YAML Add Icon](images/yaml-upload-plus-icon.png) -- Verify that you selected the correct project. +- Verify that you selected the correct project. ![Correct Project Selected for YAML Editor](images/project-verify-yaml-editor.png) -- Copy/Paste the content of the file `pre_post_processor_deployment.yaml` inside - the opened YAML editor. If you have named your model **coolstore** as instructed, - you're good to go. If not, modify the value on **[line # 35](https://github.com/nerc-project/nerc_rhoai_mlops/blob/33b3b7fa7448756f3defb3d6ae793524d1c5ff14/deployment/pre_post_processor_deployment.yaml#L35C23-L35C32)** - with the name you set. You can then click the **Create** button as shown below: +- Copy/Paste the content of the file `pre_post_processor_deployment.yaml` inside + the opened YAML editor. If you have named your model **coolstore** as instructed, + you're good to go. If not, modify the value on **[line # 35](https://github.com/nerc-project/nerc_rhoai_mlops/blob/33b3b7fa7448756f3defb3d6ae793524d1c5ff14/deployment/pre_post_processor_deployment.yaml#L35C23-L35C32)** + with the name you set. You can then click the **Create** button as shown below: ![YAML Editor Add Pre-Post Processing Service Content](images/pre_post_processor_deployment-yaml-content.png) -- Once Resource is successfully created, you will see the following screen: +- Once Resource is successfully created, you will see the following screen: ![Resources successfully created Importing More YAML](images/yaml-import-new-content.png) -- Click on "Import more YAML" and Copy/Paste the content of the file `intelligent_application_deployment.yaml` - inside the opened YAML editor. Nothing to change here, you can then click the - **Create** button as shown below: +- Click on "Import more YAML" and Copy/Paste the content of the file `intelligent_application_deployment.yaml` + inside the opened YAML editor. 
Nothing to change here, you can then click the + **Create** button as shown below: ![YAML Editor Pre-Post Processing Service Content](images/intelligent_application_deployment-yaml-content.png) -- If both deployments are successful, you will be able to see both of them grouped - under "intelligent-application" on the **Topology View** menu, as shown below: +- If both deployments are successful, you will be able to see both of them grouped + under "intelligent-application" on the **Topology View** menu, as shown below: ![Intelligent Application Under Topology](images/intelligent_application-topology.png) @@ -112,18 +112,18 @@ You have first to allow it to use your camera, this is the interface you get: You have: -- The current view of your camera. +- The current view of your camera. -- A button to take a picture as shown here: +- A button to take a picture as shown here: ![Capture Camera Image](images/capture-camera-image.png) -- A button to switch from front to rear camera if you are using a phone: +- A button to switch from front to rear camera if you are using a phone: ![Switch Camera View](images/switch-camera-view.png) -- A **QR code** that you can use to quickly open the application on a phone - (much easier than typing the URL!): +- A **QR code** that you can use to quickly open the application on a phone + (much easier than typing the URL!): ![QR code](images/QR-code.png) @@ -137,14 +137,14 @@ below: There are two parameters you can change on this application: -- On the `ia-frontend` Deployment, you can modify the `DISPLAY_BOX` environment - variable from `true` to `false`. It will hide the bounding box and the inference - score, so that you get only the coupon flying over the item. +- On the `ia-frontend` Deployment, you can modify the `DISPLAY_BOX` environment + variable from `true` to `false`. It will hide the bounding box and the inference + score, so that you get only the coupon flying over the item. -- On the `ia-inference` Deployment, the one used for pre-post processing, you - can modify the `COUPON_VALUE` environment variable. The format is simply an - Array with the value of the coupon for the 3 classes: bottle, hat, shirt. As - you see, these values could be adjusted in real time, and this could even be - based on another ML model! +- On the `ia-inference` Deployment, the one used for pre-post processing, you + can modify the `COUPON_VALUE` environment variable. The format is simply an + Array with the value of the coupon for the 3 classes: bottle, hat, shirt. As + you see, these values could be adjusted in real time, and this could even be + based on another ML model! --- diff --git a/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md b/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md index 3974e29f..7b6d8c15 100644 --- a/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md +++ b/docs/openshift-ai/data-science-project/using-projects-the-rhoai.md @@ -27,17 +27,17 @@ details page, as shown below: Within the data science project, you can add the following configuration options: -- **Workbenches**: Development environments within your project where you can access - notebooks and generate models. +- **Workbenches**: Development environments within your project where you can access + notebooks and generate models. -- **Cluster storage**: Storage for your project in your OpenShift cluster. +- **Cluster storage**: Storage for your project in your OpenShift cluster. 
-- **Data connections**: A list of data sources that your project uses. +- **Data connections**: A list of data sources that your project uses. -- **Pipelines**: A list of created and configured pipeline servers. +- **Pipelines**: A list of created and configured pipeline servers. -- **Models and model servers**: A list of models and model servers that your project - uses. +- **Models and model servers**: A list of models and model servers that your project + uses. As you can see in the project's details figure, our selected data science project currently has no workbenches, storage, data connections, pipelines, or model servers. @@ -58,23 +58,23 @@ On the Create workbench page, complete the following information. **Note**: Not all fields are required. -- Name +- Name -- Description +- Description -- Notebook image (Image selection) +- Notebook image (Image selection) -- Deployment size (Container size and Number of GPUs) +- Deployment size (Container size and Number of GPUs) -- Environment variables +- Environment variables -- Cluster storage name +- Cluster storage name -- Cluster storage description +- Cluster storage description -- Persistent storage size +- Persistent storage size -- Data connections +- Data connections !!! tip "How to specify CPUs, Memory, and GPUs for your JupyterLab workbench?" diff --git a/docs/openshift-ai/get-started/rhoai-overview.md b/docs/openshift-ai/get-started/rhoai-overview.md index 01b9361a..b5a24491 100644 --- a/docs/openshift-ai/get-started/rhoai-overview.md +++ b/docs/openshift-ai/get-started/rhoai-overview.md @@ -20,18 +20,18 @@ graphics processing unit (GPU) resources. Recent enhancements to Red Hat OpenShift AI include: -- Implementation **Deployment pipelines** for monitoring AI/ML experiments and - automating ML workflows accelerate the iteration process for data scientists - and developers of intelligent applications. This integration facilitates swift - iteration on machine learning projects and embeds automation into application - deployment and updates. - -- **Model serving** now incorporates GPU assistance for inference tasks and custom - model serving runtimes, enhancing inference performance and streamlining the - deployment of foundational models. - -- With **Model monitoring**, organizations can oversee performance and operational - metrics through a centralized dashboard, enhancing management capabilities. +- Implementation **Deployment pipelines** for monitoring AI/ML experiments and + automating ML workflows accelerate the iteration process for data scientists + and developers of intelligent applications. This integration facilitates swift + iteration on machine learning projects and embeds automation into application + deployment and updates. + +- **Model serving** now incorporates GPU assistance for inference tasks and custom + model serving runtimes, enhancing inference performance and streamlining the + deployment of foundational models. + +- With **Model monitoring**, organizations can oversee performance and operational + metrics through a centralized dashboard, enhancing management capabilities. ## Red Hat OpenShift AI ecosystem diff --git a/docs/openshift-ai/index.md b/docs/openshift-ai/index.md index b411871a..7e1f4446 100644 --- a/docs/openshift-ai/index.md +++ b/docs/openshift-ai/index.md @@ -9,29 +9,29 @@ the list below. 
## NERC OpenShift AI Getting Started -- [NERC Red Hat OpenShift AI (RHOAI) Overview](get-started/rhoai-overview.md) - **<<-- Start Here** +- [NERC Red Hat OpenShift AI (RHOAI) Overview](get-started/rhoai-overview.md) + **<<-- Start Here** ## NERC OpenShift AI dashboard -- [Access the NERC's OpenShift AI dashboard](logging-in/access-the-rhoai-dashboard.md) +- [Access the NERC's OpenShift AI dashboard](logging-in/access-the-rhoai-dashboard.md) -- [The NERC's OpenShift AI dashboard Overview](logging-in/the-rhoai-dashboard-overview.md) +- [The NERC's OpenShift AI dashboard Overview](logging-in/the-rhoai-dashboard-overview.md) ## Using Data Science Project in the NERC RHOAI -- [Using Your Data Science Project (DSP)](data-science-project/using-projects-the-rhoai.md) +- [Using Your Data Science Project (DSP)](data-science-project/using-projects-the-rhoai.md) -- [Explore the JupyterLab Environment](data-science-project/explore-the-jupyterlab-environment.md) +- [Explore the JupyterLab Environment](data-science-project/explore-the-jupyterlab-environment.md) -- [Model Serving in the NERC RHOAI](data-science-project/model-serving-in-the-rhoai.md) +- [Model Serving in the NERC RHOAI](data-science-project/model-serving-in-the-rhoai.md) -- [Test the Model in the NERC RHOAI](data-science-project/testing-model-in-the-rhoai.md) +- [Test the Model in the NERC RHOAI](data-science-project/testing-model-in-the-rhoai.md) ## Other Example Projects -- [How to access, download, and analyze data for S3 usage](other-projects/how-access-s3-data-then-download-and-analyze-it.md) +- [How to access, download, and analyze data for S3 usage](other-projects/how-access-s3-data-then-download-and-analyze-it.md) -- [Configure a Jupyter Notebook to use GPUs for AI/ML modeling](other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md) +- [Configure a Jupyter Notebook to use GPUs for AI/ML modeling](other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md) --- diff --git a/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md b/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md index beafa00e..d11483cd 100644 --- a/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md +++ b/docs/openshift-ai/logging-in/the-rhoai-dashboard-overview.md @@ -4,10 +4,10 @@ In the NERC's RHOAI dashboard, you can see multiple links on your left hand side 1. **Applications**: - - _Enabled_: Launch your enabled applications, view documentation, or get - started with quick start instructions and tasks. + - _Enabled_: Launch your enabled applications, view documentation, or get + started with quick start instructions and tasks. - - _Explore_: View optional applications for your RHOAI instance. + - _Explore_: View optional applications for your RHOAI instance. **NOTE**: Most of them are disabled by default on NERC RHOAI right now. @@ -26,11 +26,11 @@ In the NERC's RHOAI dashboard, you can see multiple links on your left hand side 3. **Data Science Pipelines**: - - _Pipelines_: Manage your pipelines for a specific project selected from the - dropdown menu. + - _Pipelines_: Manage your pipelines for a specific project selected from the + dropdown menu. - - _Runs_: Manage and view your runs for a specific project selected from the - dropdown menu. + - _Runs_: Manage and view your runs for a specific project selected from the + dropdown menu. 4. 
**Model Serving**: Manage and view the health and performance of your deployed models across different projects corresponding to your NERC-OCP (OpenShift) diff --git a/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md b/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md index aa718e72..b1be38de 100644 --- a/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md +++ b/docs/openshift-ai/other-projects/configure-jupyter-notebook-use-gpus-aiml-modeling.md @@ -4,9 +4,9 @@ Prepare your Jupyter notebook server for using a GPU, you need to have: -- Select the correct data science project and create workbench, see - [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see + [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. Please ensure that you start your Jupyter notebook server with options as depicted in the following configuration screen. This screen provides you with the opportunity diff --git a/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md b/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md index 2c65ab11..3a8f4953 100644 --- a/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md +++ b/docs/openshift-ai/other-projects/how-access-s3-data-then-download-and-analyze-it.md @@ -4,9 +4,9 @@ Prepare your Jupyter notebook server for using a GPU, you need to have: -- Select the correct data science project and create workbench, see - [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) - for more information. +- Select the correct data science project and create workbench, see + [Populate the data science project](../data-science-project/using-projects-the-rhoai.md#populate-the-data-science-project-with-a-workbench) + for more information. Please ensure that you start your Jupyter notebook server with options as depicted in the following configuration screen. This screen provides you with the opportunity @@ -73,11 +73,11 @@ content section of the environment, on the right. Run each cell in the notebook, using the _Shift-Enter_ key combination, and pay attention to the execution results. Using this notebook, we will: -- Make a connection to an AWS S3 storage bucket +- Make a connection to an AWS S3 storage bucket -- Download a CSV file into the "datasets" folder +- Download a CSV file into the "datasets" folder -- Rename the downloaded CSV file to "newtruckdata.csv" +- Rename the downloaded CSV file to "newtruckdata.csv" ### View your new CSV file @@ -93,11 +93,11 @@ The file contains the data you will analyze and perform some analytics. Since you now have data, you can open the next Jupyter notebook, `simpleCalc.ipynb`, and perform the following operations: -- Create a dataframe. +- Create a dataframe. -- Perform simple total and average calculations. +- Perform simple total and average calculations. -- Print the calculation results. +- Print the calculation results. 
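If you want a quick look at the downloaded data outside the notebooks, a terminal in the same workbench can inspect the file directly; this assumes the earlier download step has already placed `newtruckdata.csv` in the `datasets` folder:

```sh
# Confirm the CSV landed where the notebooks expect it.
ls -lh datasets/

# Peek at the first few rows and count the records (assumes the file already exists).
head -n 5 datasets/newtruckdata.csv
wc -l datasets/newtruckdata.csv
```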
## Analyzing your S3 data access run results diff --git a/docs/openshift/applications/scaling-and-performance-guide.md b/docs/openshift/applications/scaling-and-performance-guide.md index be829347..f47d95e9 100644 --- a/docs/openshift/applications/scaling-and-performance-guide.md +++ b/docs/openshift/applications/scaling-and-performance-guide.md @@ -83,10 +83,10 @@ below: CPU and memory can be specified in a couple of ways: -- Resource **requests** and _limits_ are optional parameters specified at the container - level. OpenShift computes a Pod's request and limit as the sum of requests and - limits across all of its containers. OpenShift then uses these parameters for - scheduling and resource allocation decisions. +- Resource **requests** and _limits_ are optional parameters specified at the container + level. OpenShift computes a Pod's request and limit as the sum of requests and + limits across all of its containers. OpenShift then uses these parameters for + scheduling and resource allocation decisions. The **request** value specifies the min value you will be guaranteed. The request value is also used by the scheduler to assign pods to nodes. @@ -106,9 +106,9 @@ CPU and memory can be specified in a couple of ways: will remain in a *Pending* state (i.e. not running) until these resources become available. -- The **limit** value specifies the max value you can consume. Limit is the value - applications should be tuned to use. Pods will be memory, CPU throttled when - they exceed their available memory and CPU limit. +- The **limit** value specifies the max value you can consume. Limit is the value + applications should be tuned to use. Pods will be memory, CPU throttled when + they exceed their available memory and CPU limit. CPU is measured in units called millicores, where 1000 millicores ("m") = 1 vCPU or 1 Core. Each node in a cluster inspects the operating system to determine the @@ -253,8 +253,8 @@ Click on the component node to see the _Overview_ panel to the right. Use the **Details** tab to: -- Scale your pods using the up and down arrows to increase or decrease the number - of pods or instances of the application manually as shown below: +- Scale your pods using the up and down arrows to increase or decrease the number + of pods or instances of the application manually as shown below: ![Scale the Pod Count](images/pod-scale-count-arrow.png) @@ -264,22 +264,22 @@ Use the **Details** tab to: ![Edit the Pod Count](images/scale-pod-count.png) -- Check the Labels, Annotations, and Status of the application. +- Check the Labels, Annotations, and Status of the application. Click the **Resources** tab to: -- See the list of all the pods, view their status, access logs, and click on the - pod to see the pod details. +- See the list of all the pods, view their status, access logs, and click on the + pod to see the pod details. -- See the builds, their status, access logs, and start a new build if needed. +- See the builds, their status, access logs, and start a new build if needed. -- See the services and routes used by the component. +- See the services and routes used by the component. Click the **Observe** tab to: -- See the metrics to see CPU usage, Memory usage and Bandwidth consumption. +- See the metrics to see CPU usage, Memory usage and Bandwidth consumption. -- See the Events. +- See the Events. !!! note "Detailed Monitoring your project and application metrics" @@ -389,14 +389,14 @@ maximum numbers to maintain the specified CPU utilization across all pods. 
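The same autoscaling behavior can also be set up with the OpenShift CLI instead of the web console; in this minimal sketch, `my-app` is a placeholder for the name of your application's deployment, and the thresholds simply mirror the CPU-utilization idea described above:

```sh
# "my-app" is a placeholder; use the deployment behind your application.
oc autoscale deployment/my-app --min=1 --max=10 --cpu-percent=75

# Review the resulting HorizontalPodAutoscaler and its current vs. target utilization.
oc get hpa
oc describe hpa my-app
```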
#### To create an HPA in the web console -- In the **Topology** view, click the node to reveal the side pane. +- In the **Topology** view, click the node to reveal the side pane. -- From the _Actions_ drop-down list, select **Add HorizontalPodAutoscaler** as - shown below: +- From the _Actions_ drop-down list, select **Add HorizontalPodAutoscaler** as + shown below: ![Horizontal Pod Autoscaler Popup](images/add-hpa-popup.png) -- This will open the **Add HorizontalPodAutoscaler** form as shown below: +- This will open the **Add HorizontalPodAutoscaler** form as shown below: ![Horizontal Pod Autoscaler Form](images/hpa-form.png) @@ -405,26 +405,26 @@ maximum numbers to maintain the specified CPU utilization across all pods. While creating or editing the horizontal pod autoscaler in the web console, you can switch from **Form view** to **YAML view**. -- From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum - pod limits, the CPU and memory usage, and click **Save**. +- From the **Add HorizontalPodAutoscaler** form, define the name, minimum and maximum + pod limits, the CPU and memory usage, and click **Save**. #### To edit an HPA in the web console -- In the **Topology** view, click the node to reveal the side pane. +- In the **Topology** view, click the node to reveal the side pane. -- From the **Actions** drop-down list, select **Edit HorizontalPodAutoscaler** - to open the **Edit Horizontal Pod Autoscaler** form. +- From the **Actions** drop-down list, select **Edit HorizontalPodAutoscaler** + to open the **Edit Horizontal Pod Autoscaler** form. -- From the **Edit Horizontal Pod Autoscaler** form, edit the minimum and maximum - pod limits and the CPU and memory usage, and click **Save**. +- From the **Edit Horizontal Pod Autoscaler** form, edit the minimum and maximum + pod limits and the CPU and memory usage, and click **Save**. #### To remove an HPA in the web console -- In the **Topology** view, click the node to reveal the side panel. +- In the **Topology** view, click the node to reveal the side panel. -- From the **Actions** drop-down list, select **Remove HorizontalPodAutoscaler**. +- From the **Actions** drop-down list, select **Remove HorizontalPodAutoscaler**. -- In the confirmation pop-up window, click **Remove** to remove the HPA. +- In the confirmation pop-up window, click **Remove** to remove the HPA. !!! tip "Best Practices" diff --git a/docs/openshift/decommission/decommission-openshift-resources.md b/docs/openshift/decommission/decommission-openshift-resources.md index 8b82b07d..35d010bf 100644 --- a/docs/openshift/decommission/decommission-openshift-resources.md +++ b/docs/openshift/decommission/decommission-openshift-resources.md @@ -5,16 +5,16 @@ below. ## Prerequisite -- **Backup**: Back up any critical data or configurations stored on the resources - that going to be decommissioned. This ensures that important information is not - lost during the process. +- **Backup**: Back up any critical data or configurations stored on the resources + that going to be decommissioned. This ensures that important information is not + lost during the process. -- **Kubernetes Objects (Resources)**: Please review all OpenShift Kubernetes Objects - (Resources) to ensure they are not actively used and ready to be decommissioned. +- **Kubernetes Objects (Resources)**: Please review all OpenShift Kubernetes Objects + (Resources) to ensure they are not actively used and ready to be decommissioned. 
-- Install and configure the **OpenShift CLI (oc)**, see [How to Setup the - OpenShift CLI Tools](../logging-in/setup-the-openshift-cli.md) - for more information. +- Install and configure the **OpenShift CLI (oc)**, see [How to Setup the + OpenShift CLI Tools](../logging-in/setup-the-openshift-cli.md) + for more information. ## Delete all Data Science Project resources from the NERC's Red Hat OpenShift AI diff --git a/docs/openshift/get-started/openshift-overview.md b/docs/openshift/get-started/openshift-overview.md index 68d2860f..376d9b59 100644 --- a/docs/openshift/get-started/openshift-overview.md +++ b/docs/openshift/get-started/openshift-overview.md @@ -15,65 +15,65 @@ OpenShift is a container orchestration platform that provides a number of compon and tools to help you build, deploy, and manage applications. Here are some of the basic components of OpenShift: -- **Project**: A project is a logical grouping of resources in the NERC's OpenShift - platform that provides isolation from others resources. - -- **Nodes**: Nodes are the physical or virtual machines that run the applications - and services in your OpenShift cluster. - -- **Image**: An image is a non-changing, definition of file structures and programs - for running an application. - -- **Container**: A container is an instance of an image with the addition of other - operating system components such as networking and running programs. Containers - are used to run applications and services in OpenShift. - -- **Pods**: Pods are the smallest deployable units defined, deployed, and managed - in OpenShift, that group related one or more containers that need to share resources. - -- **Services**: Services are logical representations of a set of pods that provide - a network endpoint for access to the application or service. Services can be - used to load balance traffic across multiple pods, and they can be accessed - using a stable DNS name. Services are assigned an IP address and port and proxy - connections to backend pods. This allows the pods to change while the connection - details of the service remain consistent. - -- **Volume**: A volume is a persistent file space available to pods and containers - for storing data. Containers are immutable and therefore upon a restart any - contents are cleared and reset to the original state of the image used to create - the container. Volumes provide storage space for files that need to persist - through container restarts. - -- **Routes**: Routes can be used to expose services to external clients to connections - outside the platform. A route is assigned a name in DNS when set up to make it - easily accessible. They can be configured with custom hostnames and TLS certificates. - -- **Replication Controllers**: A replication controller (rc) is a built-in mechanism - that ensures a defined number of pods are running at all times. An asset that - indicates how many pod replicas are required to run at a time. If a pod unexpectedly - quits or is deleted, a new copy of the pod is created and started. Additionally, - if more pods are running than the defined number, the replication controller - will delete the extra pods to get down to the defined number. - -- **Namespace**: A Namespace is a way to logically isolate resources within the - Cluster. In our case every project gets an unique namespace. - -- **Role-based access control (RBAC)**: A key security control to ensure that cluster - users and workloads have only access to resources required to execute their roles. 
- -- **Deployment Configurations**: A deployment configuration (dc) is an extension - of a replication controller that is used to push out a new version of application - code. Deployment configurations are used to define the process of deploying - applications and services to OpenShift. Deployment configurations - can be used to specify the number of replicas, the resources required by the - application, and the deployment strategy to use. - -- **Application URL Components**: When an application developer adds an application - to a project, a unique DNS name is created for the application via a Route. All - application DNS names will have a hyphen separator between your application name - and your unique project namespace. If the application is a web application, this - DNS name is also used for the URL to access the application. All names are in - the form of `-.apps.shift.nerc.mghpcc.org`. - For example: `mytestapp-mynamespace.apps.shift.nerc.mghpcc.org`. +- **Project**: A project is a logical grouping of resources in the NERC's OpenShift + platform that provides isolation from others resources. + +- **Nodes**: Nodes are the physical or virtual machines that run the applications + and services in your OpenShift cluster. + +- **Image**: An image is a non-changing, definition of file structures and programs + for running an application. + +- **Container**: A container is an instance of an image with the addition of other + operating system components such as networking and running programs. Containers + are used to run applications and services in OpenShift. + +- **Pods**: Pods are the smallest deployable units defined, deployed, and managed + in OpenShift, that group related one or more containers that need to share resources. + +- **Services**: Services are logical representations of a set of pods that provide + a network endpoint for access to the application or service. Services can be + used to load balance traffic across multiple pods, and they can be accessed + using a stable DNS name. Services are assigned an IP address and port and proxy + connections to backend pods. This allows the pods to change while the connection + details of the service remain consistent. + +- **Volume**: A volume is a persistent file space available to pods and containers + for storing data. Containers are immutable and therefore upon a restart any + contents are cleared and reset to the original state of the image used to create + the container. Volumes provide storage space for files that need to persist + through container restarts. + +- **Routes**: Routes can be used to expose services to external clients to connections + outside the platform. A route is assigned a name in DNS when set up to make it + easily accessible. They can be configured with custom hostnames and TLS certificates. + +- **Replication Controllers**: A replication controller (rc) is a built-in mechanism + that ensures a defined number of pods are running at all times. An asset that + indicates how many pod replicas are required to run at a time. If a pod unexpectedly + quits or is deleted, a new copy of the pod is created and started. Additionally, + if more pods are running than the defined number, the replication controller + will delete the extra pods to get down to the defined number. + +- **Namespace**: A Namespace is a way to logically isolate resources within the + Cluster. In our case every project gets an unique namespace. 
+ +- **Role-based access control (RBAC)**: A key security control to ensure that cluster + users and workloads have only access to resources required to execute their roles. + +- **Deployment Configurations**: A deployment configuration (dc) is an extension + of a replication controller that is used to push out a new version of application + code. Deployment configurations are used to define the process of deploying + applications and services to OpenShift. Deployment configurations + can be used to specify the number of replicas, the resources required by the + application, and the deployment strategy to use. + +- **Application URL Components**: When an application developer adds an application + to a project, a unique DNS name is created for the application via a Route. All + application DNS names will have a hyphen separator between your application name + and your unique project namespace. If the application is a web application, this + DNS name is also used for the URL to access the application. All names are in + the form of `-.apps.shift.nerc.mghpcc.org`. + For example: `mytestapp-mynamespace.apps.shift.nerc.mghpcc.org`. --- diff --git a/docs/openshift/index.md b/docs/openshift/index.md index 918ec616..03f8ce2d 100644 --- a/docs/openshift/index.md +++ b/docs/openshift/index.md @@ -8,41 +8,41 @@ the list below. ## OpenShift Getting Started -- [OpenShift Overview](get-started/openshift-overview.md) - **<<-- Start Here** +- [OpenShift Overview](get-started/openshift-overview.md) + **<<-- Start Here** ## OpenShift Web Console -- [Access the NERC's OpenShift Web Console](logging-in/access-the-openshift-web-console.md) -- [Web Console Overview](logging-in/web-console-overview.md) +- [Access the NERC's OpenShift Web Console](logging-in/access-the-openshift-web-console.md) +- [Web Console Overview](logging-in/web-console-overview.md) ## OpenShift command-line interface (CLI) Tools -- [OpenShift CLI Tools Overview](logging-in/the-openshift-cli.md) -- [How to Setup the OpenShift CLI Tools](logging-in/setup-the-openshift-cli.md) +- [OpenShift CLI Tools Overview](logging-in/the-openshift-cli.md) +- [How to Setup the OpenShift CLI Tools](logging-in/setup-the-openshift-cli.md) ## Creating Your First Application on OpenShift -- [Creating A Sample Application](applications/creating-a-sample-application.md) +- [Creating A Sample Application](applications/creating-a-sample-application.md) -- [Creating Your Own Developer Catalog Service](applications/creating-your-own-developer-catalog-service.md) +- [Creating Your Own Developer Catalog Service](applications/creating-your-own-developer-catalog-service.md) ## Editing Applications -- [Editing your applications](applications/editing-applications.md) +- [Editing your applications](applications/editing-applications.md) -- [Scaling and Performance Guide](applications/scaling-and-performance-guide.md) +- [Scaling and Performance Guide](applications/scaling-and-performance-guide.md) ## Storage -- [Storage Overview](storage/storage-overview.md) +- [Storage Overview](storage/storage-overview.md) ## Deleting Applications -- [Deleting your applications](applications/deleting-applications.md) +- [Deleting your applications](applications/deleting-applications.md) ## Decommission OpenShift Resources -- [Decommission OpenShift Resources](decommission/decommission-openshift-resources.md) +- [Decommission OpenShift Resources](decommission/decommission-openshift-resources.md) --- diff --git a/docs/openshift/logging-in/access-the-openshift-web-console.md 
b/docs/openshift/logging-in/access-the-openshift-web-console.md index 147e4a95..e986ebf5 100644 --- a/docs/openshift/logging-in/access-the-openshift-web-console.md +++ b/docs/openshift/logging-in/access-the-openshift-web-console.md @@ -20,13 +20,13 @@ Next, you will be redirected to CILogon welcome page as shown below: MGHPCC Shared Services (MSS) Keycloak will request approval of access to the following information from the user: -- Your CILogon user identifier +- Your CILogon user identifier -- Your name +- Your name -- Your email address +- Your email address -- Your username and affiliation from your identity provider +- Your username and affiliation from your identity provider which are required in order to allow access your account on NERC's OpenStack web console. diff --git a/docs/openshift/logging-in/the-openshift-cli.md b/docs/openshift/logging-in/the-openshift-cli.md index 9e4d71fb..3f2b6f8c 100644 --- a/docs/openshift/logging-in/the-openshift-cli.md +++ b/docs/openshift/logging-in/the-openshift-cli.md @@ -9,12 +9,12 @@ a command-line tool called _oc_. The OpenShift CLI is ideal in the following situations: -- Working directly with project source code +- Working directly with project source code -- Scripting OpenShift Container Platform operations +- Scripting OpenShift Container Platform operations -- Managing projects while restricted by bandwidth resources and the web console - is unavailable +- Managing projects while restricted by bandwidth resources and the web console + is unavailable It is recommended that developers should be comfortable with simple command-line tasks and the the NERC's OpenShift command-line tool. diff --git a/docs/openstack/access-and-security/create-a-key-pair.md b/docs/openstack/access-and-security/create-a-key-pair.md index 63c6c619..b9c083fe 100644 --- a/docs/openstack/access-and-security/create-a-key-pair.md +++ b/docs/openstack/access-and-security/create-a-key-pair.md @@ -96,9 +96,9 @@ You can now skip ahead to [Adding the key to an ssh-agent](#adding-your-ssh-key- To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To create OpenStack keypair using the CLI, do this: diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index 9ff4d3ca..c530dcb9 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -71,11 +71,11 @@ dialog box. Enter the following values: -- Rule: SSH +- Rule: SSH -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 !!! note "Note" @@ -99,13 +99,13 @@ choose "ALL ICMP" from the dropdown. In the Add Rule dialog box, enter the following values: -- Rule: All ICMP +- Rule: All ICMP -- Direction: Ingress +- Direction: Ingress -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 ![Adding ICMP - ping in Security Group Rules](images/ping_icmp_security_rule.png) @@ -135,11 +135,11 @@ Choose "RDP" from the Rule dropdown option as shown below: Enter the following values: -- Rule: RDP +- Rule: RDP -- Remote: CIDR +- Remote: CIDR -- CIDR: 0.0.0.0/0 +- CIDR: 0.0.0.0/0 !!! note "Note" @@ -154,15 +154,15 @@ from any IP address. 
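The same SSH, ping (ICMP), and RDP rules described above can also be added with the OpenStack CLI; in this sketch, `my-sg` is a placeholder for the name of your security group, and the direction defaults to ingress:

```sh
# "my-sg" is a placeholder security group name.
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 my-sg    # SSH
openstack security group rule create --protocol icmp --remote-ip 0.0.0.0/0 my-sg                 # ping
openstack security group rule create --protocol tcp --dst-port 3389 --remote-ip 0.0.0.0/0 my-sg  # RDP
```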
## Editing Existing Security Group and Adding New Security Rules -- Navigate to Security Groups: +- Navigate to Security Groups: Navigate to _Project -> Network -> Security Groups_. -- Select the Security Group: +- Select the Security Group: Choose the security group to which you want to add new rules. -- Add New Rule: +- Add New Rule: Look for an option to add a new rule within the selected security group. @@ -173,7 +173,7 @@ from any IP address. ![Add New Security Rules](images/sg_new_rule.png) -- Save Changes: +- Save Changes: Save the changes to apply the new security rules to the selected security group. diff --git a/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md b/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md index 94d08498..02732f75 100644 --- a/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md +++ b/docs/openstack/advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md @@ -154,16 +154,16 @@ its administrative access) Please fill out the following information on this popup box: -- Scheme: _http_ +- Scheme: _http_ -- Forward Hostname/IP: - _``_ +- Forward Hostname/IP: + _``_ -- Forward Port: _``_ +- Forward Port: _``_ -- Enable all toggles i.e. Cache Assets, Block Common Exploits, Websockets Support +- Enable all toggles i.e. Cache Assets, Block Common Exploits, Websockets Support -- Access List: _Publicly Accessible_ +- Access List: _Publicly Accessible_ For your reference, you can review your selection should looks like below with your own Domain Name and other settings: diff --git a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md index 77bce7cd..7d72b70c 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md @@ -20,7 +20,7 @@ side of the screen. In the Create Network dialog box, specify the following values. -- Network tab: +- Network tab: Network Name: Specify a name to identify the network. @@ -33,7 +33,7 @@ In the Create Network dialog box, specify the following values. ![Create a Network](images/create_network.png) -- Subnet tab: +- Subnet tab: You do not have to specify a subnet when you create a network, but if you do not specify a subnet, the network can not be attached to an instance. @@ -44,9 +44,9 @@ In the Create Network dialog box, specify the following values. networks, you should use IP addresses which fall within the ranges that are specifically reserved for private networks: - 10.0.0.0/8 - 172.16.0.0/12 - 192.168.0.0/16 + 10.0.0.0/8 + 172.16.0.0/12 + 192.168.0.0/16 In the example below, we configure a network containing addresses 192.168.0.1 to 192.168.0.255 using CIDR 192.168.0.0/24 @@ -62,7 +62,7 @@ In the Create Network dialog box, specify the following values. Disable Gateway: Select this check box to disable a gateway IP address. -- Subnet Details tab +- Subnet Details tab Enable DHCP: Select this check box to enable DHCP so that your VM instances will automatically be assigned an IP on the subnet. 
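The same private network can also be created from the OpenStack CLI; the names below are placeholders, while the subnet range and DHCP setting match the 192.168.0.0/24 example above:

```sh
# Network and subnet names are placeholders.
openstack network create my-private-network

openstack subnet create my-private-subnet \
  --network my-private-network \
  --subnet-range 192.168.0.0/24 \
  --dhcp
```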
diff --git a/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md b/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md index 6a045946..5df1d680 100644 --- a/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md +++ b/docs/openstack/advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md @@ -58,7 +58,7 @@ b. Download the signed **VirtIO drivers** ISO file from the [Fedora website](htt c. Install [Virtual Machine Manager](https://virt-manager.org/download/) on your local Windows 10 machine using WSL: -- **Enable WSL on your local Windows 10 subsystem for Linux:** +- **Enable WSL on your local Windows 10 subsystem for Linux:** The steps given here are straightforward, however, before following them make sure on Windows 10, you have WSL enabled and have at least Ubuntu @@ -66,12 +66,12 @@ local Windows 10 machine using WSL: that then see our tutorial on [how to enable WSL and install Ubuntu over it](https://www.how2shout.com/how-to/enable-windows-subsystem-linux-feature.html). -- **Download and install MobaXterm:** +- **Download and install MobaXterm:** **MobaXterm** is a free application that can be downloaded using [this link](https://mobaxterm.mobatek.net/download-home-edition.html). After downloading, install it like any other normal Windows software. -- **Open MobaXterm and run WSL Linux:** +- **Open MobaXterm and run WSL Linux:** As you open this advanced terminal for Windows 10, WSL installed Ubuntu app will show on the left side panel of it. Double click on that to start @@ -79,14 +79,14 @@ local Windows 10 machine using WSL: ![MobaXterm WSL Ubuntu-20.04 LTS](images/a.mobaxterm_ubuntu_WSL.png) -- **Install Virt-Manager:** +- **Install Virt-Manager:** ```sh sudo apt update sudo apt install virt-manager ``` -- **Run Virtual Machine Manager:** +- **Run Virtual Machine Manager:** Start the Virtual Machine Manager running this command on the opened terminal: `virt-manager` as shown below: @@ -97,7 +97,7 @@ local Windows 10 machine using WSL: ![Virt-Manager interface](images/0.virtual-manager.png) -- **Connect QEMU/KVM user session on Virt-Manager:** +- **Connect QEMU/KVM user session on Virt-Manager:** ![Virt-Manager Add Connection](images/0.0.add_virtual_connection.png) diff --git a/docs/openstack/backup/backup-with-snapshots.md b/docs/openstack/backup/backup-with-snapshots.md index 50c62a08..37b0c84a 100644 --- a/docs/openstack/backup/backup-with-snapshots.md +++ b/docs/openstack/backup/backup-with-snapshots.md @@ -3,13 +3,13 @@ When you start a new instance, you can choose the Instance Boot Source from the following list: -- boot from image +- boot from image -- boot from instance snapshot +- boot from instance snapshot -- boot from volume +- boot from volume -- boot from volume snapshot +- boot from volume snapshot In its default configuration, when the instance is launched from an **Image** or an **Instance Snapshot**, the choice for utilizing persistent storage is configured @@ -34,12 +34,12 @@ data. This mainly serves two purposes: -- _As a backup mechanism:_ save the main disk of your instance to an image in - Horizon dashboard under _Project -> Compute -> Images_ and later boot a new instance - from this image with the saved data. 
+- _As a backup mechanism:_ save the main disk of your instance to an image in + Horizon dashboard under _Project -> Compute -> Images_ and later boot a new instance + from this image with the saved data. -- _As a templating mechanism:_ customise and upgrade a base image and save it to - use as a template for new instances. +- _As a templating mechanism:_ customise and upgrade a base image and save it to + use as a template for new instances. !!! info "Considerations: using Instance snapshots" @@ -57,8 +57,8 @@ This mainly serves two purposes: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To snapshot an instance to an image using the CLI, do this: @@ -164,8 +164,8 @@ Also, it consumes **less storage space** compared to instance snapshots. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To snapshot an instance to an image using the CLI, do this: diff --git a/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md b/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md index 630aba45..2f8b6d7c 100644 --- a/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md @@ -27,13 +27,13 @@ enabling SSH access to the private instances. Before trying to access instances from the outside world using SSH tunneling via Bastion Host, you need to make sure you have followed these steps: -- You followed the instruction in [Create a Key Pair](../../access-and-security/create-a-key-pair.md) - to set up a public ssh key. You can use the same key for both the bastion - host and the remote instances, or different keys; you'll just need to ensure - that the keys are loaded by ssh-agent appropriately so they can be used as - needed. Please read [this instruction](../../access-and-security/create-a-key-pair.md#adding-your-ssh-key-to-the-ssh-agent) - on how to add ssh-agent and load your private key using ssh-add command to - access the bastion host. +- You followed the instruction in [Create a Key Pair](../../access-and-security/create-a-key-pair.md) + to set up a public ssh key. You can use the same key for both the bastion + host and the remote instances, or different keys; you'll just need to ensure + that the keys are loaded by ssh-agent appropriately so they can be used as + needed. Please read [this instruction](../../access-and-security/create-a-key-pair.md#adding-your-ssh-key-to-the-ssh-agent) + on how to add ssh-agent and load your private key using ssh-add command to + access the bastion host. **Verify you have an SSH agent running. This should match whatever you built your cluster with.** @@ -54,11 +54,11 @@ ssh-add path/to/private/key ssh -A @ ``` -- Your public ssh-key was selected (in the Access and Security tab) while - [launching the instance](../launch-a-VM.md). +- Your public ssh-key was selected (in the Access and Security tab) while + [launching the instance](../launch-a-VM.md). 
-- Add two Security Groups, one will be used by the Bastion host and another one - will be used by any private instances. +- Add two Security Groups, one will be used by the Bastion host and another one + will be used by any private instances. ![Security Groups](images/security_groups.png) @@ -80,8 +80,8 @@ Group option as shown below: ![Private Instances Security Group](images/private_instances_sg.png) -- [Assign a Floating IP](../assign-a-floating-IP.md) to the Bastion host instance - in order to access it from outside world. +- [Assign a Floating IP](../assign-a-floating-IP.md) to the Bastion host instance + in order to access it from outside world. Make a note of the Floating IP you have associated to your instance. diff --git a/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md b/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md index e7cb14e7..1f3fb49d 100644 --- a/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md +++ b/docs/openstack/create-and-connect-to-the-VM/create-a-Windows-VM.md @@ -7,9 +7,9 @@ Windows virtual machine, similar steps can be used on other types of virtual machines. The following steps show how to create a virtual machine which boots from an external volume: -- Create a volume with source data from the image +- Create a volume with source data from the image -- Launch a VM with that volume as the system disk +- Launch a VM with that volume as the system disk !!! note "Recommendations" @@ -48,9 +48,9 @@ for the size of the volume as shown below: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To create a volume from image using the CLI, do this: @@ -168,9 +168,9 @@ Attach a Floating IP to your instance: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To launch an instance from existing bootable volume using the CLI, do this: diff --git a/docs/openstack/create-and-connect-to-the-VM/flavors.md b/docs/openstack/create-and-connect-to-the-VM/flavors.md index cf1c2541..86a6c41d 100644 --- a/docs/openstack/create-and-connect-to-the-VM/flavors.md +++ b/docs/openstack/create-and-connect-to-the-VM/flavors.md @@ -310,9 +310,9 @@ process. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see - [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see + [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
If you want to change the **flavor** that is bound to a VM, then you can run the
following openstack client commands; here we are changing the flavor of an existing

diff --git a/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md b/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md
index f63b7fbf..8c79ee33 100644
--- a/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md
+++ b/docs/openstack/create-and-connect-to-the-VM/launch-a-VM.md
@@ -2,12 +2,12 @@

**Prerequisites**:

-- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md)
-  to set up a public ssh key.
+- You followed the instructions in [Create a Key Pair](../access-and-security/create-a-key-pair.md)
+  to set up a public ssh key.

-- Make sure you have added rules in the
-  [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to
-  allow **ssh** using Port 22 access to the instance.
+- Make sure you have added rules in the
+  [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to
+  allow **ssh** using Port 22 access to the instance.

## Using Horizon dashboard

@@ -46,13 +46,13 @@ Double check that in the dropdown "Select Boot Source".

When you start a new instance, you can choose the Instance Boot Source from the
following list:

-- boot from image
+- boot from image

-- boot from instance snapshot
+- boot from instance snapshot

-- boot from volume
+- boot from volume

-- boot from volume snapshot
+- boot from volume snapshot

In its default configuration, when the instance is launched from an **Image**
or an **Instance Snapshot**, the choice for utilizing persistent storage is configured

diff --git a/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md b/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md
index 7d11a490..506642ce 100644
--- a/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md
+++ b/docs/openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md
@@ -4,18 +4,18 @@

Before trying to access instances from the outside world, you need to make sure
you have followed these steps:

-- You followed the instruction in [Create a Key Pair](../access-and-security/create-a-key-pair.md)
-  to set up a public ssh key.
+- You followed the instructions in [Create a Key Pair](../access-and-security/create-a-key-pair.md)
+  to set up a public ssh key.

-- Your public ssh-key has selected (in "Key Pair" tab) while
-  [launching the instance](launch-a-VM.md).
+- Your public ssh-key was selected (in the "Key Pair" tab) while
+  [launching the instance](launch-a-VM.md).

-- [Assign a Floating IP](assign-a-floating-IP.md) to the instance in order to
-  access it from outside world.
+- [Assign a Floating IP](assign-a-floating-IP.md) to the instance in order to
+  access it from the outside world.

-- Make sure you have added rules in the
-  [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to
-  allow **ssh** using Port 22 access to the instance.
+- Make sure you have added rules in the
+  [Security Groups](../access-and-security/security-groups.md#allowing-ssh) to
+  allow **ssh** using Port 22 access to the instance.

!!! info "How to update New Security Group(s) on any running VM?"

@@ -33,17 +33,17 @@

In our example, the IP is `199.94.60.66`.
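With the floating IP and your private key in hand, connecting is a one-liner. The
sketch below is illustrative only: the key path is a placeholder, the IP is the
example above, and the username depends on which base image you used (see the list
of default usernames that follows).

```sh
# Illustrative sketch - substitute your own private key path, the default
# username for your image (listed below), and your instance's floating IP.
ssh -i ~/.ssh/cloud.key ubuntu@199.94.60.66
```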
Default usernames for all the base images are: -- **all Ubuntu images**: ubuntu +- **all Ubuntu images**: ubuntu -- **all AlmaLinux images**: almalinux +- **all AlmaLinux images**: almalinux -- **all Rocky Linux images**: rocky +- **all Rocky Linux images**: rocky -- **all Fedora images**: fedora +- **all Fedora images**: fedora -- **all Debian images**: debian +- **all Debian images**: debian -- **all RHEL images**: cloud-user +- **all RHEL images**: cloud-user !!! warning "Removed Centos Images" diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md index 52ecb585..43566e52 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/sshuttle/index.md @@ -81,7 +81,7 @@ sudo dnf install sshuttle It is also possible to install into a **virtualenv** as _a non-root user_. -- From PyPI: +- From PyPI: ```sh virtualenv -p python3 /tmp/sshuttle @@ -89,7 +89,7 @@ virtualenv -p python3 /tmp/sshuttle pip install sshuttle ``` -- Clone: +- Clone: ```sh virtualenv -p python3 /tmp/sshuttle diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md index 5849849a..17bfd550 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md @@ -102,11 +102,11 @@ file and share it to the new client. It would be kind of pointless to have our VPN server allow anyone to connect. This is where our public & private keys come into play. -- Each **client's \*\*public\*\* key** needs to be added to the - **SERVER'S** configuration file +- Each **client's \*\*public\*\* key** needs to be added to the + **SERVER'S** configuration file -- The **server's \*\*public\*\* key** added to the **CLIENT'S** - configuration file +- The **server's \*\*public\*\* key** added to the **CLIENT'S** + configuration file ### Useful commands diff --git a/docs/openstack/data-transfer/data-transfer-from-to-vm.md b/docs/openstack/data-transfer/data-transfer-from-to-vm.md index e17b29e8..46e9835e 100644 --- a/docs/openstack/data-transfer/data-transfer-from-to-vm.md +++ b/docs/openstack/data-transfer/data-transfer-from-to-vm.md @@ -254,9 +254,9 @@ given NERC VM. To run the `rclone` commands, you need to have: -- To run the `rclone` commands you will need to have `rclone` installed. - See [Downloading and Installing the latest version of Rclone](https://rclone.org/downloads/) - for more information. +- To run the `rclone` commands you will need to have `rclone` installed. + See [Downloading and Installing the latest version of Rclone](https://rclone.org/downloads/) + for more information. ### Configuring Rclone @@ -396,33 +396,33 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. **Prerequisites**: -- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) - for more information. +- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) + for more information. -- Go to WinSCP menu and open "View > Preferences". +- Go to WinSCP menu and open "View > Preferences". -- When the "Preferences" dialog window appears, select "Transfer" in the options - on the left pane. 
+- When the "Preferences" dialog window appears, select "Transfer" in the options + on the left pane. -- Click on the "Edit" button. +- Click on the "Edit" button. -- Then, in the popup dialog box, review the "Common options" group and uncheck - the "Preserve timestamp" option as shown below: +- Then, in the popup dialog box, review the "Common options" group and uncheck + the "Preserve timestamp" option as shown below: ![Disable Preserve TimeStamp](images/winscp-preferences-perserve-timestamp-disable.png) #### Configuring WinSCP -- Click on the "New Tab" button as shown below: +- Click on the "New Tab" button as shown below: ![Login](images/winscp-new-tab.png) -- Select either **"SFTP"** or **"SCP"** from the "File protocol" dropdown options - as shown below: +- Select either **"SFTP"** or **"SCP"** from the "File protocol" dropdown options + as shown below: ![Choose SFTP or SCP File Protocol](images/choose_SFTP_or_SCP_protocol.png) -- Provide the following required information: +- Provide the following required information: **"File protocol"**: Choose either "**"SFTP"** or **"SCP"**" @@ -451,7 +451,7 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. **"Password"**: "``" -- Change Authentication Options +- Change Authentication Options Before saving, click the "Advanced" button. In the "Advanced Site Settings", under "SSH >> Authentication" settings, check @@ -493,20 +493,20 @@ connections to servers, enterprise file sharing, and various cloud storage platf **Prerequisites**: -- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) - for more information. +- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) + for more information. #### Configuring Cyberduck -- Click on the "Open Connection" button as shown below: +- Click on the "Open Connection" button as shown below: ![Open Connection](images/cyberduck-open-connection-new.png) -- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: +- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below: ![Choose Amazon S3](images/cyberduck-select-sftp-or-ftp.png) -- Provide the following required information: +- Provide the following required information: **"Server"**: "``" @@ -555,25 +555,25 @@ computer (shared drives, Dropbox, etc.) **Prerequisites**: -- Filezilla installed, see - [Download and Install the latest version of the Filezilla](https://wiki.filezilla-project.org/Client_Installation) - for more information. +- Filezilla installed, see + [Download and Install the latest version of the Filezilla](https://wiki.filezilla-project.org/Client_Installation) + for more information. 
#### Configuring Filezilla

-- Click on "Site Manager" icon as shown below:
+- Click on "Site Manager" icon as shown below:

![Site Manager](images/filezilla-new-site.png)

-- Click on "New Site" as shown below:
+- Click on "New Site" as shown below:

![Click New Site](images/filezilla-click-new-site.png)

-- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below:
+- Select either **"SFTP"** or **"FTP"** from the dropdown options as shown below:

![Select Protocol](images/filezilla-sftp-or-ftp.png)

-- Provide the following required information:
+- Provide the following required information:

**"Server"**: "``"

diff --git a/docs/openstack/decommission/decommission-openstack-resources.md b/docs/openstack/decommission/decommission-openstack-resources.md
index 92fbc0b8..4b442f99 100644
--- a/docs/openstack/decommission/decommission-openstack-resources.md
+++ b/docs/openstack/decommission/decommission-openstack-resources.md
@@ -5,16 +5,16 @@ below.

## Prerequisite

-- **Backup**: Back up any critical data or configurations stored on the resources
-  that going to be decommissioned. This ensures that important information is not
-  lost during the process. You can refer to [this guide](../data-transfer/data-transfer-from-to-vm.md)
-  to initiate and carry out data transfer to and from the virtual machine.
+- **Backup**: Back up any critical data or configurations stored on the resources
+  that are going to be decommissioned. This ensures that important information is not
+  lost during the process. You can refer to [this guide](../data-transfer/data-transfer-from-to-vm.md)
+  to initiate and carry out data transfer to and from the virtual machine.

-- **Shutdown Instances**: If applicable, [Shut Off any running instances](../management/vm-management.md#stopping-and-starting)
-  to ensure they are not actively processing data during decommissioning.
+- **Shutdown Instances**: If applicable, [Shut Off any running instances](../management/vm-management.md#stopping-and-starting)
+  to ensure they are not actively processing data during decommissioning.

-- Setup **OpenStack CLI**, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup)
-  for more information.
+- Setup **OpenStack CLI**, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup)
+  for more information.

## Delete all VMs

diff --git a/docs/openstack/index.md b/docs/openstack/index.md
index 9943c091..329b4ca7 100644
--- a/docs/openstack/index.md
+++ b/docs/openstack/index.md
@@ -10,62 +10,62 @@ the list below.
## Logging In -- [Access the OpenStack Dashboard](logging-in/access-the-openstack-dashboard.md) - **<<-- Start Here** -- [Dashboard Overview](logging-in/dashboard-overview.md) +- [Access the OpenStack Dashboard](logging-in/access-the-openstack-dashboard.md) + **<<-- Start Here** +- [Dashboard Overview](logging-in/dashboard-overview.md) ## Access and Security -- [Security Groups](access-and-security/security-groups.md) -- [Create a Key Pair](access-and-security/create-a-key-pair.md) +- [Security Groups](access-and-security/security-groups.md) +- [Create a Key Pair](access-and-security/create-a-key-pair.md) ## Create & Connect to the VM -- [Launch a VM](create-and-connect-to-the-VM/launch-a-VM.md) -- [Create a Windows VM](create-and-connect-to-the-VM/create-a-Windows-VM.md) -- [Available Images](create-and-connect-to-the-VM/images.md) -- [Available NOVA Flavors](create-and-connect-to-the-VM/flavors.md) -- [Assign a Floating IP](create-and-connect-to-the-VM/assign-a-floating-IP.md) -- [SSH to the VM](create-and-connect-to-the-VM/ssh-to-the-VM.md) +- [Launch a VM](create-and-connect-to-the-VM/launch-a-VM.md) +- [Create a Windows VM](create-and-connect-to-the-VM/create-a-Windows-VM.md) +- [Available Images](create-and-connect-to-the-VM/images.md) +- [Available NOVA Flavors](create-and-connect-to-the-VM/flavors.md) +- [Assign a Floating IP](create-and-connect-to-the-VM/assign-a-floating-IP.md) +- [SSH to the VM](create-and-connect-to-the-VM/ssh-to-the-VM.md) ## OpenStack CLI -- [OpenStack CLI](openstack-cli/openstack-CLI.md) -- [Launch a VM using OpenStack CLI](openstack-cli/launch-a-VM-using-openstack-CLI.md) +- [OpenStack CLI](openstack-cli/openstack-CLI.md) +- [Launch a VM using OpenStack CLI](openstack-cli/launch-a-VM-using-openstack-CLI.md) ## Persistent Storage ### Block Storage/ Volumes/ Cinder -- [Block Storage/ Volumes/ Cinder](persistent-storage/volumes.md) -- [Create an empty volume](persistent-storage/create-an-empty-volume.md) -- [Attach the volume to an instance](persistent-storage/attach-the-volume-to-an-instance.md) -- [Format and Mount the Volume](persistent-storage/format-and-mount-the-volume.md) -- [Detach a Volume](persistent-storage/detach-a-volume.md) -- [Delete Volumes](persistent-storage/delete-volumes.md) -- [Extending Volume](persistent-storage/extending-volume.md) -- [Transfer a Volume](persistent-storage/transfer-a-volume.md) +- [Block Storage/ Volumes/ Cinder](persistent-storage/volumes.md) +- [Create an empty volume](persistent-storage/create-an-empty-volume.md) +- [Attach the volume to an instance](persistent-storage/attach-the-volume-to-an-instance.md) +- [Format and Mount the Volume](persistent-storage/format-and-mount-the-volume.md) +- [Detach a Volume](persistent-storage/detach-a-volume.md) +- [Delete Volumes](persistent-storage/delete-volumes.md) +- [Extending Volume](persistent-storage/extending-volume.md) +- [Transfer a Volume](persistent-storage/transfer-a-volume.md) ### Object Storage/ Swift -- [Object Storage/ Swift](persistent-storage/object-storage.md) -- [Mount The Object Storage](persistent-storage/mount-the-object-storage.md) +- [Object Storage/ Swift](persistent-storage/object-storage.md) +- [Mount The Object Storage](persistent-storage/mount-the-object-storage.md) ## Data Transfer -- [Data Transfer To/ From NERC VM](data-transfer/data-transfer-from-to-vm.md) +- [Data Transfer To/ From NERC VM](data-transfer/data-transfer-from-to-vm.md) ## Backup your instance and data -- [Backup with snapshots](backup/backup-with-snapshots.md) +- [Backup with 
snapshots](backup/backup-with-snapshots.md) ## VM Management -- [VM Management](management/vm-management.md) +- [VM Management](management/vm-management.md) ## Decommission OpenStack Resources -- [Decommission OpenStack Resources](decommission/decommission-openstack-resources.md) +- [Decommission OpenStack Resources](decommission/decommission-openstack-resources.md) --- @@ -75,23 +75,23 @@ the list below. ## Setting Up Your Own Network -- [Set up your own Private Network](advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md) -- [Create a Router](advanced-openstack-topics/setting-up-a-network/create-a-router.md) +- [Set up your own Private Network](advanced-openstack-topics/setting-up-a-network/set-up-a-private-network.md) +- [Create a Router](advanced-openstack-topics/setting-up-a-network/create-a-router.md) ## Domain or Host Name for your VM -- [Domain Name System (DNS)](advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md) +- [Domain Name System (DNS)](advanced-openstack-topics/domain-name-system/domain-names-for-your-vms.md) ## Using Terraform to provision NERC resources -- [Terraform on NERC](advanced-openstack-topics/terraform/terraform-on-NERC.md) +- [Terraform on NERC](advanced-openstack-topics/terraform/terraform-on-NERC.md) ## Python SDK -- [Python SDK](advanced-openstack-topics/python-sdk/python-SDK.md) +- [Python SDK](advanced-openstack-topics/python-sdk/python-SDK.md) ## Setting Up Your Own Images -- [Microsoft Windows image](advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md) +- [Microsoft Windows image](advanced-openstack-topics/setting-up-your-own-images/how-to-build-windows-image.md) --- diff --git a/docs/openstack/logging-in/access-the-openstack-dashboard.md b/docs/openstack/logging-in/access-the-openstack-dashboard.md index 176a978f..2ceeb2c0 100644 --- a/docs/openstack/logging-in/access-the-openstack-dashboard.md +++ b/docs/openstack/logging-in/access-the-openstack-dashboard.md @@ -19,13 +19,13 @@ Next, you will be redirected to CILogon welcome page as shown below: MGHPCC Shared Services (MSS) Keycloak will request approval of access to the following information from the user: -- Your CILogon user identifier +- Your CILogon user identifier -- Your name +- Your name -- Your email address +- Your email address -- Your username and affiliation from your identity provider +- Your username and affiliation from your identity provider which are required in order to allow access your account on NERC's OpenStack dashboard. diff --git a/docs/openstack/logging-in/dashboard-overview.md b/docs/openstack/logging-in/dashboard-overview.md index 79f1edc0..e7697b3f 100644 --- a/docs/openstack/logging-in/dashboard-overview.md +++ b/docs/openstack/logging-in/dashboard-overview.md @@ -10,7 +10,7 @@ Beneath that you can see six panels in larger print: "Project", "Compute", Navigate: Project -> Project -- API Access: View API endpoints. +- API Access: View API endpoints. ![Project API Access](images/project_API_access.png) @@ -18,75 +18,75 @@ Navigate: Project -> Project Navigate: Project -> Compute -- Overview: View reports for the project. +- Overview: View reports for the project. ![Compute dashboard](images/horizon_dashboard.png) -- Instances: View, launch, create a snapshot from, stop, pause, or reboot - instances, or connect to them through VNC. +- Instances: View, launch, create a snapshot from, stop, pause, or reboot + instances, or connect to them through VNC. 
-- Images: View images and instance snapshots created by project users, plus any
-  images that are publicly available. Create, edit, and delete images, and launch
-  instances from images and snapshots.
+- Images: View images and instance snapshots created by project users, plus any
+  images that are publicly available. Create, edit, and delete images, and launch
+  instances from images and snapshots.

-- Key Pairs: View, create, edit, import, and delete key pairs.
+- Key Pairs: View, create, edit, import, and delete key pairs.

-- Server Groups: View, create, edit, and delete server groups.
+- Server Groups: View, create, edit, and delete server groups.

## Volume Panel

Navigate: Project -> Volume

-- Volumes: View, create, edit, delete volumes, and accept volume trnasfer.
+- Volumes: View, create, edit, delete volumes, and accept volume transfer.

-- Backups: View, create, edit, and delete backups.
+- Backups: View, create, edit, and delete backups.

-- Snapshots: View, create, edit, and delete volume snapshots.
+- Snapshots: View, create, edit, and delete volume snapshots.

-- Groups: View, create, edit, and delete groups.
+- Groups: View, create, edit, and delete groups.

-- Group Snapshots: View, create, edit, and delete group snapshots.
+- Group Snapshots: View, create, edit, and delete group snapshots.

## Network Panel

Navigate: Project -> Network

-- Network Topology: View the network topology.
+- Network Topology: View the network topology.

![Network Topology](images/network_topology.png)

-- Networks: Create and manage public and private networks.
+- Networks: Create and manage public and private networks.

-- Routers: Create and manage routers.
+- Routers: Create and manage routers.

-- Security Groups: View, create, edit, and delete security groups and security
-  group rules..
+- Security Groups: View, create, edit, and delete security groups and security
+  group rules.

-- Load Balancers: View, create, edit, and delete load balancers.
+- Load Balancers: View, create, edit, and delete load balancers.

-- Floating IPs: Allocate an IP address to or release it from a project.
+- Floating IPs: Allocate an IP address to or release it from a project.

-- Trunks: View, create, edit, and delete trunk.
+- Trunks: View, create, edit, and delete trunks.

## Orchestration Panel

Navigate: Project->Orchestration

-- Stacks: Use the REST API to orchestrate multiple composite cloud applications.
+- Stacks: Use the REST API to orchestrate multiple composite cloud applications.

-- Resource Types: view various resources types and their details.
+- Resource Types: View various resource types and their details.

-- Template Versions: view different heat templates.
+- Template Versions: View different Heat templates.

-- Template Generator: GUI to generate and save template using drag and drop resources.
+- Template Generator: GUI to generate and save templates using drag-and-drop resources.

## Object Store Panel

Navigate: Project->Object Store

-- Containers: Create and manage containers and objects. In future you would use
-  this tab to [create Swift object storage](../persistent-storage/object-storage.md)
-  for your projects on a need basis.
+- Containers: Create and manage containers and objects. In the future, you can
+  use this tab to [create Swift object storage](../persistent-storage/object-storage.md)
+  for your projects on an as-needed basis.
![Swift Object Containers](images/object_containers.png) diff --git a/docs/openstack/management/vm-management.md b/docs/openstack/management/vm-management.md index e6027e51..8fa2d09b 100644 --- a/docs/openstack/management/vm-management.md +++ b/docs/openstack/management/vm-management.md @@ -134,20 +134,20 @@ openstack server restart my-vm ## Create Snapshot -- Click _Action -> Create Snapshot_. +- Click _Action -> Create Snapshot_. -- Instances must have status `Active`, `Suspended`, or `Shutoff` to create snapshot. +- Instances must have status `Active`, `Suspended`, or `Shutoff` to create snapshot. -- This creates an image template from a VM instance also known as "Instance Snapshot" - as [described here](../backup/backup-with-snapshots.md#create-and-use-instance-snapshots). +- This creates an image template from a VM instance also known as "Instance Snapshot" + as [described here](../backup/backup-with-snapshots.md#create-and-use-instance-snapshots). -- The menu will automatically shift to _Project -> Compute -> Images_ once the - image is created. +- The menu will automatically shift to _Project -> Compute -> Images_ once the + image is created. -- The sole distinction between an _image_ directly uploaded to the image data - service, [glance](https://docs.openstack.org/glance) and an _image_ generated - through a snapshot is that the snapshot-created image possesses additional - properties in the glance database and defaults to being **private**. +- The sole distinction between an _image_ directly uploaded to the image data + service, [glance](https://docs.openstack.org/glance) and an _image_ generated + through a snapshot is that the snapshot-created image possesses additional + properties in the glance database and defaults to being **private**. !!! info "Glance Image Service" @@ -290,8 +290,8 @@ There are other options available if you wish to keep the virtual machine for future usage. These do, however, continue to use quota for the project even though the VM is not running. -- **Snapshot the VM** to keep an offline copy of the virtual machine that can be - performed as [described here](../backup/backup-with-snapshots.md#how-to-create-an-instance-snapshot). +- **Snapshot the VM** to keep an offline copy of the virtual machine that can be + performed as [described here](../backup/backup-with-snapshots.md#how-to-create-an-instance-snapshot). If however, the virtual machine is no longer required and no data on the associated system or ephemeral disk needs to be preserved, the following command @@ -315,24 +315,24 @@ Click _Action -> Delete Instance_. the deletion process, as failure to do so may lead to data corruption in both your data and the associated volume. -- If the instance is using [Ephemeral disk](../persistent-storage/volumes.md#ephemeral-disk): - It stops and removes the instance along with the ephemeral disk. - **All data will be permanently lost!** +- If the instance is using [Ephemeral disk](../persistent-storage/volumes.md#ephemeral-disk): + It stops and removes the instance along with the ephemeral disk. + **All data will be permanently lost!** -- If the instance is using [Volume-backed disk](../persistent-storage/volumes.md#volumes): - It stops and removes the instance. If **"Delete Volume on Instance Delete"** - was explicitely set to **Yes**, **All data will be permanently lost!**. If set - to **No** (which is default selected while launching an instance), the volume - may be used to boot a new instance, though any data stored in memory will be - permanently lost. 
For more in-depth information on making your VM setup and
-  data persistent, you can explore the details [here](../persistent-storage/volumes.md#how-do-you-make-your-vm-setup-and-data-persistent).
+- If the instance is using [Volume-backed disk](../persistent-storage/volumes.md#volumes):
+  It stops and removes the instance. If **"Delete Volume on Instance Delete"**
+  was explicitly set to **Yes**, **All data will be permanently lost!**. If set
+  to **No** (which is selected by default when launching an instance), the volume
+  may be used to boot a new instance, though any data stored in memory will be
+  permanently lost. For more in-depth information on making your VM setup and
+  data persistent, you can explore the details [here](../persistent-storage/volumes.md#how-do-you-make-your-vm-setup-and-data-persistent).

-- Status will briefly change to **Deleting** while the instance is being removed.
+- Status will briefly change to **Deleting** while the instance is being removed.

The quota associated with this virtual machine will be returned to the project
and you can review and verify that by looking at your
[OpenStack dashboard overview](../logging-in/dashboard-overview.md#compute-panel).

-- Navigate to _Project -> Compute -> Overview_.
+- Navigate to _Project -> Compute -> Overview_.

---

diff --git a/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md b/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md
index 5076c026..d97a076d 100644
--- a/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md
+++ b/docs/openstack/openstack-cli/launch-a-VM-using-openstack-CLI.md
@@ -3,15 +3,15 @@

First, find the following details using the openstack command; we will require
these details during the creation of the virtual machine.

-- Flavor
+- Flavor

-- Image
+- Image

-- Network
+- Network

-- Security Group
+- Security Group

-- Key Name
+- Key Name

Get the flavor list using the below openstack command:

diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md
index 552f5980..50176ef2 100644
--- a/docs/openstack/openstack-cli/openstack-CLI.md
+++ b/docs/openstack/openstack-cli/openstack-CLI.md
@@ -24,14 +24,14 @@ appropriate environment variables.

You can download the environment file with the credentials from the
[OpenStack dashboard](https://stack.nerc.mghpcc.org/dashboard/identity/application_credentials/).

-- Log in to the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org), choose
-  the project for which you want to download the OpenStack RC file.
+- Log in to the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org), choose
+  the project for which you want to download the OpenStack RC file.

-- Navigate to _Identity -> Application Credentials_.
+- Navigate to _Identity -> Application Credentials_.

-- Click on "Create Application Credential" button and provide a **Name** and **Roles**
-  for the application credential. All other fields are optional and leaving the
-  "Secret" field empty will set it to autogenerate (recommended).
+- Click on "Create Application Credential" button and provide a **Name** and **Roles**
+  for the application credential. All other fields are optional and leaving the
+  "Secret" field empty will set it to autogenerate (recommended).
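Once the credential is created, the dashboard offers a small shell script that
exports the generated values. The sketch below only illustrates the general shape
of such an application-credential RC file; the real file name, auth URL, ID, and
secret come from your own download, so treat every value here as a placeholder.

```sh
# Rough shape of a downloaded application-credential RC file (placeholders only).
export OS_AUTH_TYPE=v3applicationcredential
export OS_AUTH_URL=<auth-url-from-the-downloaded-file>
export OS_IDENTITY_API_VERSION=3
export OS_APPLICATION_CREDENTIAL_ID=<application-credential-id>
export OS_APPLICATION_CREDENTIAL_SECRET=<application-credential-secret>
```

Sourcing that downloaded file in your shell (typically named something like
`app-cred-<name>-openrc.sh`) is what lets the `openstack` commands later in this
section authenticate without prompting for a password.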
![OpenStackClient Credentials Setup](images/openstack_cli_cred.png) @@ -95,13 +95,13 @@ For more information on configuring the OpenStackClient please see the Generally, the OpenStack terminal client offers the following methods: -- **list**: Lists information about objects currently in the cloud. +- **list**: Lists information about objects currently in the cloud. -- **show**: Displays information about a single object currently in the cloud. +- **show**: Displays information about a single object currently in the cloud. -- **create**: Creates a new object in the cloud. +- **create**: Creates a new object in the cloud. -- **set**: Edits an existing object in the cloud. +- **set**: Edits an existing object in the cloud. To test that you have everything configured, try out some commands. The following command lists all the images available to your project: diff --git a/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md b/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md index 4e46bae1..33a603bd 100644 --- a/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md +++ b/docs/openstack/persistent-storage/attach-the-volume-to-an-instance.md @@ -31,8 +31,8 @@ Make note of the device name of your volume. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To attach the volume to an instance using the CLI, do this: diff --git a/docs/openstack/persistent-storage/create-an-empty-volume.md b/docs/openstack/persistent-storage/create-an-empty-volume.md index 7b9da48d..6d90b2bc 100644 --- a/docs/openstack/persistent-storage/create-an-empty-volume.md +++ b/docs/openstack/persistent-storage/create-an-empty-volume.md @@ -43,8 +43,8 @@ A set of volume_image meta data is also copied from the image service. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. To create a volume using the CLI, do this: diff --git a/docs/openstack/persistent-storage/delete-volumes.md b/docs/openstack/persistent-storage/delete-volumes.md index 262b61e2..c868d9e3 100644 --- a/docs/openstack/persistent-storage/delete-volumes.md +++ b/docs/openstack/persistent-storage/delete-volumes.md @@ -30,8 +30,8 @@ confirm the action. To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. 
To delete a volume using the CLI, do this: diff --git a/docs/openstack/persistent-storage/detach-a-volume.md b/docs/openstack/persistent-storage/detach-a-volume.md index c9a9bd37..0157afcb 100644 --- a/docs/openstack/persistent-storage/detach-a-volume.md +++ b/docs/openstack/persistent-storage/detach-a-volume.md @@ -33,8 +33,8 @@ This will popup the following interface to proceed: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. #### Using the openstack client diff --git a/docs/openstack/persistent-storage/extending-volume.md b/docs/openstack/persistent-storage/extending-volume.md index 93d766ef..1927f6c3 100644 --- a/docs/openstack/persistent-storage/extending-volume.md +++ b/docs/openstack/persistent-storage/extending-volume.md @@ -6,9 +6,9 @@ VM and in **"Available"** status. The steps are as follows: -- Extend the volume to its new size +- Extend the volume to its new size -- Extend the filesystem to its new size +- Extend the filesystem to its new size ## Using Horizon dashboard @@ -28,8 +28,8 @@ Specify, the new extened size in GiB: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. ### Using the openstack client diff --git a/docs/openstack/persistent-storage/format-and-mount-the-volume.md b/docs/openstack/persistent-storage/format-and-mount-the-volume.md index 51ff5f7a..e07b884b 100644 --- a/docs/openstack/persistent-storage/format-and-mount-the-volume.md +++ b/docs/openstack/persistent-storage/format-and-mount-the-volume.md @@ -127,22 +127,22 @@ partition style (GPT or MBR), see [Compare partition styles - GPT and MBR](https Format the New Volume: -- Select and hold (or right-click) the unallocated space of the new disk. +- Select and hold (or right-click) the unallocated space of the new disk. -- Select "New Simple Volume" and follow the wizard to create a new partition. +- Select "New Simple Volume" and follow the wizard to create a new partition. ![Windows Simple Volume Wizard Start](images/win_disk_simple_volume.png) -- Choose the file system (usually NTFS for Windows). +- Choose the file system (usually NTFS for Windows). -- Assign a drive letter or mount point. +- Assign a drive letter or mount point. Complete Formatting: -- Complete the wizard to format the new volume. +- Complete the wizard to format the new volume. 
-- Once formatting is complete, the new volume should be visible in File Explorer - as shown below: +- Once formatting is complete, the new volume should be visible in File Explorer + as shown below: ![Windows Simple Volume Wizard Start](images/win_new_drive.png) diff --git a/docs/openstack/persistent-storage/mount-the-object-storage.md b/docs/openstack/persistent-storage/mount-the-object-storage.md index 76efd12f..d219acde 100644 --- a/docs/openstack/persistent-storage/mount-the-object-storage.md +++ b/docs/openstack/persistent-storage/mount-the-object-storage.md @@ -5,11 +5,11 @@ We are using following setting for this purpose to mount the object storage to an NERC OpenStack VM: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- Setup and enable your S3 API credentials: +- Setup and enable your S3 API credentials: To access the API credentials, you must login through the OpenStack Dashboard and navigate to "Projects > API Access" where you can download the "Download @@ -52,14 +52,14 @@ parts are `EC2_ACCESS_KEY` and `EC2_SECRET_KEY`, keep them noted. openstack ec2 credentials create -- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: - `source *-openrc.sh` command. Sourcing the RC File will set the required environment - variables. +- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: + `source *-openrc.sh` command. Sourcing the RC File will set the required environment + variables. -- Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file - and uncomment "user_allow_other" option. +- Allow Other User option by editing fuse config by editing `/etc/fuse.conf` file + and uncomment "user_allow_other" option. - sudo nano /etc/fuse.conf + sudo nano /etc/fuse.conf The output going to look like this: @@ -147,31 +147,31 @@ The object storage container i.e. "bucket1" will be mounted in the directory `~/ In this command, -- `mount-s3` is the Mountpoint for Amazon S3 package as installed in `/usr/bin/` - path we don't need to specify the full path. +- `mount-s3` is the Mountpoint for Amazon S3 package as installed in `/usr/bin/` + path we don't need to specify the full path. -- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. - `[nerc]`. +- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. + `[nerc]`. -- `--endpoint-url` corresponds to the Object Storage endpoint url for NERC Object - Storage. You don't need to modify this url. +- `--endpoint-url` corresponds to the Object Storage endpoint url for NERC Object + Storage. You don't need to modify this url. -- `--allow-other`: Allows other users to access the mounted filesystem. This is - particularly useful when multiple users need to access the mounted S3 bucket. - Only allowed if `user_allow_other` is set in `/etc/fuse.conf`. +- `--allow-other`: Allows other users to access the mounted filesystem. This is + particularly useful when multiple users need to access the mounted S3 bucket. + Only allowed if `user_allow_other` is set in `/etc/fuse.conf`. 
-- `--force-path-style`: Forces the use of path-style URLs when accessing the S3 - bucket. This is necessary when working with certain S3-compatible storage services - that do not support virtual-hosted-style URLs. +- `--force-path-style`: Forces the use of path-style URLs when accessing the S3 + bucket. This is necessary when working with certain S3-compatible storage services + that do not support virtual-hosted-style URLs. -- `--debug`: Enables debug mode, providing additional information about the mounting - process. +- `--debug`: Enables debug mode, providing additional information about the mounting + process. -- `bucket1` is the name of the container which contains the NERC Object Storage - resources. +- `bucket1` is the name of the container which contains the NERC Object Storage + resources. -- `~/bucket1` is the location of the folder in which you want to mount the Object - Storage filesystem. +- `~/bucket1` is the location of the folder in which you want to mount the Object + Storage filesystem. !!! tip "Important Note" @@ -436,25 +436,25 @@ The object storage container i.e. "bucket1" will be mounted in the directory `~/ In this command, -- `goofys` is the goofys binary as we already copied this in `/usr/bin/` path we - don't need to specify the full path. +- `goofys` is the goofys binary as we already copied this in `/usr/bin/` path we + don't need to specify the full path. -- `-o` stands for goofys options, and is handled differently. +- `-o` stands for goofys options, and is handled differently. -- `allow_other` Allows goofys with option `allow_other` only allowed if `user_allow_other` - is set in `/etc/fuse.conf`. +- `allow_other` Allows goofys with option `allow_other` only allowed if `user_allow_other` + is set in `/etc/fuse.conf`. -- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. - `[nerc]`. +- `--profile` corresponds to the name given on the `~/.aws/credentials` file i.e. + `[nerc]`. -- `--endpoint` corresponds to the Object Storage endpoint url for NERC Object Storage. - You don't need to modify this url. +- `--endpoint` corresponds to the Object Storage endpoint url for NERC Object Storage. + You don't need to modify this url. -- `bucket1` is the name of the container which contains the NERC Object Storage - resources. +- `bucket1` is the name of the container which contains the NERC Object Storage + resources. -- `~/bucket1` is the location of the folder in which you want to mount the Object - Storage filesystem. +- `~/bucket1` is the location of the folder in which you want to mount the Object + Storage filesystem. In order to test whether the mount was successful, navigate to the directory in which you mounted the NERC container repository, for example: @@ -869,11 +869,11 @@ Verify, if the container is mounted successfully: A JuiceFS file system consists of two parts: -- **Object Storage:** Used for data storage. +- **Object Storage:** Used for data storage. -- **Metadata Engine:** A database used for storing metadata. In this case, we will - use a durable [**Redis**](https://redis.io/) in-memory database service that - provides extremely fast performance. +- **Metadata Engine:** A database used for storing metadata. In this case, we will + use a durable [**Redis**](https://redis.io/) in-memory database service that + provides extremely fast performance. 
#### Installation of the JuiceFS client @@ -921,7 +921,7 @@ init system, change this to `systemd` as shown here: ![Redis Server Config](images/redis-server-config.png) -- Binding to localhost: +- Binding to localhost: By default, Redis is only accessible from `localhost`. We need to verify that by locating this line by running: @@ -1273,9 +1273,9 @@ After JuiceFS has been successfully formatted, follow this guide to clean up. JuiceFS client provides the destroy command to completely destroy a file system, which will result in: -- Deletion of all metadata entries of this file system +- Deletion of all metadata entries of this file system -- Deletion of all data blocks of this file system +- Deletion of all data blocks of this file system Use this command in the following format: diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index 7c17e5cb..3ac2b366 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -135,8 +135,8 @@ This will deactivate the public URL of the container and then it will show "Disa To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. #### Some Object Storage management examples @@ -253,10 +253,10 @@ To check the space used by a specific container This is a python client for the Swift API. There's a [Python API](https://github.com/openstack/python-swiftclient) (the `swiftclient` module), and a command-line script (`swift`). -- This example uses a `Python3` virtual environment, but you are free to choose - any other method to create a local virtual environment like `Conda`. +- This example uses a `Python3` virtual environment, but you are free to choose + any other method to create a local virtual environment like `Conda`. - python3 -m venv venv + python3 -m venv venv !!! note "Choosing Correct Python Interpreter" @@ -264,7 +264,7 @@ This is a python client for the Swift API. There's a [Python API](https://github Windows Only) to create a directory named `venv` (or whatever name you specified) in your current working directory. -- Activate the virtual environment by running: +- Activate the virtual environment by running: **on Linux/Mac:** `source venv/bin/activate` @@ -272,12 +272,12 @@ This is a python client for the Swift API. There's a [Python API](https://github #### Install [Python Swift Client page at PyPi](https://pypi.org/project/python-swiftclient/) -- Once virtual environment is activated, install `python-swiftclient` and `python-keystoneclient` +- Once virtual environment is activated, install `python-swiftclient` and `python-keystoneclient` pip install python-swiftclient python-keystoneclient -- Swift authenticates using a user, tenant, and key, which map to your OpenStack - username, project,and password. +- Swift authenticates using a user, tenant, and key, which map to your OpenStack + username, project,and password. For this, you need to download the **"NERC's OpenStack RC File"** with the credentials for your NERC project from the [NERC's OpenStack dashboard](https://stack.nerc.mghpcc.org/). @@ -425,11 +425,11 @@ to access object storage on your NERC project. 
To run the `s3` or `s3api` commands, you need to have: -- AWS CLI installed, see - [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) - for more information. +- AWS CLI installed, see + [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) + for more information. -- The NERC's Swift End Point URL: `https://stack.nerc.mghpcc.org:13808` +- The NERC's Swift End Point URL: `https://stack.nerc.mghpcc.org:13808` !!! note "Understand these Amazon S3 terms" @@ -482,9 +482,9 @@ While clicking on "EC2 Credentials", this will download a file **zip file** incl openstack ec2 credentials create -- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: - `source *-openrc.sh` command. Sourcing the RC File will set the required environment - variables. +- Source the downloaded OpenStack RC File from _Projects > API Access_ by using: + `source *-openrc.sh` command. Sourcing the RC File will set the required environment + variables. Then run aws configuration command which requires the `EC2_ACCESS_KEY` and `EC2_SECRET_KEY` keys that you noted from `ec2rc.sh` file (during the **"Configuring @@ -628,8 +628,8 @@ the S3 protocol. **Prerequisites**: -- S3cmd installed, see [Download and Install the latest version of the S3cmd](https://s3tools.org/download) - for more information. +- S3cmd installed, see [Download and Install the latest version of the S3cmd](https://s3tools.org/download) + for more information. #### Configuring s3cmd @@ -785,9 +785,9 @@ NERC's containers. To run the `rclone` commands, you need to have: -- `rclone` installed, see - [Downloading and Installing the latest version of the Rclone](https://rclone.org/downloads/) - for more information. +- `rclone` installed, see + [Downloading and Installing the latest version of the Rclone](https://rclone.org/downloads/) + for more information. #### Configuring Rclone @@ -1023,32 +1023,32 @@ using FTP, FTPS, SCP, SFTP, WebDAV or S3 file transfer protocols. **Prerequisites**: -- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) - for more information. +- WinSCP installed, see [Download and Install the latest version of the WinSCP](https://winscp.net/eng/docs/guide_install) + for more information. -- Go to WinSCP menu and open "Options > Preferences". +- Go to WinSCP menu and open "Options > Preferences". -- When the "Preferences" dialog window appears, select "Transfer" in the options - on the left pane. +- When the "Preferences" dialog window appears, select "Transfer" in the options + on the left pane. -- Click on "Edit" button. +- Click on "Edit" button. 
-- Then, on shown popup dialog box review the "Common options" group, uncheck the - "Preserve timestamp" option as shown below: +- Then, on shown popup dialog box review the "Common options" group, uncheck the + "Preserve timestamp" option as shown below: ![Disable Preserve TimeStamp](images/winscp-perserve-timestamp-disable.png) #### Configuring WinSCP -- Click on "New Session" tab button as shown below: +- Click on "New Session" tab button as shown below: ![Login](images/winscp-new-session.png) -- Select **"Amazon S3"** from the "File protocol" dropdown options as shown below: +- Select **"Amazon S3"** from the "File protocol" dropdown options as shown below: ![Choose Amazon S3 File Protocol](images/choose_S3_protocol.png) -- Provide the following required endpoint information: +- Provide the following required endpoint information: **"Host name"**: "stack.nerc.mghpcc.org" @@ -1088,20 +1088,20 @@ servers, enterprise file sharing, and cloud storage. **Prerequisites**: -- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) - for more information. +- Cyberduck installed, see [Download and Install the latest version of the Cyberduck](https://cyberduck.io/download/) + for more information. #### Configuring Cyberduck -- Click on "Open Connection" tab button as shown below: +- Click on "Open Connection" tab button as shown below: ![Open Connection](images/cyberduck-open-connection.png) -- Select **"Amazon S3"** from the dropdown options as shown below: +- Select **"Amazon S3"** from the dropdown options as shown below: ![Choose Amazon S3](images/cyberduck-select-Amazon-s3.png) -- Provide the following required endpoint information: +- Provide the following required endpoint information: **"Server"**: "stack.nerc.mghpcc.org" diff --git a/docs/openstack/persistent-storage/transfer-a-volume.md b/docs/openstack/persistent-storage/transfer-a-volume.md index 951e0e25..1847b87d 100644 --- a/docs/openstack/persistent-storage/transfer-a-volume.md +++ b/docs/openstack/persistent-storage/transfer-a-volume.md @@ -75,12 +75,12 @@ below: To run the OpenStack CLI commands, you need to have: -- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) - for more information. +- OpenStack CLI setup, see [OpenStack Command Line setup](../openstack-cli/openstack-CLI.md#command-line-setup) + for more information. ### Using the openstack client -- Identifying volume to transfer in your source project +- Identifying volume to transfer in your source project openstack volume list +---------------------------+-----------+-----------+------+-------------+ @@ -89,7 +89,7 @@ openstack volume list | d8a5da4c-...-8b6678ce4936 | my-volume | available | 100 | | +---------------------------+-----------+-----------+------+-------------+ -- Create the transfer request +- Create the transfer request openstack volume transfer request create my-volume +------------+--------------------------------------+ @@ -108,10 +108,10 @@ openstack volume transfer request create my-volume i.e. `""`. 
For example: `openstack volume transfer request create "My Volume"` -- The volume can be checked as in the transfer status using - `openstack volume transfer request list` as follows and the volume is in status - `awaiting-transfer` while running `openstack volume show ` - as shown below: +- The volume can be checked as in the transfer status using + `openstack volume transfer request list` as follows and the volume is in status + `awaiting-transfer` while running `openstack volume show ` + as shown below: openstack volume transfer request list +---------------------------+------+--------------------------------------+ @@ -130,8 +130,8 @@ openstack volume show my-volume | status | awaiting-transfer | +------------------------------+--------------------------------------+ -- The user of the destination project can authenticate and receive the authentication - key reported above. The transfer can then be initiated. +- The user of the destination project can authenticate and receive the authentication + key reported above. The transfer can then be initiated. openstack volume transfer request accept --auth-key b92d98fec2766582 a16494cf-cfa0-47f6-b606-62573357922a +-----------+--------------------------------------+ @@ -142,7 +142,7 @@ openstack volume transfer request accept --auth-key b92d98fec2766582 a16494cf-cf | volume_id | d8a5da4c-41c8-4c2d-b57a-8b6678ce4936 | +-----------+--------------------------------------+ -- And the results confirmed in the volume list for the destination project. +- And the results confirmed in the volume list for the destination project. openstack volume list +---------------------------+-----------+-----------+------+-------------+ diff --git a/docs/openstack/persistent-storage/volumes.md b/docs/openstack/persistent-storage/volumes.md index fe710f44..ca11ab1c 100644 --- a/docs/openstack/persistent-storage/volumes.md +++ b/docs/openstack/persistent-storage/volumes.md @@ -44,31 +44,31 @@ another project as [described here](../persistent-storage/transfer-a-volume.md). Some uses for volumes: -- Persistent data storage for ephemeral instances. +- Persistent data storage for ephemeral instances. -- Transfer of data between projects +- Transfer of data between projects -- Bootable image where disk changes persist +- Bootable image where disk changes persist -- Mounting the disk of one instance to another for troubleshooting +- Mounting the disk of one instance to another for troubleshooting ## How do you make your VM setup and data persistent? -- By default, when the instance is launched from an **Image** or an - **Instance Snapshot**, the choice for utilizing persistent storage is configured - by selecting the **Yes** option for **"Create New Volume"**. It's crucial to - note that this configuration automatically creates persistent block storage - in the form of a Volume instead of using Ephemeral disk, which appears in - the "Volumes" list in the Horizon dashboard: _Project -> Volumes -> Volumes_. +- By default, when the instance is launched from an **Image** or an + **Instance Snapshot**, the choice for utilizing persistent storage is configured + by selecting the **Yes** option for **"Create New Volume"**. It's crucial to + note that this configuration automatically creates persistent block storage + in the form of a Volume instead of using Ephemeral disk, which appears in + the "Volumes" list in the Horizon dashboard: _Project -> Volumes -> Volumes_. 
![Instance Persistent Storage Option](images/instance-persistent-storage-option.png) -- By default, the setting for **"Delete Volume on Instance Delete"** is configured - to use **No**. This setting ensures that the volume created during the launch - of a virtual machine remains persistent and won't be deleted alongside the - instance unless explicitly chosen as "Yes". Such instances boot from a - **bootable volume**, utilizing an existing volume listed in the - _Project -> Volumes -> Volumes_ menu. +- By default, the setting for **"Delete Volume on Instance Delete"** is configured + to use **No**. This setting ensures that the volume created during the launch + of a virtual machine remains persistent and won't be deleted alongside the + instance unless explicitly chosen as "Yes". Such instances boot from a + **bootable volume**, utilizing an existing volume listed in the + _Project -> Volumes -> Volumes_ menu. To minimize the risk of potential data loss, we highly recommend consistently [creating backups through snapshots](../backup/backup-with-snapshots.md). diff --git a/docs/other-tools/CI-CD/CI-CD-pipeline.md b/docs/other-tools/CI-CD/CI-CD-pipeline.md index acc0e113..76298584 100644 --- a/docs/other-tools/CI-CD/CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/CI-CD-pipeline.md @@ -9,19 +9,19 @@ pipelines are a practice focused on improving software delivery using automation The steps that form a CI/CD pipeline are distinct subsets of tasks that are grouped into a pipeline stage. Typical pipeline stages include: -- **Build** - The stage where the application is compiled. +- **Build** - The stage where the application is compiled. -- **Test** - The stage where code is tested. Automation here can save both time - and effort. +- **Test** - The stage where code is tested. Automation here can save both time + and effort. -- **Release** - The stage where the application is delivered to the central repository. +- **Release** - The stage where the application is delivered to the central repository. -- **Deploy** - In this stage code is deployed to production environment. +- **Deploy** - In this stage code is deployed to production environment. -- **Validation and compliance** - The steps to validate a build are determined - by the needs of your organization. Image security scanning, security scanning - and code analysis of applications ensure the quality of images and written application's - code. +- **Validation and compliance** - The steps to validate a build are determined + by the needs of your organization. Image security scanning, security scanning + and code analysis of applications ensure the quality of images and written application's + code. ![CI/CD Pipeline Stages](images/ci-cd-flow.png) _Figure: CI/CD Pipeline Stages_ diff --git a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md index d66acece..76a15d8f 100644 --- a/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md +++ b/docs/other-tools/CI-CD/github-actions/setup-github-actions-pipeline.md @@ -23,7 +23,7 @@ workflow. ## Deploy an Application to your NERC OpenShift Project -- **Prerequisites** +- **Prerequisites** You must have at least one active **NERC-OCP (OpenShift)** type resource allocation. You can refer to [this documentation](../../../get-started/allocation/requesting-an-allocation.md#request-a-new-openshift-resource-allocation-for-an-openshift-project) @@ -77,10 +77,10 @@ workflow. 7. 
Enable and Update GitHub Actions Pipeline on your own forked repo: - - Enable the OpenShift Workflow in the Actions tab of in your GitHub repository. + - Enable the OpenShift Workflow in the Actions tab of in your GitHub repository. - - Update the provided sample OpenShift workflow YAML file i.e. `openshift.yml`, - which is located at "`https://github.com//simple-node-app/actions/workflows/openshift.yml`". + - Update the provided sample OpenShift workflow YAML file i.e. `openshift.yml`, + which is located at "`https://github.com//simple-node-app/actions/workflows/openshift.yml`". !!! info "Very Important Information" diff --git a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md index 9020eb1c..f4207c29 100644 --- a/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md +++ b/docs/other-tools/CI-CD/jenkins/setup-jenkins-CI-CD-pipeline.md @@ -16,26 +16,26 @@ _Figure: CI/CD Pipeline To Deploy To Kubernetes Cluster Using Jenkins on NERC_ ## Setup a Jenkins Server VM -- Launch 1 Linux machine based on `ubuntu-20.04-x86_64` and `cpu-su.2` flavor with - 2vCPU, 8GB RAM, and 20GB storage. +- Launch 1 Linux machine based on `ubuntu-20.04-x86_64` and `cpu-su.2` flavor with + 2vCPU, 8GB RAM, and 20GB storage. -- Make sure you have added rules in the - [Security Groups](../../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the instance. -- Setup a new Security Group with the following rules exposing **port 8080** and - attach it to your new instance. +- Setup a new Security Group with the following rules exposing **port 8080** and + attach it to your new instance. ![Jenkins Server Security Group](images/security_groups_jenkins.png) -- [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to your new instance so that you will be able to ssh into this machine: +- [Assign a Floating IP](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to your new instance so that you will be able to ssh into this machine: - ssh ubuntu@ -A -i + ssh ubuntu@ -A -i For example: - ssh ubuntu@199.94.60.4 -A -i cloud.key + ssh ubuntu@199.94.60.4 -A -i cloud.key Upon successfully SSH accessing the machine, execute the following dependencies: @@ -43,31 +43,31 @@ Upon successfully SSH accessing the machine, execute the following dependencies: Run the following steps as non-root user i.e. **ubuntu**. 
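Before continuing, you can optionally confirm that you are connected to the intended VM as the expected non-root account. A quick check, assuming the default `ubuntu` cloud-image user shown in the SSH command above:

```sh
# Should print "ubuntu" and the hostname of the Jenkins server VM
whoami
hostname
```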
-- Update the repositories and packages: +- Update the repositories and packages: - sudo apt-get update && sudo apt-get upgrade -y + sudo apt-get update && sudo apt-get upgrade -y -- Turn off `swap` +- Turn off `swap` - swapoff -a - sudo sed -i '/ swap / s/^/#/' /etc/fstab + swapoff -a + sudo sed -i '/ swap / s/^/#/' /etc/fstab -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` - sudo apt-get update && sudo apt-get install -y apt-transport-https curl + sudo apt-get update && sudo apt-get install -y apt-transport-https curl --- ## Download and install the latest version of **Docker CE** -- Download and install Docker CE: +- Download and install Docker CE: - curl -fsSL https://get.docker.com -o get-docker.sh - sudo sh get-docker.sh + curl -fsSL https://get.docker.com -o get-docker.sh + sudo sh get-docker.sh -- Configure the Docker daemon: +- Configure the Docker daemon: - sudo usermod -aG docker $USER && newgrp docker + sudo usermod -aG docker $USER && newgrp docker --- @@ -75,25 +75,25 @@ Upon successfully SSH accessing the machine, execute the following dependencies: **kubectl**: the command line util to talk to your cluster. -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases - curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ - apt-key add - + curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo \ + apt-key add - -- add kubernetes apt repo +- add kubernetes apt repo - cat < Manage Plugins" as shown below: +- Jenkins has a wide range of plugin options. From your Jenkins dashboard navigate + to "Manage Jenkins > Manage Plugins" as shown below: ![Jenkins Plugin Installation](images/plugins-installation.png) @@ -171,8 +171,8 @@ copy and paste on the web GUI on the browser. ## Create the required Credentials -- Create a global credential for your Docker Hub Registry by providing the username - and password that will be used by the Jenkins pipelines: +- Create a global credential for your Docker Hub Registry by providing the username + and password that will be used by the Jenkins pipelines: 1. Click on the "Manage Jenkins" menu and then click on the "Manage Credentials" link as shown below: @@ -188,8 +188,8 @@ copy and paste on the web GUI on the browser. ![Adding Credentials](images/add-credentials.png) -- First, add the **'DockerHub'** credentials as 'Username with password' with the - ID `dockerhublogin`. +- First, add the **'DockerHub'** credentials as 'Username with password' with the + ID `dockerhublogin`. a. Select the Kind "Username with password" from the dropdown options. @@ -200,9 +200,9 @@ copy and paste on the web GUI on the browser. ![Docker Hub Credentials](images/docker-hub-credentials.png) -- Config the **'Kubeconfig'** credentials as 'Secret file' that holds Kubeconfig - file from K8s master i.e. located at `/etc/kubernetes/admin.conf` with the ID - 'kubernetes' +- Config the **'Kubeconfig'** credentials as 'Secret file' that holds Kubeconfig + file from K8s master i.e. located at `/etc/kubernetes/admin.conf` with the ID + 'kubernetes' a. Click on the "Add Credentials" button in the left pane. @@ -247,89 +247,89 @@ To create a fork of the example `nodeapp` repository: ## Modify the Jenkins Declarative Pipeline Script file -- Modify the provided ‘**Jenkinsfile**’ to specify your own Docker Hub account - and github repository as specified in "``" and "``". 
+- Modify the provided ‘**Jenkinsfile**’ to specify your own Docker Hub account + and github repository as specified in "``" and "``". !!! warning "Very Important Information" - You need to replace "``" and "``" - with your actual DockerHub and GitHub usernames, respectively. Also, - ensure that the global credentials IDs mentioned above match those used - during the credential saving steps mentioned earlier. For instance, - `dockerhublogin` corresponds to the **DockerHub** ID saved during the - credential saving process for your Docker Hub Registry's username and - password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID - assigned for the Kubeconfig credential file. + You need to replace "``" and "``" + with your actual DockerHub and GitHub usernames, respectively. Also, + ensure that the global credentials IDs mentioned above match those used + during the credential saving steps mentioned earlier. For instance, + `dockerhublogin` corresponds to the **DockerHub** ID saved during the + credential saving process for your Docker Hub Registry's username and + password. Similarly, `kubernetes` corresponds to the **'Kubeconfig'** ID + assigned for the Kubeconfig credential file. -- Below is an example of a Jenkins declarative Pipeline Script file: +- Below is an example of a Jenkins declarative Pipeline Script file: pipeline { - environment { - dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" - dockerImage = "" - } - - agent any - - stages { - - stage('Checkout Source') { - steps { - git branch: 'main', url: 'https://github.com//nodeapp.git' - } - } - - stage('Build image') { - steps{ - script { - dockerImage = docker.build dockerimagename - } - } - } - - stage('Pushing Image') { - environment { - registryCredential = 'dockerhublogin' - } - steps{ - script { - docker.withRegistry('https://registry.hub.docker.com', registryCredential){ - dockerImage.push() - } - } - } - } - - stage('Docker Remove Image') { - steps { - sh "docker rmi -f ${dockerimagename}" - sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" - } - } - - stage('Deploying App to Kubernetes') { - steps { - sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" - withKubeConfig([credentialsId: 'kubernetes']) { - sh 'kubectl apply -f deploymentservice.yml' - } - } - } - } + environment { + dockerimagename = "/nodeapp:${env.BUILD_NUMBER}" + dockerImage = "" + } + + agent any + + stages { + + stage('Checkout Source') { + steps { + git branch: 'main', url: 'https://github.com//nodeapp.git' + } + } + + stage('Build image') { + steps{ + script { + dockerImage = docker.build dockerimagename + } + } + } + + stage('Pushing Image') { + environment { + registryCredential = 'dockerhublogin' + } + steps{ + script { + docker.withRegistry('https://registry.hub.docker.com', registryCredential){ + dockerImage.push() + } + } + } + } + + stage('Docker Remove Image') { + steps { + sh "docker rmi -f ${dockerimagename}" + sh "docker rmi -f registry.hub.docker.com/${dockerimagename}" + } + } + + stage('Deploying App to Kubernetes') { + steps { + sh "sed -i 's/nodeapp:latest/nodeapp:${env.BUILD_NUMBER}/g' deploymentservice.yml" + withKubeConfig([credentialsId: 'kubernetes']) { + sh 'kubectl apply -f deploymentservice.yml' + } + } + } + } } !!! question "Other way to Generate Pipeline Jenkinsfile" - You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** - link shown when you create a new Pipeline when clicking the "New Item" menu - link. 
+ You can generate your custom Jenkinsfile by clicking on **"Pipeline Syntax"** + link shown when you create a new Pipeline when clicking the "New Item" menu + link. ## Setup a Pipeline -- Once you review the provided **Jenkinsfile** and understand the stages, - you can now create a pipeline to trigger it on your newly setup Jenkins server: +- Once you review the provided **Jenkinsfile** and understand the stages, + you can now create a pipeline to trigger it on your newly setup Jenkins server: a. Click on the "New Item" link. @@ -360,10 +360,10 @@ To create a fork of the example `nodeapp` repository: ## How to manually Trigger the Pipeline -- Finally, click on the **"Build Now"** menu link on right side navigation that - will triggers the Pipeline process i.e. Build docker image, Push Image to your - Docker Hub Registry and Pull the image from Docker Registry, Remove local Docker - images and then Deploy to K8s Cluster as shown below: +- Finally, click on the **"Build Now"** menu link on right side navigation that + will triggers the Pipeline process i.e. Build docker image, Push Image to your + Docker Hub Registry and Pull the image from Docker Registry, Remove local Docker + images and then Deploy to K8s Cluster as shown below: ![Jenkins Pipeline Build Now](images/jenkins-pipeline-build.png) diff --git a/docs/other-tools/apache-spark/spark.md b/docs/other-tools/apache-spark/spark.md index d9905720..7d0610d3 100644 --- a/docs/other-tools/apache-spark/spark.md +++ b/docs/other-tools/apache-spark/spark.md @@ -27,16 +27,16 @@ and Scala applications using the IP address of the master VM. ### Setup a Master VM -- To create a master VM for the first time, ensure that the "Image" dropdown option - is selected. In this example, we selected **ubuntu-22.04-x86_64** and the `cpu-su.2` - flavor is being used. +- To create a master VM for the first time, ensure that the "Image" dropdown option + is selected. In this example, we selected **ubuntu-22.04-x86_64** and the `cpu-su.2` + flavor is being used. -- Make sure you have added rules in the - [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the instance. +- Make sure you have added rules in the + [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the instance. -- [Assign a Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to your new instance so that you will be able to ssh into this machine: +- [Assign a Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to your new instance so that you will be able to ssh into this machine: ```sh ssh ubuntu@ -A -i @@ -48,14 +48,14 @@ and Scala applications using the IP address of the master VM. ssh ubuntu@199.94.61.4 -A -i cloud.key ``` -- Upon successfully accessing the machine, execute the following dependencies: +- Upon successfully accessing the machine, execute the following dependencies: ```sh sudo apt-get -y update sudo apt install default-jre -y ``` -- Download and install Scala: +- Download and install Scala: ```sh wget https://downloads.lightbend.com/scala/2.13.10/scala-2.13.10.deb @@ -65,10 +65,10 @@ and Scala applications using the IP address of the master VM. !!! note "Note" - Installing Scala means installing various command-line tools such as the - Scala compiler and build tools. 
+ Installing Scala means installing various command-line tools such as the + Scala compiler and build tools. -- Download and unpack Apache Spark: +- Download and unpack Apache Spark: ```sh SPARK_VERSION="3.4.2" @@ -81,12 +81,12 @@ and Scala applications using the IP address of the master VM. !!! warning "Very Important Note" - Please ensure you are using the latest Spark version by modifying the - `SPARK_VERSION` in the above script. Additionally, verify that the version - exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` - as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). + Please ensure you are using the latest Spark version by modifying the + `SPARK_VERSION` in the above script. Additionally, verify that the version + exists on the `APACHE_MIRROR` website. Please note the value of `SPARK_VERSION` + as you will need it during [Preparing Jobs for Execution and Examination](#preparing-jobs-for-execution-and-examination). -- Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: +- Create an SSH/RSA Key by running `ssh-keygen -t rsa` without using any passphrase: ```sh ssh-keygen -t rsa @@ -113,30 +113,30 @@ and Scala applications using the IP address of the master VM. +----[SHA256]-----+ ``` -- Copy and append the contents of **SSH public key** i.e. `~/.ssh/id_rsa.pub` to - the `~/.ssh/authorized_keys` file. +- Copy and append the contents of **SSH public key** i.e. `~/.ssh/id_rsa.pub` to + the `~/.ssh/authorized_keys` file. ### Create a Volume Snapshot of the master VM -- Once you're logged in to NERC's Horizon dashboard. You need to **Shut Off** the - master vm before creating a volume snapshot. +- Once you're logged in to NERC's Horizon dashboard. You need to **Shut Off** the + master vm before creating a volume snapshot. Click _Action -> Shut Off Instance_. Status will change to `Shutoff`. -- Then, create a snapshot of its attached volume by clicking on the "Create snapshot" - from the _Project -> Volumes -> Volumes_ as [described here](../../openstack/backup/backup-with-snapshots.md#volume-snapshots). +- Then, create a snapshot of its attached volume by clicking on the "Create snapshot" + from the _Project -> Volumes -> Volumes_ as [described here](../../openstack/backup/backup-with-snapshots.md#volume-snapshots). ### Create Two Worker Instances from the Volume Snapshot -- Once a snapshot is created and is in "Available" status, you can view and manage - it under the Volumes menu in the Horizon dashboard under Volume Snapshots. +- Once a snapshot is created and is in "Available" status, you can view and manage + it under the Volumes menu in the Horizon dashboard under Volume Snapshots. Navigate to _Project -> Volumes -> Snapshots_. -- You have the option to directly launch this volume as an instance by clicking - on the arrow next to "Create Volume" and selecting "Launch as Instance". +- You have the option to directly launch this volume as an instance by clicking + on the arrow next to "Create Volume" and selecting "Launch as Instance". **NOTE:** Specify **Count: 2** to launch 2 instances using the volume snapshot as shown below: @@ -145,31 +145,31 @@ and Scala applications using the IP address of the master VM. !!! 
note "Naming, Security Group and Flavor for Worker Nodes" - You can specify the "Instance Name" as "spark-worker", and for each instance, - it will automatically append incremental values at the end, such as - `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached - the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) - to allow **ssh** using Port 22 access to the worker instances. + You can specify the "Instance Name" as "spark-worker", and for each instance, + it will automatically append incremental values at the end, such as + `spark-worker-1` and `spark-worker-2`. Also, make sure you have attached + the [Security Groups](../../openstack/access-and-security/security-groups.md#allowing-ssh) + to allow **ssh** using Port 22 access to the worker instances. Additionally, during launch, you will have the option to choose your preferred flavor for the worker nodes, which can differ from the master VM based on your computational requirements. -- Navigate to _Project -> Compute -> Instances_. +- Navigate to _Project -> Compute -> Instances_. -- Restart the shutdown master VM, click _Action -> Start Instance_. +- Restart the shutdown master VM, click _Action -> Start Instance_. -- The final set up for our Spark cluster looks like this, with 1 master node and - 2 worker nodes: +- The final set up for our Spark cluster looks like this, with 1 master node and + 2 worker nodes: ![Spark Cluster VMs](images/spark-nodes.png) ### Configure Spark on the Master VM -- SSH login into the master VM again. +- SSH login into the master VM again. -- Update the `/etc/hosts` file to specify all three hostnames with their corresponding - internal IP addresses. +- Update the `/etc/hosts` file to specify all three hostnames with their corresponding + internal IP addresses. ```sh sudo nano /etc/hosts @@ -189,8 +189,8 @@ computational requirements. !!! danger "Very Important Note" - Make sure to use `>>` instead of `>` to avoid overwriting the existing content - and append the new content at the end of the file. + Make sure to use `>>` instead of `>` to avoid overwriting the existing content + and append the new content at the end of the file. For example, the end of the `/etc/hosts` file looks like this: @@ -202,18 +202,18 @@ computational requirements. 192.168.0.136 worker2 ``` -- Verify that you can SSH into both worker nodes by using `ssh worker1` and - `ssh worker2` from the Spark master node's terminal. +- Verify that you can SSH into both worker nodes by using `ssh worker1` and + `ssh worker2` from the Spark master node's terminal. -- Copy the sample configuration file for the Spark: +- Copy the sample configuration file for the Spark: ```sh cd /usr/local/spark/conf/ cp spark-env.sh.template spark-env.sh ``` -- Update the environment variables file i.e. `spark-env.sh` to include the following - information: +- Update the environment variables file i.e. `spark-env.sh` to include the following + information: ```sh export SPARK_MASTER_HOST='' @@ -222,13 +222,13 @@ computational requirements. !!! tip "Environment Variables" - Executing this command: `readlink -f $(which java)` will display the path - to the current Java setup in your VM. For example: - `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the - last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set - it as the `JAVA_HOME` environment variable. 
- Learn more about other Spark settings that can be configured through environment - variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). + Executing this command: `readlink -f $(which java)` will display the path + to the current Java setup in your VM. For example: + `/usr/lib/jvm/java-11-openjdk-amd64/bin/java`, you need to remove the + last `bin/java` part, i.e. `/usr/lib/jvm/java-11-openjdk-amd64`, to set + it as the `JAVA_HOME` environment variable. + Learn more about other Spark settings that can be configured through environment + variables [here](https://spark.apache.org/docs/3.4.2/configuration.html#environment-variables). For example: @@ -237,15 +237,15 @@ computational requirements. echo "export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64" >> spark-env.sh ``` -- Source the changed environment variables file i.e. `spark-env.sh`: +- Source the changed environment variables file i.e. `spark-env.sh`: ```sh source spark-env.sh ``` -- Create a file named `slaves` in the Spark configuration directory (i.e., - `/usr/local/spark/conf/`) that specifies all 3 hostnames (nodes) as specified - in `/etc/hosts`: +- Create a file named `slaves` in the Spark configuration directory (i.e., + `/usr/local/spark/conf/`) that specifies all 3 hostnames (nodes) as specified + in `/etc/hosts`: ```sh sudo cat slaves @@ -256,9 +256,9 @@ computational requirements. ## Run the Spark cluster from the Master VM -- SSH into the master VM again if you are not already logged in. +- SSH into the master VM again if you are not already logged in. -- You need to run the Spark cluster from `/usr/local/spark`: +- You need to run the Spark cluster from `/usr/local/spark`: ```sh cd /usr/local/spark @@ -269,8 +269,8 @@ computational requirements. !!! info "How to Stop All Spark Cluster" - To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` - command from `/usr/local/spark`. + To stop all of the Spark cluster nodes, execute `./sbin/stop-all.sh` + command from `/usr/local/spark`. ## Connect to the Spark WebUI @@ -283,9 +283,9 @@ that you can use to monitor the status and resource consumption of your Spark cl Apache Spark provides different web UIs: **Master web UI**, **Worker web UI**, and **Application web UI**. -- You can connect to the **Master web UI** using - [SSH Port Forwarding, aka SSH Tunneling](https://www.ssh.com/academy/ssh/tunneling-example) - i.e. **Local Port Forwarding** from your local machine's terminal by running: +- You can connect to the **Master web UI** using + [SSH Port Forwarding, aka SSH Tunneling](https://www.ssh.com/academy/ssh/tunneling-example) + i.e. **Local Port Forwarding** from your local machine's terminal by running: ```sh ssh -N -L :localhost:8080 @ -i @@ -301,19 +301,19 @@ that you can use to monitor the status and resource consumption of your Spark cl ssh -N -L 8080:localhost:8080 ubuntu@199.94.61.4 -i ~/.ssh/cloud.key ``` -- Once the SSH Tunneling is successful, please do not close or stop the terminal - where you are running the SSH Tunneling. Instead, log in to the Master web UI - using your web browser: `http://localhost:` i.e. `http://localhost:8080`. +- Once the SSH Tunneling is successful, please do not close or stop the terminal + where you are running the SSH Tunneling. Instead, log in to the Master web UI + using your web browser: `http://localhost:` i.e. `http://localhost:8080`. 
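Before opening the browser, you can optionally confirm from a second local terminal that the tunnel is forwarding traffic. This is a minimal check, assuming the default local port `8080` chosen above; the standalone Master also serves a machine-readable summary of the cluster at the `/json/` path on the same port, which can be handy for scripted checks:

```sh
# Expect an HTTP 200 response once the tunnel and the Spark Master are both up
curl -sI http://localhost:8080 | head -n 1

# Optional: dump the Master's JSON view of workers, cores, memory, and running apps
curl -s http://localhost:8080/json/
```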
The Master web UI offers an overview of the Spark cluster, showcasing the following details: -- Master URL and REST URL -- Available CPUs and memory for the Spark cluster -- Status and allocated resources for each worker -- Details on active and completed applications, including their status, resources, - and duration -- Details on active and completed drivers, including their status and resources +- Master URL and REST URL +- Available CPUs and memory for the Spark cluster +- Status and allocated resources for each worker +- Details on active and completed applications, including their status, resources, + and duration +- Details on active and completed drivers, including their status and resources The Master web UI appears as shown below when you navigate to `http://localhost:` i.e. `http://localhost:8080` from your web browser: @@ -326,7 +326,7 @@ resources for both the Spark cluster and individual applications. ## Preparing Jobs for Execution and Examination -- To run jobs from `/usr/local/spark`, execute the following commands: +- To run jobs from `/usr/local/spark`, execute the following commands: ```sh cd /usr/local/spark @@ -335,11 +335,11 @@ resources for both the Spark cluster and individual applications. !!! warning "Very Important Note" - Please ensure you are using the same Spark version that you have - [downloaded and installed previously](#setup-a-master-vm) as the value - of `SPARK_VERSION` in the above script. + Please ensure you are using the same Spark version that you have + [downloaded and installed previously](#setup-a-master-vm) as the value + of `SPARK_VERSION` in the above script. -- **Single Node Job:** +- **Single Node Job:** Let's quickly start to run a simple job: @@ -347,7 +347,7 @@ resources for both the Spark cluster and individual applications. ./bin/spark-submit --driver-memory 2g --class org.apache.spark.examples.SparkPi examples/jars/spark-examples_2.13-$SPARK_VERSION.jar 50 ``` -- **Cluster Mode Job:** +- **Cluster Mode Job:** Let's submit a longer and more complex job with many tasks that will be distributed among the multi-node cluster, and then view the Master web UI: diff --git a/docs/other-tools/index.md b/docs/other-tools/index.md index 8c11a703..97fef9f4 100644 --- a/docs/other-tools/index.md +++ b/docs/other-tools/index.md @@ -1,8 +1,8 @@ # Kubernetes -- [Kubernetes Overview](kubernetes/kubernetes.md) +- [Kubernetes Overview](kubernetes/kubernetes.md) -- [K8s Flavors Comparision](kubernetes/comparisons.md) +- [K8s Flavors Comparision](kubernetes/comparisons.md) ## i. **Kubernetes Development environment** @@ -38,7 +38,7 @@ ## CI/ CD Tools -- [CI/CD Overview](CI-CD/CI-CD-pipeline.md) +- [CI/CD Overview](CI-CD/CI-CD-pipeline.md) 1. Using Jenkins @@ -52,6 +52,6 @@ ## Apache Spark -- [Apache Spark](apache-spark/spark.md) +- [Apache Spark](apache-spark/spark.md) --- diff --git a/docs/other-tools/kubernetes/k0s.md b/docs/other-tools/kubernetes/k0s.md index ae12d59e..fcfa5aab 100644 --- a/docs/other-tools/kubernetes/k0s.md +++ b/docs/other-tools/kubernetes/k0s.md @@ -2,26 +2,26 @@ ## Key Features -- Available as a single static binary -- Offers a self-hosted, isolated control plane -- Supports a variety of storage backends, including etcd, SQLite, MySQL (or any - compatible), and PostgreSQL. 
-- Offers an Elastic control plane -- Vanilla upstream Kubernetes -- Supports custom container runtimes (containerd is the default) -- Supports custom Container Network Interface (CNI) plugins (calico is the default) -- Supports x86_64 and arm64 +- Available as a single static binary +- Offers a self-hosted, isolated control plane +- Supports a variety of storage backends, including etcd, SQLite, MySQL (or any + compatible), and PostgreSQL. +- Offers an Elastic control plane +- Vanilla upstream Kubernetes +- Supports custom container runtimes (containerd is the default) +- Supports custom Container Network Interface (CNI) plugins (calico is the default) +- Supports x86_64 and arm64 ## Pre-requisite We will need 1 VM to create a single node kubernetes cluster using `k0s`. We are using following setting for this purpose: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -39,23 +39,23 @@ We are using following setting for this purpose: Run the below command on the Ubuntu VM: -- SSH into **k0s** machine +- SSH into **k0s** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Download k0s: +- Download k0s: ```sh curl -sSLf https://get.k0s.sh | sudo sh ``` -- Install k0s as a service: +- Install k0s as a service: ```sh k0s install controller --single @@ -68,13 +68,13 @@ Run the below command on the Ubuntu VM: INFO[2021-10-12 01:46:01] Installing k0s service ``` -- Start `k0s` as a service: +- Start `k0s` as a service: ```sh k0s start ``` -- Check service, logs and `k0s` status: +- Check service, logs and `k0s` status: ```sh k0s status @@ -85,7 +85,7 @@ Run the below command on the Ubuntu VM: Workloads: true ``` -- Access your cluster using `kubectl`: +- Access your cluster using `kubectl`: ```sh k0s kubectl get nodes @@ -107,19 +107,19 @@ Run the below command on the Ubuntu VM: ## Uninstall k0s -- Stop the service: +- Stop the service: ```sh sudo k0s stop ``` -- Execute the `k0s reset` command - cleans up the installed system service, data - directories, containers, mounts and network namespaces. +- Execute the `k0s reset` command - cleans up the installed system service, data + directories, containers, mounts and network namespaces. 
```sh sudo k0s reset ``` -- Reboot the system +- Reboot the system --- diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md index 2d060355..967c011a 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster-using-k3d.md @@ -21,7 +21,7 @@ Here, `--server 3`: specifies requests three nodes to be created with the role s and `--image rancher/k3s:latest`: specifies the K3s image to be used here we are using `latest` -- Switch context to the new cluster: +- Switch context to the new cluster: ```sh kubectl config use-context k3d-k3s-default diff --git a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md index 4752f35f..0112ddfd 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md +++ b/docs/other-tools/kubernetes/k3s/k3s-ha-cluster.md @@ -79,15 +79,15 @@ curl -sfL https://get.k3s.io | sh -s - server \ --tls-san ``` -- Verify all master nodes are visible to one another: +- Verify all master nodes are visible to one another: ```sh sudo k3s kubectl get node ``` -- Generate **token** from one of the K3s Master VMs: - You need to extract a token from the master that will be used to join the nodes - to the control plane by running following command on one of the K3s master node: +- Generate **token** from one of the K3s Master VMs: + You need to extract a token from the master that will be used to join the nodes + to the control plane by running following command on one of the K3s master node: ```sh sudo cat /var/lib/rancher/k3s/server/node-token @@ -127,7 +127,7 @@ sudo systemctl stop k3s **The third server will take over at this point.** -- To restart servers manually: +- To restart servers manually: ```sh sudo systemctl restart k3s @@ -139,11 +139,11 @@ sudo systemctl stop k3s Your local development machine must have installed `kubectl`. -- Copy kubernetes config to your local machine: - Copy the `kubeconfig` file's content located at the K3s master node at `/etc/rancher/k3s/k3s.yaml` - to your local machine's `~/.kube/config` file. Before saving, please change the - cluster server path from **127.0.0.1** to **``**. This - will allow your local machine to see the cluster nodes: +- Copy kubernetes config to your local machine: + Copy the `kubeconfig` file's content located at the K3s master node at `/etc/rancher/k3s/k3s.yaml` + to your local machine's `~/.kube/config` file. Before saving, please change the + cluster server path from **127.0.0.1** to **``**. This + will allow your local machine to see the cluster nodes: ```sh kubectl get nodes @@ -162,7 +162,7 @@ to use for _Installation_: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml ``` -- Dashboard RBAC Configuration: +- Dashboard RBAC Configuration: `dashboard.admin-user.yml` @@ -191,7 +191,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a namespace: kubernetes-dashboard ``` -- Deploy the `admin-user` configuration: +- Deploy the `admin-user` configuration: ```sh sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml @@ -199,17 +199,17 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a !!! 
note "Important Note" - If you're doing this from your local development machine, remove `sudo k3s` - and just use `kubectl`) + If you're doing this from your local development machine, remove `sudo k3s` + and just use `kubectl`) -- Get bearer **token** +- Get bearer **token** ```sh sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token \ | grep ^token ``` -- Start _dashboard_ locally: +- Start _dashboard_ locally: ```sh sudo k3s kubectl proxy @@ -223,13 +223,13 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a ## Deploying Nginx using deployment -- Create a deployment `nginx.yaml`: +- Create a deployment `nginx.yaml`: ```sh vi nginx.yaml ``` -- Copy and paste the following content in `nginx.yaml`: +- Copy and paste the following content in `nginx.yaml`: ```sh apiVersion: apps/v1 @@ -259,7 +259,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a sudo k3s kubectl apply -f nginx.yaml ``` -- Verify the nginx pod is in **Running** state: +- Verify the nginx pod is in **Running** state: ```sh sudo k3s kubectl get pods --all-namespaces @@ -277,13 +277,13 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a kubectl get pods -A -o wide ``` -- Scale the pods to available agents: +- Scale the pods to available agents: ```sh sudo k3s kubectl scale --replicas=2 deploy/mysite ``` -- View all deployment status: +- View all deployment status: ```sh sudo k3s kubectl get deploy mysite @@ -292,7 +292,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/a mysite 2/2 2 2 85s ``` -- Delete the nginx deployment and pod: +- Delete the nginx deployment and pod: ```sh sudo k3s kubectl delete -f nginx.yaml diff --git a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md index a18250af..fe2ffaaa 100644 --- a/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md +++ b/docs/other-tools/kubernetes/k3s/k3s-using-k3d.md @@ -16,14 +16,14 @@ Availability clusters just with few commands. ## Install **Docker** -- Install container runtime - **docker** +- Install container runtime - **docker** ```sh apt-get install docker.io -y ``` -- Configure the Docker daemon, in particular to use systemd for the management - of the container’s cgroups +- Configure the Docker daemon, in particular to use systemd for the management + of the container’s cgroups ```sh cat < @@ -70,7 +70,7 @@ k3sup --help k3sup install --ip $SERVER_IP --user $USER ``` -- On _Agent_ Node: +- On _Agent_ Node: Next join one or more `agents` to the cluster: @@ -133,7 +133,7 @@ k3sup join --user root --server-ip $LB_IP --ip $AGENT2 \ There will be a kubeconfig file created in the current working directory with the IP address of the LoadBalancer set for kubectl to use. 
-- Check the nodes have joined: +- Check the nodes have joined: ```sh export KUBECONFIG=`pwd`/kubeconfig diff --git a/docs/other-tools/kubernetes/k3s/k3s.md b/docs/other-tools/kubernetes/k3s/k3s.md index 7d04aff7..a744ebd1 100644 --- a/docs/other-tools/kubernetes/k3s/k3s.md +++ b/docs/other-tools/kubernetes/k3s/k3s.md @@ -2,24 +2,24 @@ ## Features -- Lightweight certified K8s distro +- Lightweight certified K8s distro -- Built for production operations +- Built for production operations -- 40MB binary, 250MB memeory consumption +- 40MB binary, 250MB memeory consumption -- Single process w/ integrated K8s master, Kubelet, and containerd +- Single process w/ integrated K8s master, Kubelet, and containerd -- Supports not only `etcd` to hold the cluster state, but also `SQLite` - (for single-node, simpler setups) or external DBs like `MySQL` and `PostgreSQL` +- Supports not only `etcd` to hold the cluster state, but also `SQLite` + (for single-node, simpler setups) or external DBs like `MySQL` and `PostgreSQL` -- Open source project +- Open source project ## Components and architecure ![K3s Components and architecure](../images/k3s_architecture.png) -- High-Availability K3s Server with an External DB: +- High-Availability K3s Server with an External DB: ![K3s Components and architecure](../images/k3s_high_availability.png) or, ![K3s Components and architecure](../images/k3s_ha_architecture.jpg) @@ -32,16 +32,16 @@ We will need 1 control-plane(master) and 2 worker nodes to create a single control-plane kubernetes cluster using `k3s`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also - [assign Floating IP](../../../openstack/../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to the master node. +- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also + [assign Floating IP](../../../openstack/../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to the master node. -- 2 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. +- 2 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../../openstack/../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. ## Networking @@ -59,18 +59,18 @@ on each node. If you plan on achieving high availability with **embedded etcd**, server nodes must be accessible to each other on ports **2379** and **2380**. -- Create 1 security group with appropriate [Inbound Rules for K3s Server Nodes](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking) - that will be used by all 3 nodes: +- Create 1 security group with appropriate [Inbound Rules for K3s Server Nodes](https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/#networking) + that will be used by all 3 nodes: ![Inbound Rules for K3s Server Nodes](../images/k3s_security_group.png) !!! 
note "Important Note" - The VXLAN overlay networking port on nodes should not be exposed to the world - as it opens up your cluster network to be accessed by anyone. Run your nodes - behind a firewall/security group that disables access to port **8472**. + The VXLAN overlay networking port on nodes should not be exposed to the world + as it opens up your cluster network to be accessed by anyone. Run your nodes + behind a firewall/security group that disables access to port **8472**. -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -86,25 +86,25 @@ must be accessible to each other on ports **2379** and **2380**. In this step, you will setup the following nodes: -- k3s-master +- k3s-master -- k3s-worker1 +- k3s-worker1 -- k3s-worker2 +- k3s-worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Switch as root: `sudo su` +- Switch as root: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh apt-get update && apt-get install -y apt-transport-https curl @@ -114,14 +114,14 @@ The below steps will be performed on all the above mentioned nodes: ## Install **Docker** -- Install container runtime - **docker** +- Install container runtime - **docker** ```sh apt-get install docker.io -y ``` -- Configure the Docker daemon, in particular to use systemd for the management - of the container’s cgroups +- Configure the Docker daemon, in particular to use systemd for the management + of the container’s cgroups ```sh cat < ``` -- We have successfully deployed nginx web-proxy on k3s. Go to browser, visit `http://` - i.e. to check the nginx default page. +- We have successfully deployed nginx web-proxy on k3s. Go to browser, visit `http://` + i.e. to check the nginx default page. ## Upgrade K3s Using the Installation Script diff --git a/docs/other-tools/kubernetes/kind.md b/docs/other-tools/kubernetes/kind.md index 4e7720fc..c4712153 100644 --- a/docs/other-tools/kubernetes/kind.md +++ b/docs/other-tools/kubernetes/kind.md @@ -5,11 +5,11 @@ We will need 1 VM to create a single node kubernetes cluster using `kind`. We are using following setting for this purpose: -- 1 Linux machine, `almalinux-9-x86_64`, `cpu-su.2` flavor with 2vCPU, 8GB RAM, - 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `almalinux-9-x86_64`, `cpu-su.2` flavor with 2vCPU, 8GB RAM, + 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. 
-- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -27,11 +27,11 @@ We are using following setting for this purpose: Run the below command on the AlmaLinux VM: -- SSH into **kind** machine +- SSH into **kind** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Execute the below command to initialize the cluster: +- Execute the below command to initialize the cluster: Please remove `container-tools` module that includes stable versions of podman, buildah, skopeo, runc, conmon, etc as well as dependencies and will be removed @@ -62,7 +62,7 @@ sudo install -o root -g root -m 0755 kubectl /usr/bin/kubectl chmod +x /usr/bin/kubectl ``` -- Test to ensure that the `kubectl` is installed: +- Test to ensure that the `kubectl` is installed: ```sh kubectl version --client @@ -88,7 +88,7 @@ kind version kind v0.11.1 go1.16.4 linux/amd64 ``` -- To communicate with cluster, just give the cluster name as a context in kubectl: +- To communicate with cluster, just give the cluster name as a context in kubectl: ```sh kind create cluster --name k8s-kind-cluster1 @@ -108,7 +108,7 @@ kind v0.11.1 go1.16.4 linux/amd64 Have a nice day! 👋 ``` -- Get the cluster details: +- Get the cluster details: ```sh kubectl cluster-info --context kind-k8s-kind-cluster1 diff --git a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md index 5c4d0a93..020fdba2 100644 --- a/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/HA-clusters-with-kubeadm.md @@ -2,13 +2,13 @@ ## Objectives -- Install a multi control-plane(master) Kubernetes cluster +- Install a multi control-plane(master) Kubernetes cluster -- Install a Pod network on the cluster so that your Pods can talk to each other +- Install a Pod network on the cluster so that your Pods can talk to each other -- Deploy and test a sample app +- Deploy and test a sample app -- Deploy K8s Dashboard to view all cluster's components +- Deploy K8s Dashboard to view all cluster's components ## Components and architecure @@ -25,21 +25,21 @@ You will need 2 control-plane(master node) and 2 worker nodes to create a multi-master kubernetes cluster using `kubeadm`. You are going to use the following set up for this purpose: -- 2 Linux machines for master, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. +- 2 Linux machines for master, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. -- 2 Linux machines for worker, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage - also - [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to both of the worker nodes. +- 2 Linux machines for worker, `ubuntu-20.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage - also + [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to both of the worker nodes. -- 1 Linux machine for loadbalancer, `ubuntu-20.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. 
+- 1 Linux machine for loadbalancer, `ubuntu-20.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to setup SSH to your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to setup SSH to your remote VMs. -- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: @@ -49,7 +49,7 @@ ii. To be used by the worker nodes: ![Worker node ports and protocols](../images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -105,23 +105,23 @@ outside of the cluster and interacts with the cluster using ports. You have 2 master nodes. Which means the user can connect to either of the 2 apiservers. The loadbalancer will be used to loadbalance between the 2 apiservers. -- Login to the loadbalancer node +- Login to the loadbalancer node -- Switch as root - `sudo su` +- Switch as root - `sudo su` -- Update your repository and your system +- Update your repository and your system ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Install haproxy +- Install haproxy ```sh sudo apt-get install haproxy -y ``` -- Edit haproxy configuration +- Edit haproxy configuration ```sh vi /etc/haproxy/haproxy.cfg @@ -142,7 +142,7 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - 6443 is the default port of **kube-apiserver** + 6443 is the default port of **kube-apiserver** ```sh backend be-apiserver @@ -159,13 +159,13 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver Here - **master1** and **master2** are the hostnames of the master nodes and **10.138.0.15** and **10.138.0.16** are the corresponding internal IP addresses. -- Ensure haproxy config file is correctly formatted: +- Ensure haproxy config file is correctly formatted: ```sh haproxy -c -q -V -f /etc/haproxy/haproxy.cfg ``` -- Restart and Verify haproxy +- Restart and Verify haproxy ```sh systemctl restart haproxy @@ -183,8 +183,8 @@ apiservers. The loadbalancer will be used to loadbalance between the 2 apiserver !!! note "Note" - If you see failures for `master1` and `master2` connectivity, you can ignore - them for time being as you have not yet installed anything on the servers. + If you see failures for `master1` and `master2` connectivity, you can ignore + them for time being as you have not yet installed anything on the servers. --- @@ -203,44 +203,44 @@ does things like starting pods and containers. 
In this step, you will install kubelet and kubeadm on the below nodes -- master1 +- master1 -- master2 +- master2 -- worker1 +- worker1 -- worker2 +- worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 4 machines +- SSH into all the 4 machines -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y apt-transport-https curl ``` -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases ```sh curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ``` -- add kubernetes apt repo +- add kubernetes apt repo ```sh cat < 9m17s v1.26.1 - worker2 Ready 9m25s v1.26.1 + NAME STATUS ROLES AGE VERSION + master1 NotReady control-plane,master 21m v1.26.1 + master2 NotReady control-plane,master 15m v1.26.1 + worker1 Ready 9m17s v1.26.1 + worker2 Ready 9m25s v1.26.1 --- @@ -816,11 +816,11 @@ For your example, You will going to setup [K8dash/Skooner](https://github.com/skooner-k8s/skooner) to view a dashboard that shows all your K8s cluster components. -- SSH into `loadbalancer` node +- SSH into `loadbalancer` node -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Apply available deployment by running the following command: +- Apply available deployment by running the following command: ```sh kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-nodeport.yaml @@ -872,19 +872,19 @@ Setup the **Service Account Token** to access the Skooner Dashboard: The first (and easiest) option is to create a dedicated service account. 
Run the following commands: -- Create the service account in the current namespace (we assume default) +- Create the service account in the current namespace (we assume default) ```sh kubectl create serviceaccount skooner-sa ``` -- Give that service account root on the cluster +- Give that service account root on the cluster ```sh kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa ``` -- Create a secret that was created to hold the token for the SA: +- Create a secret that was created to hold the token for the SA: ```sh kubectl apply -f - < --delete-emptydir-data --force --ignore-daemonsets ``` -- Before removing the node, reset the state installed by kubeadm: +- Before removing the node, reset the state installed by kubeadm: ```sh kubeadm reset @@ -1008,7 +1008,7 @@ kubectl drain --delete-emptydir-data --force --ignore-daemonsets ipvsadm -C ``` -- Now remove the node: +- Now remove the node: ```sh kubectl delete node diff --git a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md index 27a29278..7239bdc2 100644 --- a/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md +++ b/docs/other-tools/kubernetes/kubeadm/single-master-clusters-with-kubeadm.md @@ -2,13 +2,13 @@ ## Objectives -- Install a single control-plane(master) Kubernetes cluster +- Install a single control-plane(master) Kubernetes cluster -- Install a Pod network on the cluster so that your Pods can talk to each other +- Install a Pod network on the cluster so that your Pods can talk to each other -- Deploy and test a sample app +- Deploy and test a sample app -- Deploy K8s Dashboard to view all cluster's components +- Deploy K8s Dashboard to view all cluster's components ## Components and architecure @@ -22,17 +22,17 @@ We will need 1 control-plane(master) and 2 worker node to create a single control-plane kubernetes cluster using `kubeadm`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-20.04-x86_64`, `cpu-su.2` flavor with 2vCPU, - 8GB RAM, 20GB storage. +- 1 Linux machine for master, `ubuntu-20.04-x86_64`, `cpu-su.2` flavor with 2vCPU, + 8GB RAM, 20GB storage. -- 2 Linux machines for worker, `ubuntu-20.04-x86_64`, `cpu-su.1` flavor with 1vCPU, - 4GB RAM, 20GB storage - also [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to both of the worker nodes. +- 2 Linux machines for worker, `ubuntu-20.04-x86_64`, `cpu-su.1` flavor with 1vCPU, + 4GB RAM, 20GB storage - also [assign Floating IPs](../../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to both of the worker nodes. -- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. -- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: @@ -42,7 +42,7 @@ ii. 
To be used by the worker nodes: ![Worker node ports and protocols](../images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -91,42 +91,42 @@ does things like starting pods and containers. In this step, you will install kubelet and kubeadm on the below nodes -- master +- master -- worker1 +- worker1 -- worker2 +- worker2 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a sudo sed -i '/ swap / s/^/#/' /etc/fstab ``` -- Install `curl` and `apt-transport-https` +- Install `curl` and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y apt-transport-https curl ``` -- Download the Google Cloud public signing key and add key to verify releases +- Download the Google Cloud public signing key and add key to verify releases ```sh curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ``` -- add kubernetes apt repo +- add kubernetes apt repo ```sh cat < @@ -252,14 +252,14 @@ control plane. !!! note "Important Note" - Please make sure you replace the correct IP of the node with - `` which is the Internal IP of master node. - `--pod-network-cidr` value depends upon what CNI plugin you going to use - so need to be very careful while setting this CIDR values. In our case, - you are going to use **Flannel** CNI network plugin so you will use: - `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI - network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` - and if you are opted to use **Weave Net** no need to pass this parameter. + Please make sure you replace the correct IP of the node with + `` which is the Internal IP of master node. + `--pod-network-cidr` value depends upon what CNI plugin you going to use + so need to be very careful while setting this CIDR values. In our case, + you are going to use **Flannel** CNI network plugin so you will use: + `--pod-network-cidr=10.244.0.0/16`. If you are opted to use **Calico** CNI + network plugin then you need to use: `--pod-network-cidr=192.168.0.0/16` + and if you are opted to use **Weave Net** no need to pass this parameter. For example, our `Flannel` CNI network plugin based kubeadm init command with _master node_ with internal IP: `192.168.0.167` look like below: @@ -334,12 +334,12 @@ control plane. !!! warning "Warning" - Kubeadm signs the certificate in the admin.conf to have - `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a - break-glass, super user group that bypasses the authorization layer - (e.g. RBAC). Do not share the admin.conf file with anyone and instead - grant users custom permissions by generating them a kubeconfig file using - the `kubeadm kubeconfig user` command. + Kubeadm signs the certificate in the admin.conf to have + `Subject: O = system:masters, CN = kubernetes-admin. system:masters` is a + break-glass, super user group that bypasses the authorization layer + (e.g. RBAC). Do not share the admin.conf file with anyone and instead + grant users custom permissions by generating them a kubeconfig file using + the `kubeadm kubeconfig user` command. B. 
Join worker nodes running following command on individual worker nodes: @@ -350,9 +350,9 @@ control plane. !!! note "Important Note" - **Your output will be different than what is provided here. While - performing the rest of the demo, ensure that you are executing the - command provided by your output and dont copy and paste from here.** + **Your output will be different than what is provided here. While + performing the rest of the demo, ensure that you are executing the + command provided by your output and dont copy and paste from here.** If you do not have the token, you can get it by running the following command on the control-plane node: @@ -405,20 +405,20 @@ control plane. Now that you have initialized the master - you can now work on bootstrapping the worker nodes. -- SSH into **worker1** and **worker2** +- SSH into **worker1** and **worker2** -- Switch to root user on both the machines: `sudo su` +- Switch to root user on both the machines: `sudo su` -- Check the output given by the init command on **master** to join worker node: +- Check the output given by the init command on **master** to join worker node: ```sh kubeadm join 192.168.0.167:6443 --token cnslau.kd5fjt96jeuzymzb \ --discovery-token-ca-cert-hash sha256:871ab3f050bc9790c977daee9e44cf52e15ee37ab9834567333b939458a5bfb5 ``` -- Execute the above command on both the nodes: +- Execute the above command on both the nodes: -- Your output should look like: +- Your output should look like: ```sh This node has joined the cluster: @@ -431,7 +431,7 @@ worker nodes. ## Validate all cluster components and nodes are visible on all nodes -- Verify the cluster +- Verify the cluster ```sh kubectl get nodes @@ -609,11 +609,11 @@ For your example, You will going to setup [K8dash/Skooner](https://github.com/skooner-k8s/skooner) to view a dashboard that shows all your K8s cluster components. -- SSH into `master` node +- SSH into `master` node -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Apply available deployment by running the following command: +- Apply available deployment by running the following command: ```sh kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-nodeport.yaml @@ -663,19 +663,19 @@ Setup the **Service Account Token** to access the Skooner Dashboard: The first (and easiest) option is to create a dedicated service account. 
Run the following commands: -- Create the service account in the current namespace (we assume default) +- Create the service account in the current namespace (we assume default) ```sh kubectl create serviceaccount skooner-sa ``` -- Give that service account root on the cluster +- Give that service account root on the cluster ```sh kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa ``` -- Create a secret that was created to hold the token for the SA: +- Create a secret that was created to hold the token for the SA: ```sh kubectl apply -f - < --delete-emptydir-data --force --ignore-daemonsets ``` -- Before removing the node, reset the state installed by kubeadm: +- Before removing the node, reset the state installed by kubeadm: ```sh kubeadm reset @@ -800,7 +800,7 @@ kubectl drain --delete-emptydir-data --force --ignore-daemonsets ipvsadm -C ``` -- Now remove the node: +- Now remove the node: ```sh kubectl delete node diff --git a/docs/other-tools/kubernetes/kubespray.md b/docs/other-tools/kubernetes/kubespray.md index fe774f6e..453647a2 100644 --- a/docs/other-tools/kubernetes/kubespray.md +++ b/docs/other-tools/kubernetes/kubespray.md @@ -6,22 +6,22 @@ We will need 1 control-plane(master) and 1 worker node to create a single control-plane kubernetes cluster using `Kubespray`. We are using following setting for this purpose: -- 1 Linux machine for Ansible master, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. +- 1 Linux machine for Ansible master, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage. -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to the master node. +- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - + also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to the master node. -- 1 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu - OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. +- 1 Linux machines for worker, `ubuntu-22.04-x86_64` or your choice of Ubuntu + OS image, `cpu-su.1` flavor with 1vCPU, 4GB RAM, 20GB storage. -- ssh access to all machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to all machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. -- To allow SSH from **Ansible master** to all **other nodes**: [Read more here](../../openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md#adding-other-peoples-ssh-keys-to-the-instance) - Generate SSH key for Ansible master node using: +- To allow SSH from **Ansible master** to all **other nodes**: [Read more here](../../openstack/create-and-connect-to-the-VM/ssh-to-the-VM.md#adding-other-peoples-ssh-keys-to-the-instance) + Generate SSH key for Ansible master node using: ```sh ssh-keygen -t rsa @@ -54,7 +54,7 @@ for this purpose: end of `~/.ssh/authorized_keys` file of the other master and worker nodes. This will allow `ssh ` from the Ansible master node's terminal. 
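  For example, one way to push this public key out to each node is sketched
  below. This is only an illustrative sketch: it assumes the target nodes
  already accept SSH logins as the default `ubuntu` user and that the key
  generated above is at `~/.ssh/id_rsa.pub`; replace `<node_internal_ip>`
  with each node's internal IP.

  ```sh
  # Run from the ansible_master node, once per master/worker node.
  ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@<node_internal_ip>

  # Or, if ssh-copy-id is not available, append the key manually:
  cat ~/.ssh/id_rsa.pub | ssh ubuntu@<node_internal_ip> \
      'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
  ```

  If the nodes only accept the key pair that was injected when the VMs were
  created, run the commands above from a machine that already holds that key,
  or paste the new public key into each node's `~/.ssh/authorized_keys`
  manually as described above.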
-- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): +- Create 2 security groups with appropriate [ports and protocols](https://kubernetes.io/docs/reference/ports-and-protocols/): i. To be used by the master nodes: ![Control plane ports and protocols](images/control_plane_ports_protocols.png) @@ -62,7 +62,7 @@ for this purpose: ii. To be used by the worker nodes: ![Worker node ports and protocols](images/worker_nodes_ports_protocols.png) -- setup Unique hostname to each machine using the following command: +- setup Unique hostname to each machine using the following command: ```sh echo " " >> /etc/hosts @@ -78,25 +78,25 @@ for this purpose: In this step, you will update packages and disable `swap` on the all 3 nodes: -- 1 Ansible Master Node - ansible_master +- 1 Ansible Master Node - ansible_master -- 1 Kubernetes Master Node - kubspray_master +- 1 Kubernetes Master Node - kubspray_master -- 1 Kubernetes Worker Node - kubspray_worker1 +- 1 Kubernetes Worker Node - kubspray_worker1 The below steps will be performed on all the above mentioned nodes: -- SSH into all the 3 machines +- SSH into all the 3 machines -- Switch as root: `sudo su` +- Switch as root: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Turn off `swap` +- Turn off `swap` ```sh swapoff -a @@ -110,13 +110,13 @@ The below steps will be performed on all the above mentioned nodes: Run the below command on the master node i.e. `master` that you want to setup as control plane. -- SSH into **ansible_master** machine +- SSH into **ansible_master** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Execute the below command to initialize the cluster: +- Execute the below command to initialize the cluster: -- Install Python3 and upgrade pip to pip3: +- Install Python3 and upgrade pip to pip3: ```sh apt install python3-pip -y @@ -125,26 +125,26 @@ control plane. pip -V ``` -- Clone the _Kubespray_ git repository: +- Clone the _Kubespray_ git repository: ```sh git clone https://github.com/kubernetes-sigs/kubespray.git cd kubespray ``` -- Install dependencies from `requirements.txt`: +- Install dependencies from `requirements.txt`: ```sh pip install -r requirements.txt ``` -- Copy `inventory/sample` as `inventory/mycluster` +- Copy `inventory/sample` as `inventory/mycluster` ```sh cp -rfp inventory/sample inventory/mycluster ``` -- Update Ansible inventory file with inventory builder: +- Update Ansible inventory file with inventory builder: This step is little trivial because we need to update `hosts.yml` with the nodes IP. @@ -176,7 +176,7 @@ control plane. DEBUG: adding host node2 to group kube_node ``` -- After running the above commands do verify the `hosts.yml` and its content: +- After running the above commands do verify the `hosts.yml` and its content: ```sh cat inventory/mycluster/hosts.yml @@ -215,30 +215,30 @@ control plane. 
hosts: {} ``` -- Review and change parameters under `inventory/mycluster/group_vars` +- Review and change parameters under `inventory/mycluster/group_vars` ```sh cat inventory/mycluster/group_vars/all/all.yml cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml ``` -- It can be useful to set the following two variables to **true** in - `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: `kubeconfig_localhost` - (to make a copy of `kubeconfig` on the host that runs Ansible in - `{ inventory_dir }/artifacts`) and `kubectl_localhost` - (to download `kubectl` onto the host that runs Ansible in `{ bin_dir }`). +- It can be useful to set the following two variables to **true** in + `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: `kubeconfig_localhost` + (to make a copy of `kubeconfig` on the host that runs Ansible in + `{ inventory_dir }/artifacts`) and `kubectl_localhost` + (to download `kubectl` onto the host that runs Ansible in `{ bin_dir }`). !!! note "Very Important" - As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** - the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: - `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will - _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. + As **Ubuntu 20 kvm kernel** doesn't have **dummy module** we need to **modify** + the following two variables in `inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml`: + `enable_nodelocaldns: false` and `kube_proxy_mode: iptables` which will + _Disable nodelocal dns cache_ and _Kube-proxy proxyMode to iptables_ respectively. -- Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. - The option `--become` is required, as for example writing SSL keys in `/etc/`, - installing packages and interacting with various `systemd` daemons. Without - `--become` the playbook will fail to run! +- Deploy Kubespray with Ansible Playbook - run the playbook as `root` user. + The option `--become` is required, as for example writing SSL keys in `/etc/`, + installing packages and interacting with various `systemd` daemons. Without + `--become` the playbook will fail to run! ```sh ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml @@ -246,14 +246,14 @@ control plane. !!! note "Note" - Running ansible playbook takes little time because it depends on the network - bandwidth also. + Running ansible playbook takes little time because it depends on the network + bandwidth also. --- ## Install **kubectl** on Kubernetes master node .i.e. `kubspray_master` -- Install kubectl binary +- Install kubectl binary ```sh snap install kubectl --classic @@ -261,7 +261,7 @@ control plane. This outputs: `kubectl 1.26.1 from Canonical✓ installed` -- Now verify the kubectl version: +- Now verify the kubectl version: ```sh kubectl version -o yaml @@ -271,7 +271,7 @@ control plane. ## Validate all cluster components and nodes are visible on all nodes -- Verify the cluster +- Verify the cluster ```sh kubectl get nodes @@ -285,8 +285,8 @@ control plane. ## Deploy A [Hello Minikube Application](minikube.md#deploy-a-hello-minikube-application) -- Use the kubectl create command to create a Deployment that manages a Pod. The - Pod runs a Container based on the provided Docker image. +- Use the kubectl create command to create a Deployment that manages a Pod. The + Pod runs a Container based on the provided Docker image. 
```sh kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4 @@ -298,7 +298,7 @@ control plane. service/hello-minikube exposed ``` -- View the deployments information: +- View the deployments information: ```sh kubectl get deployments @@ -307,7 +307,7 @@ control plane. hello-minikube 1/1 1 1 50s ``` -- View the port information: +- View the port information: ```sh kubectl get svc hello-minikube @@ -316,7 +316,7 @@ control plane. hello-minikube LoadBalancer 10.233.35.126 8080:30723/TCP 40s ``` -- Expose the service locally +- Expose the service locally ```sh kubectl port-forward svc/hello-minikube 30723:8080 diff --git a/docs/other-tools/kubernetes/microk8s.md b/docs/other-tools/kubernetes/microk8s.md index 0cf5f652..8a9ab5e3 100644 --- a/docs/other-tools/kubernetes/microk8s.md +++ b/docs/other-tools/kubernetes/microk8s.md @@ -5,11 +5,11 @@ We will need 1 VM to create a single node kubernetes cluster using `microk8s`. We are using following setting for this purpose: -- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, - `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. +- 1 Linux machine, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS image, + `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -27,29 +27,29 @@ We are using following setting for this purpose: Run the below command on the Ubuntu VM: -- SSH into **microk8s** machine +- SSH into **microk8s** machine -- Switch to root user: `sudo su` +- Switch to root user: `sudo su` -- Update the repositories and packages: +- Update the repositories and packages: ```sh apt-get update && apt-get upgrade -y ``` -- Install MicroK8s: +- Install MicroK8s: ```sh sudo snap install microk8s --classic ``` -- Check the status while Kubernetes starts +- Check the status while Kubernetes starts ```sh microk8s status --wait-ready ``` -- Turn on the services you want: +- Turn on the services you want: ```sh microk8s enable dns dashboard @@ -59,7 +59,7 @@ Run the below command on the Ubuntu VM: `microk8s disable ` turns off a service. For example other useful services are: `microk8s enable registry istio storage` -- Start using Kubernetes +- Start using Kubernetes ```sh microk8s kubectl get all --all-namespaces @@ -70,8 +70,8 @@ Run the below command on the Ubuntu VM: upstream kubectl, you can also drive other Kubernetes clusters with it by pointing to the respective kubeconfig file via the `--kubeconfig` argument. -- Access the [Kubernetes dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) - UI: +- Access the [Kubernetes dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) + UI: ![Microk8s Dashboard Ports](images/microk8s_dashboard_ports.png) @@ -82,15 +82,15 @@ Run the below command on the Ubuntu VM: !!! 
note "Note" - Another way to access the default token to be used for the dashboard access - can be retrieved with: + Another way to access the default token to be used for the dashboard access + can be retrieved with: - ```sh - token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) - microk8s kubectl -n kube-system describe secret $token - ``` + ```sh + token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d "" -f1) + microk8s kubectl -n kube-system describe secret $token + ``` -- Keep running the kubernetes-dashboad on Proxy to access it via web browser: +- Keep running the kubernetes-dashboad on Proxy to access it via web browser: ```sh microk8s dashboard-proxy @@ -103,11 +103,11 @@ Run the below command on the Ubuntu VM: !!! note "Important" - This tells us the IP address of the Dashboard and the port. The values assigned - to your Dashboard will differ. Please note the displayed **PORT** and - the **TOKEN** that are required to access the kubernetes-dashboard. Make - sure, the exposed **PORT** is opened in Security Groups for the instance - following [this guide](../../openstack/access-and-security/security-groups.md). + This tells us the IP address of the Dashboard and the port. The values assigned + to your Dashboard will differ. Please note the displayed **PORT** and + the **TOKEN** that are required to access the kubernetes-dashboard. Make + sure, the exposed **PORT** is opened in Security Groups for the instance + following [this guide](../../openstack/access-and-security/security-groups.md). This will show the token to login to the Dashbord shown on the url with NodePort. @@ -162,19 +162,19 @@ i.e. to check the nginx default page. ## Deploy A Sample Nginx Application -- Create an alias: +- Create an alias: ```sh alias mkctl="microk8s kubectl" ``` -- Create a deployment, in this case **Nginx**: +- Create a deployment, in this case **Nginx**: ```sh mkctl create deployment --image nginx my-nginx ``` -- To access the deployment we will need to expose it: +- To access the deployment we will need to expose it: ```sh mkctl expose deployment my-nginx --port=80 --type=NodePort diff --git a/docs/other-tools/kubernetes/minikube.md b/docs/other-tools/kubernetes/minikube.md index 379e41ce..10c2de55 100644 --- a/docs/other-tools/kubernetes/minikube.md +++ b/docs/other-tools/kubernetes/minikube.md @@ -2,24 +2,24 @@ ## Minimum system requirements for minikube -- 2 GB RAM or more -- 2 CPU / vCPUs or more -- 20 GB free hard disk space or more -- Docker / Virtual Machine Manager – KVM & VirtualBox. Docker, Hyperkit, Hyper-V, - KVM, Parallels, Podman, VirtualBox, or VMWare are examples of container or virtual - machine managers. +- 2 GB RAM or more +- 2 CPU / vCPUs or more +- 20 GB free hard disk space or more +- Docker / Virtual Machine Manager – KVM & VirtualBox. Docker, Hyperkit, Hyper-V, + KVM, Parallels, Podman, VirtualBox, or VMWare are examples of container or virtual + machine managers. ## Pre-requisite We will need 1 VM to create a single node kubernetes cluster using `minikube`. We are using following setting for this purpose: -- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS - image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also - [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) - to this VM. 
+- 1 Linux machine for master, `ubuntu-22.04-x86_64` or your choice of Ubuntu OS + image, `cpu-su.2` flavor with 2vCPU, 8GB RAM, 20GB storage - also + [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md) + to this VM. -- setup Unique hostname to the machine using the following command: +- setup Unique hostname to the machine using the following command: ```sh echo " " >> /etc/hosts @@ -41,15 +41,15 @@ Run the below command on the Ubuntu VM: Run the following steps as non-root user i.e. **ubuntu**. -- SSH into **minikube** machine +- SSH into **minikube** machine -- Update the repositories and packages: +- Update the repositories and packages: ```sh sudo apt-get update && sudo apt-get upgrade -y ``` -- Install `curl`, `wget`, and `apt-transport-https` +- Install `curl`, `wget`, and `apt-transport-https` ```sh sudo apt-get update && sudo apt-get install -y curl wget apt-transport-https @@ -59,14 +59,14 @@ Run the below command on the Ubuntu VM: ## Download and install the latest version of **Docker CE** -- Download and install Docker CE: +- Download and install Docker CE: ```sh curl -fsSL https://get.docker.com -o get-docker.sh sudo sh get-docker.sh ``` -- Configure the Docker daemon: +- Configure the Docker daemon: ```sh sudo usermod -aG docker $USER && newgrp docker @@ -76,7 +76,7 @@ Run the below command on the Ubuntu VM: ## Install **kubectl** -- Install kubectl binary +- Install kubectl binary **kubectl**: the command line util to talk to your cluster. @@ -90,7 +90,7 @@ Run the below command on the Ubuntu VM: kubectl 1.26.1 from Canonical✓ installed ``` -- Now verify the kubectl version: +- Now verify the kubectl version: ```sh sudo kubectl version -o yaml @@ -105,7 +105,7 @@ To run containers in Pods, Kubernetes uses a [container runtime](https://kuberne By default, Kubernetes uses the **Container Runtime Interface (CRI)** to interface with your chosen container runtime. -- Install container runtime - **containerd** +- Install container runtime - **containerd** The first thing to do is configure the persistent loading of the necessary `containerd` modules. This forwarding IPv4 and letting iptables see bridged @@ -121,7 +121,7 @@ with your chosen container runtime. sudo modprobe br_netfilter ``` -- Ensure `net.bridge.bridge-nf-call-iptables` is set to `1` in your sysctl config: +- Ensure `net.bridge.bridge-nf-call-iptables` is set to `1` in your sysctl config: ```sh # sysctl params required by setup, params persist across reboots @@ -132,20 +132,20 @@ with your chosen container runtime. EOF ``` -- Apply sysctl params without reboot: +- Apply sysctl params without reboot: ```sh sudo sysctl --system ``` -- Install the necessary dependencies with: +- Install the necessary dependencies with: ```sh sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates ``` -- The `containerd.io` packages in DEB and RPM formats are distributed by Docker. - Add the required GPG key with: +- The `containerd.io` packages in DEB and RPM formats are distributed by Docker. + Add the required GPG key with: ```sh curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - @@ -177,7 +177,7 @@ with your chosen container runtime. ## Installing minikube -- Install minikube +- Install minikube ```sh curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb @@ -192,7 +192,7 @@ with your chosen container runtime. 
chmod +x /usr/bin/minikube ``` -- Verify the Minikube installation: +- Verify the Minikube installation: ```sh minikube version @@ -201,7 +201,7 @@ with your chosen container runtime. commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3 ``` -- Install conntrack: +- Install conntrack: Kubernetes 1.26.1 requires conntrack to be installed in root's path: @@ -209,7 +209,7 @@ with your chosen container runtime. sudo apt-get install -y conntrack ``` -- Start minikube: +- Start minikube: As we are already stated in the beginning that we would be using docker as base for minikue, so start the minikube with the docker driver, @@ -220,32 +220,32 @@ with your chosen container runtime. !!! note "Note" - - To check the internal IP, run the `minikube ip` command. + - To check the internal IP, run the `minikube ip` command. - - By default, Minikube uses the driver most relevant to the host OS. To - use a different driver, set the `--driver` flag in `minikube start`. For - example, to use others or none instead of Docker, run - `minikube start --driver=none`. To persistent configuration so that - you to run minikube start without explicitly passing i.e. in global scope - the `--vm-driver docker` flag each time, run: - `minikube config set vm-driver docker`. + - By default, Minikube uses the driver most relevant to the host OS. To + use a different driver, set the `--driver` flag in `minikube start`. For + example, to use others or none instead of Docker, run + `minikube start --driver=none`. To persistent configuration so that + you to run minikube start without explicitly passing i.e. in global scope + the `--vm-driver docker` flag each time, run: + `minikube config set vm-driver docker`. - - Other start options: - `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` + - Other start options: + `minikube start --force --driver=docker --network-plugin=cni --container-runtime=containerd` - - In case you want to start minikube with customize resources and want installer - to automatically select the driver then you can run following command, - `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true - --kubernetes-version=stable --memory=6g` + - In case you want to start minikube with customize resources and want installer + to automatically select the driver then you can run following command, + `minikube start --addons=ingress --cpus=2 --cni=flannel --install-addons=true + --kubernetes-version=stable --memory=6g` - Output would like below: + Output would like below: - ![Minikube sucessfully started](images/minikube_started.png) + ![Minikube sucessfully started](images/minikube_started.png) - Perfect, above confirms that minikube cluster has been configured and started - successfully. + Perfect, above confirms that minikube cluster has been configured and started + successfully. -- Run below minikube command to check status: +- Run below minikube command to check status: ```sh minikube status @@ -258,7 +258,7 @@ with your chosen container runtime. kubeconfig: Configured ``` -- Run following kubectl command to verify the cluster info and node status: +- Run following kubectl command to verify the cluster info and node status: ```sh kubectl cluster-info @@ -276,7 +276,7 @@ with your chosen container runtime. minikube Ready control-plane,master 5m v1.26.1 ``` -- To see the kubectl configuration use the command: +- To see the kubectl configuration use the command: ```sh kubectl config view @@ -286,7 +286,7 @@ with your chosen container runtime. 
![Minikube config view](images/minikube_config.png) -- Get minikube addon details: +- Get minikube addon details: ```sh minikube addons list @@ -301,7 +301,7 @@ with your chosen container runtime. minikube addons enable ``` -- Enable minikube dashboard addon: +- Enable minikube dashboard addon: ```sh minikube dashboard @@ -315,7 +315,7 @@ with your chosen container runtime. http://127.0.0.1:40783/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ ``` -- To view minikube dashboard url: +- To view minikube dashboard url: ```sh minikube dashboard --url @@ -326,7 +326,7 @@ with your chosen container runtime. http://127.0.0.1:42669/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ ``` -- Expose Dashboard on **NodePort** instead of **ClusterIP**: +- Expose Dashboard on **NodePort** instead of **ClusterIP**: -- Check the current port for `kubernetes-dashboard`: @@ -358,7 +358,7 @@ with your chosen container runtime. ## Deploy A Sample Nginx Application -- Create a deployment, in this case **Nginx**: +- Create a deployment, in this case **Nginx**: A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only @@ -366,7 +366,7 @@ with your chosen container runtime. restarts the Pod's Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods. -- Let's check if the Kubernetes cluster is up and running: +- Let's check if the Kubernetes cluster is up and running: ```sh kubectl get all --all-namespaces @@ -378,7 +378,7 @@ with your chosen container runtime. kubectl create deployment --image nginx my-nginx ``` -- To access the deployment we will need to expose it: +- To access the deployment we will need to expose it: ```sh kubectl expose deployment my-nginx --port=80 --type=NodePort @@ -431,15 +431,15 @@ with your chosen container runtime. ## Deploy A Hello Minikube Application -- Use the kubectl create command to create a Deployment that manages a Pod. The - Pod runs a Container based on the provided Docker image. +- Use the kubectl create command to create a Deployment that manages a Pod. The + Pod runs a Container based on the provided Docker image. ```sh kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4 kubectl expose deployment hello-minikube --type=NodePort --port=8080 ``` -- View the port information: +- View the port information: ```sh kubectl get svc hello-minikube @@ -469,39 +469,39 @@ kubectl delete deployment hello-minikube ## Managing Minikube Cluster -- To stop the minikube, run +- To stop the minikube, run ```sh minikube stop ``` -- To delete the single node cluster: +- To delete the single node cluster: ```sh minikube delete ``` -- To Start the minikube, run +- To Start the minikube, run ```sh minikube start ``` -- Remove the Minikube configuration and data directories: +- Remove the Minikube configuration and data directories: ```sh rm -rf ~/.minikube rm -rf ~/.kube ``` -- If you have installed any Minikube related packages, remove them: +- If you have installed any Minikube related packages, remove them: ```sh sudo apt remove -y conntrack ``` -- In case you want to start the minikube with higher resource like 8 GB RM and - 4 CPU then execute following commands one after the another. +- In case you want to start the minikube with higher resource like 8 GB RM and + 4 CPU then execute following commands one after the another. 
```sh minikube config set cpus 4 diff --git a/docs/other-tools/nfs/nfs-server-client-setup.md b/docs/other-tools/nfs/nfs-server-client-setup.md index 1869700e..ab9acb3d 100644 --- a/docs/other-tools/nfs/nfs-server-client-setup.md +++ b/docs/other-tools/nfs/nfs-server-client-setup.md @@ -10,20 +10,20 @@ allowing them to access and work with the files it contains. We are using the following configuration to set up the NFS server and client on Ubuntu-based NERC OpenStack VMs: -- 1 Linux machine for the **NFS Server**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor - with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). - Please note the NFS Server's Internal IP i.e. `` - i.e. `192.168.0.73` in this example. +- 1 Linux machine for the **NFS Server**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor + with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). + Please note the NFS Server's Internal IP i.e. `` + i.e. `192.168.0.73` in this example. -- 1 Linux machine for the **NFS Client**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor - with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). +- 1 Linux machine for the **NFS Client**, `ubuntu-24.04-x86_64`, `cpu-su.1` flavor + with 1vCPU, 4GB RAM, 20GB storage - also [assign Floating IP](../../openstack/create-and-connect-to-the-VM/assign-a-floating-IP.md). -- ssh access to both machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) - on how to set up SSH on your remote VMs. +- ssh access to both machines: [Read more here](../../openstack/create-and-connect-to-the-VM/bastion-host-based-ssh/index.md) + on how to set up SSH on your remote VMs. -- Create a security group with a rule that opens **Port 2049** (the default - _NFS_ port) for file sharing. Update Security Group to the **NFS Server** VM - only following [this reference](../../openstack/access-and-security/security-groups.md#update-security-groups-to-a-running-vm). +- Create a security group with a rule that opens **Port 2049** (the default + _NFS_ port) for file sharing. Update Security Group to the **NFS Server** VM + only following [this reference](../../openstack/access-and-security/security-groups.md#update-security-groups-to-a-running-vm). ## Installing and configuring NFS Server @@ -150,9 +150,9 @@ Ubuntu-based NERC OpenStack VMs: **Explanation:** - - **rw**: Read and write access. - - **sync**: Changes are written to disk immediately. - - **no_subtree_check**: Avoid permission issues for subdirectories. + - **rw**: Read and write access. + - **sync**: Changes are written to disk immediately. + - **no_subtree_check**: Avoid permission issues for subdirectories. !!! 
info "Other Options for Directory Permissions for the NFS share directory" @@ -310,13 +310,13 @@ example.hostname.com:/srv /opt/example nfs rsize=8192,wsize=8192,timeo=14,intr ## Test the Setup -- On the **NFS Server**, write a test file: +- On the **NFS Server**, write a test file: ```sh echo "Hello from NFS Server" | sudo tee /mnt/nfs_share/test.txt ``` -- On the **NFS Client**, verify the file is accessible: +- On the **NFS Client**, verify the file is accessible: ```sh cat /mnt/nfs_clientshare/test.txt From a1c00e6e8154883505bf471448d19e16d2534c54 Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 18:30:09 -0500 Subject: [PATCH 4/8] removed prettier files --- .prettierignore | 1 - .prettierrc.yaml | 2 -- 2 files changed, 3 deletions(-) delete mode 100644 .prettierignore delete mode 100644 .prettierrc.yaml diff --git a/.prettierignore b/.prettierignore deleted file mode 100644 index f7405807..00000000 --- a/.prettierignore +++ /dev/null @@ -1 +0,0 @@ -*.param.yaml diff --git a/.prettierrc.yaml b/.prettierrc.yaml deleted file mode 100644 index 5101d9dc..00000000 --- a/.prettierrc.yaml +++ /dev/null @@ -1,2 +0,0 @@ -printWidth: 80 -tabWidth: 4 From 32d23b9a8d3239a958238dcf0c37c20ac6dde35b Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 18:38:38 -0500 Subject: [PATCH 5/8] added back the network maintenance info --- nerc-theme/main.html | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/nerc-theme/main.html b/nerc-theme/main.html index c9dc6f40..a2356f6f 100644 --- a/nerc-theme/main.html +++ b/nerc-theme/main.html @@ -1,12 +1,12 @@ {% extends "base.html" %} {% block announce %}
-        Upcoming Multi-Day NERC OpenStack Platform Version Upgrade
+        Upcoming NERC Network Equipment and Switch Maintenance
- (Dec 12, 2024 8:00 AM ET - Dec 14, 2024 8:00 PM ET) + (Tuesday Jan 7, 2025 9 AM ET - Wednesday Jan 8, 2025 9 AM ET) [Timeline and info] From 458949cb1262e040b4c549b89dc84651256ca4b3 Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 19:24:45 -0500 Subject: [PATCH 6/8] removed unwanted spaces on info box --- .../billing-process-for-harvard.md | 10 ++-- .../logging-in/web-console-overview.md | 14 ++--- .../access-and-security/security-groups.md | 10 ++-- .../using-vpn/wireguard/index.md | 8 +-- .../data-transfer/data-transfer-from-to-vm.md | 54 +++++++++---------- docs/openstack/openstack-cli/openstack-CLI.md | 8 +-- .../mount-the-object-storage.md | 6 +-- .../persistent-storage/object-storage.md | 12 ++--- .../persistent-storage/transfer-a-volume.md | 6 +-- 9 files changed, 64 insertions(+), 64 deletions(-) diff --git a/docs/get-started/cost-billing/billing-process-for-harvard.md b/docs/get-started/cost-billing/billing-process-for-harvard.md index 5f1c7aff..f95afc27 100644 --- a/docs/get-started/cost-billing/billing-process-for-harvard.md +++ b/docs/get-started/cost-billing/billing-process-for-harvard.md @@ -32,11 +32,11 @@ Please follow these two steps to ensure proper billing setup: !!! abstract "What if you already have an existing Customer Code?" - Please note that if you already have an existing active NERC account, you - need to provide your HUIT Customer Code to NERC. If you think your department - may already have a HUIT account but you don’t know the corresponding Customer - Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) - to get the required Customer Code. + Please note that if you already have an existing active NERC account, you + need to provide your HUIT Customer Code to NERC. If you think your department + may already have a HUIT account but you don’t know the corresponding Customer + Code then you can [contact HUIT Billing](https://billing.huit.harvard.edu/portal/allusers/contactus) + to get the required Customer Code. 2. During the Resource Allocation review and approval process, we will utilize the HUIT "Customer Code" provided by the PI in step #1 to align it with the approved diff --git a/docs/openshift/logging-in/web-console-overview.md b/docs/openshift/logging-in/web-console-overview.md index 0bf9c5a7..cf3aa588 100644 --- a/docs/openshift/logging-in/web-console-overview.md +++ b/docs/openshift/logging-in/web-console-overview.md @@ -56,9 +56,9 @@ administrators and cluster administrators can view the Administrator perspective !!! note "Important Note" -The default web console perspective that is shown depends on the role of the -user. The **Administrator** perspective is displayed by default if the user is -recognized as an administrator. + The default web console perspective that is shown depends on the role of the + user. The **Administrator** perspective is displayed by default if the user is + recognized as an administrator. ### About the Developer perspective in the web console @@ -67,8 +67,8 @@ services, and databases. !!! info "Important Note" -The default view for the OpenShift Container Platform web console is the **Developer** -perspective. + The default view for the OpenShift Container Platform web console is the **Developer** + perspective. The web console provides a comprehensive set of tools for managing your projects and applications. @@ -82,8 +82,8 @@ located on top navigation as shown below: !!! 
info "Important Note" -You can identify the currently selected project with **tick** mark and also -you can click on **star** icon to keep the project under your **Favorites** list. + You can identify the currently selected project with **tick** mark and also + you can click on **star** icon to keep the project under your **Favorites** list. ## Navigation Menu diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index c530dcb9..fd369384 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -79,8 +79,8 @@ Enter the following values: !!! note "Note" - To accept requests from a particular range of IP addresses, specify the IP - address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the + IP address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have SSH port 22 open for requests @@ -141,10 +141,10 @@ Enter the following values: - CIDR: 0.0.0.0/0 -!!! note "Note" + !!! note "Note" - To accept requests from a particular range of IP addresses, specify the IP - address block in the CIDR box. + To accept requests from a particular range of IP addresses, specify the + IP address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using this newly added Security Group will now have RDP port 3389 open for requests diff --git a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md index 17bfd550..724137f3 100644 --- a/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md +++ b/docs/openstack/create-and-connect-to-the-VM/using-vpn/wireguard/index.md @@ -126,10 +126,10 @@ To deactivate config: `wg-quick down /path/to/file_name.config` !!! note "Important Note" - You need to contact your project administrator to get your own WireGUard - configuration file (file with .conf extension). Download it and Keep it in - your local machine so in next steps we can use this configuration client - profile file. + You need to contact your project administrator to get your own WireGUard + configuration file (file with .conf extension). Download it and Keep it in + your local machine so in next steps we can use this configuration client + profile file. A WireGuard client or compatible software is needed to connect to the WireGuard VPN server. Please install[one of these clients](https://www.wireguard.com/install/) diff --git a/docs/openstack/data-transfer/data-transfer-from-to-vm.md b/docs/openstack/data-transfer/data-transfer-from-to-vm.md index 46e9835e..75ec2187 100644 --- a/docs/openstack/data-transfer/data-transfer-from-to-vm.md +++ b/docs/openstack/data-transfer/data-transfer-from-to-vm.md @@ -434,20 +434,20 @@ using FTP, FTPS, SCP, SFTP, WebDAV, or S3 file transfer protocols. !!! 
info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Password"**: "``" @@ -462,12 +462,12 @@ from the file picker. !!! tip "Helpful Tip" - You can save your above configured site with some preferred name by - clicking the "Save" button and then giving a proper name to your site. - This prevents needing to manually enter all of your configuration again the - next time you need to use WinSCP. + You can save your above configured site with some preferred name by + clicking the "Save" button and then giving a proper name to your site. + This prevents needing to manually enter all of your configuration again the + next time you need to use WinSCP. - ![Save Site WinSCP](images/winscp-save-site.png) + ![Save Site WinSCP](images/winscp-save-site.png) #### Using WinSCP @@ -516,17 +516,17 @@ connections to servers, enterprise file sharing, and various cloud storage platf !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user **"Password"**: "``" @@ -585,20 +585,20 @@ computer (shared drives, Dropbox, etc.) !!! info "Default User name based on OS" - - **all Ubuntu images**: ubuntu + - **all Ubuntu images**: ubuntu - - **all AlmaLinux images**: almalinux + - **all AlmaLinux images**: almalinux - - **all Rocky Linux images**: rocky + - **all Rocky Linux images**: rocky - - **all Fedora images**: fedora + - **all Fedora images**: fedora - - **all Debian images**: debian + - **all Debian images**: debian - - **all RHEL images**: cloud-user + - **all RHEL images**: cloud-user - If you still have VMs running with deleted **CentOS** images, you need to - use the following default username for your CentOS images: `centos`. + If you still have VMs running with deleted **CentOS** images, you need to + use the following default username for your CentOS images: `centos`. **"Key file"**: "Browse and choose the appropriate SSH Private Key from you local machine that has corresponding Public Key attached to your VM" diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md index 50176ef2..316635ae 100644 --- a/docs/openstack/openstack-cli/openstack-CLI.md +++ b/docs/openstack/openstack-cli/openstack-CLI.md @@ -37,10 +37,10 @@ You can download the environment file with the credentials from the [OpenStack d !!! 
note "Important Note" - Please note that an application credential is only valid for a single - project, and to access multiple projects you need to create an application - credential for each. You can switch projects by clicking on the project name - at the top right corner and choosing from the dropdown under "Project". + Please note that an application credential is only valid for a single + project, and to access multiple projects you need to create an application + credential for each. You can switch projects by clicking on the project name + at the top right corner and choosing from the dropdown under "Project". After clicking "Create Application Credential" button, the **ID** and **Secret** will be displayed and you will be prompted to `Download openrc file` diff --git a/docs/openstack/persistent-storage/mount-the-object-storage.md b/docs/openstack/persistent-storage/mount-the-object-storage.md index d219acde..81beca45 100644 --- a/docs/openstack/persistent-storage/mount-the-object-storage.md +++ b/docs/openstack/persistent-storage/mount-the-object-storage.md @@ -977,9 +977,9 @@ Also, check that binding to `localhost` is working fine by running the following !!! warning "Important Note" - The `netstat` command may not be available on your system by default. If - this is the case, you can install it (along with a number of other handy - networking tools) with the following command: `sudo apt install net-tools`. + The `netstat` command may not be available on your system by default. If + this is the case, you can install it (along with a number of other handy + networking tools) with the following command: `sudo apt install net-tools`. ##### Configuring a Redis Password diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index 3ac2b366..537b225e 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -260,9 +260,9 @@ This is a python client for the Swift API. There's a [Python API](https://github !!! note "Choosing Correct Python Interpreter" - Make sure you are able to use `python` or `python3` or **`py -3`** (For - Windows Only) to create a directory named `venv` (or whatever name you - specified) in your current working directory. + Make sure you are able to use `python` or `python3` or **`py -3`** (For + Windows Only) to create a directory named `venv` (or whatever name you + specified) in your current working directory. - Activate the virtual environment by running: @@ -1062,9 +1062,9 @@ respectively. !!! note "Helpful Tips" - You can save your above configured session with some preferred name by - clicking the "Save" button and then giving a proper name to your session. - So that next time you don't need to again manually enter all your configuration. + You can save your above configured session with some preferred name by + clicking the "Save" button and then giving a proper name to your session. + So that next time you don't need to again manually enter all your configuration. #### Using WinSCP diff --git a/docs/openstack/persistent-storage/transfer-a-volume.md b/docs/openstack/persistent-storage/transfer-a-volume.md index 1847b87d..594fd567 100644 --- a/docs/openstack/persistent-storage/transfer-a-volume.md +++ b/docs/openstack/persistent-storage/transfer-a-volume.md @@ -104,9 +104,9 @@ openstack volume transfer request create my-volume !!! tip "Pro Tip" - If your volume name includes spaces, you need to enclose them in quotes, - i.e. `""`. 
- For example: `openstack volume transfer request create "My Volume"` + If your volume name includes spaces, you need to enclose them in quotes, + i.e. `""`. + For example: `openstack volume transfer request create "My Volume"` - The volume can be checked as in the transfer status using `openstack volume transfer request list` as follows and the volume is in status From 8ac935e6fc3743472b260a9cb2d94afcc46b01a9 Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Mon, 16 Dec 2024 19:27:38 -0500 Subject: [PATCH 7/8] fixed the lint issue --- docs/openstack/access-and-security/security-groups.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/openstack/access-and-security/security-groups.md b/docs/openstack/access-and-security/security-groups.md index fd369384..2fc5a769 100644 --- a/docs/openstack/access-and-security/security-groups.md +++ b/docs/openstack/access-and-security/security-groups.md @@ -143,7 +143,7 @@ Enter the following values: !!! note "Note" - To accept requests from a particular range of IP addresses, specify the + To accept requests from a particular range of IP addresses, specify the IP address block in the CIDR box. The new rule now appears in the list. This signifies that any instances using From cd1bb9abd4ff59a8f43cc0e8fef146c977adbfab Mon Sep 17 00:00:00 2001 From: Milson Munakami Date: Tue, 17 Dec 2024 10:36:58 -0500 Subject: [PATCH 8/8] updated after peer review --- docs/openstack/openstack-cli/openstack-CLI.md | 2 +- docs/openstack/persistent-storage/object-storage.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/openstack/openstack-cli/openstack-CLI.md b/docs/openstack/openstack-cli/openstack-CLI.md index 316635ae..360182bd 100644 --- a/docs/openstack/openstack-cli/openstack-CLI.md +++ b/docs/openstack/openstack-cli/openstack-CLI.md @@ -38,7 +38,7 @@ You can download the environment file with the credentials from the [OpenStack d !!! note "Important Note" Please note that an application credential is only valid for a single - project, and to access multiple projects you need to create an application + project to access multiple projects you need to create an application credential for each. You can switch projects by clicking on the project name at the top right corner and choosing from the dropdown under "Project". diff --git a/docs/openstack/persistent-storage/object-storage.md b/docs/openstack/persistent-storage/object-storage.md index 537b225e..4a3a333f 100644 --- a/docs/openstack/persistent-storage/object-storage.md +++ b/docs/openstack/persistent-storage/object-storage.md @@ -1062,7 +1062,7 @@ respectively. !!! note "Helpful Tips" - You can save your above configured session with some preferred name by + You can save your above configured session with a preferred name by clicking the "Save" button and then giving a proper name to your session. So that next time you don't need to again manually enter all your configuration.