diff --git a/index.xml b/index.xml index 692955821..d097f36db 100644 --- a/index.xml +++ b/index.xml @@ -2196,7 +2196,7 @@ https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ - Prerequisite Harvester cluster is imported in Rancher. Feature flag harvester-baremetal-container-workload is enabled. Harvester cluster is avaialble in the Explore cluster section of Rancher. Test cases 1. Deploy VM image, network etc objects through GitOps using Fleet 2. Editing the fleet deployment 3. Enabling/disabling harvester-baremetal-container-workload 4. Uninstalling using Fleet 5. Having other downstream clusters (like Harvester node driver) while deploying with Fleet 6. Negative testing - Remove some deployed object in the cluster and redeploy using Fleet + Fleet Support Pathways Fleet Support is enabled out of the box with Harvester; no Rancher integration is needed for it to function Fleet Support can also be used from within Rancher w/ Harvester Fleet Support w/ Rancher Prerequisites Harvester cluster is imported into Rancher. Rancher Feature Flag harvester-baremetal-container-workload is enabled. Harvester cluster is available to view via the Explore Cluster section of Rancher. 
Explore the Harvester cluster: Toggle “All Namespaces” to be selected Search for & “star” (marking favorite for ease of navigation): Git Repo Git Job Git Restrictions Fleet Support w/out Rancher Prerequisites An active Harvester Cluster Kubeconfig Additional Prerequisites Fork ibrokethecloud’s Harvester Fleet Demo into your own personal GitHub Repository Take a look at the different Harvester API Resources as YAML will be scaffolded to reflect those objects respectively Additional Prerequisites Airgapped, if desired Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of ibrokethecloud’s Harvester Fleet Demo (setting up AirGapped GitLab Server is outside of this scope) Additional Prerequisites (Private Repository Testing), if desired Private Git Repo Key, will need to be added to -n fleet-local namespace Build a private GitHub Repo Add similar content to what ibrokethecloud’s Harvester Fleet Demo holds but take into consideration the following ( references: GitRepo CRD & Rancher Fleet Private Git Repo Blurb ): building a “separate” SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like: ssh-keygen -t rsa -b 4096 -m pem -C "testing-test-key-for-private-repo-deploy-key@email. Function keys on web VNC interface diff --git a/integration/modules/skel_skel_spec.html b/integration/modules/skel_skel_spec.html index e9e71d9b8..0fab2a7a2 100644 --- a/integration/modules/skel_skel_spec.html +++ b/integration/modules/skel_skel_spec.html @@ -1,11 +1,11 @@ -skel/skel.spec | Cypress Integration Tests for Harvester
Options
All
  • Public
  • Public/Protected
  • All
Menu

Index

Functions

  • changePassword(): void
  • +skel/skel.spec | Cypress Integration Tests for Harvester
    Options
    All
    • Public
    • Public/Protected
    • All
    Menu

    Index

    Functions

    • changePassword(): void
      1. Login
      2. Change Password
      3. Log out
      4. Login with new Password
      -
      notimplemented

      Returns void

    • deleteUser(): void
    • deleteUser(): void
      1. Log in as admin
      2. Navigate to user admin page
      3. @@ -14,7 +14,7 @@
      4. Try to log in as deleted user
      5. Verify that login fails
      -
      notimplemented

      Returns void

    • testSkelTest(): void
    • testSkelTest(): void
      1. Login to the page
      2. Edit the Type
      3. diff --git a/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html b/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html index db48357b6..5fae559f7 100644 --- a/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html +++ b/integration/modules/testcases_VM_settings_cloud_config_templates_spec.html @@ -1,4 +1,4 @@ -testcases/VM settings/cloud-config-templates.spec | Cypress Integration Tests for Harvester
        Options
        All
        • Public
        • Public/Protected
        • All
        Menu

        Index

        Functions

        Functions

        • CheckUserData(): void
        • +testcases/VM settings/cloud-config-templates.spec | Cypress Integration Tests for Harvester
          Options
          All
          • Public
          • Public/Protected
          • All
          Menu

          Index

          Functions

          Functions

          • CheckUserData(): void
            1. Login
            2. Navigate to the cloud template create page
            3. diff --git a/integration/modules/testcases_VM_settings_ssh_keys_spec.html b/integration/modules/testcases_VM_settings_ssh_keys_spec.html index 007d07b13..a7f5ae4fa 100644 --- a/integration/modules/testcases_VM_settings_ssh_keys_spec.html +++ b/integration/modules/testcases_VM_settings_ssh_keys_spec.html @@ -1,4 +1,4 @@ -testcases/VM settings/ssh-keys.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckCreateSsh(): void
              • PresetSsh(): void

              Legend

              • Function

              Settings

              Theme

              Generated using TypeDoc

              \ No newline at end of file diff --git a/integration/modules/testcases_networks_network_spec.html b/integration/modules/testcases_networks_network_spec.html index 3b08f0523..d8dea4491 100644 --- a/integration/modules/testcases_networks_network_spec.html +++ b/integration/modules/testcases_networks_network_spec.html @@ -1,4 +1,4 @@ -testcases/networks/network.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckCreateNetwork(): void
              • CreateVlan1(): void

              Legend

              • Function

              Settings

              Theme

              Generated using TypeDoc

              \ No newline at end of file diff --git a/integration/modules/testcases_virtualmachines_virtual_machine_spec.html b/integration/modules/testcases_virtualmachines_virtual_machine_spec.html index adc667ccb..cdf45545a 100644 --- a/integration/modules/testcases_virtualmachines_virtual_machine_spec.html +++ b/integration/modules/testcases_virtualmachines_virtual_machine_spec.html @@ -1,4 +1,4 @@ -testcases/virtualmachines/virtual-machine.spec | Cypress Integration Tests for Harvester
              Options
              All
              • Public
              • Public/Protected
              • All
              Menu

              Index

              Functions

              • CheckMultiVMScheduler(): void
              • +testcases/virtualmachines/virtual-machine.spec | Cypress Integration Tests for Harvester
                Options
                All
                • Public
                • Public/Protected
                • All
                Menu

                Index

                Functions

                • CheckMultiVMScheduler(): void
                • DeleteVMWithImage(): void
                • DeleteVMWithImage(): void
                  1. Create vm “vm-1”
                  2. Create an image “img-1” by exporting the volume used by vm “vm-1”
                  3. diff --git a/manual/advanced/fleet-support-with-harvester/index.html b/manual/advanced/fleet-support-with-harvester/index.html index e1fd489d1..7fe024a1d 100644 --- a/manual/advanced/fleet-support-with-harvester/index.html +++ b/manual/advanced/fleet-support-with-harvester/index.html @@ -1934,19 +1934,201 @@

                    Fleet support with Harvester

                    -

                    Prerequisite

                    +

                    Fleet Support Pathways

                      -
                    1. Harvester cluster is imported in Rancher.
                    2. -
                    3. Feature flag harvester-baremetal-container-workload is enabled.
                    4. -
                    5. Harvester cluster is avaialble in the Explore cluster section of Rancher.
                    6. +
                    7. Fleet Support is enabled out of the box with Harvester; no Rancher integration is needed for it to function
                    8. +
                    9. Fleet Support can also be used from within Rancher w/ Harvester
                    -

                    Test cases

                    -

                    1. Deploy VM image, network etc objects through GitOps using Fleet

                    -

                    2. Editing the fleet deployment

                    -

                    3. Enabling/disabling harvester-baremetal-container-workload

                    -

                    4. Uninstalling using Fleet

                    -

                    5. Having other downstream clusters (like Harvester node driver) while deploying with Fleet

                    -

                    6. Negative testing - Remove some deployed object in the cluster and redeploy using Fleet

                    +

                    Fleet Support w/ Rancher Prerequisites

                    +
                      +
                    1. Harvester cluster is imported into Rancher.
                    2. +
                    3. Rancher Feature Flag harvester-baremetal-container-workload is enabled.
                    4. +
                    5. Harvester cluster is available to view via the Explore Cluster section of Rancher.
                    6. +
                    7. Explore the Harvester cluster: +
                        +
                      1. Toggle “All Namespaces” to be selected
                      2. +
                      3. Search for & “star” (marking favorite for ease of navigation): +
                          +
                        • Git Repo
                        • +
                        • Git Job
                        • +
                        • Git Restrictions
                        • +
                        +
                      4. +
                      +
                    8. +
                    +

                    Fleet Support w/out Rancher Prerequisites

                    +
                      +
                    1. An active Harvester Cluster Kubeconfig
                    2. +
                    +

                    Additional Prerequisites

                    +
                      +
                    1. Fork ibrokethecloud’s Harvester Fleet Demo into your own personal GitHub Repository
                    2. +
                    3. Take a look at the different Harvester API Resources, as YAML will be scaffolded to reflect those objects respectively
                    4. +
                    +

                    Additional Prerequisites Airgapped, if desired

                    +
                      +
                    1. Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of ibrokethecloud’s Harvester Fleet Demo +(setting up AirGapped GitLab Server is outside of this scope)
                    2. +
                    +

                    Additional Prerequisites (Private Repository Testing), if desired

                    +
                      +
                    1. Private Git Repo deploy key will need to be added to the fleet-local namespace (-n fleet-local)
                    2. +
                    3. Build a private GitHub Repo
                    4. +
                    5. Add similar content to what ibrokethecloud’s Harvester Fleet Demo holds but take into consideration the following ( references: GitRepo CRD & Rancher Fleet Private Git Repo Blurb ): +
                        +
                      1. building a “separate”, SINGLE-REPOSITORY-ONLY (zero-trust based) SSH Key via something like:
                      2. +
                      +
                          ssh-keygen -t rsa -b 4096 -m pem -C "testing-test-key-for-private-repo-deploy-key@email.com"
                      +    Generating public/private rsa key pair.
                      +    Enter file in which to save the key (/home/mike/.ssh/id_rsa): /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing
                      +    Enter passphrase (empty for no passphrase):
                      +    Enter same passphrase again:
                      +    Your identification has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing
                      +    Your public key has been saved in /home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing.pub
                      +
                        +
                      1. adding that key to the fleet-local namespace as a secret: kubectl create secret generic ssh-key -n fleet-local --from-file=ssh-privatekey=/home/mike/.ssh/rsa_key_for_private_rancher_fleet_repo_testing --type=kubernetes.io/ssh-auth
                      2. +
                      3. going into your repo’s settings -> deploy keys and adding that SSH Key you just built as a deploy key
                      4. +
                      5. you’ll need to keep in mind that setup.yaml and similar files will need to shift to reflect the fact that you’re utilizing a private GitHub Repository; those changes may look similar to the following (please note: spec.clientSecretName references the SSH Key that was added to the private repository’s settings -> deploy keys && spec.repo shifts to hold your git-based URL/URI, not the https-based one):
                      6. +
                      +
                          apiVersion: fleet.cattle.io/v1alpha1
                      +    kind: GitRepo
                      +    metadata:
                      +        name: setup-harvester-cluster
                      +        namespace: fleet-local
                      +    spec:
                      +        branch: main
                      +        insecureSkipTLSVerify: false
                      +        paths:
                      +            - "/vm-image"
                      +        pollingInterval: 15s
                      +        repo: 'git@github.com:irishgordo/sample-private-fleet.git'
                      +        clientSecretName: ssh-key
                      +        targetNamespace: default
                      +        targets:
                      +            - clusterSelector: {}
                      +
                    6. +
                    +
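The deploy-key secret created by the kubectl command above can equivalently be declared as a manifest, which is easier to keep in version control. This is a sketch only; the key material is a placeholder for the contents of the private key file generated earlier with ssh-keygen:

```yaml
# Declarative equivalent of:
#   kubectl create secret generic ssh-key -n fleet-local \
#     --from-file=ssh-privatekey=... --type=kubernetes.io/ssh-auth
apiVersion: v1
kind: Secret
metadata:
    name: ssh-key
    namespace: fleet-local
type: kubernetes.io/ssh-auth
stringData:
    # paste the private key generated earlier (placeholder shown)
    ssh-privatekey: |
        -----BEGIN RSA PRIVATE KEY-----
        (contents of rsa_key_for_private_rancher_fleet_repo_testing)
        -----END RSA PRIVATE KEY-----
```

The kubernetes.io/ssh-auth secret type requires the data key to be named ssh-privatekey; the secret name (ssh-key) is what the GitRepo’s spec.clientSecretName references.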

                    Additional Docs To Familiarize Yourself With:

                    + +

                    Non-Rancher-Integration Fleet Support Base Tests, Public Repo

                    +
                    Note: this can be expanded to encompass more for automation
                    +

                    Test Building A GitRepo Is Successful

                    +
                      +
                    1. Utilizing ibrokethecloud’s Harvester Fleet Demo as a base layer, create a folder called extra-vm-image
                    2. +
                    3. Build a file in the root directory called test-gitrepo-is-successful-public.yaml
                    4. +
                    5. Have it structured similar to:
                    6. +
                    +
                        apiVersion: fleet.cattle.io/v1alpha1
                    +    kind: GitRepo
                    +    metadata:
                    +        name: test-gitrepo-success-public
                    +        namespace: fleet-local
                    +    spec:
                    +        branch: main
                    +        paths:
                    +            - "/extra-vm-image"
                    +        pollingInterval: 15s
                    +        repo: YOUR-HTTPS-REPO-URL
                    +        targetNamespace: default
                    +        targets:
                    +            - clusterSelector: {}
                    +
                      +
                    1. Build a VM Image File in /extra-vm-image called my-test-public-image.yaml
                    2. +
                    3. Have that file structured similar to:
                    4. +
                    +
                        apiVersion: harvesterhci.io/v1beta1
                    +    kind: VirtualMachineImage
                    +    metadata:
                    +        annotations:
                    +            harvesterhci.io/storageClassName: harvester-longhorn
                    +        name: opensuse-default-image
                    +        labels:
                    +            testing: testing-pub-repo-non-rancher
                    +        namespace: default
                    +    spec:
                    +        displayName: provide-a-display-name
                    +        retry: 3
                    +        sourceType: download
                    +        storageClassParameters:
                    +            migratable: "true"
                    +            numberOfReplicas: "3"
                    +            staleReplicaTimeout: "30"
                    +        url: https://-or-http://provide-a-url-of-an-image-like-qcow2-to-download-or-img
                    +
                      +
                    1. Add, Commit, & Push to your public fork
                    2. +
                    3. Utilizing the kubeconfig from your Harvester cluster, go ahead and create the GitRepo object from the raw.githubusercontent.com link of your test-gitrepo-is-successful-public.yaml, with something similar to: kubectl create -f https://raw.githubusercontent.com/YOUR-USER-NAME/harvester-fleet-demo/main/test-gitrepo-is-successful-public.yaml
                    4. +
                    5. Audit that it built the GitRepo with something like kubectl get GitRepo -A -o wide; you should see test-gitrepo-success-public as an available object
                    6. +
                    7. In the Harvester UI, you should see the VirtualMachineImage being downloaded
                    8. +
                    +

                    Test Updating A GitRepo Resource Is Successful

                    +
                      +
                    1. With the same /extra-vm-image/my-test-public-image.yaml, go ahead and modify it, creating a description annotation by adding a line similar to:
                    2. +
                    +
                    metadata:
                    +  annotations:
                    +    harvesterhci.io/storageClassName: harvester-longhorn
                    +    field.cattle.io/description: "my new description for this VirtualMachineImage that's already been downloaded from the URL earlier for Harvester"
                    +
                      +
                    1. Git Add, Git Commit, & Git Push that change out to the main branch fork of the repo you’re utilizing
                    2. +
                    3. You should see that within a minute the VirtualMachineImage object you created through the CRD gets a different displayed description, one that reflects what you passed in with field.cattle.io/description
                    4. +
                    +

                    Test a forced Rancher Fleet Sync, Synchronizes Git in Comparison with the Current State of Harvester Cluster

                    +
                      +
                    1. Go ahead and delete that VirtualMachineImage from the Harvester UI
                    2. +
                    3. Ensure it’s deleted
                    4. +
                    5. Hop in with your favorite editor, or perhaps use a patch file (take this as inspiration), and run kubectl edit GitRepo/test-gitrepo-success-public
                    6. +
                    7. be mindful we’re utilizing Rancher Fleet GitRepo Object Properties, but add a line in the yaml under spec like:
                    8. +
                    +
                      forceSyncGeneration: 1
                    +
                      +
                    1. save that edited file
                    2. +
                    3. watch over a period of time; that “user-deleted” VirtualMachineImage “should” come back to life and exist once again in your Harvester cluster
                    4. +
                    +
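As an alternative to kubectl edit, the steps above can also be captured in a small merge-patch file, which makes the forced sync repeatable. A sketch, assuming the GitRepo created earlier; the file name patch-force-sync.yaml is illustrative:

```yaml
# patch-force-sync.yaml - bump forceSyncGeneration to re-trigger a sync (sketch)
spec:
    forceSyncGeneration: 1
```

It could then be applied with something like kubectl patch gitrepo test-gitrepo-success-public -n fleet-local --type merge --patch-file patch-force-sync.yaml; incrementing the value again forces another sync.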

                    Test Deleting A GitRepo Resource Is Successful

                    +
                      +
                    1. from within the root directory of the project, go ahead and fire off a kubectl delete -f test-gitrepo-is-successful-public.yaml
                    2. +
                    3. validate that the VirtualMachineImage is deleted
                    4. +
                    5. validate that the GitRepo does not exist, e.g. kubectl get GitRepo -A -o wide
                    6. +
                    +

                    Rancher Integration Fleet Support Base Tests, Public Repo

                    +
                    Note: this can be expanded to encompass more for automation
                    +

                    Test Building A GitRepo Is Successful

                    +
                      +
                    1. In the Rancher UI, under the Harvester Cluster that has been imported with the harvester-baremetal-container-workload feature flag enabled, go ahead and start building a GitRepo
                    2. +
                    3. Link it to your https:// fork of the repo
                    4. +
                    5. Specify the paths of /vm-image (NOTE: PLEASE CROSS-CHECK the image URL for the VM Image; OpenSuse Image URLs ‘Historically’ fall out of date very fast due to the nature of new builds being rolled out frequently - you will more than likely want to change the opensuse-image.yaml spec.url beforehand and then push to your fork of the repo), /keypair, and /vm-network for the first iteration
                    6. +
                    7. Be sure to Edit the YAML directly, changing: +
                        +
                      1. namespace from fleet-default to fleet-local
                      2. +
                      3. that targets.clusterSelector does not have the pre-baked-in logic and is open-ended with targets.clusterSelector: {}
                      4. +
                      +
                    8. +
                    9. Watch it respectively in the Harvester UI build out those resources
                    10. +
                    11. Go back into the GitRepo you created after the keypairs, vm-image, vm-network Harvester objects have been built out and edit it, adding /vm-workload to the paths
                    12. +
                    13. Watch over a period of time the Harvester UI eventually spin up a VM utilizing the provided keypairs, vm-image, vm-network
                    14. +
                    15. Audit things like GitJobs & GitRestrictions in the Rancher UI
                    16. +
                    +
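After the edits in the steps above (namespace moved to fleet-local, open-ended cluster selector), the GitRepo YAML should look similar to the following sketch; the resource name and repo URL are placeholders for your own fork:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
    name: harvester-fleet-objects        # illustrative name
    namespace: fleet-local               # changed from fleet-default
spec:
    branch: main
    repo: https://github.com/YOUR-USER-NAME/harvester-fleet-demo
    paths:
        - "/vm-image"
        - "/keypair"
        - "/vm-network"
    targets:
        - clusterSelector: {}            # open ended; pre-baked selector removed
```

Adding "/vm-workload" to spec.paths later (step 11) is what triggers the VM itself to be spun up from the already-created keypairs, vm-image, and vm-network objects.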

                    Test Updates To Harvester Objects Work

                    +
                      +
                    1. Modify anything in /vm-image, /keypair, /vm-workload, or /vm-network; leverage the Harvester API Docs to validate the modification of the Harvester object
                    2. +
                    3. Git Add, Git Commit, & Git Push that change out to your fork
                    4. +
                    5. Validate the change is reflected within the Harvester Cluster
                    6. +
                    +

                    Negative Test: Make Sure User-Created Rancher Resources Are Not Affected

                    +
                      +
                    1. Build an RKE2 or RKE1 cluster, utilizing Harvester: +
                        +
                      1. use a separate VirtualMachineImage
                      2. +
                      3. use separate keypair (if desired)
                      4. +
                      5. use a separate vm-network
                      6. +
                      +
                    2. +
                    3. Ensure that RKE2 / RKE1 comes up with the VMs running on Harvester
                    4. +
                    +

                    NOTE: Test Cases Can Also Be Reflected Via Either a Private GitHub Repo & ALSO a self-hosted GitLab Instance; See Earlier Prerequisites for that load-out

                    diff --git a/manual/advanced/index.xml b/manual/advanced/index.xml index 7830dfc99..17c680fea 100644 --- a/manual/advanced/index.xml +++ b/manual/advanced/index.xml @@ -54,7 +54,7 @@ https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ Mon, 01 Jan 0001 00:00:00 +0000 https://harvester.github.io/tests/manual/advanced/fleet-support-with-harvester/ - Prerequisite Harvester cluster is imported in Rancher. Feature flag harvester-baremetal-container-workload is enabled. Harvester cluster is avaialble in the Explore cluster section of Rancher. Test cases 1. Deploy VM image, network etc objects through GitOps using Fleet 2. Editing the fleet deployment 3. Enabling/disabling harvester-baremetal-container-workload 4. Uninstalling using Fleet 5. Having other downstream clusters (like Harvester node driver) while deploying with Fleet 6. Negative testing - Remove some deployed object in the cluster and redeploy using Fleet + Fleet Support Pathways Fleet Support is enabled out of the box with Harvester; no Rancher integration is needed for it to function Fleet Support can also be used from within Rancher w/ Harvester Fleet Support w/ Rancher Prerequisites Harvester cluster is imported into Rancher. Rancher Feature Flag harvester-baremetal-container-workload is enabled. Harvester cluster is available to view via the Explore Cluster section of Rancher. 
Explore the Harvester cluster: Toggle “All Namespaces” to be selected Search for & “star” (marking favorite for ease of navigation): Git Repo Git Job Git Restrictions Fleet Support w/out Rancher Prerequisites An active Harvester Cluster Kubeconfig Additional Prerequisites Fork ibrokethecloud’s Harvester Fleet Demo into your own personal GitHub Repository Take a look at the different Harvester API Resources as YAML will be scaffolded to reflect those objects respectively Additional Prerequisites Airgapped, if desired Have an Airgapped GitLab Server Running somewhere with a Repo that takes the shape of ibrokethecloud’s Harvester Fleet Demo (setting up AirGapped GitLab Server is outside of this scope) Additional Prerequisites (Private Repository Testing), if desired Private Git Repo Key, will need to be added to -n fleet-local namespace Build a private GitHub Repo Add similar content to what ibrokethecloud’s Harvester Fleet Demo holds but take into consideration the following ( references: GitRepo CRD & Rancher Fleet Private Git Repo Blurb ): building a “separate” SINGLE REPOSITORY ONLY (zero-trust based) SSH Key Via something like: ssh-keygen -t rsa -b 4096 -m pem -C "testing-test-key-for-private-repo-deploy-key@email. Set backup target S3 (e2e_fe)